[gpfsug-discuss] Systemd configuration to wait for mount of SS filesystem

Stephen Ulmer ulmer at ulmer.org
Fri Mar 15 02:37:18 GMT 2019


+1 — This is the best solution.

The only thing I would change would be to add:

	TimeoutStartSec=300

Or something similar.

This leaves the maintenance of starting applications where it belongs (in systemd, not in GPFS). You can use the same technique for other VFS types (like NFS, if you need to). You can check for any file on the file system you want, so you could just put a dotfile in the root of each waited-for file system and look for that. You can even chase your symlink if you want, removing the %I parameter completely.
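
Putting those pieces together, a minimal sketch, assuming a sentinel dotfile named .gpfs-ready and reusing the /appbin symlink and runme naming from below (the dotfile name and exact paths are illustrative):

	[Unit]
	Description=Foo
	After=gpfs.service

	[Service]
	# Fail the unit after five minutes instead of polling forever
	TimeoutStartSec=300
	# The shell test follows the /appbin symlink, so no %I parameter is needed;
	# .gpfs-ready is a dotfile dropped in the waited-for file system
	ExecStartPre=/bin/bash -c 'until [ -e /appbin/.gpfs-ready ]; do sleep 5; done'
	ExecStart=/appbin/builds/bin/runme

	[Install]
	WantedBy=multi-user.target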

As a recovering sysadmin, this makes me smile.

-- 
Stephen



> On Mar 14, 2019, at 5:36 PM, Trafford, Tyler <tyler.trafford at yale.edu> wrote:
> 
> I use the following:
> 
> [Unit]
> Description=Foo
> After=gpfs.service
> 
> [Service]
> ExecStartPre=/bin/bash -c 'until [ -d /gpfs/%I/apps/services/foo ]; do sleep 20s; done'
> ExecStart=/usr/sbin/runuser -u root /gpfs/%I/apps/services/foo/bin/runme
> 
> [Install]
> WantedBy=multi-user.target
> 
> 
> Then I can drop it on multiple systems (with the same app layout), and run:
> 
> systemctl enable foo@fs1
> or
> systemctl enable foo@fs2
> 
> The "%I" gets replaced by what is after that "@".
> 
> -- 
> Tyler Trafford
> tyler.trafford at yale.edu
> 
> ________________________________________
> From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Stephen R Buchanan <stephen.buchanan at us.ibm.com>
> Sent: Thursday, March 14, 2019 3:58 PM
> To: gpfsug-discuss at spectrumscale.org
> Subject: [gpfsug-discuss] Systemd configuration to wait for mount of SS filesystem
> 
> I searched the list archives with no obvious results.
> 
> I have an application that runs completely from a Spectrum Scale filesystem, and I would like it to start automatically on boot (after the SS filesystem mounts, obviously) on multiple nodes. There are groups of nodes for dev, test, and production (separate clusters), and the target filesystems differ between them (they are named differently, so the paths differ), but all nodes have an identical soft link from root (/) pointing to the environment-specific path. (see below for details)
> 
> My first effort, before I did any research, was simply to use an After=gpfs.service directive. As anyone who has tried it knows, gpfs.service reports "started" far in advance of (and independently of) the filesystems actually being mounted.
> 
> What I want is to be able to deploy a systemd service-unit and path-unit pair (as close to identical as possible across the environments) that waits for /appbin/builds/ to be available (i.e., for /[dev|tst|prd]01/ to be mounted) and then starts the application. The problem is that systemd path units, specifically the 'PathExists=' directive, don't follow symbolic links, so I would need to customize the path unit for each environment with the full (real) path. There are other differences between the environments that I believe I can handle with an EnvironmentFile= directive -- but that file would live on the SS filesystem (to keep a single point of reference), so it can't help with the path unit itself.
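> 
> For concreteness, the pair would look something like this (unit and script names are placeholders); the PathExists= line is exactly the part that would have to be customized per environment:
> 
> # appstart.path -- triggers appstart.service (same basename) when the path appears
> [Path]
> # PathExists= does not follow the /appbin symlink, so the real,
> # environment-specific mount point has to be spelled out
> PathExists=/dev01/app-bin/user-tree/builds
> 
> [Install]
> WantedBy=multi-user.target
> 
> # appstart.service -- what the path unit starts
> [Service]
> ExecStart=/appbin/builds/start.sh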
> 
> Any suggestions are welcome and appreciated.
> 
> dev:(path names have been slightly generalized, but the structure is identical)
> SS filesystem: /dev01
> full path: /dev01/app-bin/user-tree/builds/
> soft link: /appbin/ -> /dev01/app-bin/user-tree/
> 
> test:
> SS filesystem: /tst01
> full path: /tst01/app-bin/user-tree/builds/
> soft link: /appbin/ -> /tst01/app-bin/user-tree/
> 
> prod:
> SS filesystem: /prd01
> full path: /prd01/app-bin/user-tree/builds/
> soft link: /appbin/ -> /prd01/app-bin/user-tree/
> 
> 
> Stephen R. Wall Buchanan
> Sr. IT Specialist
> IBM Data & AI North America Government Expert Labs
> +1 (571) 299-4601
> stephen.buchanan at us.ibm.com
> 
