[gpfsug-discuss] Systemd configuration to wait for mount of SS filesystem

Bryan Banister bbanister at jumptrading.com
Thu Mar 14 20:47:35 GMT 2019


We use a site-specific systemd unit, which we call gpfs_fs.service, with `BindsTo=gpfs.service` and `After=gpfs.service`.  This service waits for GPFS to become active and then attempts to mount the required file systems.  The list of file systems is generated by our configuration-management software (e.g. Puppet, CFEngine, SaltStack, Ansible).
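A minimal sketch of such a unit might look like the following. This is an illustration under assumptions, not our production file: the wait loop, the polling interval, and the example filesystem name (dev01) are all placeholders.

```ini
# /etc/systemd/system/gpfs_fs.service -- hypothetical sketch
[Unit]
Description=Wait for GPFS to become active, then mount filesystems
BindsTo=gpfs.service
After=gpfs.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Poll mmgetstate until this node reports "active", then mount.
ExecStartPre=/bin/sh -c 'until /usr/lpp/mmfs/bin/mmgetstate | grep -q active; do sleep 5; done'
ExecStart=/usr/lpp/mmfs/bin/mmmount dev01

[Install]
WantedBy=multi-user.target
```

In practice the `ExecStart` line (or lines) would be templated by the configuration-management tool from the node's filesystem list.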

We have also added a custom drop-in extension to gpfs.service (/usr/lib/systemd/system/gpfs.service.d/gpfs.service.conf) which adds an ExecStartPre to the IBM-provided unit (we don’t want to modify the IBM-provided file itself). This ExecStartPre makes sure the node has the required version of GPFS installed and performs some other basic checks.
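The drop-in mechanism keeps the IBM unit untouched; systemd merges any *.conf file in the gpfs.service.d directory into the unit. A sketch, where the check script name is a hypothetical placeholder:

```ini
# /usr/lib/systemd/system/gpfs.service.d/gpfs.service.conf -- hypothetical
[Service]
# Site-specific sanity checks (GPFS version, prerequisites) before startup.
# check-gpfs-node.sh is a placeholder for your own script.
ExecStartPre=/usr/local/sbin/check-gpfs-node.sh
```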

We have other systemd-controlled services that both `BindsTo=` and `After=` gpfs_fs.service.  This works pretty well for us.
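A dependent service then only needs to reference gpfs_fs.service; with `BindsTo=` it is also stopped if GPFS goes down. The unit and binary names below are hypothetical:

```ini
# /etc/systemd/system/app.service -- hypothetical consumer
[Unit]
Description=Application that requires GPFS filesystems
BindsTo=gpfs_fs.service
After=gpfs_fs.service

[Service]
# Placeholder path on the GPFS filesystem
ExecStart=/dev01/app-bin/app
```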

Hope that helps,
-Bryan

From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Frederick Stock
Sent: Thursday, March 14, 2019 3:17 PM
To: gpfsug-discuss at spectrumscale.org
Cc: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Systemd configuration to wait for mount of SS filesystem

It is not systemd-based, but you might want to look at the user callback feature in GPFS (mmaddcallback).  There is a file system mount callback you could register.
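Registration would look something like `mmaddcallback startApp --command /usr/local/bin/start-app-on-mount.sh --event mount --parms "%fsName"`, which invokes the script each time a filesystem mounts, passing the filesystem name. A sketch of such a callback script follows; the script path, target filesystem name, and the systemctl action are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical GPFS mount-callback script (/usr/local/bin/start-app-on-mount.sh).
# GPFS passes the mounted filesystem name as $1 via --parms "%fsName".

start_on_mount() {
    fs_name="$1"
    target_fs="dev01"   # assumed filesystem name; adjust per environment

    if [ "$fs_name" = "$target_fs" ]; then
        # Real action would be something like: systemctl start app.service
        echo "starting application for $fs_name"
    else
        echo "ignoring mount of $fs_name"
    fi
}

start_on_mount "${1:-dev01}"
```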

Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com


----- Original message -----
From: "Stephen R Buchanan" <stephen.buchanan at us.ibm.com>
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug-discuss at spectrumscale.org
Cc:
Subject: [gpfsug-discuss] Systemd configuration to wait for mount of SS filesystem
Date: Thu, Mar 14, 2019 3:58 PM

I searched the list archives with no obvious results.

I have an application that runs entirely from a Spectrum Scale filesystem and that I would like to start automatically on boot, obviously after the SS filesystem mounts, on multiple nodes. There are groups of nodes for dev, test, and production (separate clusters), and the target filesystems differ between them (they are named differently, so the paths are different), but all nodes have an identical soft link from root (/) that points to the environment-specific path (see below for details).

My first effort, before doing any research, was simply to use an `After=gpfs.service` directive. As anyone who has tried this knows, gpfs.service reports "started" far in advance of (and independently of) when the filesystems are actually mounted.

What I want is to deploy a systemd service-unit and path-unit pair (as close to identical as possible across the environments) that waits for /appbin/builds/ to be available (i.e., for /[dev|tst|prd]01/ to be mounted) and then starts the application. The problem is that systemd path units, specifically the 'PathExists=' directive, don't follow symbolic links, so I would need to customize the path unit file for each environment with the full (real) path. There are other differences between the environments that I believe I can handle with an EnvironmentFile directive -- but that file would come from the SS filesystem, so as to keep a single reference point, so it can't help with the path unit itself.
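To make the symlink limitation concrete, a path-unit/service-unit pair along these lines would work but requires the real, environment-specific path; the unit names are hypothetical:

```ini
# app-watch.path -- hypothetical sketch
# PathExists= must name the real path; a symlink such as
# /appbin/builds is not followed, so this line differs per environment.
[Unit]
Description=Wait for the SS filesystem path to appear

[Path]
PathExists=/dev01/app-bin/user-tree/builds
Unit=app.service

[Install]
WantedBy=multi-user.target
```

Generating only this one line per environment (via a template) is one way around the problem, though it gives up the single identical file the poster is after.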

Any suggestions are welcome and appreciated.

dev: (path names have been slightly generalized, but the structure is identical)
SS filesystem: /dev01
full path: /dev01/app-bin/user-tree/builds/
soft link: /appbin/ -> /dev01/app-bin/user-tree/

test:
SS filesystem: /tst01
full path: /tst01/app-bin/user-tree/builds/
soft link: /appbin/ -> /tst01/app-bin/user-tree/

prod:
SS filesystem: /prd01
full path: /prd01/app-bin/user-tree/builds/
soft link: /appbin/ -> /prd01/app-bin/user-tree/


Stephen R. Wall Buchanan
Sr. IT Specialist
IBM Data & AI North America Government Expert Labs
+1 (571) 299-4601
stephen.buchanan at us.ibm.com

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




