[gpfsug-discuss] How to do multiple mounts via GPFS

Maloney, J.D. malone12 at illinois.edu
Tue Feb 22 20:21:43 GMT 2022


Our Puppet/Ansible GPFS modules/playbooks handle this sequencing for us (we use bind mounts for things like u, projects, and scratch as well).  As Skylar mentioned, page pool allocation, quorum checking, and cluster arbitration all have to happen before the FS can mount, so the time you mentioned doesn’t seem off to me.  We just make creation of the bind mounts dependent on the actual GPFS mount occurring in the configuration management tooling, which has worked out well for us in that regard.
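As a rough illustration of that ordering (resource names and paths here are hypothetical, not our actual module), a Puppet sketch might look like:

```puppet
# Hypothetical sketch: only create the bind mount once GPFS itself is mounted.
mount { '/home':
  ensure  => mounted,
  device  => '/gpfs1/home',
  fstype  => 'none',
  options => 'bind',
  require => Mount['/gpfs1'],  # dependency on the actual GPFS mount
}
```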

Best,

J.D. Maloney
Sr. HPC Storage Engineer | Storage Enabling Technologies Group
National Center for Supercomputing Applications (NCSA)

From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Skylar Thompson <skylar2 at uw.edu>
Date: Tuesday, February 22, 2022 at 2:13 PM
To: gpfsug-discuss at spectrumscale.org <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] How to do multiple mounts via GPFS
The problem might be that the service indicates success when mmstartup
returns rather than when the mount is actually active (which requires quorum
checking, arbitration, etc.). A couple of tricks I can think of would be using
ConditionPathIsMountPoint from systemd.unit[1], or maybe adding a
callback[2] that triggers on the mount event for your filesystem and
makes the bind mount rather than systemd.

[1] https://www.freedesktop.org/software/systemd/man/systemd.unit.html#ConditionPathIsMountPoint
[2] https://www.ibm.com/docs/en/spectrum-scale/5.1.2?topic=reference-mmaddcallback-command
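For the callback route, a sketch might look like the following (the callback name and script path are made up; check the mmaddcallback documentation for the exact event names and variables on your release):

```shell
# Hypothetical: run a local script whenever a filesystem mounts on this node.
# The script would check that "$1" is gpfs1 and then do:
#   mount --bind /gpfs1/home /home
mmaddcallback bindMountsOnMount \
    --command /usr/local/sbin/gpfs-bind-mounts.sh \
    --event mount --parms "%fsName"
```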

These are both on our to-do list for improving our own GPFS mounting, as we
have problems with our job scheduler not starting reliably on reboot. In our
case, though, Puppet can start it on the next run, so it just means nodes
might not return to service for 30 minutes or so.

On Tue, Feb 22, 2022 at 03:05:58PM -0500, Justin Cantrell wrote:
> This is how we're currently solving this problem, with a systemd timer and
> mount unit. None of the Requires= dependencies seem to work with GPFS since
> it starts so late. I would like a better solution.
>
> Is it normal for gpfs to start so late? I think it doesn't mount until
> after gpfs.service starts, and even then it takes another 20-30 seconds.
>
>
> On 2/22/22 14:42, Skylar Thompson wrote:
> > Like Tina, we're doing bind mounts in autofs. I forgot that there might be
> > a race condition if you're doing it in fstab. If you're on a system with systemd,
> > another option might be to do this directly with systemd.mount rather than
> > let the fstab generator make the systemd.mount units:
> >
> > https://www.freedesktop.org/software/systemd/man/systemd.mount.html
> >
> > You could then set RequiresMountsFor=/gpfs1 in the bind mount unit.
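A minimal unit along those lines might be (paths assumed from the example; note that systemd derives the unit file name from the mount point, so /home becomes home.mount):

```ini
# /etc/systemd/system/home.mount -- hypothetical sketch
[Unit]
Description=Bind mount /home from GPFS
RequiresMountsFor=/gpfs1

[Mount]
What=/gpfs1/home
Where=/home
Type=none
Options=bind

[Install]
WantedBy=multi-user.target
```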
> >
> > On Tue, Feb 22, 2022 at 02:23:53PM -0500, Justin Cantrell wrote:
> > > I tried a bind mount, but perhaps I'm doing it wrong. The system fails
> > > to boot because gpfs doesn't start until too late in the boot process.
> > > In fact, the system boots and the gpfs1 partition isn't available for a
> > > good 20-30 seconds.
> > >
> > > /gpfs1/home  /home  none  bind
> > > I've tried adding the mount options x-systemd.requires=gpfs1 and noauto.
> > > The noauto lets it boot, but the mount is never mounted properly. Doing
> > > a manual mount -a mounts it.
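For reference, the systemd fstab option is spelled x-systemd.requires (with a dot) and takes a unit name or an absolute mount-point path, so a bind-mount line might look like (untested sketch):

```
/gpfs1/home  /home  none  bind,x-systemd.requires=/gpfs1  0  0
```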
> > >
> > > On 2/22/22 12:37, Skylar Thompson wrote:
> > > > Assuming this is on Linux, you ought to be able to use bind mounts for
> > > > that, something like this in fstab or equivalent:
> > > >
> > > > /gpfs1/home /home none bind 0 0
> > > >
> > > > On Tue, Feb 22, 2022 at 12:24:09PM -0500, Justin Cantrell wrote:
> > > > > We're trying to mount multiple mounts at boot up via gpfs.
> > > > > We can mount the main gpfs mount /gpfs1, but would like to mount things
> > > > > like:
> > > > > /home /gpfs1/home
> > > > > /other /gpfs1/other
> > > > > /stuff /gpfs1/stuff
> > > > >
> > > > > But adding that to fstab doesn't work, because from what I understand,
> > > > > that's not how gpfs works with mounts.
> > > > > What's the standard way to accomplish something like this?
> > > > > We've used systemd timers/mounts to accomplish it, but that's not ideal.
> > > > > Is there a way to do this natively with gpfs or does this have to be done
> > > > > through symlinks or gpfs over nfs?
> > > _______________________________________________
> > > gpfsug-discuss mailing list
> > > gpfsug-discuss at spectrumscale.org
> > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss

--
-- Skylar Thompson (skylar2 at u.washington.edu)
-- Genome Sciences Department (UW Medicine), System Administrator
-- Foege Building S046, (206)-685-7354
-- Pronouns: He/Him/His

