[gpfsug-discuss] [EXTERNAL] Re: Forcing an internal mount to complete

Jordan Robertson salut4tions at gmail.com
Sun Jun 9 14:20:28 BST 2019


If there's any I/O going to the filesystem at all, GPFS has to keep it
internally mounted on at least a few nodes, such as the token managers and
the filesystem (fs) manager.
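
A quick way to see where those internal mounts are is something like the
following (treat "gpfs1" as a placeholder for your actual filesystem
device name):

  # List every node, local or remote-cluster, that currently has the
  # filesystem mounted; remote nodes are listed with their cluster name.
  mmlsmount gpfs1 -L

  # Show which node is currently acting as the filesystem manager.
  mmlsmgr gpfs1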

I *believe* that holds true even for remote clusters, in that they still
need to reach back to the "local" cluster when operating on the
filesystem.  I could be wrong about that though.
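
For the mmchfs case below: as far as I know, changing the default mount
point needs the filesystem unmounted everywhere, remote clusters included,
so the rough sequence would be something along these lines ("gpfs1" and
the new mount point are again just placeholders):

  # On each remote cluster that mounts it, unmount on all of that
  # cluster's nodes first:
  mmumount gpfs1 -a

  # Then, on the owning cluster, unmount locally and change the
  # default mount point:
  mmumount gpfs1 -a
  mmchfs gpfs1 -T /new/mountpoint

  # Remount everywhere once the change is in:
  mmmount gpfs1 -a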

On Sun, Jun 9, 2019, 09:06 Oesterlin, Robert <Robert.Oesterlin at nuance.com>
wrote:

> Thanks for the suggestions - as it turns out, it was the **remote**
> mounts causing the issues - which surprised me. I wanted to do a “mmchfs”
> on the local cluster to change the default mount point. Why would GPFS
> care if it’s remote mounted?
>
> Oh - well…
>
> Bob Oesterlin
>
> Sr Principal Storage Engineer, Nuance