[gpfsug-discuss] Removing LUN from host without unconfiguring GPFS filesystem

Stephen Ulmer ulmer at ulmer.org
Mon Jan 22 03:41:58 GMT 2018


Harold,

The way I read your question, no one has actually answered it fully:

You want to put the old file system in cold storage for forensic purposes — exactly as it is. You want the NSDs to go away until and unless you need them in the future.


QUICK AND DIRTY - YOU SHOULD NOT DO THIS:

Set the old file system to NOT mount automatically. Make sure it is unmounted everywhere(!!!!!). Make sure none of those NSDs are being used as quorum tie-breakers, etc. Print your resume. Unmap the LUNs.

This will leave all of the information ABOUT the filesystem in the Spectrum Scale configuration, but it won’t be available. You’ll get a bunch of errors that the NSDs are gone. Lots of them. It will make starting up Spectrum Scale take a long time while it looks for and fails to find them. That will be mostly noise. I’m not sure how you could corrupt the file system when the LUNs are no longer accessible and there can’t be any writes pending.
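For what it’s worth, that sequence might look something like the sketch below. The file system name "oldfs" is a placeholder, and the run() wrapper only echoes each command instead of executing it, so nothing here actually touches a cluster; review and run the real commands yourself.

```shell
# Dry-run sketch of the quick-and-dirty route. "oldfs" is a placeholder
# file system name; swap in your own. run() only prints the command so
# you can sanity-check the sequence before doing it for real.
FS=oldfs
run() { echo "would run: $*"; }

run mmchfs "$FS" -A no          # stop the file system from mounting automatically
run mmumount "$FS" -a           # unmount it on every node
run mmlsmount "$FS" -L          # verify it really is unmounted everywhere
run mmlsconfig tiebreakerDisks  # confirm none of its NSDs are quorum tie-breakers
# ...then unmap the LUNs on the host/storage side, outside of Scale.
```

Only after every one of those checks comes back clean would you pull the LUNs.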

I have done this recently to keep an old file system around after a migration (the customer required an "A/B" switch when installing new storage). This was with a very old version of GPFS that could not be upgraded. I would not do this unless the time frame to keep this configuration is very short — I only did it because I didn’t have a choice.


PROBABLY BETTER - READ THIS ONE TWICE:

Take a look at mmexportfs (this was already suggested). The point of this command is to be able to carry the Spectrum Scale "configuration" for a file system AND the LUNs full of data around and plug them into a cluster later. I think a misunderstanding here led to your objection to this method: it doesn’t have to be a different cluster; you can use the same one it came from.

The advantage to this approach is that it leaves you with a more hygienic cluster — no "missing" storage errors.  The ONLY disadvantage that I can think of at the moment is that the file system configuration MIGHT have some lint removed during the import/export process. I’m not sure that *any* checking is done during the export, but I’d guess that the import involves at least some validation of the imported file. I hope.

So if you think the file system *configuration* is wonky, you should call support and see if they will look at your export, or get a snap while you’ve still got the file system in your cluster.  If you think that the *structure* of the file system on disk (or maybe just the copy method you’re using) might be wonky, then learn how to use mmexportfs. Note that the learning can certainly include asking questions here.

When you want to look at the old file system again, you WILL have to re-map the LUNs, and use mmimportfs (and the file you made with mmexportfs). Then the file system will be part of your cluster again.
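The whole round trip, sketched the same dry-run way (names and paths are placeholders; run() echoes instead of executing):

```shell
# Dry-run sketch of the mmexportfs/mmimportfs round trip. "oldfs" and the
# export file path are placeholders; run() only echoes each command.
FS=oldfs
EXPORT_FILE=/root/oldfs.export
run() { echo "would run: $*"; }

# Retire the file system: unmount it everywhere, then export its
# configuration to a file you keep somewhere safe.
run mmumount "$FS" -a
run mmexportfs "$FS" -o "$EXPORT_FILE"
# ...now unmap the LUNs; the cluster no longer knows about the file system.

# Bring it back later: re-map the LUNs first, then import and mount.
run mmimportfs "$FS" -i "$EXPORT_FILE"
run mmmount "$FS" -a
```

Keep the export file with your backups; without it (and the LUNs), there is nothing to import.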


STANDARD DISCLAIMERS APPLY:

Your mileage may vary. Following this advice could cause nausea, vomiting, sweating, weight gain, sleeplessness, unemployment, weight loss, temporary tooth loss, redundancy, olfactory hallucinations, oozing (generalized), redundancy, uneven tire wear, excessive body odor, scoliosis, halitosis, ringworm, and intermittent acute ignorance. Good luck! :)

Liberty,

-- 
Stephen



> On Jan 21, 2018, at 6:43 PM, Andrew Beattie <abeattie at au1.ibm.com> wrote:
> 
> Harold,
>  
> How big is the old file system? Spectrum Scale is going to throw a bunch of errors if you remove the LUNs from the old file system while attempting to keep the file system "data" on the LUNs. It's likely to cause you all sorts of errors and potential data corruption. It's not something that I would recommend.
>  
> Can you do a backup of the old filesystem so you can do a restore of data if you need to?
>  
> Regards,
> Andrew Beattie
> Software Defined Storage  - IT Specialist
> Phone: 614-2133-7927
> E-mail: abeattie at au1.ibm.com
>  
>  
> ----- Original message -----
> From: Harold Morales <hmorales at optimizeit.co>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: gpfsug-discuss at spectrumscale.org
> Cc:
> Subject: [gpfsug-discuss] Removing LUN from host without unconfiguring GPFS filesystem
> Date: Mon, Jan 22, 2018 7:20 AM
>  
> Hello,
>  
> I have a GPFS cluster with two filesystems. The disks associated to one filesystem reside on an old storage and the other filesystem disks reside on a much more modern storage system. I have successfully moved data from one fs to the other but there are questions about data integrity that still need verification so the old filesystem needs somehow to be preserved.
>  
> My question is: Can I remove the old filesystem LUNs association to the NSDs servers without removing spectrum scale filesystems, so that later on, if necessary, I could associate them back and the old filesystem would be operating as normal? If possible: what would be the general steps to achieve this?
>  
> -----
> Thank you,
>  
> Harold.
>  
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>  
