[gpfsug-discuss] How to shrink GPFS on DSSG's?

Jaime Pinto pinto at scinet.utoronto.ca
Mon Jun 20 20:12:22 BST 2022


Thanks JAB and Luis

I know there are mmdelnsd, mmdeldisk, mmrestripefs, and a few other related mm* commands. However, they are very high-level and work in a bulk, discrete fashion (I mean, considering the number of NSDs we have, each deletion shaves off 4% of the storage at once, and that is too much).

Maybe I should have used the term "very gradual" instead of "gracefully" in my original email.

I'm just looking to do this in a very gradual and controlled fashion: delete (or fail) just a couple of hard drives at a time. In fact, I'd like to specify exactly which hard drives (not volumes) are removed from the pool, and in which order, and to mark which drives should remain read-only (since they will be removed later, no data should be written to them during mmrestripefs), and so on.
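For what it's worth, the closest approximation of this kind of control with the standard tools that I'm aware of works at the NSD level (on a DSS-G these are vdisks, not individual physical drives): suspend a small batch of disks so GPFS stops writing to them, drain them with a restripe, then delete them, and repeat. A rough sketch, assuming a hypothetical file system name gpfs1 and made-up NSD names:

```shell
# Hypothetical names; substitute your own file system and NSDs.
# 1. Suspend the disks to be removed. GPFS places no new data on
#    suspended disks, but existing data on them stays readable.
mmchdisk gpfs1 suspend -d "nsd017;nsd018"

# 2. Migrate data off the suspended disks. This is the long-running
#    step, so run it inside a screen/tmux session.
mmrestripefs gpfs1 -r

# 3. Remove the now-drained disks from the file system.
mmdeldisk gpfs1 "nsd017;nsd018"

# 4. Optionally wipe the NSD descriptors so the drives can be reused.
mmdelnsd "nsd017;nsd018"

# Repeat with the next small batch.
```

This still removes whole NSDs rather than individual spindles, but by keeping the batches small the occupancy shift per round stays correspondingly small.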

I guess I'm looking for an article or a white paper on how to do this under "my absolute control", if that makes sense.

After this exercise I expect the occupancy to be at 68% with the remaining enclosures.

I'll then repurpose the leftover enclosures/drives to run some experiments, and later on grow the file system again.

Thanks
Jaime


On 6/20/2022 14:40:16, Jonathan Buzzard wrote:
> On 20/06/2022 19:04, Jaime Pinto wrote:
>>
>> I'm wondering if it's possible to shrink GPFS gracefully.
> 
> Yes absolutely, been possible since at least version 2.2 and probably older.
> 
>> I've seen some references to that effect on some presentations, however I can't find detailed instructions on any formal IBM documentation on how to do it.
>>
> 
> Use mmdeldisk to remove the NSD(s) from a file system. This will take a while, so I recommend in the *STRONGEST* possible terms running it in a screen or tmux session. By "a while" I mean it could be days or even weeks, depending on how much data needs to be moved about.
> 
> Once you have removed the NSDs from a file system, you can then use mmdelnsd to wipe the NSD descriptors from the disks if necessary.
> 
> 
>> About 3 years ago we launched a new GPFS deployment with 3 DSS-G enclosures (9.6PB usable).
>> Some 1.5 years later we added 2 more enclosures, for a total of 16PB, and only 7PB occupancy so far.
>>
>> Basically I'd like to return to the original 3 enclosures, and still maintain the (8+2p) parity level.
>>
>> Any suggestions?
> 
> Not being sarky but really use Google. Say "gpfs remove nsd from file system" and select the first link!
> 
> 
> JAB.
> 

---
Jaime Pinto - Storage Analyst
SciNet HPC Consortium - www.scinet.utoronto.ca
University of Toronto
661 University Ave. (MaRS), Suite 1140
Toronto, ON, M5G1M1
P: 416-978-2755
C: 416-505-1477



