[gpfsug-discuss] Failure Group

Jan Finnerman Load jan.finnerman at load.se
Fri Apr 1 20:16:11 BST 2016


Ok,
I checked the replication status with mmlsfs; the output is -r=1, -m=1, -R=2, -M=2, which means they don't use replication, although they could activate it. I told them that they could add the new disks to the file system with a different failure group, e.g. 201.
It shouldn't matter much that they coexist with the 4001 disks, since they don't replicate. I'll follow up on Monday.
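
For reference, the replication settings can be read back per file system with mmlsfs; a minimal example, assuming the file system device is named gpfs1 (a placeholder):

    # default/maximum metadata replicas (-m/-M) and data replicas (-r/-R)
    mmlsfs gpfs1 -m -M -r -R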

Best regards
Jan Finnerman
Konsult

Kista Science Tower
164 51 Kista
Mobile: +46 (0)70 631 66 26
Office: +46 (0)8 633 66 00/26

On 1 Apr 2016, at 21:05, Jan-Frode Myklebust <janfrode at tanso.net> wrote:

Hi :-)

I seem to remember failure group 4001 was common at some point, but I can't see why. Maybe it was just the default when no failure group was specified? Have you tried what happens if you use an empty failure group ("::")? Does it default to -1 on v3.4, or maybe to 4001?
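
As a sketch only: in the generic 3.4 disk descriptor layout DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool, leaving the failure group field empty would look like this (disk, server and NSD names are made up, and the customer's Windows descriptors may differ):

    # failure group field left empty between DiskUsage and DesiredName
    mydisk1:nsdserver1::dataAndMetadata::nsd_new1:system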

You might consider changing the failure groups of the existing disks using mmchdisk if you need them to be the same.
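
A minimal sketch, assuming the file system device is gpfs1, the disk is nsd_old1, and 201 is the target failure group (all placeholder values); only the fields being changed need a value in the descriptor:

    # move an existing NSD to failure group 201, keeping its disk usage
    mmchdisk gpfs1 change -d "nsd_old1:::dataAndMetadata:201"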


Pros and cons of using another failure group depend a bit on whether they're using any replication within the file system. If all the other NSDs are in failure group 4001, they can't be doing any replication, so it doesn't matter much. The only side effect I know of is that new block allocations first go round robin over the failure groups and then round robin within each failure group, so unless you have a similar number of disks in the two failure groups the disk load might become a bit uneven.


  -jf


On Fri, Apr 1, 2016 at 1:04 PM, Jan Finnerman Load <jan.finnerman at load.se> wrote:
Hi,

I have a customer running GPFS 3.4.0.11 on Windows on VMware, with VMware Raw Device Mapping. They just ran into an issue with adding some NSD disks.
They claim that their current file system's NSD disks are specified with 4001 as the failure group. This is out of bounds, since the allowed range is -1 to 4000.
So, when they now try to add some new disks with mmcrnsd, with 4001 specified, they get an error message.

Customer runs this command: mmcrnsd -F D:\slask\gpfs\gpfsdisk.txt
[screenshot of the mmcrnsd error message (image attachment)]


His gpfsdisk.txt file looks like this.
[contents of gpfsdisk.txt (image attachment)]
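
The actual contents are only visible in the screenshot; a hypothetical line in the generic 3.4 descriptor format (same layout as sketched above) requesting failure group 4001 might read:

    # hypothetical example; disk, server and NSD names are placeholders
    mydisk1:nsdserver1::dataAndMetadata:4001:nsd_new1:system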


A listing of the current disks shows all of them as belonging to failure group 4001:
[disk listing (image attachment)]
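
Such a listing can be produced with mmlsdisk, which shows each disk's failure group in its own column (gpfs1 is a placeholder device name):

    mmlsdisk gpfs1 -L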

So, why can't he choose failure group 4001 when the existing disks are members of that group?
If he creates the disks in another failure group, what are the pros and cons of that? I guess issues with replication not working as expected...

Brgds
///Jan

Jan Finnerman
Senior Technical Consultant

Kista Science Tower
164 51 Kista
Mobile: +46 (0)70 631 66 26
Office: +46 (0)8 633 66 00/26
jan.finnerman at load.se


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss