[gpfsug-discuss] Pool layoutMap option changes following GPFS upgrades

Caron, Paul Paul.Caron at sig.com
Mon Mar 26 16:43:24 BST 2018


By the way, the command to check the layoutMap option for your pools is "mmlspool <fs_device> all -L".  Has anyone else noticed this option changing during your GPFS software upgrades?
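In case it helps with comparing, here is a minimal way to capture just the pool names and layoutMap values so they can be diffed across an upgrade (this is only a sketch; it assumes a device name of gpfs0 and /tmp output paths, so substitute your own):

  # Save the pool name and layoutMap for every pool before the upgrade
  mmlspool gpfs0 all -L | grep -E 'name|layoutMap' > /tmp/mmlspool-before.txt
  # ...upgrade GPFS...
  mmlspool gpfs0 all -L | grep -E 'name|layoutMap' > /tmp/mmlspool-after.txt
  # Any change in layoutMap shows up here
  diff /tmp/mmlspool-before.txt /tmp/mmlspool-after.txt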

Here's how our mmlspool output looked for our lab/test environment under GPFS Version 3.5.0-21:

Pool:
  name                   = system
  poolID                 = 0
  blockSize              = 1024 KB
  usage                  = metadataOnly
  maxDiskSize            = 16 TB
  layoutMap              = cluster
  allowWriteAffinity     = no
  writeAffinityDepth     = 0
  blockGroupFactor       = 1

Pool:
  name                   = writecache
  poolID                 = 65537
  blockSize              = 1024 KB
  usage                  = dataOnly
  maxDiskSize            = 8.0 TB
  layoutMap              = scatter
  allowWriteAffinity     = no
  writeAffinityDepth     = 0
  blockGroupFactor       = 1

Pool:
  name                   = data
  poolID                 = 65538
  blockSize              = 1024 KB
  usage                  = dataOnly
  maxDiskSize            = 8.2 TB
  layoutMap              = scatter
  allowWriteAffinity     = no
  writeAffinityDepth     = 0
  blockGroupFactor       = 1

Here's the mmlspool output immediately after the upgrade to 4.1.1-17:

Pool:
  name                   = system
  poolID                 = 0
  blockSize              = 1024 KB
  usage                  = metadataOnly
  maxDiskSize            = 16 TB
  layoutMap              = cluster
  allowWriteAffinity     = no
  writeAffinityDepth     = 0
  blockGroupFactor       = 1

Pool:
  name                   = writecache
  poolID                 = 65537
  blockSize              = 1024 KB
  usage                  = dataOnly
  maxDiskSize            = 8.0 TB
  layoutMap              = cluster
  allowWriteAffinity     = no
  writeAffinityDepth     = 0
  blockGroupFactor       = 1

Pool:
  name                   = data
  poolID                 = 65538
  blockSize              = 1024 KB
  usage                  = dataOnly
  maxDiskSize            = 8.2 TB
  layoutMap              = cluster
  allowWriteAffinity     = no
  writeAffinityDepth     = 0
  blockGroupFactor       = 1

We also determined the following:

*         The layoutMap option changes back to "scatter" if we revert to 3.5.0.21, but only after the last node is downgraded (see the version check sketched below)

*         Restarting GPFS under 4.1.1-17 (via mmshutdown and mmstartup) has no effect on layoutMap in the lab (as expected).  So, a simple restart doesn't fix the problem.
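Since the change only takes effect once the last node is running the new (or old) code level, a rough way to confirm where each node stands during these tests (run on each node; adjust for your own environment):

  # GPFS daemon build/version running on this node
  mmdiag --version
  # Cluster-wide committed release level (only moves after "mmchconfig release=LATEST")
  mmlsconfig minReleaseLevel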

Our production and lab deployments are running SLES 11 SP3 (kernel 3.0.101-0.47.71-default).

Thanks,

Paul C.
SIG

From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Caron, Paul
Sent: Friday, March 23, 2018 4:10 PM
To: gpfsug-discuss at spectrumscale.org
Subject: [gpfsug-discuss] Pool layoutMap option changes following GPFS upgrades

Hi,

Has anyone run into a situation where the layoutMap option for a pool changes from "scatter" to "cluster" following a GPFS software upgrade?  We recently upgraded a file system from 3.5.0.21 to 4.1.1.17, and finally to 4.2.3.6.  We noticed that the layoutMap option for two of our pools changed following the upgrades.  We didn't recreate the file system or any of the pools.  Further lab testing has revealed that the layoutMap change actually occurred during the first upgrade (to 4.1.1.17) and was simply carried forward to 4.2.3.6.  We have a PMR open with IBM on this problem, but they have told us that layoutMap changes are impossible for existing pools and that a software upgrade couldn't do this.  I sent the results of my lab testing today, so I'm hoping to get a better response.

We would rather not have to recreate all the pools, but it is starting to look like that may be the only option to fix this.  Also, it's unclear if this could happen again during future upgrades.

Here's some additional background.

*         The file system's "-j" (block allocation type) setting is "cluster" (a quick check for both settings is sketched after this list)

*         We have a pretty small cluster; just 13 nodes

*         When reproducing the problem, we noted that the layoutMap option didn't change until the final node was upgraded

*         The layoutMap option changed before running the "mmchconfig release=LATEST" and "mmchfs <fs> -V full" commands, so those don't seem to be related to the problem
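For reference, here is a minimal way to see the file system's block allocation type next to the per-pool layoutMap values (again assuming a device name of gpfs0; substitute your own):

  # File system "-j" setting (block allocation type: cluster or scatter)
  mmlsfs gpfs0 -j
  # layoutMap reported for each pool
  mmlspool gpfs0 all -L | grep -E 'name|layoutMap'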

Thanks,

Paul C.
SIG


________________________________

IMPORTANT: The information contained in this email and/or its attachments is confidential. If you are not the intended recipient, please notify the sender immediately by reply and immediately delete this message and all its attachments. Any review, use, reproduction, disclosure or dissemination of this message or any attachment by an unintended recipient is strictly prohibited. Neither this message nor any attachment is intended as or should be construed as an offer, solicitation or recommendation to buy or sell any security or other financial instrument. Neither the sender, his or her employer nor any of their respective affiliates makes any warranties as to the completeness or accuracy of any of the information contained herein or that this message or any of its attachments is free of viruses.
