[gpfsug-discuss] How to use RHEL 7 mdadm NVMe devices with Spectrum Scale 4.2.3.10?

Greg.Lehmann at csiro.au
Fri Nov 16 03:46:01 GMT 2018


Hi Lance,
	We are doing it with BeeGFS (mdadm and NVMe drives in the same HW). For GPFS, have you updated the nsddevices sample script to look at the mdadm devices and put it in /var/mmfs/etc?
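
Something like this rough, untested sketch is what I mean, modelled on the comments in /usr/lpp/mmfs/samples/nsddevices.sample (the md glob below is an assumption; adjust it for your md101/md102):

#!/bin/ksh
# Sketch of /var/mmfs/etc/nsddevices: report mdadm arrays to GPFS
# device discovery, which otherwise ignores them.

osName=$(/bin/uname -s)

if [[ $osName = Linux ]]
then
  # Emit one "deviceName deviceType" line per array. Names are
  # relative to /dev; "generic" is the devtype for plain Linux
  # block devices.
  for dev in /dev/md[0-9]*
  do
    [[ -b $dev ]] && echo "${dev#/dev/} generic"
  done
fi

# Per the sample script's comments: return 0 to bypass the built-in
# GPFS disk discovery, or return 1 to continue with it (so the sd*
# and dm-* devices on the node are still found).
return 1

Make it executable (chmod +x), then check with /usr/lpp/mmfs/bin/mmdevdiscover or mmlsnsd -X that the md devices now show up with a proper Devtype.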

BTW I'm interested to see how you go with that configuration.

Cheers,

Greg

-----Original Message-----
From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Lance Nakata
Sent: Friday, November 16, 2018 1:07 PM
To: gpfsug-discuss at spectrumscale.org
Cc: Jon L. Bergman <jonl at SLAC.STANFORD.EDU>
Subject: [gpfsug-discuss] How to use RHEL 7 mdadm NVMe devices with Spectrum Scale 4.2.3.10?

We have a Dell R740xd with 24 x 1TB NVMe SSDs in the internal slots.  Since PERC RAID cards don't see these devices, we are using mdadm software RAID to build NSDs.  We took 12 NVMe SSDs and used mdadm to create a 10 + 1 + 1 hot spare RAID 5 stripe named /dev/md101.  We took the other 12 NVMe SSDs and created a similar /dev/md102.
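
For reference, the creation commands looked roughly like this (illustrative only; the exact device names and options may differ):

# 11 RAID devices (10 data + 1 parity) plus 1 hot spare from 12
# NVMe SSDs; device names here are illustrative.
mdadm --create /dev/md101 --level=5 --raid-devices=11 --spare-devices=1 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1  /dev/nvme3n1 \
      /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1  /dev/nvme7n1 \
      /dev/nvme8n1 /dev/nvme9n1 /dev/nvme10n1 /dev/nvme11n1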

mmcrnsd worked without errors.  The problem is that Spectrum Scale does not see the /dev/md10x devices as proper NSDs; the Device and Devtype columns are blank:

host2:~> sudo mmlsnsd -X

 Disk name    NSD volume ID      Device         Devtype  Node name                Remarks
---------------------------------------------------------------------------------------------------
 nsd0001      864FD12858A36E79   /dev/sdb       generic  host1.slac.stanford.edu server node
 nsd0002      864FD12858A36E7A   /dev/sdc       generic  host1.slac.stanford.edu server node
 nsd0021      864FD1285956B0A7   /dev/sdd       generic  host1.slac.stanford.edu server node
 nsd0251a     864FD1545BD0CCDF   /dev/dm-9      dmm      host2.slac.stanford.edu server node
 nsd0251b     864FD1545BD0CCE0   /dev/dm-11     dmm      host2.slac.stanford.edu server node
 nsd0252a     864FD1545BD0CCE1   /dev/dm-10     dmm      host2.slac.stanford.edu server node
 nsd0252b     864FD1545BD0CCE2   /dev/dm-8      dmm      host2.slac.stanford.edu server node
 nsd02nvme1   864FD1545BEC5D72   -              -        host2.slac.stanford.edu (not found) server node
 nsd02nvme2   864FD1545BEC5D73   -              -        host2.slac.stanford.edu (not found) server node
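
For reference, the md-based NSDs were defined with a stanza file along these lines (illustrative; the NSD names, server, and usage values here are assumptions):

# Stanza file and mmcrnsd invocation of the kind described above.
cat > /tmp/nvme.stanza <<'EOF'
%nsd: device=/dev/md101 nsd=nsd02nvme1 servers=host2 usage=dataAndMetadata
%nsd: device=/dev/md102 nsd=nsd02nvme2 servers=host2 usage=dataAndMetadata
EOF
mmcrnsd -F /tmp/nvme.stanza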

I know we can access the internal NVMe devices by their individual /dev/nvmeXnY paths, but non-ESS-based Spectrum Scale does not have built-in RAID functionality.  Hence, the only option in that scenario is replication, which is expensive and won't give us enough usable space.

Software Environment:
RHEL 7.6 with kernel 3.10.0-862.14.4.el7.x86_64
Spectrum Scale 4.2.3.10

Spectrum Scale Support has implied we can't use mdadm for NVMe devices.  Is that really true?  Does anyone use an mdadm-based NVMe config?  If so, did you have to do some kind of customization to get it working?

Thank you,

Lance Nakata
SLAC National Accelerator Laboratory
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


