[gpfsug-discuss] ESS GL6

Jan-Frode Myklebust janfrode at tanso.net
Mon Jun 20 17:29:15 BST 2016


"mmlsrecoverygroup $name -L" will tell you how much raw capacity is left in
a recoverygroup.

You will then need to create a vdisk stanza file where you specify the
capacity, blocksize, RAID code, etc. for the vdisk you want (see "man
mmcrvdisk"). Then run "mmcrvdisk" against that stanza file to create the
vdisk, and "mmcrnsd" against the same file to create the NSDs. From then
on it's standard GPFS.



-jf
On Mon, 20 Jun 2016 at 17:25, Olaf Weiser <olaf.weiser at de.ibm.com> wrote:

> Hi Damir,
> mmlsrecoverygroup   --> will show your RGs
>
> mmlsrecoverygroup RG -L .. will provide capacity information
>
> or .. you can use the GUI
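>
> e.g. something like this (the RG names are just placeholders) - the
> free space reported per declustered array in the -L output is the raw
> capacity you can still carve vdisks out of:
>
>   mmlsrecoverygroup
>   mmlsrecoverygroup rg_left -L
>   mmlsrecoverygroup rg_right -L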
>
> with ESS / GNR, there is no longer any need to create more than one
> vdisk (= NSD) per RG for a pool
>
> a practical approach/example for you:
> a file system consists of
> 1 vdisk (NSD) for metadata, RAID code 4WR (4-way replication), BS 1M,
> in RG "left"
> 1 vdisk (NSD) for metadata, RAID code 4WR (4-way replication), BS 1M,
> in RG "right"
> 1 vdisk (NSD) for data, RAID code 8+3p, BS 1M...16M (depends on your
> data/workload), in RG "left"
> 1 vdisk (NSD) for data, RAID code 8+3p, BS 1M...16M (depends on your
> data/workload), in RG "right"
>
> so 4 NSDs provide everything you need to serve a file system.
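>
> as a hypothetical sketch, the stanzas for the two "left" vdisks could
> look roughly like this - RG/DA/pool names and sizes are placeholders,
> the "right" ones are the same apart from rg= and failureGroup=, and the
> keywords should be double-checked against "man mmcrvdisk" on your level:
>
>   %vdisk: vdiskName=fs1_meta_left
>     rg=left
>     da=DA1
>     blocksize=1m
>     size=1t
>     raidCode=4WayReplication
>     diskUsage=metadataOnly
>     failureGroup=1
>     pool=system
>
>   %vdisk: vdiskName=fs1_data_left
>     rg=left
>     da=DA1
>     blocksize=8m
>     size=200t
>     raidCode=8+3p
>     diskUsage=dataOnly
>     failureGroup=1
>     pool=data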
>
>
> the size of the vdisks can be up to half of the capacity of your RG
>
> please note: if you come from an existing environment and the file
> system should be migrated to ESS (online), you might hit some
> limitations, such as:
>  - blocksize (cannot be changed on an existing file system)
>  - disk size, depending on the existing storage pools/disk sizes
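>
> a quick way to see what you'd need to match (fs1 is a placeholder for
> your existing file system's device name):
>
>   mmlsfs fs1 -B     # block size - fixed at file system creation
>   mmdf fs1          # current NSD sizes and free space per pool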
>
>
> have fun
> cheers
>
> Mit freundlichen Grüßen / Kind regards
>
>
> Olaf Weiser
>
> EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage
> Platform
>
> -------------------------------------------------------------------------------------------------------------------------------------------
> IBM Deutschland
> IBM Allee 1
> 71139 Ehningen
> Phone: +49-170-579-44-66
> E-Mail: olaf.weiser at de.ibm.com
>
> -------------------------------------------------------------------------------------------------------------------------------------------
> IBM Deutschland GmbH / Chairman of the Supervisory Board: Martin Jetter
> Management: Martina Koederitz (Chair), Susanne Peter, Norbert
> Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner
> Registered office: Ehningen / Register court: Amtsgericht Stuttgart,
> HRB 14562 / WEEE Reg. No. DE 99369940
>
>
>
> From:        Damir Krstic <damir.krstic at gmail.com>
> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date:        06/20/2016 05:10 PM
> Subject:        [gpfsug-discuss] ESS GL6
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> A couple of questions regarding Spectrum Scale 4.2 and ESS. We recently
> got our ESS delivered and are putting it into production this week. Prior
> to the ESS we ran GPFS 3.5 and IBM DCS3700 storage arrays.
>
> My question about ESS and Spectrum Scale has to do with querying available
> free space and adding capacity to an existing file system.
>
> In the old days of GPFS 3.5 I would create LUNs on the 3700, zone them to
> the appropriate hosts, and then see them as multipath devices on the NSD
> servers. After that, I would create NSDs and add them to the file system.
>
> With the ESS, however, I don't think the process is quite the same. The
> IBM tech who was here installing the system created all the "LUNs", or
> their equivalent, in the ESS system. How do I query what space is available
> to add to the existing file systems, and how do you actually add that space?
>
> I am reading the ESS Redbook, but the answers are not obvious.
>
> Thanks,
> Damir
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>

