You had it here:

[root@server ~]# mmlsrecoverygroup BB1RGL -L

                     declustered
 recovery group       arrays     vdisks  pdisks  format version
 -----------------  -----------  ------  ------  --------------
 BB1RGL                       3      18     119  4.2.0.1

 declustered    needs                             replace                  scrub     background activity
    array      service  vdisks  pdisks  spares  threshold  free space  duration  task   progress  priority
 -----------  -------  ------  ------  ------  ---------  ----------  --------  -------------------------
 LOG            no          1       3     0,0          1     558 GiB   14 days   scrub       51%  low
 DA1            no         11      58    2,31          2      12 GiB   14 days   scrub       78%  low
 DA2            no          6      58    2,31          2    4096 MiB   14 days   scrub       10%  low

That is 12 GiB free in DA1 and 4096 MiB free in DA2, but you will effectively get less once the raidCode overhead of the vdisk is taken into account. The best way to use it is simply not to specify a size for the vdisk; the maximum possible size will then be used (a rough, untested stanza sketch is at the very bottom of this post).

 -jf

On Sun, 9 Jul 2017 at 14:26, atmane khiredine <a.khiredine@meteo.dz> wrote:

thank you very much for replying. I cannot find the free space

Here is the output of mmlsrecoverygroup

[root@server1 ~]# mmlsrecoverygroup

                      declustered
                      arrays with
 recovery group         vdisks    vdisks  servers
 ------------------  -----------  ------  -------
 BB1RGL                         3      18  server1,server2
 BB1RGR                         3      18  server2,server1

--------------------------------------------------------------
[root@server ~]# mmlsrecoverygroup BB1RGL -L

                     declustered
 recovery group       arrays     vdisks  pdisks  format version
 -----------------  -----------  ------  ------  --------------
 BB1RGL                       3      18     119  4.2.0.1

 declustered    needs                             replace                  scrub     background activity
    array      service  vdisks  pdisks  spares  threshold  free space  duration  task   progress  priority
 -----------  -------  ------  ------  ------  ---------  ----------  --------  -------------------------
 LOG            no          1       3     0,0          1     558 GiB   14 days   scrub       51%  low
 DA1            no         11      58    2,31          2      12 GiB   14 days   scrub       78%  low
 DA2            no          6      58    2,31          2    4096 MiB   14 days   scrub       10%  low

                                         declustered                          checksum
 vdisk               RAID code           array        vdisk size  block size  granularity  state  remarks
 ------------------  ------------------  -----------  ----------  ----------  -----------  -----  -------
 gss0_logtip         3WayReplication     LOG             128 MiB       1 MiB  512          ok     logTip
 gss0_loghome        4WayReplication     DA1              40 GiB       1 MiB  512          ok     log
 BB1RGL_GPFS4_META1  4WayReplication     DA1             451 GiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS4_DATA1  8+2p                DA1            5133 GiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS1_META1  4WayReplication     DA1             451 GiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS1_DATA1  8+2p                DA1              12 TiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS3_META1  4WayReplication     DA1             451 GiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS3_DATA1  8+2p                DA1              12 TiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS2_META1  4WayReplication     DA1             451 GiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS2_DATA1  8+2p                DA1              13 TiB       2 MiB  32 KiB       ok
 BB1RGL_GPFS2_META2  4WayReplication     DA2             451 GiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS2_DATA2  8+2p                DA2              13 TiB       2 MiB  32 KiB       ok
 BB1RGL_GPFS1_META2  4WayReplication     DA2             451 GiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS1_DATA2  8+2p                DA2              12 TiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS5_META1  4WayReplication     DA1             750 GiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS5_DATA1  8+2p                DA1              70 TiB      16 MiB  32 KiB       ok
 BB1RGL_GPFS5_META2  4WayReplication     DA2             750 GiB       1 MiB  32 KiB       ok
 BB1RGL_GPFS5_DATA2  8+2p                DA2              90 TiB      16 MiB  32 KiB       ok

 config data         declustered array   VCD spares     actual rebuild spare space         remarks
 ------------------  ------------------  -------------  ---------------------------------  ----------------
 rebuild space       DA1                 31             34 pdisk
 rebuild space       DA2                 31             35 pdisk

 config data         max disk group fault tolerance     actual disk group fault tolerance  remarks
 ------------------  ---------------------------------  ---------------------------------  ----------------
 rg descriptor       1 enclosure + 1 drawer             1 enclosure + 1 drawer             limiting fault tolerance
 system index        2 enclosure                        1 enclosure + 1 drawer             limited by rg descriptor

 vdisk               max disk group fault tolerance     actual disk group fault tolerance  remarks
 ------------------  ---------------------------------  ---------------------------------  ----------------
 gss0_logtip         2 enclosure                        1 enclosure + 1 drawer             limited by rg descriptor
 gss0_loghome        1 enclosure + 1 drawer             1 enclosure + 1 drawer
 BB1RGL_GPFS4_META1  1 enclosure + 1 drawer             1 enclosure + 1 drawer
 BB1RGL_GPFS4_DATA1  2 drawer                           2 drawer
 BB1RGL_GPFS1_META1  1 enclosure + 1 drawer             1 enclosure + 1 drawer
 BB1RGL_GPFS1_DATA1  2 drawer                           2 drawer
 BB1RGL_GPFS3_META1  1 enclosure + 1 drawer             1 enclosure + 1 drawer
 BB1RGL_GPFS3_DATA1  2 drawer                           2 drawer
 BB1RGL_GPFS2_META1  1 enclosure + 1 drawer             1 enclosure + 1 drawer
 BB1RGL_GPFS2_DATA1  2 drawer                           2 drawer
 BB1RGL_GPFS2_META2  3 enclosure                        1 enclosure + 1 drawer             limited by rg descriptor
 BB1RGL_GPFS2_DATA2  2 drawer                           2 drawer
 BB1RGL_GPFS1_META2  3 enclosure                        1 enclosure + 1 drawer             limited by rg descriptor
 BB1RGL_GPFS1_DATA2  2 drawer                           2 drawer
 BB1RGL_GPFS5_META1  1 enclosure + 1 drawer             1 enclosure + 1 drawer
 BB1RGL_GPFS5_DATA1  2 drawer                           2 drawer
 BB1RGL_GPFS5_META2  3 enclosure                        1 enclosure + 1 drawer             limited by rg descriptor
 BB1RGL_GPFS5_DATA2  2 drawer                           2 drawer

 active recovery group server                     servers
 -----------------------------------------------  -------
 server1                                          server1,server2

Atmane Khiredine
HPC System Administrator | Office National de la Météorologie
Tel: +213 21 50 73 93 # 303 | Fax: +213 21 50 79 40 | E-mail: a.khiredine@meteo.dz
________________________________
From: Laurence Horrocks-Barlow [laurence@qsplace.co.uk]
Sent: Sunday, 9 July 2017 09:58
To: gpfsug main discussion list; atmane khiredine; gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] get free space in GSS

You can check the recovery groups to see if there is any remaining space.

I don't have access to my test system to confirm the syntax; however, if memory serves:

Run mmlsrecoverygroup to get a list of all the recovery groups, then:

mmlsrecoverygroup <YOURRECOVERYGROUP> -L

This will list all your declustered arrays and their free space.
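
For example, to dump that listing for every recovery group in one pass, something along these lines should work (untested sketch; the awk filter that skips the table headers is a guess):

for rg in $(mmlsrecoverygroup | awk '$2 ~ /^[0-9]+$/ {print $1}'); do
    echo "=== $rg ==="            # which recovery group the next block belongs to
    mmlsrecoverygroup "$rg" -L    # free space per declustered array is in the second table
done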

There might be another method; however, this way has always worked well for me.

-- Lauz


On 9 July 2017 09:00:07 BST, Atmane <a.khiredine@meteo.dz> wrote:

Dear all,

My name is Khiredine Atmane and I am an HPC system administrator at the National Office of Meteorology, Algeria. We have a GSS24 running gss 2.5.10.3-3b and gpfs 4.2.0.3.

GSS configuration: 4 enclosures, 6 SSDs, 1 empty slot, 239 disks total, 0 NVRAM partitions

disks = 3 TB
SSD = 200 GB
df -h

Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/gpfs1    49T   18T    31T   38%  /gpfs1
/dev/gpfs2    53T   13T    40T   25%  /gpfs2
/dev/gpfs3    25T  4.9T    20T   21%  /gpfs3
/dev/gpfs4    11T  133M    11T    1%  /gpfs4
/dev/gpfs5   323T   34T   290T   11%  /gpfs5

Total is 461 TB
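(that is 49 + 53 + 25 + 11 + 323, summing the df sizes above)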

I think we have more space than this. Could anyone recommend how to
troubleshoot and find out how much free space is left in the GSS?
How do I find the available space?
Thank you!

Atmane


--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
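
Regarding jf's note at the top about not specifying a vdisk size: from memory, and untested, an mmcrvdisk stanza along these lines, with the size= attribute simply left out, should create a vdisk that takes all remaining space in the declustered array. The vdisk name, block size, RAID code, disk usage and pool below are made-up examples, not recommendations; check mmcrvdisk in the GSS/GNR documentation for your code level before using it.

%vdisk: vdiskName=BB1RGL_GPFS4_DATA2
  rg=BB1RGL
  da=DA2
  blocksize=1m
  raidCode=8+2p
  diskUsage=dataOnly
  pool=data

Save that as, say, vdisk.stanza and run:

mmcrvdisk -F vdisk.stanza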