<div class="socmaildefaultfont" dir="ltr" style="font-family:Arial, Helvetica, sans-serif;font-size:10.5pt" ><div dir="ltr" >Hi Sven,</div>
<div dir="ltr" > </div>
<div dir="ltr" >the REST API v2 provides similar information to what v1 provided. See an example from my system below:<br> </div>
<div dir="ltr" >/scalemgmt/v2/filesystems/gpfs0/filesets?fields=:all:</div>
<div dir="ltr" >[...]</div>
<div dir="ltr" > "filesetName" : "fset1",<br> "filesystemName" : "gpfs0",<br> "usage" : {<br> "allocatedInodes" : 51232,<br> "inodeSpaceFreeInodes" : 51231,<br> "inodeSpaceUsedInodes" : 1,<br> "usedBytes" : 0,<br> "usedInodes" : 1<br> }<br> } ],</div>
<div dir="ltr" > </div>
<div dir="ltr" > </div>
<div dir="ltr" ><strong>In 5.0.0 there are two sources for the inode information: the first one is mmlsfileset and the second one is the data collected by Zimon.</strong> Depending on the availability of the data either one is used.<br><br>To debug what's happening on your system you can <strong>execute the FILESETS task on the GUI node</strong> manually with the --debug flag. The output is then showing the exact queries that are used to retrieve the data:<br> </div>
<div dir="ltr" ><strong>[root@os-11 ~]# /usr/lpp/mmfs/gui/cli/runtask FILESETS --debug</strong><br>debug: locale=en_US<br>debug: Running 'mmlsfileset 'gpfs0' -Y ' on node localhost<br>debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=gpfs0 group_by gpfs_fset_name last 13 bucket_size 300'<br>debug: Running 'mmlsfileset 'objfs' -Y ' on node localhost<br>debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=objfs group_by gpfs_fset_name last 13 bucket_size 300'<br>EFSSG1000I The command completed successfully.</div>
<div dir="ltr" > </div>
<div dir="ltr" ><strong>As a start I suggest running the displayed Zimon queries manually to see what's returned there, e.g.:</strong></div>
<div dir="ltr" > </div>
<div dir="ltr" ><em>(Removed -j for better readability)</em></div>
<div dir="ltr" ><br><strong>[root@os-11 ~]# echo "get -a metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=gpfs0 group_by gpfs_fset_name last 13 bucket_size 300" | /opt/IBM/zimon/zc 127.0.0.1</strong><br>1: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|.audit_log|gpfs_fset_maxInodes<br>2: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|fset1|gpfs_fset_maxInodes<br>3: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|root|gpfs_fset_maxInodes<br>4: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|.audit_log|gpfs_fset_freeInodes<br>5: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|fset1|gpfs_fset_freeInodes<br>6: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|root|gpfs_fset_freeInodes<br>7: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|.audit_log|gpfs_fset_allocInodes<br>8: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|fset1|gpfs_fset_allocInodes<br>9: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|root|gpfs_fset_allocInodes<br>Row Timestamp max(gpfs_fset_maxInodes) max(gpfs_fset_maxInodes) max(gpfs_fset_maxInodes) max(gpfs_fset_freeInodes) max(gpfs_fset_freeInodes) max(gpfs_fset_freeInodes) max(gpfs_fset_allocInodes) max(gpfs_fset_allocInodes) max(gpfs_fset_allocInodes) <br>1 2018-09-05 10:10:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>2 2018-09-05 10:15:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>3 2018-09-05 10:20:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>4 2018-09-05 10:25:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>5 2018-09-05 10:30:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>6 2018-09-05 10:35:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>7 2018-09-05 10:40:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>8 2018-09-05 10:45:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>9 2018-09-05 10:50:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>10 2018-09-05 
10:55:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>11 2018-09-05 11:00:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>12 2018-09-05 11:05:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>13 2018-09-05 11:10:00 100000 620640 65792 65795 51231 61749 65824 51232 65792<br>.</div>
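<div dir="ltr" >For post-processing that plain-text output, the numbered legend lines map each data column to a fileset and metric. A rough sketch of that mapping in Python, assuming the "N: host|GPFSFileset|fs|fileset|metric" legend format shown above (the sample lines here are copied from my output):</div>

```python
# Build a column-number -> (fileset, metric) map from Zimon legend lines.
legend_lines = [
    "1: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|.audit_log|gpfs_fset_maxInodes",
    "2: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|fset1|gpfs_fset_maxInodes",
    "5: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|fset1|gpfs_fset_freeInodes",
    "8: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|fset1|gpfs_fset_allocInodes",
]

columns = {}
for line in legend_lines:
    num, key = line.split(": ", 1)
    # Key format: host|group|filesystem|fileset|metric
    host, group, fs, fileset, metric = key.split("|")
    columns[int(num)] = (fileset, metric)

print(columns[8])  # ('fset1', 'gpfs_fset_allocInodes')
```

<div dir="ltr" >With that map you can tell which data column in the rows above holds gpfs_fset_allocInodes for a given fileset, and whether values are arriving there at all.</div>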
<div dir="ltr" ><div class="socmaildefaultfont" dir="ltr" style="font-family:Arial, Helvetica, sans-serif;font-size:10.5pt" ><div class="socmaildefaultfont" dir="ltr" style="font-family:Arial, Helvetica, sans-serif;font-size:10.5pt" ><div class="socmaildefaultfont" dir="ltr" style="font-family:Arial, Helvetica, sans-serif;font-size:10.5pt" ><div class="socmaildefaultfont" dir="ltr" style="font-family:Arial, Helvetica, sans-serif;font-size:10.5pt" ><div class="socmaildefaultfont" dir="ltr" style="font-family:Arial;font-size:10.5pt" ><div dir="ltr" ><br>Mit freundlichen Grüßen / Kind regards<br><br>Andreas Koeninger<br>Scrum Master and Software Developer / Spectrum Scale GUI and REST API<br>IBM Systems &Technology Group, Integrated Systems Development / M069<br>-------------------------------------------------------------------------------------------------------------------------------------------<br>IBM Deutschland<br>Am Weiher 24<br>65451 Kelsterbach<br>Phone: +49-7034-643-0867<br>Mobile: +49-7034-643-0867<br>E-Mail: andreas.koeninger@de.ibm.com<br>-------------------------------------------------------------------------------------------------------------------------------------------<br>IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz<br>Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294</div></div></div></div></div></div></div>
<div dir="ltr" > </div>
<div dir="ltr" > </div>
<blockquote data-history-content-modified="1" dir="ltr" style="border-left:solid #aaaaaa 2px; margin-left:5px; padding-left:5px; direction:ltr; margin-right:0px" >----- Original message -----<br>From: Sven Siebler <sven.siebler@urz.uni-heidelberg.de><br>Sent by: gpfsug-discuss-bounces@spectrumscale.org<br>To: gpfsug-discuss@spectrumscale.org<br>Cc:<br>Subject: [gpfsug-discuss] Getting inode information with REST API V2<br>Date: Wed, Sep 5, 2018 9:37 AM<br>
<div><font size="2" face="Default Monospace,Courier New,Courier,monospace" >Hi all,<br><br>i just started to use the REST API for our monitoring and my question is<br>concerning about how can i get information about allocated inodes with<br>REST API V2 ?<br><br>Up to now i use "mmlsfileset" directly, which gives me information on<br>maximum and allocated inodes (mmdf for total/free/allocated inodes of<br>the filesystem)<br><br>If i use the REST API V2 with<br>"filesystems/<filesystem_name>/filesets?fields=:all:", i get all<br>information except the allocated inodes.<br><br>On the documentation<br>(<a href="https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adm_apiv2getfilesystemfilesets.htm" target="_blank" >https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adm_apiv2getfilesystemfilesets.htm</a>)<br>i found:<br><br> > "inodeSpace": "Inodes"<br> > The number of inodes that are allocated for use by the fileset.<br><br>but for me the inodeSpace looks more like the ID of the inodespace,<br>instead of the number of allocated inodes.<br><br>In the documentation example the API can give output like this:<br><br>"filesetName" : "root",<br> "filesystemName" : "gpfs0",<br> "usage" : {<br> "allocatedInodes" : 100000,<br> "inodeSpaceFreeInodes" : 95962,<br> "inodeSpaceUsedInodes" : 4038,<br> "usedBytes" : 0,<br> "usedInodes" : 4038<br>}<br><br>but i could not retrieve such usage-fields in my queries.<br><br>The only way for me to get inode information with REST is the usage of V1:<br><br><a href="https://REST_API_host:port/scalemgmt/v1/filesets?filesystemName=FileSystemName" target="_blank" >https://REST_API_host:port/scalemgmt/v1/filesets?filesystemName=FileSystemName</a><br><br>which gives exact the information of "mmlsfileset".<br><br>But because V1 is deprecated i want to use V2 for rewriting our tools...<br><br>Thanks,<br><br>Sven<br><br><br>--<br>Sven Siebler<br>Servicebereich 
Future IT - Research & Education (FIRE)<br><br>Tel. +49 6221 54 20032<br>sven.siebler@urz.uni-heidelberg.de<br>Universität Heidelberg<br>Universitätsrechenzentrum (URZ)<br>Im Neuenheimer Feld 293, D-69120 Heidelberg<br><a href="http://www.urz.uni-heidelberg.de" target="_blank" >http://www.urz.uni-heidelberg.de</a></font><br><br> </div>
<div id="MIMEAttachInfoDiv" style="display:none" title="octet-stream|smime.p7s" > </div>
<div><font size="2" face="Default Monospace,Courier New,Courier,monospace" >_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at spectrumscale.org<br><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" target="_blank" >http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a></font></div></blockquote>
<div dir="ltr" > </div></div><BR>