<font size=2 face="Arial">Hi,</font><br><br><font size=2 face="Arial">>>I was wondering if there are any
good performance sizing guides for a spectrum scale shared nothing architecture
(FPO)?<br>>> I don't have any production experience using spectrum scale in
a "shared nothing configuration " and was hoping for bandwidth
/ throughput sizing guidance. <br></font><br><font size=2 face="Arial">Please ensure that all the recommended FPO
settings (e.g. allowWriteAffinity=yes in the FPO storage pool, readReplicaPolicy=local,
restripeOnDiskFailure=yes) are set properly. Please find the FPO
Best practices/tunings, in the links below: </font><br><a href="https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Big%20Data%20Best%20practices"><font size=2 color=blue face="Arial">https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Big%20Data%20Best%20practices</font></a><br><a href="https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/fa32927c-e904-49cc-a4cc-870bcc8e307c/page/ab5c2792-feef-4a3a-a21b-d22c6f5d728a/attachment/80d5c300-7b39-4d6e-9596-84934fcc4638/media/Deploying_a_big_data_solution_using_IBM_Spectrum_Scale_v1.7.5.pdf"><font size=2 color=blue face="Arial">https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/fa32927c-e904-49cc-a4cc-870bcc8e307c/page/ab5c2792-feef-4a3a-a21b-d22c6f5d728a/attachment/80d5c300-7b39-4d6e-9596-84934fcc4638/media/Deploying_a_big_data_solution_using_IBM_Spectrum_Scale_v1.7.5.pdf</font></a><br><br><font size=2 face="Arial">>> For example, each node might consist
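
For reference, here is a minimal sketch of how those settings might be
applied (the pool name, block size, and tuning values below are
illustrative placeholders; verify them against the best-practices guides
above):

    # Cluster-wide FPO-related settings, applied with mmchconfig:
    mmchconfig readReplicaPolicy=local
    mmchconfig restripeOnDiskFailure=yes

    # allowWriteAffinity is a storage-pool property. It is set through a
    # %pool stanza in the NSD stanza file passed to mmcrnsd/mmcrfs, e.g.:
    #
    #   %pool: pool=datapool blockSize=2M layoutMap=cluster allowWriteAffinity=yes writeAffinityDepth=1 blockGroupFactor=128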
>> For example, each node might consist of 24x storage drives (locally
>> attached JBOD, no RAID array).
>> Given a particular node configuration I want to be in a position to
>> calculate the maximum bandwidth / throughput.

With FPO, GPFS metadata replication (-m) and data replication (-r) need
to be enabled. The write-affinity depth (WAD) setting defines the policy
for directing writes: the node writing the data sends the first copy to
disks on its own node, and the second and third copies (if specified) to
disks on other nodes. readReplicaPolicy=local enables reading replicas
from the local disks.
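
For illustration, a minimal sketch of creating an FPO file system with
three-way replication (the device name and stanza file path are
hypothetical; the stanza file is the one carrying the %pool settings
shown earlier):

    # Create the NSDs described in the stanza file, then create a file
    # system with three copies of metadata (-m/-M) and data (-r/-R).
    mmcrnsd -F /tmp/fpo_stanzas.txt
    mmcrfs fpofs -F /tmp/fpo_stanzas.txt -m 3 -M 3 -r 3 -R 3 -A yes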
At a minimum, ensure that the network used for GPFS is sized properly,
with 2x or 3x the bandwidth of the local disks, so that FPO write
bandwidth is not constrained by GPFS replication over the network.
For example, if 24 drives in RAID-0 deliver ~4.8 GB/s (assuming ~200
MB/s per drive) and GPFS metadata/data replication is set to 3
(-m 3 -r 3), then for optimal FPO write bandwidth we need to ensure the
network interconnect between the FPO nodes is non-blocking/high-speed
and can sustain ~14.4 GB/s (data_replication_factor *
local_storage_bandwidth). One possibility is a minimum of 2 x EDR
InfiniBand links (configure GPFS verbsRdma/verbsPorts) or bonded 40GigE
between the FPO nodes for GPFS daemon-to-daemon communication.
Application reads that need to fetch data from a remote GPFS node would
also benefit from a high-speed network interconnect between the FPO
nodes.
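
To make the arithmetic explicit, here is a back-of-the-envelope sketch
(the drive count and per-drive throughput are the assumptions from the
example above, not measurements; the verbsPorts value is an
adapter-specific placeholder):

    #!/bin/bash
    # Required interconnect bandwidth per the rule above:
    #   data_replication_factor * local_storage_bandwidth
    drives_per_node=24      # locally attached JBOD drives
    per_drive_mbs=200       # assumed ~200 MB/s sequential per drive
    replication_factor=3    # -m 3 -r 3

    local_bw=$((drives_per_node * per_drive_mbs))   # 4800 MB/s ~= 4.8 GB/s
    net_bw=$((replication_factor * local_bw))       # 14400 MB/s ~= 14.4 GB/s

    echo "Local storage bandwidth: ${local_bw} MB/s"
    echo "Interconnect should sustain: ~${net_bw} MB/s"

    # If InfiniBand carries the daemon traffic, RDMA is enabled with:
    #   mmchconfig verbsRdma=enable
    #   mmchconfig verbsPorts="mlx5_0/1"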
</font><font size=1 face="sans-serif">Evan Koutsandreou <evan.koutsandreou@adventone.com></font><br><font size=1 color=#5f5f5f face="sans-serif">To:
</font><font size=1 face="sans-serif">"gpfsug-discuss@spectrumscale.org"
<gpfsug-discuss@spectrumscale.org></font><br><font size=1 color=#5f5f5f face="sans-serif">Date:
</font><font size=1 face="sans-serif">08/20/2017 11:06 PM</font><br><font size=1 color=#5f5f5f face="sans-serif">Subject:
</font><font size=1 face="sans-serif">[gpfsug-discuss]
Shared nothing (FPO) throughout / bandwidth sizing</font><br><font size=1 color=#5f5f5f face="sans-serif">Sent by:
</font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr noshade><br><br><br><tt><font size=2>Hi -<br><br>I was wondering if there are any good performance sizing guides for a spectrum
scale shared nothing architecture (FPO)?<br><br>For example, each node might consist of 24x storage drives (locally attached
JBOD, no RAID array).<br><br>I don't have any production experience using spectrum scale in a "shared
nothing configuration " and was hoping for bandwidth / throughput
sizing guidance. <br><br>Given a particular node configuration I want to be in a position to calculate
the maximum bandwidth / throughput.<br><br>Thank you <br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at spectrumscale.org<br></font></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=2>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</font></tt></a><tt><font size=2><br><br></font></tt><br><BR>