<HTML><BODY><FONT style='white-space:pre-wrap;font-family: Helvetica Neue, Helvetica, Arial, sans-serif;margin: 1em 0;'>Sure, as long as we assume that really all physical disks are used. The fact that 1/2 or 1/4 was mentioned might mean that one or two complete enclosures were eliminated; that's why I was asking for more details.<br><br>I don't see this degradation in my environments. As long as the vdisks are big enough to span all pdisks (which should be the case for capacities in the TB range), the performance stays the same.<br><br>Sent from IBM Verse</FONT><br><br><div class="domino-section" dir="ltr"><div class="domino-section-head"><span class="domino-section-title"><font color="#424282">Jan-Frode Myklebust --- Re: [gpfsug-discuss] Write performances and filesystem size --- </font></span></div><div class="domino-section-body"><br><table width="100%" border="0" cellspacing="0" cellpadding="0"><tr valign="top"><td width="1%" style="width: 96px;"><font size="2" color="#5F5F5F">From:</font></td><td width="100%" style="width: auto;"><font size="2">"Jan-Frode Myklebust" <janfrode@tanso.net></font></td></tr><tr valign="top"><td width="1%" style="width: 96px;"><font size="2" color="#5F5F5F">To:</font></td><td width="100%" style="width: auto;"><font size="2">"gpfsug main discussion list" <gpfsug-discuss@spectrumscale.org></font></td></tr><tr valign="top"><td width="1%" style="width: 96px;"><font size="2" color="#5F5F5F">Date:</font></td><td width="100%" style="width: auto;"><font size="2">Wed. 15.11.2017 21:35</font></td></tr><tr valign="top"><td width="1%" style="width: 96px;"><font size="2" color="#5F5F5F">Subject:</font></td><td width="100%" style="width: auto;"><font size="2">Re: [gpfsug-discuss] Write performances and filesystem size</font></td></tr></table><hr width="100%" size="2" align="left" noshade style="color:#8091A5; "><br></div><div><div dir="auto">Olaf, this looks like a Lenovo «ESS GLxS» version. 
It should be using the same number of spindles for any filesystem size, so I would also expect them to perform the same.</div></div><div><br><br><br> -jf</div><div dir="auto"><br></div><div dir="auto"><br><div class="gmail_quote" dir="auto"><div>Wed. 15 Nov 2017 at 11:26, Olaf Weiser <<a href="mailto:olaf.weiser@de.ibm.com" target="_blank">olaf.weiser@de.ibm.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><font size="2" face="sans-serif">To add a comment: it depends on how you allocate the physical block storage. If you simply use fewer physical resources when reducing the capacity (in the same ratio), you get what you see.</font><br><br><font size="2" face="sans-serif">So you need to tell us how you allocate your block storage. (Are you using RAID controllers? Where do your LUNs come from? Are fewer RAID groups involved when the capacity is reduced?)</font><br><br><font size="2" face="sans-serif">GPFS can be configured to give you pretty much what the hardware can deliver. If you reduce resources, you'll get less; if you enhance your hardware, you get more, almost regardless of the total capacity in #blocks.</font><br><br><br><br><br><font size="1" color="#5f5f5f" face="sans-serif">From:
</font><font size="1" face="sans-serif">"Kumaran Rajaram"
<<a href="mailto:kums@us.ibm.com" target="_blank">kums@us.ibm.com</a>></font><br><font size="1" color="#5f5f5f" face="sans-serif">To:
</font><font size="1" face="sans-serif">gpfsug main discussion
list <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">gpfsug-discuss@spectrumscale.org</a>></font><br><font size="1" color="#5f5f5f" face="sans-serif">Date:
</font><font size="1" face="sans-serif">11/15/2017 11:56 AM</font><br><font size="1" color="#5f5f5f" face="sans-serif">Subject:
</font><font size="1" face="sans-serif">Re: [gpfsug-discuss]
Write performances and filesystem size</font><br><font size="1" color="#5f5f5f" face="sans-serif">Sent by:
</font><font size="1" face="sans-serif"><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">gpfsug-discuss-bounces@spectrumscale.org</a></font><br><hr noshade><br><br><br><font size="2" face="Arial">Hi,</font><font size="3" face="sans-serif"><br></font><font size="2" color="red" face="Arial"><br>>>Am I missing something? Is this an expected behaviour and someone
has an explanation for this?</font><font size="3" face="sans-serif"><br></font><font size="2" face="Arial"><br>Based on your scenario, write degradation as the file system is populated is possible if you formatted the file system with "-j cluster".</font><font size="3" face="sans-serif"><br></font><font size="2" face="Arial"><br>For consistent file-system performance, we recommend the <b>mmcrfs "-j scatter"</b> layoutMap setting. Also make sure that the mmcrfs "-n" value is set properly.</font><font size="3" face="sans-serif"><br></font><font size="2" face="Arial"><br>[snip from mmlsfs]</font><font size="2" color="blue" face="Arial"><i><br># mmlsfs <fs> | egrep 'Block allocation| Estimated number'<br> -j  scatter  Block allocation type<br> -n  128  Estimated number of nodes that will mount file system</i></font><font size="2" face="Arial"><br>[/snip]</font><font size="3" face="sans-serif"><br><br></font><font size="2" face="Arial"><br>[snip from man mmcrfs]</font><font size="2" color="blue" face="Arial"><i><br><b>layoutMap={scatter|cluster}</b><br>Specifies the block allocation map type. When allocating blocks for a given file, GPFS first uses a round-robin algorithm to spread the data across all disks in the storage pool. After a disk is selected, the location of the data block on the disk is determined by the block allocation map type. <b>If cluster is specified, GPFS attempts to allocate blocks in clusters. Blocks that belong to a particular file are kept adjacent to each other within each cluster. If scatter is specified, the location of the block is chosen randomly.</b><br><br><b>The cluster allocation method may provide better disk performance for some disk subsystems in relatively small installations. The benefits of clustered block allocation diminish when the number of nodes in the cluster or the number of disks in a file system increases, or when the file system's free space becomes fragmented.</b> The cluster allocation method is the default for GPFS clusters with eight or fewer nodes and for file systems with eight or fewer disks.<br><br><b>The scatter allocation method provides more consistent file system performance by averaging out performance variations due to block location (for many disk subsystems, the location of the data relative to the disk edge has a substantial effect on performance).</b> This allocation method is appropriate in most cases and is the default for GPFS clusters with more than eight nodes or file systems with more than eight disks.<br><br>The block allocation map type cannot be changed after the storage pool has been created.<br><br><b>-n NumNodes</b><br>The estimated number of nodes that will mount the file system in the local cluster and all remote clusters. This is used as a best guess for the initial size of some file system data structures. The default is 32. This value can be changed after the file system has been created, but it does not change the existing data structures; only newly created data structures (for example, a new storage pool) are affected by the new value.<br><br>When you create a GPFS file system, you might want to overestimate the number of nodes that will mount the file system. GPFS uses this information for creating data structures that are essential for achieving maximum parallelism in file system operations (for more information, see GPFS architecture in IBM Spectrum Scale: Concepts, Planning, and Installation Guide). If you are sure there will never be more than 64 nodes, allow the default value to be applied. If you are planning to add nodes to your system, you should specify a number larger than the default.</i></font><font size="3" face="sans-serif"><br></font><font size="2" face="Arial"><br>[/snip from man mmcrfs]</font><font size="3" face="sans-serif"><br></font><font size="2" face="Arial"><br>Regards,<br>-Kums</font><font size="3" face="sans-serif"><br><br><br><br><br></font><font size="1" color="#5f5f5f" face="sans-serif"><br>From: </font><font size="1" face="sans-serif">Ivano
Talamo <<a href="mailto:Ivano.Talamo@psi.ch" target="_blank">Ivano.Talamo@psi.ch</a>></font><font size="1" color="#5f5f5f" face="sans-serif"><br>To: </font><font size="1" face="sans-serif"><<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">gpfsug-discuss@spectrumscale.org</a>></font><font size="1" color="#5f5f5f" face="sans-serif"><br>Date: </font><font size="1" face="sans-serif">11/15/2017
11:25 AM</font><font size="1" color="#5f5f5f" face="sans-serif"><br>Subject: </font><font size="1" face="sans-serif">[gpfsug-discuss]
Write performances and filesystem size</font><font size="1" color="#5f5f5f" face="sans-serif"><br>Sent by: </font><font size="1" face="sans-serif"><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">gpfsug-discuss-bounces@spectrumscale.org</a></font><font size="3" face="sans-serif"><br></font><hr noshade><font size="3" face="sans-serif"><br><br></font><tt><font size="2"><br>Hello everybody,<br><br>Together with my colleagues I am currently running some tests on a new DSS G220 system, and we see some unexpected behaviour.<br><br>What we see is that write performance (we have not tested reads yet) decreases as the filesystem size decreases.<br><br>I will not go into the details of the tests, but here are some numbers:<br><br>- with a filesystem using the full 1.2 PB space we get 14 GB/s as the sum of the disk activity on the two IO servers;<br>- with a filesystem using half of the space we get 10 GB/s;<br>- with a filesystem using 1/4 of the space we get 5 GB/s.<br><br>We also saw that performance is not affected by the vdisk layout, i.e. taking the full space with one big vdisk or two half-size vdisks per RG gives the same performance.<br><br>To our understanding the IO should be spread evenly across all the pdisks in the declustered array, and looking at iostat all disks seem to be accessed. So there must be some other element that affects performance.<br><br>Am I missing something? Is this expected behaviour, and does someone have an
<br>explanation for this?<br><br>Thank you,<br>Ivano<br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at <a href="http://spectrumscale.org" target="_blank">spectrumscale.org</a></font></tt><font size="3" color="blue" face="sans-serif"><u><br></u></font><a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=McIf98wfiVqHU8ZygezLrQ&m=py_FGl3hi9yQsby94NZdpBFPwcUU0FREyMSSvuK_10U&s=Bq1J9eIXxadn5yrjXPHmKEht0CDBwfKJNH72p--T-6s&e=" target="_blank"><tt><font size="2" color="blue"><u>https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=McIf98wfiVqHU8ZygezLrQ&m=py_FGl3hi9yQsby94NZdpBFPwcUU0FREyMSSvuK_10U&s=Bq1J9eIXxadn5yrjXPHmKEht0CDBwfKJNH72p--T-6s&e=</u></font></tt></a><tt><font size="2"><br></font></tt><font size="3" face="sans-serif"><br><br></font><tt><font size="2">_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at <a href="http://spectrumscale.org" target="_blank">spectrumscale.org</a><br></font></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" target="_blank"><tt><font size="2">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</font></tt></a><tt><font size="2"><br></font></tt><br><br><br>
_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at <a href="http://spectrumscale.org" rel="noreferrer" target="_blank">spectrumscale.org</a><br><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" rel="noreferrer" target="_blank">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a><br></blockquote></div></div><BR>
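[Editor's note] The scatter-vs-cluster point in the quoted mmcrfs man page can be illustrated with a toy model. This is plain Python with invented disk speeds, not GPFS code: it only sketches why random ("scatter") block placement averages out block-location effects, while contiguous ("cluster") placement ties a file's throughput to wherever its blocks happen to land.

```python
import random

random.seed(42)

POSITIONS = 10_000  # disk block positions; 0 = outer edge (fastest)

def speed(pos):
    """Invented streaming speed in MB/s, decreasing toward the inner edge."""
    return 250 - 100 * (pos / POSITIONS)

def file_speed_cluster(start, nblocks=100):
    """'cluster'-style: blocks adjacent, so speed depends on where they land."""
    return sum(speed(start + i) for i in range(nblocks)) / nblocks

def file_speed_scatter(nblocks=100):
    """'scatter'-style: blocks placed randomly, so speed averages out."""
    return sum(speed(random.randrange(POSITIONS)) for _ in range(nblocks)) / nblocks

# Three files placed at the outer edge, the middle, and near the inner edge:
cluster = [file_speed_cluster(s) for s in (0, 5000, 9000)]
# Three files with randomly scattered blocks:
scatter = [file_speed_scatter() for _ in range(3)]

# Cluster speeds swing widely (roughly 250 vs 160 MB/s here);
# scatter speeds all sit near the 200 MB/s average.
print(cluster, scatter)
```

In a DSS/ESS declustered array the dominant variable is instead the number of spindles actually kept busy, which is why the replies above focus on whether the vdisks, LUNs, or RAID groups still span all pdisks when the filesystem shrinks.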
</BODY></HTML>