<html><body><p><tt>Hi Danny,<br><br>can you be a bit more specific, which resources get exhausted ?<br>are you talking about operating system or Spectrum Scale resources<br>(filecache or pagepool) ?<br><br>when you migrate the files ( i assume policy engine) did you specify which<br>nodes do the migration ( -N hostnames) or did you just run mmapplypolicy<br>without anything ?<br><br>can you post either your entire mmlsconfig or at least output of :<br><br>for i in maxFilesToCache pagepool maxStatCache nsdMinWorkerThreads<br>nsdMaxWorkerThreads worker1Threads; do mmlsconfig $i ; done<br><br>mmlsfs , mmlsnsd and mmlscluster output might be useful too..</tt><br><br><br>Hi Sven,<br><br>Sure.<br><br><br>The resources that are exhausted are CPU and RAM. I can see that while the system is running the pool migration, the SMB service is down (so slow that it is effectively down).<br><br>When I migrated the files I ran tests with one node, with two nodes (the NSD nodes), and with all nodes (NSD and protocol nodes).<br><br>Here is the output:<br><br>maxFilesToCache 1000000<br>pagepool 20G<br>maxStatCache 1000<br>nsdMinWorkerThreads 8<br>nsdMinWorkerThreads 1 [cesNodes]<br>nsdMaxWorkerThreads 512<br>nsdMaxWorkerThreads 2 [cesNodes]<br>worker1Threads 48<br>worker1Threads 800 [cesNodes]<br><br><tt>mmlsfs (there are two file systems, one for the CES shared root and another for data)</tt><br><br><br>File system attributes for /dev/datafs:<br>=======================================<br>flag value description<br>------------------- ------------------------ -----------------------------------<br> -f 8192 Minimum fragment size in bytes<br> -i 4096 Inode size in bytes<br> -I 16384 Indirect block size in bytes<br> -m 1 Default number of metadata replicas<br> -M 2 Maximum number of metadata replicas<br> -r 1 Default number of data replicas<br> -R 2 Maximum number of data replicas<br> -j cluster Block allocation type<br> -D nfs4 File locking semantics in effect<br> -k nfs4 ACL semantics in effect<br> -n 32 
Estimated number of nodes that will mount file system<br> -B 262144 Block size<br> -Q none Quotas accounting enabled<br> none Quotas enforced<br> none Default quotas enabled<br> --perfileset-quota No Per-fileset quota enforcement<br> --filesetdf No Fileset df enabled?<br> -V 15.01 (4.2.0.0) File system version<br> --create-time Wed Dec 23 09:31:07 2015 File system creation time<br> -z No Is DMAPI enabled?<br> -L 4194304 Logfile size<br> -E Yes Exact mtime mount option<br> -S No Suppress atime mount option<br> -K whenpossible Strict replica allocation option<br> --fastea Yes Fast external attributes enabled?<br> --encryption No Encryption enabled?<br> --inode-limit 55325440 Maximum number of inodes in all inode spaces<br> --log-replicas 0 Number of log replicas<br> --is4KAligned Yes is4KAligned?<br> --rapid-repair Yes rapidRepair enabled?<br> --write-cache-threshold 0 HAWC Threshold (max 65536)<br> -P system;T12TB;T26TB Disk storage pools in file system<br> -d nsd2;nsd3;nsd4;nsd5;nsd6;nsd7;nsd8;nsd9;nsd16;nsd17;nsd18;nsd19;nsd20;nsd15;nsd21;nsd10;nsd11;nsd12;nsd13;nsd14 Disks in file system<br> -A yes Automatic mount option<br> -o none Additional mount options<br> -T /datafs Default mount point<br> --mount-priority 0 Mount priority<br><br>File system attributes for /dev/sharerfs:<br>=========================================<br>flag value description<br>------------------- ------------------------ -----------------------------------<br> -f 8192 Minimum fragment size in bytes<br> -i 4096 Inode size in bytes<br> -I 16384 Indirect block size in bytes<br> -m 1 Default number of metadata replicas<br> -M 2 Maximum number of metadata replicas<br> -r 1 Default number of data replicas<br> -R 2 Maximum number of data replicas<br> -j scatter Block allocation type<br> -D nfs4 File locking semantics in effect<br> -k nfs4 ACL semantics in effect<br> -n 100 Estimated number of nodes that will mount file system<br> -B 262144 Block size<br> -Q none Quotas accounting enabled<br> none 
Quotas enforced<br> none Default quotas enabled<br> --perfileset-quota No Per-fileset quota enforcement<br> --filesetdf No Fileset df enabled?<br> -V 15.01 (4.2.0.0) File system version<br> --create-time Tue Dec 22 17:19:33 2015 File system creation time<br> -z No Is DMAPI enabled?<br> -L 4194304 Logfile size<br> -E Yes Exact mtime mount option<br> -S No Suppress atime mount option<br> -K whenpossible Strict replica allocation option<br> --fastea Yes Fast external attributes enabled?<br> --encryption No Encryption enabled?<br> --inode-limit 102656 Maximum number of inodes<br> --log-replicas 0 Number of log replicas<br> --is4KAligned Yes is4KAligned?<br> --rapid-repair Yes rapidRepair enabled?<br> --write-cache-threshold 0 HAWC Threshold (max 65536)<br> -P system Disk storage pools in file system<br> -d nsd1 Disks in file system<br> -A yes Automatic mount option<br> -o none Additional mount options<br> -T /sharedr Default mount point<br> --mount-priority 0 Mount priority<br><br><br><br>mmlsnsd <br><br><br> File system Disk name NSD servers<br>---------------------------------------------------------------------------<br> datafs nsd2 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd3 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd4 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd5 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd6 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd7 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd8 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd9 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd15 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd16 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd17 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd18 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd19 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd20 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd21 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd10 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd11 NSDSERV01_Daemon,NSDSERV02_Daemon<br> 
datafs nsd12 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd13 NSDSERV01_Daemon,NSDSERV02_Daemon<br> datafs nsd14 NSDSERV01_Daemon,NSDSERV02_Daemon<br> sharerfs nsd1 NSDSERV01_Daemon,NSDSERV02_Daemon<br><br><br>mmlscluster<br><br>GPFS cluster information<br>========================<br> GPFS cluster name: spectrum_syc.localdomain<br> GPFS cluster id: 2719632319013564592<br> GPFS UID domain: spectrum_syc.localdomain<br> Remote shell command: /usr/bin/ssh<br> Remote file copy command: /usr/bin/scp<br> Repository type: CCR<br><br> Node Daemon node name IP address Admin node name Designation<br>-----------------------------------------------------------------------<br> 1 NSDSERV01_Daemon 172.19.20.61 NSDSERV01_Daemon quorum-manager-perfmon<br> 2 NSDSERV02_Daemon 172.19.20.62 NSDSERV02_Daemon quorum-manager-perfmon<br> 3 PROTSERV01_Daemon 172.19.20.63 PROTSERV01_Daemon quorum-manager-perfmon<br> 4 PROTSERV02_Daemon 172.19.20.64 PROTSERV02_Daemon manager-perfmon<br><br><br><br><br>and a mmdf <br><br>disk disk size failure holds holds free GB free GB<br>name in GB group metadata data in full blocks in fragments<br>--------------- ------------- -------- -------- ----- -------------------- -------------------<br>Disks in storage pool: system (Maximum disk size allowed is 4.1 TB)<br>nsd2 400 1 Yes Yes 390 ( 97%) 1 ( 0%)<br>nsd3 400 1 Yes Yes 390 ( 97%) 1 ( 0%)<br>nsd4 400 1 Yes Yes 390 ( 97%) 1 ( 0%)<br>nsd5 400 1 Yes Yes 390 ( 97%) 1 ( 0%)<br>nsd6 400 1 Yes Yes 390 ( 97%) 1 ( 0%)<br>nsd7 400 1 Yes Yes 390 ( 97%) 1 ( 0%)<br>nsd8 400 1 Yes Yes 390 ( 97%) 1 ( 0%)<br>nsd9 400 1 Yes Yes 390 ( 97%) 1 ( 0%)<br> ------------- -------------------- -------------------<br>(pool total) 3200 3120 ( 97%) 1 ( 0%)<br><br>Disks in storage pool: T12TB (Maximum disk size allowed is 4.1 TB)<br>nsd14 500 2 No Yes 496 ( 99%) 1 ( 0%)<br>nsd13 500 2 No Yes 496 ( 99%) 1 ( 0%)<br>nsd12 500 2 No Yes 496 ( 99%) 1 ( 0%)<br>nsd11 500 2 No Yes 496 ( 99%) 1 ( 0%)<br>nsd10 500 2 No Yes 496 ( 99%) 1 ( 
0%)<br>nsd15 500 2 No Yes 496 ( 99%) 1 ( 0%)<br> ------------- -------------------- -------------------<br>(pool total) 3000 2974 ( 99%) 1 ( 0%)<br><br>Disks in storage pool: T26TB (Maximum disk size allowed is 8.2 TB)<br>nsd21 500 3 No Yes 500 (100%) 1 ( 0%)<br>nsd20 500 3 No Yes 500 (100%) 1 ( 0%)<br>nsd19 500 3 No Yes 500 (100%) 1 ( 0%)<br>nsd18 500 3 No Yes 500 (100%) 1 ( 0%)<br>nsd17 500 3 No Yes 500 (100%) 1 ( 0%)<br>nsd16 500 3 No Yes 500 (100%) 1 ( 0%)<br> ------------- -------------------- -------------------<br>(pool total) 3000 3000 (100%) 1 ( 0%)<br><br> ============= ==================== ===================<br>(data) 9200 9093 ( 99%) 2 ( 0%)<br>(metadata) 3200 3120 ( 97%) 1 ( 0%)<br> ============= ==================== ===================<br>(total) 9200 9093 ( 99%) 2 ( 0%)<br><br>Inode Information<br>-----------------<br>Total number of used inodes in all Inode spaces: 284090<br>Total number of free inodes in all Inode spaces: 20318278<br>Total number of allocated inodes in all Inode spaces: 20602368<br>Total of Maximum number of inodes in all Inode spaces: 55325440<br><br><br>Thanks<br><br><br>
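In case it helps others following along, this is the kind of throttled invocation I plan to try next. A sketch only: the option names are from the 4.2 mmapplypolicy and mmchqos documentation as I understand them, while the policy file name, thread counts, and IOPS figure are placeholders to adjust for your cluster:

```shell
# Run the migration only on the NSD servers, keeping the CES/protocol
# nodes out of it, and turn the parallelism down:
#   -N  nodes that perform the directory scan and the data movement
#   -m  policy-execution threads per node (default 24)
#   -B  files handed to each work bucket (default 100)
#   -s  local directory for the policy engine's temporary files
mmapplypolicy datafs -P migrate.pol \
    -N NSDSERV01_Daemon,NSDSERV02_Daemon \
    -m 4 -B 50 \
    -s /var/tmp/policy

# Spectrum Scale 4.2 also has QoS: cap the "maintenance" I/O class
# (which, as I understand it, mmapplypolicy runs in) so that SMB/NFS
# traffic keeps priority. The IOPS value here is just an example.
mmchqos datafs --enable pool=*,maintenance=2000IOPS,other=unlimited
```

If the protocol nodes still stall, lowering -m further (or the QoS maintenance cap) should trade migration speed for client responsiveness.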
<table class="MsoNormalTable" border="0" cellspacing="0" cellpadding="0"><tr valign="top"><td width="284" valign="middle"><b><font size="4" face="Arial">Danny Alexander Calderon R</font></b></td></tr>
<tr valign="top"><td width="284" valign="middle"><b><font size="4" face="Arial">Client Technical Specialist - CTS</font></b><br><b><font size="4" face="Arial">Storage</font></b><br><b><font size="4" face="Arial">STG Colombia</font></b><br><br><font size="4" color="#696969" face="Arial">Phone: 57-1-</font><font size="4" color="#5F5F5F" face="Arial">6281956</font></td></tr>
<tr valign="top"><td width="284"><font size="4" color="#696969" face="Arial">Mobile: 57- 318 352 9258</font><br><br><font size="4" color="#696969" face="Arial">Carrera 53 Número 100-25</font></td></tr>
<tr valign="top"><td width="284"><font size="4" color="#696969" face="Arial"> Bogotá, Colombia</font></td></tr>
<tr valign="top"><td width="284"><font size="4" color="#696969" face="Arial"> </font></td></tr></table><br><br><font size="2" color="#5F5F5F">From: </font><font size="2">gpfsug-discuss-request@spectrumscale.org</font><br><font size="2" color="#5F5F5F">To: </font><font size="2">gpfsug-discuss@spectrumscale.org</font><br><font size="2" color="#5F5F5F">Date: </font><font size="2">01/03/2016 05:18 PM</font><br><font size="2" color="#5F5F5F">Subject: </font><font size="2">gpfsug-discuss Digest, Vol 48, Issue 2</font><br><font size="2" color="#5F5F5F">Sent by: </font><font size="2">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr width="100%" size="2" align="left" noshade style="color:#8091A5; "><br><br><br><tt>Send gpfsug-discuss mailing list submissions to<br> gpfsug-discuss@spectrumscale.org<br><br>To subscribe or unsubscribe via the World Wide Web, visit<br> </tt><tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a></tt><tt><br>or, via email, send a message with subject or body 'help' to<br> gpfsug-discuss-request@spectrumscale.org<br><br>You can reach the person managing the list at<br> gpfsug-discuss-owner@spectrumscale.org<br><br>When replying, please edit your Subject line so it is more specific<br>than "Re: Contents of gpfsug-discuss digest..."<br><br><br>Today's Topics:<br><br> 1. Resource exhausted by Pool Migration<br> (Danny Alexander Calderon Rodriguez)<br> 2. Re: Resource exhausted by Pool Migration (Sven Oehme)<br> 3. 
metadata replication question<br> (Simon Thompson (Research Computing - IT Services))<br> 4. Re: metadata replication question (Barry Evans)<br> 5. Re: metadata replication question<br> (Simon Thompson (Research Computing - IT Services))<br><br><br>----------------------------------------------------------------------<br><br>Message: 1<br>Date: Sun, 3 Jan 2016 15:55:59 +0000<br>From: "Danny Alexander Calderon Rodriguez" <dacalder@co.ibm.com><br>To: gpfsug-discuss@spectrumscale.org<br>Subject: [gpfsug-discuss] Resource exhausted by Pool Migration<br>Message-ID: <201601031556.u03Futfw007019@d24av01.br.ibm.com><br>Content-Type: text/plain; charset="utf-8"<br><br>HI All <br><br>Actually I have a 4.2 Spectrum Scale cluster with protocol service, we are managing small files (32K to 140K), when I try to migrate some files (120.000 files ) the system resources of all nodes is exhausted and the protocol nodes don't get services to client.<br><br>I wan to ask if there is any way to limit the resources consuming at the migration time?<br><br><br>Thanks to all <br><br><br><br>Enviado desde IBM Verse<br>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <</tt><tt><a href="http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20160103/cfa1d022/attachment-0001.html">http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20160103/cfa1d022/attachment-0001.html</a></tt><tt>><br><br>------------------------------<br><br>Message: 2<br>Date: Sun, 3 Jan 2016 08:42:27 -0800<br>From: Sven Oehme <oehmes@gmail.com><br>To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org><br>Subject: Re: [gpfsug-discuss] Resource exhausted by Pool Migration<br>Message-ID:<br> <CALssuR2z4nzTxcTN0nJTF0u0OXSiCekcQ+DHUzMfTnGUnLU58g@mail.gmail.com><br>Content-Type: text/plain; charset="utf-8"<br><br>Hi Danny,<br><br>can you be a bit more specific, which resources get exhausted ?<br>are you talking about operating system or Spectrum Scale resources<br>(filecache or 
pagepool) ?<br><br>when you migrate the files ( i assume policy engine) did you specify which<br>nodes do the migration ( -N hostnames) or did you just run mmapplypolicy<br>without anything ?<br><br>can you post either your entire mmlsconfig or at least output of :<br><br>for i in maxFilesToCache pagepool maxStatCache nsdMinWorkerThreads<br>nsdMaxWorkerThreads worker1Threads; do mmlsconfig $i ; done<br><br>mmlsfs , mmlsnsd and mmlscluster output might be useful too..<br><br>sven<br><br><br>On Sun, Jan 3, 2016 at 7:55 AM, Danny Alexander Calderon Rodriguez <<br>dacalder@co.ibm.com> wrote:<br><br>> HI All<br>><br>> Actually I have a 4.2 Spectrum Scale cluster with protocol service, we are managing small files (32K to 140K), when I try to migrate some files (120.000 files ) the system resources of all nodes is exhausted and the protocol nodes don't get services to client.<br>><br>> I wan to ask if there is any way to limit the resources consuming at the migration time?<br>><br>><br>> Thanks to all<br>><br>><br>><br>> Enviado desde IBM Verse<br>><br>><br>><br>> _______________________________________________<br>> gpfsug-discuss mailing list<br>> gpfsug-discuss at spectrumscale.org<br>> </tt><tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a></tt><tt><br>><br>><br>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <</tt><tt><a href="http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20160103/d5f4999c/attachment-0001.html">http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20160103/d5f4999c/attachment-0001.html</a></tt><tt>><br><br>------------------------------<br><br>Message: 3<br>Date: Sun, 3 Jan 2016 21:56:26 +0000<br>From: "Simon Thompson (Research Computing - IT Services)"<br> <S.J.Thompson@bham.ac.uk><br>To: "gpfsug-discuss@spectrumscale.org"<br> <gpfsug-discuss@spectrumscale.org><br>Subject: [gpfsug-discuss] metadata replication question<br>Message-ID:<br> 
<CF45EE16DEF2FE4B9AA7FF2B6EE26545D16BD321@EX13.adf.bham.ac.uk><br>Content-Type: text/plain; charset="us-ascii"<br><br>I currently have 4 NSD servers in a cluster, two pairs in two data centres. Data and metadata replication is currently set to 2 with metadata sitting on SAS drives in a Storwize array. I also have a VM floating between the two data centres to guarantee quorum in one only in the event of split brain.<br><br>I'd like to add some SSD for metadata.<br><br>Should I:<br><br>Add RAID 1 SSD to the Storwize?<br><br>Add local SSD to the NSD servers?<br><br>If I did the second, should I <br> add SSD to each NSD server (not RAID 1) and set each in a different failure group and make metadata replication of 4.<br> add SSD to each NSD server as RAID 1, use the same failure group for each data centre pair?<br> add SSD to each NSD server not RAID 1, use the same failure group for each data centre pair?<br><br>Or something else entirely?<br><br>What I want to survive is a split data centre situation or failure of a single NSD server at any point...<br><br>Thoughts? 
Comments?<br><br>I'm thinking the first of the NSD local options uses 4 writes, as does the second, but each NSD server then has a local copy of the metadata; if an SSD fails, it should be able to get it from its local partner pair anyway (with readlocalreplica)?<br><br>I'd like a cost competitive solution that gives faster performance than the current SAS drives.<br><br>Was also thinking I might add an SSD to each NSD server for the system.log pool for HAWC as well...<br><br>Thanks<br><br>Simon<br><br>------------------------------<br><br>Message: 4<br>Date: Sun, 3 Jan 2016 22:10:21 +0000<br>From: Barry Evans <bevans@pixitmedia.com><br>To: gpfsug-discuss@spectrumscale.org<br>Subject: Re: [gpfsug-discuss] metadata replication question<br>Message-ID: <56899C4D.4050907@pixitmedia.com><br>Content-Type: text/plain; charset="windows-1252"; Format="flowed"<br><br>Can all 4 NSD servers see all existing storwize arrays across both DC's?<br><br>Cheers,<br>Barry<br><br><br>On 03/01/2016 21:56, Simon Thompson (Research Computing - IT Services) <br>wrote:<br>> I currently have 4 NSD servers in a cluster, two pairs in two data centres. Data and metadata replication is currently set to 2 with metadata sitting on sas drivers in a storewise array. 
I also have a vm floating between the two data centres to guarantee quorum in one only in the event of split brain.<br>><br>> Id like to add some ssd for metadata.<br>><br>> Should I:<br>><br>> Add raid1 ssd to the storewise?<br>><br>> Add local ssd to the nsd servers?<br>><br>> If I did the second, should I<br>> add ssd to each nsd server (not raid 1) and set each in a different failure group and make metadata replication of 4.<br>> add ssd to each nsd server as raid 1, use the same failure group for each data centre pair?<br>> add ssd to each nsd server not raid 1, use the dame failure group for each data centre pair?<br>><br>> Or something else entirely?<br>><br>> What I want so survive is a split data centre situation or failure of a single nsd server at any point...<br>><br>> Thoughts? Comments?<br>><br>> I'm thinking the first of the nsd local options uses 4 writes as does the second, but each nsd server then has a local copy of the metatdata locally and ssd fails, in which case it should be able to get it from its local partner pair anyway (with readlocalreplica)?<br>><br>> Id like a cost competitive solution that gives faster performance than the current sas drives.<br>><br>> Was also thinking I might add an ssd to each nsd server for system.log pool for hawc as well...<br>><br>> Thanks<br>><br>> Simon<br>> _______________________________________________<br>> gpfsug-discuss mailing list<br>> gpfsug-discuss at spectrumscale.org<br>> </tt><tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a></tt><tt><br><br>-- <br><br>Barry Evans<br>Technical Director & Co-Founder<br>Pixit Media<br>Mobile: +44 (0)7950 666 248<br></tt><tt><a href="http://www.pixitmedia.com">http://www.pixitmedia.com</a></tt><tt><br><br><br>-- <br><br>This email is confidential in that it is intended for the exclusive <br>attention of the addressee(s) indicated. 
If you are not the intended <br>recipient, this email should not be read or disclosed to any other person. <br>Please notify the sender immediately and delete this email from your <br>computer system. Any opinions expressed are not necessarily those of the <br>company from which this email was sent and, whilst to the best of our <br>knowledge no viruses or defects exist, no responsibility can be accepted <br>for any loss or damage arising from its receipt or subsequent use of this <br>email.<br>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <</tt><tt><a href="http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20160103/5d463c6d/attachment-0001.html">http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20160103/5d463c6d/attachment-0001.html</a></tt><tt>><br><br>------------------------------<br><br>Message: 5<br>Date: Sun, 3 Jan 2016 22:18:24 +0000<br>From: "Simon Thompson (Research Computing - IT Services)"<br> <S.J.Thompson@bham.ac.uk><br>To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org><br>Subject: Re: [gpfsug-discuss] metadata replication question<br>Message-ID:<br> <CF45EE16DEF2FE4B9AA7FF2B6EE26545D16BD363@EX13.adf.bham.ac.uk><br>Content-Type: text/plain; charset="us-ascii"<br><br><br>Yes there is extended san in place. 
The failure groups for the storage are different in each dc so we guarantee that the data replication has 1 copy per dc.<br><br>Simon<br>________________________________________<br>From: gpfsug-discuss-bounces@spectrumscale.org [gpfsug-discuss-bounces@spectrumscale.org] on behalf of Barry Evans [bevans@pixitmedia.com]<br>Sent: 03 January 2016 22:10<br>To: gpfsug-discuss@spectrumscale.org<br>Subject: Re: [gpfsug-discuss] metadata replication question<br><br>Can all 4 NSD servers see all existing storwize arrays across both DC's?<br><br>Cheers,<br>Barry<br><br><br>On 03/01/2016 21:56, Simon Thompson (Research Computing - IT Services) wrote:<br><br>I currently have 4 NSD servers in a cluster, two pairs in two data centres. Data and metadata replication is currently set to 2 with metadata sitting on sas drivers in a storewise array. I also have a vm floating between the two data centres to guarantee quorum in one only in the event of split brain.<br><br>Id like to add some ssd for metadata.<br><br>Should I:<br><br>Add raid1 ssd to the storewise?<br><br>Add local ssd to the nsd servers?<br><br>If I did the second, should I<br> add ssd to each nsd server (not raid 1) and set each in a different failure group and make metadata replication of 4.<br> add ssd to each nsd server as raid 1, use the same failure group for each data centre pair?<br> add ssd to each nsd server not raid 1, use the dame failure group for each data centre pair?<br><br>Or something else entirely?<br><br>What I want so survive is a split data centre situation or failure of a single nsd server at any point...<br><br>Thoughts? 
Comments?<br><br>I'm thinking the first of the nsd local options uses 4 writes as does the second, but each nsd server then has a local copy of the metatdata locally and ssd fails, in which case it should be able to get it from its local partner pair anyway (with readlocalreplica)?<br><br>Id like a cost competitive solution that gives faster performance than the current sas drives.<br><br>Was also thinking I might add an ssd to each nsd server for system.log pool for hawc as well...<br><br>Thanks<br><br>Simon<br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at spectrumscale.org<br></tt><tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a></tt><tt><br><br><br>--<br><br>Barry Evans<br>Technical Director & Co-Founder<br>Pixit Media<br>Mobile: +44 (0)7950 666 248<br></tt><tt><a href="http://www.pixitmedia.com">http://www.pixitmedia.com</a></tt><tt><br><br><br>------------------------------<br><br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at spectrumscale.org<br></tt><tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a></tt><tt><br><br><br>End of gpfsug-discuss Digest, Vol 48, Issue 2<br>*********************************************<br><br></tt><br><BR>
</body></html>