[gpfsug-discuss] gpfsug-discuss Digest, Vol 101, Issue 1
Prasad Surampudi
prasad.surampudi at theatsgroup.com
Mon Jun 1 17:33:05 BST 2020
So, if cluster_A is running Spectrum Scale 4.3.2 and cluster_B is running 5.0.4, would I be able to mount the filesystem from cluster_A in cluster_B as a remote filesystem?
And if cluster_B nodes have direct SAN access to the remote cluster_A filesystem, would they be sending all filesystem I/O directly to the disk via Fibre Channel?
I am assuming that this should work based on the IBM link below. Can anyone from IBM support please confirm this?
https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1adv_admmcch.htm
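For anyone following along, a remote mount of this sort is normally configured with the mmauth/mmremotecluster/mmremotefs commands. The sketch below is only illustrative: the cluster names, contact nodes, key file paths, filesystem name and mount point are placeholders, not details from this thread, so check the Knowledge Center pages for your release before running anything.

```shell
# On cluster_A (the cluster that owns the filesystem):
mmauth genkey new                     # generate this cluster's key pair
mmauth update . -l AUTHONLY           # require authentication from remote clusters
mmauth add cluster_B.example.com -k /tmp/cluster_B_id_rsa.pub   # trust cluster_B's public key
mmauth grant cluster_B.example.com -f fs1                       # allow cluster_B to mount fs1

# On cluster_B (the cluster that wants the remote mount):
mmremotecluster add cluster_A.example.com -n nodeA1,nodeA2 \
    -k /tmp/cluster_A_id_rsa.pub      # register the owning cluster and its contact nodes
mmremotefs add rfs1 -f fs1 -C cluster_A.example.com -T /gpfs/rfs1
mmmount rfs1 -a                       # mount the remote filesystem on all nodes
```

Whether the cluster_B nodes then do their I/O over Fibre Channel or through cluster_A's NSD servers depends on whether they can see the LUNs locally, as discussed later in this digest.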
On 6/1/20, 4:45 AM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org" wrote:
Send gpfsug-discuss mailing list submissions to
gpfsug-discuss at spectrumscale.org
To subscribe or unsubscribe via the World Wide Web, visit
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
or, via email, send a message with subject or body 'help' to
gpfsug-discuss-request at spectrumscale.org
You can reach the person managing the list at
gpfsug-discuss-owner at spectrumscale.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."
Today's Topics:
1. Re: Multi-cluster question (was Re: gpfsug-discuss Digest,
Vol 100, Issue 32) (Jan-Frode Myklebust)
2. Re: Multi-cluster question (was Re: gpfsug-discuss Digest,
Vol 100, Issue 32) (Avila, Geoffrey)
3. Re: gpfsug-discuss Digest, Vol 100, Issue 32
(Valdis Klētnieks)
4. Re: Multi-cluster question (was Re: gpfsug-discuss Digest,
Vol 100, Issue 32) (Jonathan Buzzard)
----------------------------------------------------------------------
Message: 1
Date: Sun, 31 May 2020 18:47:40 +0200
From: Jan-Frode Myklebust <janfrode at tanso.net>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Multi-cluster question (was Re:
gpfsug-discuss Digest, Vol 100, Issue 32)
Message-ID:
<CAHwPathww+ixE026Ss7=JYbdRJcFS_F05NzKGTNExHpEpqJShA at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
No, this is a common misconception. You don't need any NSD servers. NSD
servers are only needed if you have nodes without direct block access.
Remote cluster or not, disk access will be over local block device (without
involving NSD servers in any way), or NSD server if local access isn't
available. NSD-servers are not "arbitrators" over access to a disk, they're
just stupid proxies of IO commands.
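(To see which path a given node is actually using, mmlsdisk can help; its -m and -M options report where the I/O for each disk is performed. The filesystem name "fs1" below is just an example, and the exact output columns vary by release:)

```shell
# For each disk in fs1, show whether this node reaches it via its local
# block device or via an NSD server:
mmlsdisk fs1 -m

# -M resolves the same information for the node actually performing the I/O:
mmlsdisk fs1 -M
```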
-jf
On Sun, 31 May 2020 at 11:31, Jonathan Buzzard <
jonathan.buzzard at strath.ac.uk>:
> On 29/05/2020 20:55, Stephen Ulmer wrote:
> > I have a question about multi-cluster, but it is related to this thread
> > (it would be solving the same problem).
> >
> > Let's say we have two clusters A and B, both clusters are normally
> > shared-everything with no NSD servers defined.
>
> Er, even in a shared-everything setup with all nodes fibre channel
> attached you still have to define NSD servers. That is, a given NSD has a
> server (or ideally a list of servers) that arbitrates the disk. Unless it
> has changed since the 3.x days. I have never run 4.x or later with all
> the disks SAN attached on all the nodes.
>
> > We want cluster B to be
> > able to use a file system in cluster A. If I zone the SAN such that
> > cluster B can see all of cluster A's disks, can I then define a
> > multi-cluster relationship between them and mount a file system from A
> on B?
> >
> > To state it another way, must B's I/O for the foreign file system pass
> > through NSD servers in A, or can B's nodes discover that they have
> > FibreChannel paths to those disks and use them?
> >
>
> My understanding is that remote cluster mounts have to pass through the
> NSD servers.
>
>
> JAB.
>
> --
> Jonathan A. Buzzard Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
------------------------------
Message: 2
Date: Sun, 31 May 2020 21:44:12 -0400
From: "Avila, Geoffrey" <geoffrey_avila at brown.edu>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Multi-cluster question (was Re:
gpfsug-discuss Digest, Vol 100, Issue 32)
Message-ID:
<CAKuHoVw6nAHu4WV2D+EdjRW9ZX26xnGBbjJJ34B1bzEvV=n50g at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
The local-block-device method of I/O is what is usually termed "SAN mode";
right?
------------------------------
Message: 3
Date: Sun, 31 May 2020 22:54:11 -0400
From: Valdis Klētnieks <valdis.kletnieks at vt.edu>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 100, Issue 32
Message-ID: <83255.1590980051 at turing-police>
Content-Type: text/plain; charset="us-ascii"
On Fri, 29 May 2020 22:30:08 +0100, Jonathan Buzzard said:
> Ethernet goes *very* fast these days you know :-) In fact *much* faster
> than fibre channel.
Yes, but the justification, purchase, and installation of 40G or 100G Ethernet
interfaces in the machines involved, plus the routers/switches along the way,
can go very slowly indeed.
So finding a way to replace 10G Ethernet with 16G FC can be a win.
------------------------------
Message: 4
Date: Mon, 1 Jun 2020 09:45:25 +0100
From: Jonathan Buzzard <jonathan.buzzard at strath.ac.uk>
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Multi-cluster question (was Re:
gpfsug-discuss Digest, Vol 100, Issue 32)
Message-ID: <bfcc1ac7-aaa4-bc64-3d2e-d9812ddf3d54 at strath.ac.uk>
Content-Type: text/plain; charset=utf-8; format=flowed
On 31/05/2020 17:47, Jan-Frode Myklebust wrote:
>
> No, this is a common misconception. You don't need any NSD servers. NSD
> servers are only needed if you have nodes without direct block access.
>
I see that has changed then. In the past mmcrnsd would simply fail
without a server list passed to it.
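(For anyone comparing with current releases: in the stanza file passed to mmcrnsd the servers= attribute is now optional, and leaving it out creates an NSD with no NSD server list, i.e. every node is expected to reach the LUN directly. The device, NSD and node names below are made up for illustration:)

```shell
cat > nsd.stanza <<'EOF'
# servers= omitted: all nodes must see /dev/mapper/lun01 over the SAN
%nsd: device=/dev/mapper/lun01 nsd=nsd01 usage=dataAndMetadata failureGroup=1

# servers= present: nodeA/nodeB proxy I/O for nodes without direct access
%nsd: device=/dev/mapper/lun02 nsd=nsd02 servers=nodeA,nodeB usage=dataAndMetadata failureGroup=2
EOF

mmcrnsd -F nsd.stanza
```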
If you have been a long-term GPFS user (I started with 2.2 on a site that
had been running since the 1.x days) you are not always aware of things
that have changed.
JAB.
--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
------------------------------
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
End of gpfsug-discuss Digest, Vol 101, Issue 1
**********************************************