[gpfsug-discuss] Strategies - servers with local SAS disks
Carl Zetie
carlz at us.ibm.com
Wed Dec 7 17:47:52 GMT 2016
We don't allow mixing of different licensing models (i.e. socket and
capacity) within a single cluster*. As we worked through the implications,
we realized it would be just too complicated to determine how to license
any non-NSD nodes (management, CES, clients, etc.). In the socket model
they are chargeable, in the capacity model they are not, and while we
could have made up some rules, they would have added even more complexity
to Scale licensing.
This in turn is why we "grandfathered in" those customers already on
Advanced Edition, so that they don't have to convert existing clusters to
the new metric unless or until they want to. They can continue to buy
Advanced Edition.
The other thing we wanted to do with the capacity metric was to make the
licensing more friendly to architectural best practices or design choices.
So now you can have whatever management, gateway, etc. servers you need
without paying for additional server licenses. In particular, client-only
clusters cost nothing, and you don't have to keep track of clients if you
have a virtual environment where clients come and go rapidly.
I'm always happy to answer other questions about licensing.
regards,
Carl Zetie
*OK, there is one exception involving future ESS models and existing
clusters. If this is you, please have a conversation with your account
team.
Carl Zetie
Program Director, OM for Spectrum Scale, IBM
(540) 882 9353 | 15750 Brookhill Ct, Waterford VA 20197
carlz at us.ibm.com
From: gpfsug-discuss-request at spectrumscale.org
To: gpfsug-discuss at spectrumscale.org
Date: 12/07/2016 09:59 AM
Subject: gpfsug-discuss Digest, Vol 59, Issue 20
Today's Topics:
   1. Re: Any experience running native GPFS 4.2.1 on Xeon Phi node booted with CentOS 7.3? (Felipe Knop)
   2. Re: Any experience running native GPFS 4.2.1 on Xeon Phi node booted with CentOS 7.3? (David D. Johnson)
   3. Re: Strategies - servers with local SAS disks (Simon Thompson, Research Computing - IT Services)
----------------------------------------------------------------------
Message: 1
Date: Wed, 7 Dec 2016 09:37:15 -0500
From: "Felipe Knop" <knop at us.ibm.com>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Any experience running native GPFS 4.2.1 on Xeon Phi node booted with CentOS 7.3?
All,
The SMAP issue has been addressed in GPFS 4.2.1.1. See Q2.4 in the FAQ:
http://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html
Felipe
----
Felipe Knop knop at us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314
From: Aaron Knister <aaron.knister at gmail.com>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 12/07/2016 09:25 AM
Subject: Re: [gpfsug-discuss] Any experience running native GPFS 4.2.1 on Xeon Phi node booted with CentOS 7.3?
I don't know if this applies here, but I seem to recall an issue with CentOS
7 (newer 3.x and later kernels), Broadwell processors, and GPFS, where GPFS
tripped over SMAP and the node would eventually get expelled. I think this
may be fixed in newer GPFS releases, but the workaround is to boot the
kernel with the nosmap parameter. Might be worth a try. I'm not clear on
whether SMAP is supported by the Xeon Phis.
-Aaron
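For anyone wanting to try that, here is a minimal sketch of adding nosmap
on a stock CentOS 7 / grub2 install, using the distribution's grubby tool
(adjust for other bootloaders; run as root):

    # Append nosmap to the kernel command line for every installed kernel
    grubby --update-kernel=ALL --args="nosmap"

    # After the next reboot, confirm the parameter took effect
    cat /proc/cmdline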
On Wed, Dec 7, 2016 at 5:34 AM <david_johnson at brown.edu> wrote:
IBM says it should work OK, but we are not so sure. We had node expels that
stopped when we turned off GPFS on that node. Has anyone had better luck?
-- ddj
Dave Johnson
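One quick way to confirm that expels are really what you are seeing,
assuming the default GPFS log location on the affected node:

    # Expel events are recorded in the GPFS log
    grep -i expel /var/adm/ras/mmfs.log.latest

    # Network/RPC health, since connectivity trouble is a common cause of expels
    mmdiag --network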
------------------------------
Message: 2
Date: Wed, 7 Dec 2016 09:47:46 -0500
From: "David D. Johnson" <david_johnson at brown.edu>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Any experience running native GPFS 4.2.1 on Xeon Phi node booted with CentOS 7.3?
Yes, we saw the SMAP issue on earlier releases and added the kernel
command-line option to disable it. That is not the issue for this node,
though: the Phi processors do not support that CPU feature.
-- ddj
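That is easy to check from /proc/cpuinfo, where the smap flag is simply
absent on CPUs lacking the feature:

    # Prints "smap" once if the running CPU supports it, nothing otherwise
    grep -m1 -o '\bsmap\b' /proc/cpuinfo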
> On Dec 7, 2016, at 9:37 AM, Felipe Knop <knop at us.ibm.com> wrote:
>
> All,
>
> The SMAP issue has been addressed in GPFS 4.2.1.1. See Q2.4 in the FAQ:
> http://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html
>
> Felipe
>
> [...]
------------------------------
Message: 3
Date: Wed, 7 Dec 2016 14:58:39 +0000
From: "Simon Thompson (Research Computing - IT Services)"
<S.J.Thompson at bham.ac.uk>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Strategies - servers with local SAS disks
I was going to ask about this; I recall "grandfathering" being mentioned,
and also mixed deployments.
Would that mean you could license one set of NSD servers per TB (hosting
only one file system) that co-existed in a cluster with other traditionally
licensed systems?
I can see that NSD servers with different license models hosting the same
file system would be problematic, but what if it were a different file
system?
Simon
From: Daniel Kidger <daniel.kidger at uk.ibm.com>
Reply-To: gpfsug-discuss at spectrumscale.org
Date: Wednesday, 7 December 2016 at 12:36
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Strategies - servers with local SAS disks
The new volume-based licensing option is, I agree, quite pricey per TB at
first sight, but it could make some configuration choices a lot cheaper
than they used to be under the Client:FPO:Server model.
------------------------------
End of gpfsug-discuss Digest, Vol 59, Issue 20
**********************************************