[gpfsug-discuss] Configuration advice

IBM Spectrum Scale scale at us.ibm.com
Mon Feb 19 14:00:49 GMT 2018


As I think you understand, we can only provide general guidance regarding 
your questions.  If you want a detailed examination of your requirements 
and a proposal for a solution, you will need to engage the appropriate IBM 
services team.

My personal recommendation is to use as few file systems as possible, 
preferably just one.  The reason is that it makes general administration 
and storage management easier.  If you do use filesets, I suggest you use 
independent filesets because they offer more administrative control than 
dependent filesets.  As for the number of nodes in the cluster, that 
depends on your requirements for performance and availability.  If you 
have only 2, you will need a tiebreaker disk to resolve quorum issues 
should the network between the nodes have problems.  If you intend to 
continue to use HSM, I would suggest you use the GPFS policy engine to 
drive the migrations because it should be more efficient than using HSM 
directly.
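
To make the fileset and quorum points concrete, here is a minimal sketch. 
The file system name (fs1), fileset name, junction path and NSD name are 
placeholders only, so please check the mmcrfileset, mmlinkfileset and 
mmchconfig documentation for your release before adapting it:

    # Create an independent fileset (its own inode space) and link it
    # into the file system at a junction path
    mmcrfileset fs1 archive1 --inode-space new
    mmlinkfileset fs1 archive1 -J /gpfs/fs1/archive1

    # On a 2-node cluster, designate a tiebreaker disk so quorum can be
    # resolved if the network between the nodes has problems
    mmchconfig tiebreakerDisks="nsd_tb1"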
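
Driving the migrations with the policy engine generally means defining an 
external pool that is serviced by HSM and running mmapplypolicy against 
the file system.  The rules below are an illustrative sketch only; the 
pool names, threshold values and EXEC script path are assumptions and 
must be replaced with the interface script provided by your TSM for Space 
Management installation:

    /* policy.rules -- illustrative sketch only */
    RULE EXTERNAL POOL 'hsm'
         EXEC '/var/mmfs/etc/mmpolicyExec-hsm.sample'   /* placeholder script path */
    RULE 'ToTape' MIGRATE FROM POOL 'system'
         THRESHOLD(85,70)
         WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
         TO POOL 'hsm'

Running it with -I test first shows what the policy would select without 
moving any data:

    mmapplypolicy fs1 -P policy.rules -I test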

Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale 
(GPFS), then please post it to the public IBM developerWorks Forum at 
https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479
. 

If your query concerns a potential software error in Spectrum Scale (GPFS) 
and you have an IBM software maintenance contract please contact 
1-800-237-5511 in the United States or your local IBM Service Center in 
other countries. 

The forum is informally monitored as time permits and should not be used 
for priority messages to the Spectrum Scale (GPFS) team.



From:   Pawel Dziekonski <dzieko at wcss.pl>
To:     gpfsug-discuss at spectrumscale.org
Date:   02/12/2018 10:18 AM
Subject:        [gpfsug-discuss] Configuration advice
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hi All,

I inherited 2 separate GPFS machines from the previous admin.
All hardware and software is old, so I want to switch to new
servers, new disk arrays, a new GPFS version and a new GPFS
"design". 

Each machine has 4 GPFS filesystems and runs a TSM HSM
client that migrates data to tapes using separate TSM
servers:
GPFS+HSM no 1 -> TSM server no 1 -> tapes
GPFS+HSM no 2 -> TSM server no 2 -> tapes

Migration is done by HSM (not GPFS policies).

All filesystems are used for archiving results from the HPC
system and other files (a kind of backup - don't ask...).
Data is written by users via NFS shares. There are 8 NFS
mount points corresponding to 8 GPFS filesystems, but there
is no real reason for that.

4 filesystems are large and heavily used; the remaining 4
are almost unused.

The question is how to configure the new GPFS infrastructure?
My initial impression is that I should create a GPFS cluster
of 2+ nodes and export NFS using CES.  The most important
question is how many filesystems do I need? Maybe just 2 and
8 filesets?
Or how do I do that in a flexible way and not lock myself
into a stupid configuration?
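
To make it concrete, something like the sketch below is what I have in
mind for the NFS part. The command names are taken from the Scale docs
as far as I understand them, but the address, export path and client
options are just placeholders, so treat it as a rough idea rather than
a recipe:

    # enable NFS on the CES nodes and add a protocol address
    mmces service enable NFS
    mmces address add --ces-ip 192.0.2.10

    # export one fileset junction per former NFS mount point
    mmnfs export add /gpfs/fs1/proj1 --client "10.0.0.0/24(Access_Type=RW)"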

any hints?
thanks, Pawel

PS. I will recall all data and copy it to the new
infrastructure.  Yes, that's the way I want
to do it. :)

-- 
Pawel Dziekonski <pawel.dziekonski at wcss.pl>, 
http://www.wcss.pl

Wroclaw Centre for Networking & Supercomputing, HPC Department
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss







