[gpfsug-discuss] Spectrum Protect and disk pools

IBM Spectrum Scale scale at us.ibm.com
Mon Jan 4 13:37:50 GMT 2021


Hi Diane,

Can you help Simon with the query below? Otherwise, would you know who would
be the best person to contact here?


Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------

If you feel that your question can benefit other users of Spectrum Scale
(GPFS), then please post it to the public IBM developerWorks Forum at
https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.


If your query concerns a potential software error in Spectrum Scale (GPFS)
and you have an IBM software maintenance contract, please contact
1-800-237-5511 in the United States or your local IBM Service Center in
other countries.

The forum is informally monitored as time permits and should not be used
for priority messages to the Spectrum Scale (GPFS) team.



From:	Simon Thompson <S.J.Thompson at bham.ac.uk>
To:	"gpfsug-discuss at spectrumscale.org"
            <gpfsug-discuss at spectrumscale.org>
Date:	04-01-2021 05.51 PM
Subject:	[EXTERNAL] [gpfsug-discuss] Spectrum Protect and disk pools
Sent by:	gpfsug-discuss-bounces at spectrumscale.org



Hi All,

We use Spectrum Protect (TSM) to back up our Scale filesystems. We have the
backup set up to use multiple nodes with the PROXY node function turned on
(and, to some extent, we also use multiple target servers).

This all feels nice and parallel. On the TSM servers, we have disk pools
for "small" files to drop into (I think we set the threshold so that
anything smaller than 20GB lands there) to prevent lots of small files from
stalling tape drive writes.
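[Editor's note: the size-based routing described above is typically controlled by the MAXSIZE parameter on the disk storage pool; files above that limit skip the disk pool and go to the next pool in the hierarchy. A minimal sketch of checking and setting it with the `dsmadmc` administrative client follows; the pool name DISKPOOL is a placeholder for your environment, and this assumes a classic primary DISK-class pool feeding a tape pool.]

```shell
# Connect with the Spectrum Protect administrative client
# (credentials and pool name are assumptions for this sketch).
dsmadmc -id=admin -password=secret

# Show full details for the disk pool, including MAXSIZE,
# NEXTSTGPOOL, and the migration thresholds.
QUERY STGPOOL DISKPOOL FORMAT=DETAILED

# Route only files under 20GB into the disk pool; larger files
# go straight to the next pool (e.g. tape) in the hierarchy.
UPDATE STGPOOL DISKPOOL MAXSIZE=20G
```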

Whilst digging into why we have slow backups at times, we found that the
disk pool empties with a single thread (one drive). And looking at the
docs:
https://www.ibm.com/support/pages/concurrent-migration-processes-and-constraints

This implies that the number of concurrent migration processes is limited
by the number of client nodes with data stored in the pool; i.e. because we
have one node (via PROXY nodes), we are essentially limited to a single
thread streaming out of the disk pool when it is full.

Have we understood this correctly? If so, this appears to make the whole
purpose of PROXY nodes somewhat pointless if you have lots of small files.
Or is there some other setting we should be looking at to increase the
number of threads when the disk pool is emptying? (The disk pool itself has
Migration Processes: 6.)
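[Editor's note: a minimal sketch of how to inspect the settings in question with `dsmadmc`. Per the constraint in the linked IBM document, each migration process works on the data of one node/filespace at a time, so raising MIGPROCESS beyond the number of nodes with data in the pool has no effect; the pool name DISKPOOL is a placeholder.]

```shell
# Check the configured number of migration processes and the
# migration thresholds on the disk pool.
QUERY STGPOOL DISKPOOL FORMAT=DETAILED

# Raise the process count (only helps if multiple nodes have
# data in the pool -- one process per node is the ceiling).
UPDATE STGPOOL DISKPOOL MIGPROCESS=6

# See which nodes actually have data occupying the pool, which
# bounds the achievable migration parallelism.
QUERY OCCUPANCY * STGPOOL=DISKPOOL
```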

Thanks

Simon
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




