[gpfsug-discuss] Spectrum Protect and disk pools

Simon Thompson S.J.Thompson at bham.ac.uk
Mon Jan 4 13:52:05 GMT 2021


Hi Jordi,

Thanks, yes it is a disk pool:

Protect: TSM01>q stg BACKUP_DISK f=d

                     Storage Pool Name: BACKUP_DISK
                     Storage Pool Type: Primary
                     Device Class Name: DISK
                     Storage Type: DEVCLASS
…
                     Next Storage Pool: BACKUP_ONSTAPE

So it is a disk pool … though it is made up of multiple disk files …

/tsmdisk/stgpool/tsminst3/bkup_diskvol01.dsm     BACKUP_DISK     DISK             200.0 G       0.0     On-Line
/tsmdisk/stgpool/tsminst3/bkup_diskvol02.dsm     BACKUP_DISK     DISK             200.0 G       0.0     On-Line
/tsmdisk/stgpool/tsminst3/bkup_diskvol03.dsm     BACKUP_DISK     DISK             200.0 G       0.0     On-Line

Will look into the FILE pool as this sounds like it might be less single-threaded than our current setup 😊

Thanks

Simon

From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of "jordi.caubet at es.ibm.com" <jordi.caubet at es.ibm.com>
Reply to: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Date: Monday, 4 January 2021 at 13:36
To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Spectrum Protect and disk pools

Simon,

Which kind of storage pool are you using, DISK or FILE? From your mail I understand it is a DISK pool. A DISK pool does not behave the same as a FILE pool.

A DISK pool is limited by the number of client nodes or the MIGProcess setting, whichever is lower, as the document states. Using a proxy lets you back up in parallel from multiple nodes into the storage pool, but from Protect's perspective it is a single node: even though multiple nodes are sending data, they all run "asnodename", so Protect sees one node.
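As a worked example (assuming all the data in the pool is owned by the single proxy target node): with one owning node and MIGProcess=6, the effective parallelism is min(1, 6) = 1 migration process, i.e. one tape drive.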

If you use a FILE pool, you can define the number of volumes within the pool, and when migrating to tape Protect will migrate volumes in parallel up to the MIGProcess limit. So the parallelism would be the minimum of the number of volumes and the MIGProcess value.
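For illustration, a minimal sketch of what a FILE pool setup could look like (the device class and pool names, directory, sizes and volume counts are illustrative, not taken from your configuration; NEXTSTGPOOL reuses the tape pool from your q stg output):

define devclass BACKUP_FILEDEV devtype=file directory=/tsmdisk/stgpool maxcapacity=20G mountlimit=12
define stgpool BACKUP_FILE BACKUP_FILEDEV maxscratch=50 nextstgpool=BACKUP_ONSTAPE migprocess=6

With, say, 30 filled volumes and MIGProcess=6, up to six volumes could then migrate to tape concurrently.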

I know there are people with deeper Protect skills on this mailing list, so feel free to add to or correct me.

Best Regards,
--
Jordi Caubet Serrabou
IBM Storage Client Technical Specialist (IBM Spain)
Ext. Phone: (+34) 679.79.17.84 (internal 55834)
E-mail: jordi.caubet at es.ibm.com


-----gpfsug-discuss-bounces at spectrumscale.org wrote: -----
To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
From: Simon Thompson
Sent by: gpfsug-discuss-bounces at spectrumscale.org
Date: 01/04/2021 01:21PM
Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Protect and disk pools


Hi All,


We use Spectrum Protect (TSM) to back up our Scale filesystems. We have the backup set up to use multiple nodes with the PROXY node function turned on (and, to some extent, we also use multiple target servers).


This all feels nice and parallel. On the TSM servers, we have disk pools for "small" files to drop into (I think we set this to anything smaller than 20GB) to prevent lots of small files stalling tape drive writes.
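For context, such a size cut-off is typically configured with the storage pool MAXSIZE parameter; a minimal sketch, assuming our pool name and the 20GB figure mentioned above:

update stgpool BACKUP_DISK maxsize=20G

Files larger than MAXSIZE then bypass the disk pool and go straight to the next storage pool (tape).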


Whilst digging into why we have slow backups at times, we found that the disk pool empties with a single thread (one drive). And looking at the docs:
https://www.ibm.com/support/pages/concurrent-migration-processes-and-constraints


This implies that migration parallelism is limited by the number of client nodes with data stored in the pool, i.e. because we have one (proxy) node, we are essentially limited to a single thread streaming out of the disk pool when it fills.


Have we understood this correctly? If so, this appears to make the whole purpose of PROXY nodes somewhat pointless if you have lots of small files. Or is there some other setting we should be looking at to increase the number of threads when the disk pool is emptying? (The disk pool itself has Migration Processes: 6.)
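(A quick way to see how many migration threads are actually running is the standard admin command:

query process

Each active migration shows up as a separate process in the output.)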


Thanks


Simon