[gpfsug-discuss] GPFS+TSM+HSM: staging vs. migration priority

Dominic Mueller-Wicke01 dominic.mueller at de.ibm.com
Tue Mar 8 17:46:11 GMT 2016



Hi,

in all cases, a recall request is handled transparently for the user at the
time a migrated file is accessed. This cannot be prevented and has two
downsides: a) the space used in the file system increases, and b) random
access to storage media on the Spectrum Protect server occurs. Newer
versions of Spectrum Protect for Space Management provide a so-called
tape-optimized recall method that can reduce the impact on the system
(especially on the Spectrum Protect server).
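As a rough illustration, a tape-optimized recall is typically driven from a
file list rather than from individual transparent recalls, so the HSM client
can order the requests by tape and position. The sketch below assumes a
dsmrecall client that accepts a file list; the exact option names and the
path of the list file are placeholders to verify against your Space
Management client documentation.

    # Collect the migrated files that need to come back (hypothetical path).
    find /gpfs/fs0/project -type f > /tmp/recall.list

    # Hand the whole list to the HSM client so it can recall in
    # tape-optimized order instead of one transparent recall per open();
    # the -filelist option is assumed to exist in this client version.
    dsmrecall -filelist=/tmp/recall.list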
If the problem was that the file system ran out of space when the recalls
came in, I would recommend lowering the threshold settings for the file
system and increasing the number of premigrated files. This allows space to
be freed very quickly when needed. If you have not used policy-based
threshold migration so far, I recommend using it; it is significantly faster
than the classical HSM-based threshold migration approach.
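To make that concrete, a policy-based setup usually combines an external
HSM pool, a MIGRATE rule with high/low/premigrate thresholds, and a callback
that launches the policy on the lowDiskSpace event. The snippet below is
only a sketch: the pool names, script path, and threshold values are
assumptions to adapt to your environment.

    /* Policy file sketch (e.g. /var/mmfs/etc/migrate.policy):
       define the TSM/HSM target as an external pool. The exec script
       name is the sample shipped with GPFS and may differ per release. */
    RULE EXTERNAL POOL 'hsm'
         EXEC '/var/mmfs/etc/mmpolicyExec-hsm.sample' OPTS '-v'

    /* Start migrating at 90% pool occupancy, stop at 80%, and keep
       premigrating (copied to tape but still resident on disk) down to
       60%, so that later stubbing only has to release blocks and space
       is freed very quickly. */
    RULE 'tohsm' MIGRATE FROM POOL 'system'
         THRESHOLD(90,80,60)
         WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
         TO POOL 'hsm'

    # Trigger the policy automatically when the file system runs low on
    # space (shell command, run once per cluster):
    mmaddcallback MIGRATION --command /usr/lpp/mmfs/bin/mmstartpolicy \
         --event lowDiskSpace,noDiskSpace --parms "%eventName %fsName"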

Greetings, Dominic.

______________________________________________________________________________________________________________

Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical Lead |
+49 7034 64 32794 | dominic.mueller at de.ibm.com

Chairwoman of the Supervisory Board: Martina Koederitz; Management: Dirk
Wittkopp
Registered office: Böblingen; Registration court: Amtsgericht Stuttgart,
HRB 243294
----- Forwarded by Dominic Mueller-Wicke01/Germany/IBM on 08.03.2016 18:21
-----

From:	Jaime Pinto <pinto at scinet.utoronto.ca>
To:	gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:	08.03.2016 17:36
Subject:	[gpfsug-discuss] GPFS+TSM+HSM: staging vs. migration priority
Sent by:	gpfsug-discuss-bounces at spectrumscale.org



I'm wondering whether the new version of the "Spectrum Suite" will
allow us to set the priority of HSM migration higher than that of
staging.


I ask this because back in 2011, when we were still using Tivoli HSM
with GPFS, mixed migration and staging requests showed a very annoying
behavior: staging would always take precedence over migration. The end
result was that the GPFS file system would fill up to 100% and induce a
deadlock on the cluster, unless we identified all the user-driven stage
requests in time and killed them. We contacted IBM support a few times
asking for a way to fix this, and were told that behavior was built into
TSM. Back then we gave up on IBM's HSM primarily for this reason,
although performance was also a consideration (more on this in another
post).

We are now reconsidering HSM for a new deployment, but only if this
issue has been resolved (among a few others).

What has the experience out there been?

Thanks
Jaime




---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477

----------------------------------------------------------------
This message was sent using IMP at SciNet Consortium, University of
Toronto.


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




