[gpfsug-discuss] mmbackup -g GlobalWorkDirectory not being followed

Marc A Kaplan makaplan at us.ibm.com
Mon Nov 4 01:24:35 GMT 2019


Please show us the 2 or 3 mmbackup commands that you would like to run
concurrently.

Peeking into the script, I find:

if [[ $scope == "inode-space" ]]
then
  # inode-space scope: suffix includes the fileset name
  deviceSuffix="${deviceName}.${filesetName}"
else
  # filesystem scope: suffix is the device name only
  deviceSuffix="${deviceName}"
fi

I believe mmbackup is designed to allow concurrent backups of different
independent filesets within the same filesystem, or of different
filesystems...
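
For example, two inode-space scoped backups of different independent
filesets should be able to run side by side, since their work files get
distinct names. A rough sketch (the fileset junction paths and work
directories here are hypothetical):

# concurrent backups of two independent filesets of fs1, each with its
# own global work directory (all paths are illustrative only)
mmbackup /gpfs/fs1/fileset1 --scope inodespace -g /gpfs/fs1/.mmbackupCfg_f1 &
mmbackup /gpfs/fs1/fileset2 --scope inodespace -g /gpfs/fs1/.mmbackupCfg_f2 &
wait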

And a single mmbackup instance can drive several TSM servers, which can be
named with an option or in the dsm.sys file:

#    --tsm-servers TSMserver[,TSMserver...]
#        List of TSM servers to use instead of the servers in the
#        dsm.sys file.
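
So rather than one instance per server, a single run can fan out to all of
them. A minimal sketch (the server names are placeholders and must match
SErvername stanzas in dsm.sys):

# one mmbackup instance driving two TSM servers, with a single -g
# work directory; TSM_A and TSM_B are hypothetical stanza names
mmbackup /gpfs/fs1/home -t incremental --tsm-servers TSM_A,TSM_B \
    -g /gpfs/fs1/home/.mmbackupCfg1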





From:	Jaime Pinto <pinto at scinet.utoronto.ca>
To:	gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:	11/01/2019 07:40 PM
Subject:	[EXTERNAL] [gpfsug-discuss] mmbackup -g GlobalWorkDirectory not
            being followed
Sent by:	gpfsug-discuss-bounces at spectrumscale.org



How can I force the secondary processes to use the folder specified by the
-g option?

I started an mmbackup with -g /gpfs/fs1/home/.mmbackupCfg1, another with
-g /gpfs/fs1/home/.mmbackupCfg2 (and another with
-g /gpfs/fs1/home/.mmbackupCfg3 ...)

However, I'm still seeing transient files being written into a
"/gpfs/fs1/home/.mmbackupCfg" folder (created by magic!!!). This
absolutely cannot happen, since it mixes up work files from multiple
mmbackup instances aimed at different target TSM servers.

Note the "-f /gpfs/fs1/home/.mmbackupCfg/prepFiles" in the mmapplypolicy
command forked by mmbackup:

DEBUGtsbackup33: /usr/lpp/mmfs/bin/mmapplypolicy "/gpfs/fs1/home"
-g /gpfs/fs1/home/.mmbackupCfg2 -N tapenode3-ib -s /dev/shm -L 2 --qos
maintenance -a 8  -P /var/mmfs/mmbackup/.mmbackupRules.fs1.home -I prepare
-f /gpfs/fs1/home/.mmbackupCfg/prepFiles --irule0 --sort-buffer-size=5%
--scope inodespace


Basically, I never want a "/gpfs/fs1/home/.mmbackupCfg" folder to exist.
Otherwise I'll be forced to serialize these backups to keep the different
mmbackup instances from tripping over each other, and serializing is very
undesirable.

Thanks
Jaime



          ************************************
           TELL US ABOUT YOUR SUCCESS STORIES

http://www.scinethpc.ca/testimonials

          ************************************
---
Jaime Pinto - Storage Analyst
SciNet HPC Consortium - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.ca
University of Toronto
661 University Ave. (MaRS), Suite 1140
Toronto, ON, M5G1M1
P: 416-978-2755
C: 416-505-1477
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



