[gpfsug-discuss] mmbackup [--tsm-servers TSMServer[, TSMServer...]]

Jaime Pinto pinto at scinet.utoronto.ca
Tue Feb 11 21:44:07 GMT 2020


Hi Mark,
Just a follow-up to your suggestion from a few months ago.

I finally got to the point where I run 2 independent backups of the same path to 2 servers, and they are pretty even, each finishing within 4 hours when 
serialized.

I would now like to use a single mmbackup instance to back up to the 2 servers at the same time, using the --tsm-servers option; however, the option is 
not being accepted/recognized (see below).

So, what is the proper syntax for this option?

Thanks
Jaime

# /usr/lpp/mmfs/bin/mmbackup /gpfs/fs1/home -N tapenode3-ib ‐‐tsm‐servers TAPENODE3,TAPENODE4 -s /dev/shm --tsm-errorlog $tmpDir/home-tsm-errorlog 
--scope inodespace -v -a 8 -L 2
mmbackup: Incorrect extra argument: ‐‐tsm‐servers
Usage:
   mmbackup {Device | Directory} [-t {full | incremental}]
            [-N {Node[,Node...] | NodeFile | NodeClass}]
            [-g GlobalWorkDirectory] [-s LocalWorkDirectory]
            [-S SnapshotName] [-f] [-q] [-v] [-d]
            [-a IscanThreads] [-n DirThreadLevel]
            [-m ExecThreads | [[--expire-threads ExpireThreads] [--backup-threads BackupThreads]]]
            [-B MaxFiles | [[--max-backup-count MaxBackupCount] [--max-expire-count MaxExpireCount]]]
            [--max-backup-size MaxBackupSize] [--qos QosClass] [--quote | --noquote]
            [--rebuild] [--scope {filesystem | inodespace}]
            [--backup-migrated | --skip-migrated] [--tsm-servers TSMServer[,TSMServer...]]
            [--tsm-errorlog TSMErrorLogFile] [-L n] [-P PolicyFile]

Changing the order of the options/arguments makes no difference.
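One thing worth ruling out (a diagnostic sketch, not a confirmed cause): in the transcript above the dashes of ‐‐tsm‐servers are Unicode hyphens (U+2010) rather than plain ASCII "-", which the option parser would reject exactly like this, reporting the option as an extra argument. A quick byte-level check:

```shell
# Count bytes outside the ASCII range in the option string; a nonzero
# count means the dashes (or other characters) are not plain ASCII and
# should be retyped by hand. The string below is pasted from the
# failing command above.
opt='‐‐tsm‐servers'
nonascii=$(printf '%s' "$opt" | LC_ALL=C tr -d '\000-\177' | wc -c)
if [ "$nonascii" -gt 0 ]; then
    echo "non-ASCII characters found; retype the dashes as plain '-'"
fi
```

(U+2010 encodes as three bytes in UTF-8, so each stray hyphen adds 3 to the count.)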

Even when I explicitly specify only one server, mmbackup still doesn't recognize the --tsm-servers option (it treats it as an extra argument):

# /usr/lpp/mmfs/bin/mmbackup /gpfs/fs1/home -N tapenode3-ib ‐‐tsm‐servers TAPENODE3 -s /dev/shm --tsm-errorlog $tmpDir/home-tsm-errorlog --scope 
inodespace -v -a 8 -L 2
mmbackup: Incorrect extra argument: ‐‐tsm‐servers
Usage:
   mmbackup {Device | Directory} [-t {full | incremental}]
            [-N {Node[,Node...] | NodeFile | NodeClass}]
            [-g GlobalWorkDirectory] [-s LocalWorkDirectory]
            [-S SnapshotName] [-f] [-q] [-v] [-d]
            [-a IscanThreads] [-n DirThreadLevel]
            [-m ExecThreads | [[--expire-threads ExpireThreads] [--backup-threads BackupThreads]]]
            [-B MaxFiles | [[--max-backup-count MaxBackupCount] [--max-expire-count MaxExpireCount]]]
            [--max-backup-size MaxBackupSize] [--qos QosClass] [--quote | --noquote]
            [--rebuild] [--scope {filesystem | inodespace}]
            [--backup-migrated | --skip-migrated] [--tsm-servers TSMServer[,TSMServer...]]
            [--tsm-errorlog TSMErrorLogFile] [-L n] [-P PolicyFile]



I defined the 2 server stanzas as follows:

# cat dsm.sys
SERVERNAME TAPENODE3
        SCHEDMODE               PROMPTED
        ERRORLOGRETENTION       0 D
        TCPSERVERADDRESS        10.20.205.51
        NODENAME                home
        COMMMETHOD              TCPIP
        TCPPort                 1500
        PASSWORDACCESS          GENERATE
        TXNBYTELIMIT            1048576

SERVERNAME TAPENODE4
        SCHEDMODE               PROMPTED
        ERRORLOGRETENTION       0 D
        TCPSERVERADDRESS        192.168.94.128
        NODENAME                home
        COMMMETHOD              TCPIP
        TCPPort                 1500
        PASSWORDACCESS          GENERATE
        TXNBYTELIMIT            1048576
        TCPBuffsize             512
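As a sanity check (a small sketch, which assumes dsm.sys is in the current directory; the client normally locates it through its own install/config paths), the stanza names can be listed to confirm they match what is passed to --tsm-servers:

```shell
# Print the server stanza names defined in dsm.sys; these are the
# names that the --tsm-servers list must match.
awk '/^SERVERNAME/ {print $2}' dsm.sys
```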

On 2019-11-03 8:56 p.m., Jaime Pinto wrote:
> 
> 
> On 11/3/2019 20:24:35, Marc A Kaplan wrote:
>> Please show us the 2 or 3 mmbackup commands that you would like to run concurrently.
> 
> Hey Marc,
> They would be pretty similar, with the only difference being the target TSM server, determined by sourcing a different dsmenv1 (2 or 3) prior to the 
> start of each instance, each with its own dsm.sys (3 wrappers):
> (source dsmenv1; /usr/lpp/mmfs/bin/mmbackup /gpfs/fs1/home -N tapenode3-ib -s /dev/shm --tsm-errorlog $tmpDir/home-tsm-errorlog  -g 
> /gpfs/fs1/home/.mmbackupCfg1  --scope inodespace -v -a 8 -L 2)
> (source dsmenv2; /usr/lpp/mmfs/bin/mmbackup /gpfs/fs1/home -N tapenode3-ib -s /dev/shm --tsm-errorlog $tmpDir/home-tsm-errorlog  -g 
> /gpfs/fs1/home/.mmbackupCfg2  --scope inodespace -v -a 8 -L 2)
> (source dsmenv3; /usr/lpp/mmfs/bin/mmbackup /gpfs/fs1/home -N tapenode3-ib -s /dev/shm --tsm-errorlog $tmpDir/home-tsm-errorlog  -g 
> /gpfs/fs1/home/.mmbackupCfg3  --scope inodespace -v -a 8 -L 2)
> 
> I was playing with -L (to control the policy), but you bring up a very good point I had not experimented with: a single traversal feeding 
> multiple target servers. It may be just what I need. I'll try this next.
> 
> Thank you very much,
> Jaime
> 
>>
>> Peeking into the script, I find:
>>
>> if [[ $scope == "inode-space" ]]
>> then
>> deviceSuffix="${deviceName}.${filesetName}"
>> else
>> deviceSuffix="${deviceName}"
>>
>>
>> I believe mmbackup is designed to allow concurrent backup of different independent filesets within the same filesystem, Or different filesystems...
>>
>> And a single mmbackup instance can drive several TSM servers, which can be named with an option or in the dsm.sys file:
>>
>> # --tsm-servers TSMserver[,TSMserver...]
>> # List of TSM servers to use instead of the servers in the dsm.sys file.
>>
>>
>>
>>
>> From: Jaime Pinto <pinto at scinet.utoronto.ca>
>> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>> Date: 11/01/2019 07:40 PM
>> Subject: [EXTERNAL] [gpfsug-discuss] mmbackup -g GlobalWorkDirectory not being followed
>> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>>
>>
>>
>>
>>
>> How can I force secondary processes to use the folder instructed by the -g option?
>>
>> I started one mmbackup with -g /gpfs/fs1/home/.mmbackupCfg1 and another with -g /gpfs/fs1/home/.mmbackupCfg2 (and another with -g 
>> /gpfs/fs1/home/.mmbackupCfg3 ...)
>>
>> However, I'm still seeing transient files being written into a "/gpfs/fs1/home/.mmbackupCfg" folder (created by magic!). This absolutely cannot 
>> happen, since it mixes up work files from multiple mmbackup instances targeting different TSM servers.
>>
>> See below the "-f /gpfs/fs1/home/.mmbackupCfg/prepFiles" created by mmapplypolicy (forked by mmbackup):
>>
>> DEBUGtsbackup33: /usr/lpp/mmfs/bin/mmapplypolicy "/gpfs/fs1/home" -g /gpfs/fs1/home/.mmbackupCfg2 -N tapenode3-ib -s /dev/shm -L 2 --qos maintenance 
>> -a 8  -P /var/mmfs/mmbackup/.mmbackupRules.fs1.home -I prepare -f /gpfs/fs1/home/.mmbackupCfg/prepFiles --irule0 --sort-buffer-size=5% --scope 
>> inodespace
>>
>>
>> Basically, I don't want a "/gpfs/fs1/home/.mmbackupCfg" folder to ever exist. Otherwise I'll be forced to serialize these backups to keep the 
>> different mmbackup instances from tripping over each other, and serializing is very undesirable.
>>
>> Thanks
>> Jaime
>>
>>
>>




          ************************************
           TELL US ABOUT YOUR SUCCESS STORIES
          http://www.scinethpc.ca/testimonials
          ************************************
---
Jaime Pinto - Storage Analyst
SciNet HPC Consortium - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.ca
University of Toronto
661 University Ave. (MaRS), Suite 1140
Toronto, ON, M5G1M1
P: 416-978-2755
C: 416-505-1477

