[gpfsug-discuss] ILM and Backup Question
Stephan Graf
st.graf at fz-juelich.de
Tue Nov 10 07:53:19 GMT 2015
Hi Wayne,
just to come back to the mmbackup performance: here is how we call it,
and the performance results:
MTHREADS=1
QOPT="" # we check the last run and set this to '-q' if required
/usr/lpp/mmfs/bin/mmbackup /$FS -S $SNAPFILE -g /work/root/mmbackup -a 4 \
  $QOPT -m $MTHREADS -B 1000 -N justtsms04c1 --noquote --tsm-servers home -v
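For readers less familiar with the options, here is a commented sketch of an equivalent invocation. The file system name, snapshot name, and node name below are placeholders for illustration, not our real configuration, and the option comments paraphrase our understanding of the mmbackup flags:

```shell
#!/bin/sh
# Sketch of the mmbackup call above; values are placeholders.
FS=homeb
SNAPFILE=/homeb/snapfile        # placeholder snapshot name
MTHREADS=1                      # -m: execution threads per node
QOPT=""                         # '-q' rebuilds the shadow DB by querying the TSM server

# -S : back up from a snapshot for a consistent point-in-time view
# -g : global work directory shared by the nodes doing the backup
# -a : parallel inode-scan threads for the policy scan
# -B : maximum number of files handed to each dsmc transaction
# -N : node(s) on which the backup runs (a single node in our case)
CMD="/usr/lpp/mmfs/bin/mmbackup /$FS -S $SNAPFILE -g /work/root/mmbackup \
-a 4 $QOPT -m $MTHREADS -B 1000 -N node1 --noquote --tsm-servers home -v"
echo "$CMD"
```

We keep the command in a variable only so a wrapper script can log it before running it; calling mmbackup directly works the same way.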
--------------------------------------------------------
mmbackup: Backup of /homeb begins at Mon Nov 9 00:03:30 MEZ 2015.
--------------------------------------------------------
...
Mon Nov 9 00:03:35 2015 mmbackup:Scanning file system homeb
Mon Nov 9 03:07:17 2015 mmbackup:File system scan of homeb is complete.
Mon Nov 9 03:07:17 2015 mmbackup:Calculating backup and expire lists
for server home
Mon Nov 9 03:07:17 2015 mmbackup:Determining file system changes for
homeb [home].
Mon Nov 9 03:44:33 2015 mmbackup:changed=126305, expired=10086,
unsupported=0 for server [home]
Mon Nov 9 03:44:33 2015 mmbackup:Finished calculating lists [126305
changed, 10086 expired] for server home.
Mon Nov 9 03:44:33 2015 mmbackup:Sending files to the TSM server
[126305 changed, 10086 expired].
Mon Nov 9 03:44:33 2015 mmbackup:Performing expire operations
Mon Nov 9 03:45:32 2015 mmbackup:Completed policy expiry run with 0
policy errors, 0 files failed, 0 severe errors, returning rc=0.
Mon Nov 9 03:45:32 2015 mmbackup:Policy for expiry returned 0 Highest
TSM error 0
Mon Nov 9 03:45:32 2015 mmbackup:Performing backup operations
Mon Nov 9 04:54:29 2015 mmbackup:Completed policy backup run with 0
policy errors, 0 files failed, 0 severe errors, returning rc=0.
Mon Nov 9 04:54:29 2015 mmbackup:Policy for backup returned 0 Highest
TSM error 0
Total number of objects inspected: 137562
Total number of objects backed up: 127476
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 10086
Total number of objects failed: 0
Total number of bytes transferred: 427 GB
Total number of objects encrypted: 0
Total number of bytes inspected: 459986708656
Total number of bytes transferred: 459989351070
Mon Nov 9 04:54:29 2015 mmbackup:analyzing: results from home.
Mon Nov 9 04:54:29 2015 mmbackup:Analyzing audit log file
/homeb/mmbackup.audit.homeb.home
Mon Nov 9 05:02:46 2015 mmbackup:updating /homeb/.mmbackupShadow.1.home
with /homeb/.mmbackupCfg/tmpfile2.mmbackup.homeb
Mon Nov 9 05:02:46 2015 mmbackup:Copying updated shadow file to the TSM
server
Mon Nov 9 05:03:51 2015 mmbackup:Done working with files for TSM
Server: home.
Mon Nov 9 05:03:51 2015 mmbackup:Completed backup and expire jobs.
Mon Nov 9 05:03:51 2015 mmbackup:TSM server home
had 0 failures or excluded paths and returned 0.
Its shadow database has been updated. Shadow DB state:updated
Mon Nov 9 05:03:51 2015 mmbackup:Completed successfully. exit 0
----------------------------------------------------------
mmbackup: Backup of /homeb completed successfully at Mon Nov 9 05:03:51
MEZ 2015.
----------------------------------------------------------
Stephan
On 10/28/15 14:36, Wayne Sawdon wrote:
>
> You have to use both options even if -N is only the local node. Sorry,
>
> -Wayne
>
>
>
>
> From: Stephan Graf <st.graf at fz-juelich.de>
> To: <gpfsug-discuss at spectrumscale.org>
> Date: 10/28/2015 01:06 AM
> Subject: Re: [gpfsug-discuss] ILM and Backup Question
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>
> ------------------------------------------------------------------------
>
>
>
> Hi Wayne!
>
> We are using -g, and we only want to run it on one node, so we don't
> use the -N option.
>
> Stephan
>
> On 10/27/15 16:25, Wayne Sawdon wrote:
>
>
> > From: Stephan Graf <st.graf at fz-juelich.de>
>
> > We are running the mmbackup on an AIX system
> > oslevel -s
> > 6100-07-10-1415
> > Current GPFS build: "4.1.0.8 ".
> >
> > So we only use one node for the policy run.
> >
>
> Even on one node you should see a speedup using -g and -N.
>
> -Wayne
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
>
>
> ------------------------------------------------------------------------------------------------
> ------------------------------------------------------------------------------------------------
> Forschungszentrum Juelich GmbH
> 52425 Juelich
> Sitz der Gesellschaft: Juelich
> Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
> Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
> Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
> Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
> Prof. Dr. Sebastian M. Schmidt
> ------------------------------------------------------------------------------------------------
> ------------------------------------------------------------------------------------------------