[gpfsug-discuss] mmbackup feature request

Alec anacreo at gmail.com
Wed Sep 6 18:59:24 BST 2023


I'll chime in with: we don't use mmbackup at all. We use NetBackup
accelerated backups instead, and it has worked well; we are able to do
daily incrementals and weekend fulls with hundreds of TBs in one filesystem and
hundreds of filesets.  The biggest challenge we face is defining the numerous
streams in the NetBackup policy to keep things parallel, but we manage that
with good record keeping in our change management.

When we have to do a 'rescan' or a BCP failover, it is a chore to reach
equilibrium again.

Our synthetic accelerated full backup runs at about 700 GB/hr for our
weekend fulls, so we finish well before most other, smaller, traditional
fileserver clients.

The best part is that this is one place where we are nearly just a regular old
commodity client, with ridiculously impressive stats.

Alec

On Wed, Sep 6, 2023, 10:19 AM Wahl, Edward <ewahl at osc.edu> wrote:

>
>
>   We have about 760-ish independent filesets.  As was mentioned in an
> earlier reply, this allows for individual fileset snapshotting and for
> running against different TSM servers.  We maintain a puppet-managed list
> that we use to divide up the filesets; automation helps us round-robin new
> filesets across the 4 backup servers as they are added, to balance things
> somewhat.   We maintain 7 days of snapshots on the filesystem we back up,
> and no snapshots or backups on our scratch space.
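>
> The round-robin part is essentially trivial; a minimal sketch of the
> assignment logic (not our actual puppet code; the server names and
> new_filesets.txt are placeholders) looks like this:
>
>   # Assign each newly added fileset to one of the 4 backup servers in turn.
>   servers=(tsm-backup01 tsm-backup02 tsm-backup03 tsm-backup04)
>   i=0
>   while read -r fileset; do
>       # Record which backup server owns this fileset.
>       echo "$fileset -> ${servers[i % 4]}"
>       i=$((i + 1))
>   done < new_filesets.txt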
>
>
>
> We hand the mmbackups out to 4 individual TSM backup clients, which do both
> our daily mmbackup runs and NetApp snapdiff backups for our user home
> directories.  Those feed to another 4 TSM servers doing the tape
> migrations.
>
> We’re sitting on roughly 20 PB of disk at this time, and we’re (very)
> roughly 50% occupied.
>
>
>
> One of our challenges recently was re-balancing all this for remote
> Disaster Recovery/Replication.  We ended up using colocation groups for the
> filesets in Spectrum Protect/TSM.  While scaling backup infrastructure can
> be hard, balancing hundreds of wildly differing filesets can be just as
> challenging.
>
>
>
> I’m happy to talk about these kinds of things here, or offline.  Drop me a
> line if you have additional questions.
>
>
>
> Ed Wahl
>
> Ohio Supercomputer Center
>
>
>
> *From:* gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org> *On Behalf Of *Christian
> Petersson
> *Sent:* Wednesday, September 6, 2023 5:45 AM
> *To:* gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
> *Subject:* Re: [gpfsug-discuss] mmbackup feature request
>
>
>
> Just a follow-up question: how do you back up multiple filesets?
>
> We have about 50 filesets to back up.  At the moment we have a text file
> that contains all of them and we run a for loop over it, but that is not at
> all scalable.
>
>
>
> Are there any better ways?
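>
> For context, the loop is essentially this (a minimal sketch; filesets.txt,
> the filesystem path, and the mmbackup options are stand-ins for our actual
> setup):
>
>   #!/bin/bash
>   # One mmbackup run per independent fileset listed in filesets.txt,
>   # run serially, which is exactly why this doesn't scale.
>   while read -r fileset; do
>       mmbackup "/gpfs/fs1/$fileset" --scope inodespace -t incremental
>   done < filesets.txt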
>
>
>
> /Christian
>
>
>
> On Wed, 6 Sep 2023 at 11:35, Marcus Koenig <marcus at koenighome.de> wrote:
>
> I'm using this one-liner to get the progress:
>
>
>
> grep 'mmbackup:Backup job finished' | cut -d ":" -f 6 | awk '{print $1}' |
>     awk '{s+=$1} END {print s}'
>
>
>
> That can be compared to the files identified during the scan.
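>
> For example, to turn that into a percentage (a sketch; mmbackup.log stands
> for wherever you capture the mmbackup output, and the totals line format is
> as in the log excerpts further down the thread):
>
>   # Total files identified by the scan, from the "mmbackup:changed=..." line.
>   total=$(grep -o 'mmbackup:changed=[0-9]*' mmbackup.log |
>       head -n 1 | cut -d '=' -f 2)
>   # Files backed up so far, summed across the finished backup jobs.
>   backed_up=$(grep 'mmbackup:Backup job finished' mmbackup.log |
>       cut -d ':' -f 6 | awk '{s+=$1} END {print s}')
>   # Print progress as a percentage of the scan total.
>   awk -v d="$backed_up" -v t="$total" 'BEGIN {printf "%.1f%%\n", 100*d/t}'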
>
>
>
> On Wed, 6 Sept 2023, 21:29 Stephan Graf, <st.graf at fz-juelich.de> wrote:
>
> Hi
>
> I think it should be possible, because mmbackup knows how many files are
> to be backed up, how many have already been processed, and how many are
> still to go.
>
> BTW, it would also be nice to have an option in mmbackup to generate a
> machine-readable log file, such as JSON or CSV.
>
> But the right way to ask for a new feature, or to check whether there is
> already a request open, is the IBM Ideas portal:
>
> https://ideas.ibm.com
>
> Stephan
>
> On 9/6/23 11:02, Jonathan Buzzard wrote:
> >
> > Would it be possible to have the mmbackup output display percentage
> > progress when backing up files?
> >
> > So at the top you see something like this:
> >
> > Tue Sep  5 23:13:35 2023 mmbackup:changed=747204, expired=427702,
> > unsupported=0 for server [XXXX]
> >
> > Then, after it does the expiration, you see lines like these during the backup:
> >
> > Wed Sep  6 02:43:53 2023 mmbackup:Backing up files: 527024 backed up,
> > 426018 expired, 4408 failed. (Backup job exit with 4)
> >
> > It would IMHO be more helpful if it looked like this:
> >
> > Wed Sep  6 02:43:53 2023 mmbackup:Backing up files: 527024 (70.5%)
> > backed up, 426018 (100%) expired, 4408 failed. (Backup job exit with 4)
> >
> > Just based on the number of files. Though as I look at it now, I am
> > curious about the discrepancy in the number of files expired, given that
> > the expiration stage allegedly concluded with no errors?
> >
> > Tue Sep  5 23:21:49 2023 mmbackup:Completed policy expiry run with 0
> > policy errors, 0 files failed, 0 severe errors, returning rc=0.
> > Tue Sep  5 23:21:49 2023 mmbackup:Policy for expiry returned 0 Highest
> > TSM error 0
> >
> >
> >
> > JAB.
> >
>
> --
> Stephan Graf
> Juelich Supercomputing Centre
>
> Phone:  +49-2461-61-6578
> Fax:    +49-2461-61-6656
> E-mail: st.graf at fz-juelich.de
> WWW:    http://www.fz-juelich.de/jsc/
>
> ---------------------------------------------------------------------------------------------
> Forschungszentrum Juelich GmbH
> 52425 Juelich
> Registered office: Juelich
> Registered in the commercial register of the Amtsgericht Dueren, No. HR B 3498
> Chairman of the Supervisory Board: MinDir Volker Rieke
> Executive Board: Prof. Dr.-Ing. Wolfgang Marquardt (Chairman),
> Karsten Beneke (Deputy Chairman), Dr. Astrid Lambrecht,
> Prof. Dr. Frauke Melchior
> ---------------------------------------------------------------------------------------------
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>