<div dir="auto">I'll chime in with we don't use mmbackup at all... We use NetBackup accelerated backups to backup. It has worked well and we are able to do daily incrementals and weekend fulls with 100s of TBs in one filesystem and 100s of filesets. The biggest challenge we face is defining the numerous streams in NetBackup policy to keep things parallel. But we just do good record keeping in our change management.<div dir="auto"><br></div><div dir="auto">When we have to do a 'rescan' or BCP fail over it is a chore to reach equilibrium again.</div><div dir="auto"><br></div><div dir="auto">Our synthetic accelerated full backup runs at about 700GB/hr for our weekend fulls... so we finish well before most other smaller traditional fileserver clients.</div><div dir="auto"><br></div><div dir="auto">Best part is this is one place where we are nearly just a regular old commodity client with ridiculously impressive stats.</div><div dir="auto"><br></div><div dir="auto">Alec</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Sep 6, 2023, 10:19 AM Wahl, Edward <<a href="mailto:ewahl@osc.edu">ewahl@osc.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div lang="EN-US" link="blue" vlink="purple" style="word-wrap:break-word">
<div class="m_1828974525508560904WordSection1">
<p class="MsoNormal">Â <u></u><u></u></p>
<p class="MsoNormal">  We have about 760-ish independent filesets. As was mentioned before in a reply,  this allows for individual fileset snapshotting, and running on different TSM servers. We maintain a puppet-managed list that we use to divide up the filesets,
.   automation helps us round-robin new filesets across the 4 backup servers as they are added to attempt to balance things somewhat.  We maintain 7 days of snapshots on the filesystem we backup, and no snapshots or backups on our scratch space.
<u></u><u></u></p>
<p class="MsoNormal"><u></u>Â <u></u></p>
<p class="MsoNormal">We hand out the mmbackups to 4 individual TSM backup clients which do both our daily mmbackup, and NetApp snappdiff backups for our user home directories as well. Those feed to another 4 TSM servers doing the tape migrations.Â
<u></u><u></u></p>
<p class="MsoNormal">We’re sitting on about ~20P of disk at this time and we’re (very) roughly 50% occupied.  <u></u><u></u></p>
<p class="MsoNormal"><u></u>Â <u></u></p>
<p class="MsoNormal">One of our challenges recently was re-balancing all this for remote Disaster Recovery/Replication. We ended up using colocation groups of the filesets in Spectrum Protect/TSM. While scaling backup infrastructure can be hard, balancing
hundreds of Wildly differing filesets can be just as challenging. <u></u><u></u></p>
<p class="MsoNormal"><u></u>Â <u></u></p>
<p class="MsoNormal">I’m happy to talk about these kinds of things here, or offline. Drop me a line if you have additional questions.
<u></u><u></u></p>
<p class="MsoNormal"><u></u>Â <u></u></p>
<p class="MsoNormal">Ed Wahl<u></u><u></u></p>
<p class="MsoNormal">Ohio Supercomputer Center<u></u><u></u></p>
<p class="MsoNormal"><u></u>Â <u></u></p>
<div style="border:none;border-top:solid #e1e1e1 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b>From:</b> gpfsug-discuss <<a href="mailto:gpfsug-discuss-bounces@gpfsug.org" target="_blank" rel="noreferrer">gpfsug-discuss-bounces@gpfsug.org</a>>
<b>On Behalf Of </b>Christian Petersson<br>
<b>Sent:</b> Wednesday, September 6, 2023 5:45 AM<br>
<b>To:</b> gpfsug main discussion list <<a href="mailto:gpfsug-discuss@gpfsug.org" target="_blank" rel="noreferrer">gpfsug-discuss@gpfsug.org</a>><br>
<b>Subject:</b> Re: [gpfsug-discuss] mmbackup feature request<u></u><u></u></p>
</div>
<p class="MsoNormal"><u></u>Â <u></u></p>

Just a follow-up question: how do you back up multiple filesets?

We have 50 filesets to back up. At the moment we have a text file that contains all of them and we run a for loop, but that is not at all scalable.
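For reference, roughly what that loop looks like today (the list file and filesystem paths here are placeholders):

  # filesets.txt holds one fileset name per line.
  while read -r fileset; do
      mmbackup "/gpfs/fs1/$fileset" --scope inodespace -t incremental
  done < filesets.txt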

Is there any other way that is much better?

/Christian

On Wed, 6 Sep 2023 at 11:35, Marcus Koenig <marcus@koenighome.de> wrote:
I'm using this one-liner (fed the mmbackup log on stdin) to get the progress:

  grep 'mmbackup:Backup job finished' | cut -d ":" -f 6 | awk '{print $1}' | awk '{s+=$1} END {print s}'

That can be compared to the number of files identified during the scan.

On Wed, 6 Sep 2023 at 21:29, Stephan Graf <st.graf@fz-juelich.de> wrote:
<p class="MsoNormal">Hi<br>
<br>
I think it should be possible because mmbackup know, how many files are <br>
to be backed up, which have been already processed and how many are <br>
still to go.<br>
<br>
BTW it would also be nice to have an option in mmbackup to generate <br>
machine readable log file like JSON or CSV.<br>
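Until something like that exists, a rough sketch of scraping the existing progress lines into CSV (the log path and column names are my own placeholders):

  # Turn "mmbackup:Backing up files: N backed up, M expired, K failed." lines into CSV rows.
  echo "timestamp,backed_up,expired,failed"
  grep 'mmbackup:Backing up files:' mmbackup.log | \
    sed -E 's/^(.*) mmbackup:Backing up files: ([0-9]+) backed up, ([0-9]+) expired, ([0-9]+) failed.*$/\1,\2,\3,\4/'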

But the right way to ask for a new feature, or to check whether there is already a request open, is the IBM Ideas portal:

https://ideas.ibm.com

Stephan

On 9/6/23 11:02, Jonathan Buzzard wrote:
>
> Would it be possible to have the mmbackup output display the percentage
> progress when backing up files?
>
> So at the top you see something like this:
>
> Tue Sep 5 23:13:35 2023 mmbackup:changed=747204, expired=427702,
> unsupported=0 for server [XXXX]
>
> Then, after it does the expiration, you see lines like this during the backup:
>
> Wed Sep 6 02:43:53 2023 mmbackup:Backing up files: 527024 backed up,
> 426018 expired, 4408 failed. (Backup job exit with 4)
>
> It would IMHO be helpful if it looked like:
>
> Wed Sep 6 02:43:53 2023 mmbackup:Backing up files: 527024 (70.5%)
> backed up, 426018 (100%) expired, 4408 failed. (Backup job exit with 4)
>
> Just based on the number of files. Though as I look at it now, I am
> curious about the discrepancy in the number of files expired, given that
> the expiration stage allegedly concluded with no errors?
>
> Tue Sep 5 23:21:49 2023 mmbackup:Completed policy expiry run with 0
> policy errors, 0 files failed, 0 severe errors, returning rc=0.
> Tue Sep 5 23:21:49 2023 mmbackup:Policy for expiry returned 0 Highest
> TSM error 0
>
>
> JAB.
>
--
Stephan Graf
Juelich Supercomputing Centre

Phone:  +49-2461-61-6578
Fax:    +49-2461-61-6656
E-mail: st.graf@fz-juelich.de
WWW:    http://www.fz-juelich.de/jsc/
<p class="MsoNormal">_______________________________________________<br>
gpfsug-discuss mailing list<br>
gpfsug-discuss at <a href="https://urldefense.com/v3/__http:/gpfsug.org__;!!KGKeukY!03ECSdA7lzhWsifFSkK9t1YtNYvDB89pvj-eJrh4gWV9IYpYH61rCBiaASdvtsHkekKlW5pqQriFk_mv-wHIftznjDE$" target="_blank" rel="noreferrer">
gpfsug.org</a><br>
<a href="https://urldefense.com/v3/__http:/gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org__;!!KGKeukY!03ECSdA7lzhWsifFSkK9t1YtNYvDB89pvj-eJrh4gWV9IYpYH61rCBiaASdvtsHkekKlW5pqQriFk_mv-wHI-cdFpMc$" target="_blank" rel="noreferrer">http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org</a><u></u><u></u></p>
</blockquote>
</div>
</div>
</div>
</div>

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org