[gpfsug-discuss] AFM convertToPrimary

Ashish Pandey aspandem at in.ibm.com
Fri Mar 31 14:54:23 BST 2023


Hi Christoph,

IO can be restarted once the first AFM snapshot (psnap0) has been created at the primary after the changeSecondary command is started. Note that psnap0 creation can fail under a heavy workload.

Stopping IO would not be necessary if we had a fileset-level quiesce feature.
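
As a rough sketch, you can confirm that psnap0 exists and watch the
fileset state before restarting IO. The file system and fileset names
below are taken from the example in your message and are assumptions on
my part:

  # list the fileset's snapshots; psnap0 should appear once it is created
  mmlssnapshot storage1 -j afmdrtest2

  # check the AFM fileset state; the initial synchronisation to the
  # secondary continues in the background after psnap0 is taken
  mmafmctl storage1 getstate -j afmdrtest2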




Thanks & Regards,
Ashish
________________________________
From: gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org> on behalf of gpfsug-discuss-request at gpfsug.org <gpfsug-discuss-request at gpfsug.org>
Sent: Friday, March 31, 2023 4:30 PM
To: gpfsug-discuss at gpfsug.org <gpfsug-discuss at gpfsug.org>
Subject: [EXTERNAL] gpfsug-discuss Digest, Vol 132, Issue 25

Send gpfsug-discuss mailing list submissions to
        gpfsug-discuss at gpfsug.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
or, via email, send a message with subject or body 'help' to
        gpfsug-discuss-request at gpfsug.org

You can reach the person managing the list at
        gpfsug-discuss-owner at gpfsug.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. AFM convertToPrimary (Christoph Martin)
   2. Call for Submissions IO500 ISC23 (IO500 Committee)


----------------------------------------------------------------------

Message: 1
Date: Thu, 30 Mar 2023 18:44:18 +0200
From: Christoph Martin <martin at uni-mainz.de>
To: gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
Subject: [gpfsug-discuss] AFM convertToPrimary
Message-ID: <436de125-ffed-3d10-bdea-1eb005923964 at uni-mainz.de>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Hi all,

we want to convert two independent GPFS filesets to primaries for AFM-DR.
We have already mirrored most of the data from fileset1 to the secondary
fileset via rsync; fileset2 on the secondary server is still empty.
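
For context, the pre-seeding was done with an rsync invocation along
these lines (paths here are illustrative, not our exact command):

  # mirror fileset1 to the secondary, preserving hard links, ACLs and
  # extended attributes
  rsync -aHAX --numeric-ids /gpfs/storage1/fileset1/ \
      secondary:/gpfs/storage2/fileset1/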

I now want to issue something like the following for both filesets:

mmafmctl storage1 convertToPrimary -j afmdrtest2 --afmtarget
gpfs:///gpfs/storage2/afmdrtest2/ --inband --check-metadata

The help pages say that you have to stop all IO on the primary fileset
during the conversion. I assume this means both read and write IO.

The question is: when can we restart IO? After the mmafmctl command
finishes, or only after the synchronisation of the filesets completes?

The latter can take days, especially for the second fileset, which is
still empty on the secondary site.

What happens if read (or write) IO occurs while mmafmctl is running, or
during the synchronisation?

Any ideas?

Regards
Christoph

------------------------------

Message: 2
Date: Thu, 30 Mar 2023 17:48:11 -0600
From: IO500 Committee <committee at io500.org>
To: BeeGFS <fhgfs-user at googlegroups.com>, Ceph <ceph-users at ceph.io>,
        GPFS <gpfsug-discuss at spectrumscale.org>, HPC Announce
        <hpc-announce at mcs.anl.gov>, IO500 <io-500 at vi4io.org>, Lustre
        <lustre-discuss at lists.lustre.org>, OrangeFS
        <users at lists.orangefs.org>, Storage Research List
        <storage-research-list at ece.cmu.edu>
Subject: [gpfsug-discuss] Call for Submissions IO500 ISC23
Message-ID: <1ceb433e1ae046c5dcd515a69a807e26 at io500.org>
Content-Type: text/plain; charset=US-ASCII; format=flowed

Stabilization Period: Monday, April 3rd - Friday, April 14th, 2023
Submission Deadline: Tuesday, May 16th, 2023 AoE

The IO500 is now accepting and encouraging submissions for the upcoming
12th semi-annual IO500 list, in conjunction with ISC23. Once again, we
are also accepting submissions to the 10 Client Node Challenge to
encourage small-scale results. The new ranked lists
will be announced at the ISC23 BoF [1]. We hope to see many new
results.

What's New
1. Creation of Production and Research Lists - Starting with ISC22, we
proposed splitting the list into separate Production and Research
lists.  This better reflects the important distinction between storage
systems that run in production environments and those that may use more
experimental hardware and software configurations.  At ISC23, we will
formally create these two lists, and users will be able to submit to
either of them (and to their 10 client-node counterparts).  Please see
the requirements for each list on the IO500 rules page [3].
2. New Submission Tool - There is now a new IO500 submission tool that
improves the overall submission experience.  Users can create accounts
and then update and manage all of their submissions through that
account.  As part of this new tool, we have improved the submission
fields that describe the hardware and software of the system under test.
For reproducibility and analysis reasons, the easily obtainable fields
are now mandatory; data from storage servers is often difficult for
users to obtain, so most of those fields remain optional. As this is a
new system, there may be quirks; please reach out on Slack or the
mailing list if you see any issues.  Further details will be released
on the submission page [2].
3. Reproducibility - Every submission will now receive a reproducibility
score based upon the provided system details and the reproducibility
questionnaire. This score will inform the community about the amount of
detail provided in the submission and how obtainable the storage system
is. Further, this score will be used to evaluate whether a submission is
eligible for the Production list.
4. New Phases - We are continuing to evaluate the inclusion of optional
test phases for additional key workloads - split easy/hard find phases,
4KB and 1MB random read/write phases, and concurrent metadata
operations. A run that includes these phases is called an extended run.
At the moment, we collect this information to verify that the additional
phases do not significantly impact the results of a standard run, and to
facilitate comparisons between the existing and new benchmark phases. In
a future
release, we may include some or all of these results as part of the
standard benchmark. The extended results are not currently included in
the scoring of any ranked list.

Background
The benchmark suite is designed to be easy to run and the community has
multiple active support channels to help with any questions. Please note
that submissions of all sizes are welcome; the site has customizable
sorting, so it is possible to submit on a small system and still get a
very good per-client score, for example. Additionally, the list is about
much more than just the raw rank; all submissions help the community by
collecting and publishing a wider corpus of data. More details below.

Following the success of the Top500 in collecting and analyzing
historical trends in supercomputer technology and evolution, the IO500
was created in 2017, published its first list at SC17, and has grown
continually since then. The need for such an initiative has long been
known within High-Performance Computing; however, defining appropriate
benchmarks has long been challenging. Despite this challenge, the
community, after long and spirited discussion, finally reached consensus
on a suite of benchmarks and a metric for resolving the scores into a
single ranking.

The multi-fold goals of the benchmark suite are as follows:
1. Maximizing simplicity in running the benchmark suite
2. Encouraging optimization and documentation of tuning parameters for
performance
3. Allowing submitters to highlight their "hero run" performance numbers
4. Forcing submitters to simultaneously report performance for
challenging IO patterns.

Specifically, the benchmark suite includes a hero run of both IOR and
mdtest, configured however the submitter chooses, to maximize performance
and establish an upper bound. It also includes an IOR and mdtest run
with highly prescribed parameters in an attempt to determine a lower
performance bound. Finally, it includes a namespace search as this has
been determined to be a highly sought-after feature in HPC storage
systems that has historically not been well-measured. Submitters are
encouraged to share their tuning insights for publication.
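
For anyone who wants to try the suite, here is a minimal sketch of a
trial run. The repository is the public IO500 GitHub project; the
process count and config file are starting points to adapt to your site:

  git clone https://github.com/IO500/io500.git && cd io500
  ./prepare.sh                              # fetch and build IOR, mdtest, pfind
  mpiexec -np 2 ./io500 config-minimal.ini  # small trial; scale up for a real submission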

The goals of the community are also multi-fold:
1. Gather historical data for the sake of analysis and to aid
predictions of storage futures
2. Collect tuning information to share valuable performance
optimizations across the community
3. Encourage vendors and designers to optimize for workloads beyond
"hero runs"
4. Establish bounded expectations for users, procurers, and
administrators

The IO500 follows a two-staged approach. First, there will be a two-week
stabilization period during which we encourage the community to verify
that the benchmark runs properly on a variety of storage systems. During
this period the benchmark may be updated based upon feedback from the
community. The final benchmark will then be released. We expect that
rule-compliant runs made during the stabilization period will remain
valid as final submissions unless a significant defect is found.

10 Client Node I/O Challenge
The 10 Client Node Challenge is conducted using the regular IO500
benchmark, with the additional rule that exactly 10 client nodes must be
used to run it. You may use any shared storage with any
number of servers. We will announce the results in the Production and
Research lists as well as in separate derived lists.

Birds-of-a-Feather
Once again, we encourage you to submit [2] to join our community, and to
attend the ISC23 BoF [1], where we will announce the new IO500
Production and Research lists and their 10 client node counterparts. The
current list includes results from 20 different storage system types and
70 institutions. We hope that the upcoming list grows even more.

[1] https://io500.org/pages/bof-isc23
[2] https://io500.org/submission
[3] https://io500.org/rules-submission

--
The IO500 Committee



------------------------------

Subject: Digest Footer

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org


------------------------------

End of gpfsug-discuss Digest, Vol 132, Issue 25
***********************************************