[gpfsug-discuss] snapshots causing filesystem quiesce

Talamo Ivano Giuseppe (PSI) ivano.talamo at psi.ch
Wed Feb 2 10:45:26 GMT 2022


Hello Andrew,


Thanks for your questions.


We're not experiencing any other issues or slowness during normal activity.

The storage is a Lenovo DSS appliance with a dedicated SSD enclosure/pool for metadata only.


The two NSD servers have 750 GB of RAM, of which 618 GB is configured as pagepool.
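
In case it's useful, a rough sketch of how the pagepool is checked and adjusted (the value and the -N node class are illustrative, not necessarily what we run):

  # show the current pagepool setting
  mmlsconfig pagepool
  # example: change it on the NSD servers only
  mmchconfig pagepool=618G -N nsdNodes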


The issue happens on both of our filesystems (a rough sketch of how we pulled these numbers follows the list):


- perf filesystem:

 - 1.8 PB size (71% in use)

 - 570 million inodes (24% in use)


- tiered filesystem:

 - 400 TB size (34% in use)

 - 230 million files (60% in use)
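
(For reference, the numbers above were gathered roughly like this; the mount point is illustrative:)

  # capacity and inode usage for a filesystem
  mmdf perf
  # quick inode usage check from a client mount
  df -i /perf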


Cheers,

Ivano


__________________________________________
Paul Scherrer Institut
Ivano Talamo
WHGA/038
Forschungsstrasse 111
5232 Villigen PSI
Schweiz

Telefon: +41 56 310 47 11
E-Mail: ivano.talamo at psi.ch



________________________________
From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Andrew Beattie <abeattie at au1.ibm.com>
Sent: Wednesday, February 2, 2022 10:33 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] snapshots causing filesystem quiesce

Ivano,

How big is the filesystem in terms of number of files?
How big is the filesystem in terms of capacity?
Is the Metadata on Flash or Spinning disk?
Do you see issues when users do an ls of the filesystem, or only when you are taking snapshots?

How much memory do the NSD servers have?
How much is allocated to the OS and how much to the Spectrum Scale pagepool?

Regards

Andrew Beattie
Technical Specialist - Storage for Big Data & AI
IBM Technology Group
IBM Australia & New Zealand
P. +61 421 337 927
E. abeattie at au1.IBM.com



On 2 Feb 2022, at 19:14, Talamo Ivano Giuseppe (PSI) <Ivano.Talamo at psi.ch> wrote:



Dear all,

For a while now we have been experiencing an issue when dealing with snapshots.
Basically, when deleting a fileset snapshot (and possibly also when creating new ones), the filesystem becomes inaccessible on the clients for the duration of the operation, which can take a few minutes.

The clients and the storage are on two different clusters, using a remote cluster mount for access.

In the log files, many lines like the following appear (on both clusters):
Snapshot whole quiesce of SG perf from xbldssio1 on this node lasted 60166 msec
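
If it helps, this is roughly the kind of operation that triggers it and how we watch the stall; the snapshot and fileset names here are just illustrative:

  # delete a fileset snapshot on the storage cluster
  mmdelsnapshot perf snap_20220201 -j userdata
  # while it runs, long waiters show up on both clusters
  mmdiag --waiters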

From looking around I can see we're not the first to hit this. I am wondering whether it is considered an unavoidable part of snapshotting, and whether there is any tunable that can improve the situation, since whenever it occurs all the clients are stuck and users are very quick to complain.

If it can help, the clients are running GPFS 5.1.2-1 while the storage cluster is on 5.1.1-0.

Thanks,
Ivano

