[gpfsug-discuss] AFM does too small NFS writes, and I don't see parallel writes

Billich Heinrich Rainer (ID SD) heinrich.billich at id.ethz.ch
Tue Nov 23 17:59:12 GMT 2021


Hello,

 

We are currently moving data to a new AFM fileset, and I see poor performance. I'm asking for advice and insight:

 

The migration to AFM home seems slow. I note:

 
AFM writes a whole file of ~100MB in far too many small chunks.
 

My assumption: the many small writes reduce performance, as we have 100km between the sites and hence higher latency. The writes are not fully sequential, but they aren't done heavily in parallel either (like 10-100 outstanding writes at any time).
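A rough back-of-envelope, assuming ~1ms round-trip time over 100km of fibre and the ~58kB average write size seen below (both numbers are estimates on my part):

  # upper bound on throughput: outstanding writes * write size / RTT
  awk 'BEGIN { rtt=0.001; size=58000; out=3; printf "%.0f MB/s\n", size*out/rtt/1e6 }'
  # -> 174 MB/s with 3 writes in flight; only ~58 MB/s with a single one

So with only a few small writes in flight, the write size alone can cap throughput well below the link speed.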

 

In the AFM queue I see:

 

8100214 Write [563636091.563636091] inflight (0 @ 0) chunks 2938 bytes 170872410 vIdx 1 thread_id 67862

 

I guess this means AFM will write 170’872’410 bytes in 2’938 chunks, resulting in an average write size of ~58kB, to inode 563636091.
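A quick check of that average:

  echo $((170872410 / 2938))   # -> 58159 bytes per chunk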

 

So if I'm right, my question is:

 

What can I change to make AFM write fewer and larger chunks per file?

Does it depend on how we copy the data? We write through Ganesha/NFS, hence even if we write sequentially, Ganesha may still issue the writes differently?
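One thing I plan to verify on the gateway is the negotiated NFS write size toward home; if wsize came out small, that alone would explain the small chunks. A sketch (the mount path shown is hypothetical, AFM keeps its NFS mounts in a private location):

  # show the negotiated rsize/wsize of the AFM NFS mount(s) on the gateway
  grep ' nfs ' /proc/mounts
  # e.g.: ces1:/gpfs/home/fs1 /path/to/afm/mount nfs rw,...,wsize=1048576,...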

 

Another question: is there a way to dump the AFM in-memory queue for a fileset? That would make it easier to see what's going on when we make changes. I could grep for the inode of a testfile …
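If mmfsadm can dump the AFM state (an assumption on my part; mmfsadm is an unsupported debug tool, so handle with care), something like this would do what I want:

  # dump the gateway's AFM queues and pull out the entries for one inode
  mmfsadm dump afm > /tmp/afm.dump
  grep 563636091 /tmp/afm.dump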

 

We don't do parallel writes across AFM gateways; the files are too small, and our threshold is 1GB.

We configured two mounts from two CES servers at home for each fileset. Hence AFM could do writes in parallel to both mounts from the single gateway?

A short tcpdump suggests that AFM writes to a single CES server only, and to a single inode at a time. But at any moment a few writes (2-5) may overlap.
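For reference, this is roughly how I would inspect the fileset's AFM settings and lower the parallel-write threshold so that smaller files qualify (filesystem and fileset names are placeholders; parameter names as I understand them from the documentation):

  # show the AFM parameters of the fileset
  mmlsfileset fs1 fileset1 --afm -L
  # let files >100MB use parallel writes instead of the 1GB default
  mmchfileset fs1 fileset1 -p afmParallelWriteThreshold=100
  # more flush threads may also help drain the queue in parallel
  mmchfileset fs1 fileset1 -p afmNumFlushThreads=8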

 

Kind regards,

 

Heiner

 

 

Just to illustrate, here is what I see on the AFM gateway: too many reads and writes. There are almost no opens/closes, hence it's all to the same few files.

 

------------nfs3-client------------ --------gpfs-file-operations------- --gpfs-i/o- -net/total-
 read  writ  rdir  inod   fs   cmmt| open  clos  read  writ  rdir  inod| read write| recv  send
   0  1295     0     0     0     0 |   0     0  1294     0     0     0 |89.8M    0 | 451k   94M
   0  1248     0     0     0     0 |   0     0  1248     0     0     8 |86.2M    0 | 432k   91M
   0  1394     0     0     0     0 |   0     0  1394     0     0     0 |96.8M    0 | 498k  101M
   0  1583     0     0     0     0 |   0     0  1582     0     0     1 | 110M    0 | 560k  115M
   0  1543     0     1     0     0 |   0     0  1544     0     0     0 | 107M    0 | 540k  112M
