[gpfsug-discuss] gpfs 4.2.1 and samba export
Lukas Hejtmanek
xhejtman at ics.muni.cz
Mon Sep 12 15:57:55 BST 2016
Hello,
I have GPFS version 4.2.1 on CentOS 7.2 (kernel 3.10.0-327.22.2.el7.x86_64)
and I am seeing some weird behavior from Samba: Windows clients get stuck
for over a minute when copying files. I traced it down to the problematic
syscall:
27887 16:39:28.000401 utimensat(AT_FDCWD, "000000-My_Documents/Windows/InfusedApps/Packages/Microsoft.Messaging_1.10.22012.0_x86__8wekyb3d8bbwe/SkypeApp/View/HomePage.xaml", {{1473691167, 940424000}, {1473691168, 295355}}, 0) = 0 <74.999775>
[...]
27887 16:44:24.000310 utimensat(AT_FDCWD, "000000-My_Documents/Windows/InfusedApps/Packages/Microsoft.Windows.Photos_15.1001.16470.0_x64__8wekyb3d8bbwe/Assets/PhotosAppList.contrast-white_targetsize-16.png", {{1473691463, 931319000}, {1473691464, 96608}}, 0) = 0 <74.999841>
[...]
27887 16:50:34.002274 utimensat(AT_FDCWD, "000000-My_Documents/Windows/InfusedApps/Packages/Microsoft.XboxApp_9.9.30030.0_x64__8wekyb3d8bbwe/_Resources/50.rsrc", {{1473691833, 952166000}, {1473691834, 2166223}}, 0) = 0 <74.997877>
[...]
27887 16:53:11.000240 utimensat(AT_FDCWD, "000000-My_Documents/Windows/InfusedApps/Packages/Microsoft.ZuneVideo_3.6.13251.0_x64__8wekyb3d8bbwe/Styles/CommonBrushes.xbf", {{1473691990, 948668000}, {1473691991, 131221}}, 0) = 0 <74.999540>
It seems that from time to time a utimensat(2) call takes over 70 (!!) seconds.
A normal utimensat syscall looks like this:
27887 16:55:16.238132 utimensat(AT_FDCWD, "000000-My_Documents/Windows/Installer/$PatchCache$/Managed/00004109210000000000000000F01FEC/14.0.7015/ACEODDBS.DLL", {{1473692116, 196458000}, {1351702318, 0}}, 0) = 0 <0.000065>
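To check whether a bare utimensat() stalls even outside of smbd, I can time
the call directly on the affected file system. A minimal sketch (untested,
and the path is just a placeholder for some file on the GPFS mount):

/* Time one utimensat() on a file inside the GPFS mount, outside of smbd.
 * Compile with: cc -o timeutimens timeutimens.c */
#define _GNU_SOURCE
#include <fcntl.h>      /* AT_FDCWD */
#include <stdio.h>
#include <sys/stat.h>   /* utimensat */
#include <time.h>       /* clock_gettime */

int main(void)
{
    const char *path = "/gpfs/vol1/testfile";   /* placeholder path */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* times == NULL sets atime and mtime to the current time,
     * roughly what smbd does after copying a file */
    if (utimensat(AT_FDCWD, path, NULL, 0) != 0)
        perror("utimensat");
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("utimensat took %.6f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}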
At the same time there is an untar running. While Samba is frozen in the
utimensat call, the untar keeps writing data to GPFS (the same file system
Samba exports), so it does not look like a buffer flush to me.
While the syscall is stuck, I/O utilization of all GPFS disks is below 10 %,
and mmfsadm dump waiters shows nothing waiting on any cluster node.
Any ideas? Or should I just open a PMR?
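If it helps with diagnosis, a Samba-free reproducer along these lines should
show whether a concurrent streaming writer is enough to trigger the stall
(again an untested sketch, both paths are placeholders; stop it with Ctrl-C):

/* Child streams data to the file system (standing in for the untar) while
 * the parent repeatedly times utimensat() on another file and reports any
 * call slower than one second. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    const char *writefile = "/gpfs/vol1/writer.dat";  /* placeholder */
    const char *timefile  = "/gpfs/vol1/testfile";    /* placeholder */
    static char buf[1 << 20];                         /* 1 MiB, zero-filled */
    struct timespec t0, t1;

    if (fork() == 0) {                    /* child: write until interrupted */
        int fd = open(writefile, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); _exit(1); }
        for (;;)
            if (write(fd, buf, sizeof buf) < 0) { perror("write"); _exit(1); }
    }

    for (;;) {                            /* parent: time utimensat calls */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (utimensat(AT_FDCWD, timefile, NULL, 0) != 0)
            perror("utimensat");
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (elapsed(t0, t1) > 1.0)
            printf("slow utimensat: %.3f s\n", elapsed(t0, t1));
        sleep(1);
    }
}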
This is the cluster config:
clusterId 2745894253048382857
autoload no
dmapiFileHandleSize 32
minReleaseLevel 4.2.1.0
ccrEnabled yes
maxMBpS 20000
maxblocksize 8M
cipherList AUTHONLY
maxFilesToCache 10000
nsdSmallThreadRatio 1
nsdMaxWorkerThreads 480
ignorePrefetchLUNCount yes
pagepool 48G
prefetchThreads 320
worker1Threads 320
writebehindThreshhold 10485760
cifsBypassShareLocksOnRename yes
cifsBypassTraversalChecking yes
allowWriteWithDeleteChild yes
adminMode central
And this is the file system config:
flag value description
------------------- ------------------------ -----------------------------------
-f 65536 Minimum fragment size in bytes
-i 4096 Inode size in bytes
-I 32768 Indirect block size in bytes
-m 1 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j cluster Block allocation type
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
-n 32 Estimated number of nodes that will mount file system
-B 2097152 Block size
-Q user;group;fileset Quotas accounting enabled
user;group;fileset Quotas enforced
none Default quotas enabled
--perfileset-quota Yes Per-fileset quota enforcement
--filesetdf Yes Fileset df enabled?
-V 15.01 (4.2.0.0) File system version
--create-time Wed Aug 24 17:38:40 2016 File system creation time
-z No Is DMAPI enabled?
-L 4194304 Logfile size
-E Yes Exact mtime mount option
-S No Suppress atime mount option
-K whenpossible Strict replica allocation option
--fastea Yes Fast external attributes enabled?
--encryption No Encryption enabled?
--inode-limit 402653184 Maximum number of inodes in all inode spaces
--log-replicas 0 Number of log replicas
--is4KAligned Yes is4KAligned?
--rapid-repair Yes rapidRepair enabled?
--write-cache-threshold 0 HAWC Threshold (max 65536)
-P system Disk storage pools in file system
-d nsd_A_m;nsd_B_m;nsd_C_m;nsd_D_m;nsd_A_LV1_d;nsd_A_LV2_d;nsd_A_LV3_d;nsd_A_LV4_d;nsd_B_LV1_d;nsd_B_LV2_d;nsd_B_LV3_d;nsd_B_LV4_d;nsd_C_LV1_d;nsd_C_LV2_d;nsd_C_LV3_d;
-d nsd_C_LV4_d;nsd_D_LV1_d;nsd_D_LV2_d;nsd_D_LV3_d;nsd_D_LV4_d Disks in file system
-A yes Automatic mount option
-o none Additional mount options
-T /gpfs/vol1 Default mount point
--mount-priority 1 Mount priority
--
Lukáš Hejtmánek