[gpfsug-discuss] strange waiters + filesystem deadlock
Aaron Knister
aaron.s.knister at nasa.gov
Fri Mar 24 17:58:18 GMT 2017
I feel a little awkward about posting lists of IPs and hostnames on
the mailing list (even though they're all internal), but I'm happy to
send them to you directly. I've attached both the mmlsfs and mmdf output of
the fs in question here, since that may be useful for others to see. Just
a note about disk d23_02_021: it's been evacuated for several weeks now
due to a hardware issue in the disk enclosure.

The fs is rather full percentage-wise (93%), but in terms of capacity
there's a good amount free: 93% full of a 7PB filesystem still leaves
551T. Metadata, as you'll see, is 31% free (roughly 800GB).
The fs has 40M inodes allocated and 12M free.
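For anyone who wants to check the arithmetic, the figures above follow from the data-pool totals in the attached mmdf output (mmdf reports KiB); a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the capacity figures quoted above, using the
# "(data)" pool totals from the attached mmdf output (mmdf reports KiB).
total_kib = 7_686_724_780_032  # data pool total
free_kib = 591_558_139_904     # free in full blocks (fragments add ~6 TiB more)

free_tib = free_kib / 1024**3          # KiB -> TiB
pct_free = 100 * free_kib / total_kib

print(f"{free_tib:.0f} TiB free ({pct_free:.1f}% of the pool)")
```

Free full blocks alone come to ~551 TiB, i.e. the pool is ~92% full, which matches the rough numbers above.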
-Aaron
On 3/24/17 1:41 PM, Sven Oehme wrote:
> ok, that seems like a different problem than i was thinking of.
> can you send output of mmlscluster, mmlsconfig, mmlsfs all?
> also are you getting close to fill grade on inodes or capacity on any of
> the filesystems?
>
> sven
>
>
> On Fri, Mar 24, 2017 at 10:34 AM Aaron Knister <aaron.s.knister at nasa.gov> wrote:
>
> Here's the screenshot from the other node with the high CPU utilization.
>
> On 3/24/17 1:32 PM, Aaron Knister wrote:
> > heh, yep we're on sles :)
> >
> > here's a screenshot of the fs manager from the deadlocked filesystem. I
> > don't think there's an NSD server or manager node that's running full
> > throttle across all CPUs. There is one that's got relatively high CPU
> > utilization, though (300-400%). I'll send a screenshot of it in a sec.
> >
> > no zimon yet, but we do have other tools to see CPU utilization.
> >
> > -Aaron
> >
> > On 3/24/17 1:22 PM, Sven Oehme wrote:
> >> you must be on sles as this segfaults only on sles to my knowledge :-)
> >>
> >> i am looking for an NSD or manager node in your cluster that runs at 100%
> >> cpu usage.
> >>
> >> do you have zimon deployed to look at cpu utilization across your nodes?
> >>
> >> sven
> >>
> >>
> >>
> >> On Fri, Mar 24, 2017 at 10:08 AM Aaron Knister <aaron.s.knister at nasa.gov> wrote:
> >>
> >> Hi Sven,
> >>
> >> Which NSD server should I run top on, the fs manager? If so, the CPU
> >> load is about 155%. I'm working on perf top but not off to a great
> >> start...
> >>
> >> # perf top
> >> PerfTop: 1095 irqs/sec kernel:61.9% exact: 0.0% [1000Hz
> >> cycles], (all, 28 CPUs)
> >>
> >> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> >>
> >> Segmentation fault
> >>
> >> -Aaron
> >>
> >> On 3/24/17 1:04 PM, Sven Oehme wrote:
> >> > while this is happening run top and see if there is very high cpu
> >> > utilization at this time on the NSD server.
> >> >
> >> > if there is, run perf top (you might need to install the perf command)
> >> > and see if the top cpu contender is a spinlock. if so, send a screenshot
> >> > of perf top, as i may know what that is and how to fix it.
> >> >
> >> > sven
> >> >
> >> >
> >> > On Fri, Mar 24, 2017 at 9:43 AM Aaron Knister
> >> > <aaron.s.knister at nasa.gov> wrote:
> >> >
> >> > Since yesterday morning we've noticed some deadlocks on one of our
> >> > filesystems that seem to be triggered by writing to it. The waiters on
> >> > the clients look like this:
> >> >
> >> > 0x19450B0 ( 6730) waiting 2063.294589599 seconds, SyncHandlerThread:
> >> > on ThCond 0x1802585CB10 (0xFFFFC9002585CB10) (InodeFlushCondVar), reason
> >> > 'waiting for the flush flag to commit metadata'
> >> > 0x7FFFDA65E200 ( 22850) waiting 0.000246257 seconds,
> >> > AllocReduceHelperThread: on ThCond 0x7FFFDAC7FE28 (0x7FFFDAC7FE28)
> >> > (MsgRecordCondvar), reason 'RPC wait' for allocMsgTypeRelinquishRegion
> >> > on node 10.1.52.33 <c0n3271>
> >> > 0x197EE70 ( 6776) waiting 0.000198354 seconds,
> >> > FileBlockWriteFetchHandlerThread: on ThCond 0x7FFFF00CD598
> >> > (0x7FFFF00CD598) (MsgRecordCondvar), reason 'RPC wait' for
> >> > allocMsgTypeRequestRegion on node 10.1.52.33 <c0n3271>
> >> >
> >> > (10.1.52.33/c0n3271 is the fs manager
> >> > for the filesystem in question)
> >> >
> >> > there's a single process running on this node writing to the filesystem
> >> > in question (well, trying to write; it's been blocked doing nothing for
> >> > half an hour now). There are ~10 other client nodes in this situation
> >> > right now. We had many more last night before the problem seemed to
> >> > disappear in the early hours of the morning, and now it's back.
> >> >
> >> > Waiters on the fs manager look like this. While each individual waiter
> >> > is short, it's a near-constant stream:
> >> >
> >> > 0x7FFF60003540 ( 8269) waiting 0.001151588 seconds, Msg handler
> >> > allocMsgTypeRequestRegion: on ThMutex 0x1802163A2E0 (0xFFFFC9002163A2E0)
> >> > (AllocManagerMutex)
> >> > 0x7FFF601C8860 ( 20606) waiting 0.001115712 seconds, Msg handler
> >> > allocMsgTypeRelinquishRegion: on ThMutex 0x1802163A2E0
> >> > (0xFFFFC9002163A2E0) (AllocManagerMutex)
> >> > 0x7FFF91C10080 ( 14723) waiting 0.000959649 seconds, Msg handler
> >> > allocMsgTypeRequestRegion: on ThMutex 0x1802163A2E0 (0xFFFFC9002163A2E0)
> >> > (AllocManagerMutex)
> >> > 0x7FFFB03C2910 ( 12636) waiting 0.000769611 seconds, Msg handler
> >> > allocMsgTypeRequestRegion: on ThMutex 0x1802163A2E0 (0xFFFFC9002163A2E0)
> >> > (AllocManagerMutex)
> >> > 0x7FFF8C092850 ( 18215) waiting 0.000682275 seconds, Msg handler
> >> > allocMsgTypeRelinquishRegion: on ThMutex 0x1802163A2E0
> >> > (0xFFFFC9002163A2E0) (AllocManagerMutex)
> >> > 0x7FFF9423F730 ( 12652) waiting 0.000641915 seconds, Msg handler
> >> > allocMsgTypeRequestRegion: on ThMutex 0x1802163A2E0 (0xFFFFC9002163A2E0)
> >> > (AllocManagerMutex)
> >> > 0x7FFF9422D770 ( 12625) waiting 0.000494256 seconds, Msg handler
> >> > allocMsgTypeRequestRegion: on ThMutex 0x1802163A2E0 (0xFFFFC9002163A2E0)
> >> > (AllocManagerMutex)
> >> > 0x7FFF9423E310 ( 12651) waiting 0.000437760 seconds, Msg handler
> >> > allocMsgTypeRelinquishRegion: on ThMutex 0x1802163A2E0
> >> > (0xFFFFC9002163A2E0) (AllocManagerMutex)
> >> >
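A quick way to see how these waiters stack up is to tally them by message handler and lock. A rough sketch, with the line format assumed from the dump above (on a live system the input would come from something like `mmdiag --waiters`):

```python
import re
from collections import Counter

# Count waiters per (message handler, lock) pair; the sample lines below are
# abbreviated copies of the fs-manager waiters shown above.
waiters = """\
0x7FFF60003540 ( 8269) waiting 0.001151588 seconds, Msg handler allocMsgTypeRequestRegion: on ThMutex 0x1802163A2E0 (AllocManagerMutex)
0x7FFF601C8860 ( 20606) waiting 0.001115712 seconds, Msg handler allocMsgTypeRelinquishRegion: on ThMutex 0x1802163A2E0 (AllocManagerMutex)
0x7FFF91C10080 ( 14723) waiting 0.000959649 seconds, Msg handler allocMsgTypeRequestRegion: on ThMutex 0x1802163A2E0 (AllocManagerMutex)
"""

counts = Counter()
for line in waiters.splitlines():
    m = re.search(r"Msg handler (\w+): on \w+ \S+ \((\w+)\)", line)
    if m:
        counts[(m.group(1), m.group(2))] += 1

# A tally dominated by one lock (here AllocManagerMutex) points at a single
# point of contention on the fs manager.
for (handler, lock), n in counts.most_common():
    print(f"{n:3d}  {handler} waiting on {lock}")
```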
> >> > I don't know if this data point is useful, but both yesterday and today
> >> > the metadata NSDs for this filesystem have had a constant aggregate
> >> > stream of 25MB/s, 4k op/s reads during each episode (very low latency,
> >> > though, so I don't believe the storage is a bottleneck here). Writes are
> >> > only a few hundred ops and didn't strike me as odd.
> >> >
> >> > I have a PMR open for this, but I'm curious if folks have seen this in
> >> > the wild and what it might mean.
> >> >
> >> > -Aaron
> >> >
> >> > --
> >> > Aaron Knister
> >> > NASA Center for Climate Simulation (Code 606.2)
> >> > Goddard Space Flight Center
> >> > (301) 286-2776
> >> > _______________________________________________
> >> > gpfsug-discuss mailing list
> >> > gpfsug-discuss at spectrumscale.org
> >> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> >> >
> >>
> >
>
>
--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
-------------- next part --------------
disk disk size failure holds holds free KB free KB
name in KB group metadata data in full blocks in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 8.4 TB)
m01_12_061 52428800 30 yes no 40660992 ( 78%) 689168 ( 1%)
m01_12_062 52428800 30 yes no 40662528 ( 78%) 687168 ( 1%)
m01_12_063 52428800 30 yes no 40657920 ( 78%) 694256 ( 1%)
m01_12_064 52428800 30 yes no 40654080 ( 78%) 703464 ( 1%)
m01_12_096 52428800 30 yes no 15453184 ( 29%) 1138392 ( 2%)
m01_12_095 52428800 30 yes no 15514112 ( 30%) 1139072 ( 2%)
m01_12_094 52428800 30 yes no 15425536 ( 29%) 1111776 ( 2%)
m01_12_093 52428800 30 yes no 15340544 ( 29%) 1126584 ( 2%)
m01_12_068 52428800 30 yes no 40716544 ( 78%) 688752 ( 1%)
m01_12_067 52428800 30 yes no 40690944 ( 78%) 692200 ( 1%)
m01_12_066 52428800 30 yes no 40687104 ( 78%) 692976 ( 1%)
m01_12_065 52428800 30 yes no 40674304 ( 78%) 690848 ( 1%)
m01_12_081 52428800 30 yes no 137728 ( 0%) 487760 ( 1%)
m01_12_082 52428800 30 yes no 60416 ( 0%) 512632 ( 1%)
m01_12_083 52428800 30 yes no 102144 ( 0%) 674152 ( 1%)
m01_12_084 52428800 30 yes no 126208 ( 0%) 684296 ( 1%)
m01_12_085 52428800 30 yes no 117504 ( 0%) 705952 ( 1%)
m01_12_086 52428800 30 yes no 119296 ( 0%) 691056 ( 1%)
m01_12_087 52428800 30 yes no 57344 ( 0%) 493992 ( 1%)
m01_12_088 52428800 30 yes no 60672 ( 0%) 547360 ( 1%)
m01_12_089 52428800 30 yes no 1455616 ( 3%) 888688 ( 2%)
m01_12_090 52428800 30 yes no 1467392 ( 3%) 919312 ( 2%)
m01_12_091 52428800 30 yes no 190464 ( 0%) 745456 ( 1%)
m01_12_092 52428800 30 yes no 1367296 ( 3%) 771400 ( 1%)
m02_22_081 52428800 40 yes no 1245696 ( 2%) 855992 ( 2%)
m02_22_082 52428800 40 yes no 1261056 ( 2%) 869336 ( 2%)
m02_22_083 52428800 40 yes no 1254912 ( 2%) 865656 ( 2%)
m02_22_084 52428800 40 yes no 62464 ( 0%) 698480 ( 1%)
m02_22_085 52428800 40 yes no 64256 ( 0%) 703016 ( 1%)
m02_22_086 52428800 40 yes no 62208 ( 0%) 690032 ( 1%)
m02_22_087 52428800 40 yes no 62464 ( 0%) 687584 ( 1%)
m02_22_088 52428800 40 yes no 68608 ( 0%) 699848 ( 1%)
m02_22_089 52428800 40 yes no 84480 ( 0%) 698144 ( 1%)
m02_22_090 52428800 40 yes no 85248 ( 0%) 720216 ( 1%)
m02_22_091 52428800 40 yes no 98816 ( 0%) 711824 ( 1%)
m02_22_092 52428800 40 yes no 104448 ( 0%) 732808 ( 1%)
m02_22_068 52428800 40 yes no 40727552 ( 78%) 702472 ( 1%)
m02_22_067 52428800 40 yes no 40713728 ( 78%) 688576 ( 1%)
m02_22_066 52428800 40 yes no 40694272 ( 78%) 700960 ( 1%)
m02_22_065 52428800 40 yes no 40694016 ( 78%) 689936 ( 1%)
m02_22_064 52428800 40 yes no 40683264 ( 78%) 695144 ( 1%)
m02_22_063 52428800 40 yes no 40676864 ( 78%) 701288 ( 1%)
m02_22_062 52428800 40 yes no 40670976 ( 78%) 692984 ( 1%)
m02_22_061 52428800 40 yes no 40672512 ( 78%) 690024 ( 1%)
m02_22_096 52428800 40 yes no 15327232 ( 29%) 1149064 ( 2%)
m02_22_095 52428800 40 yes no 15363584 ( 29%) 1146384 ( 2%)
m02_22_094 52428800 40 yes no 15397376 ( 29%) 1172856 ( 2%)
m02_22_093 52428800 40 yes no 15374336 ( 29%) 1163832 ( 2%)
------------- -------------------- -------------------
(pool total) 2516582400 783850240 ( 31%) 37303168 ( 1%)
Disks in storage pool: sp_1620 (Maximum disk size allowed is 5.4 PB)
d23_01_001 46028292096 1620 no yes 3541176320 ( 8%) 37724768 ( 0%)
d23_01_002 46028292096 1620 no yes 3542331392 ( 8%) 37545024 ( 0%)
d23_01_003 46028292096 1620 no yes 3541968896 ( 8%) 37765344 ( 0%)
d23_01_004 46028292096 1620 no yes 3544687616 ( 8%) 37720576 ( 0%)
d23_01_005 46028292096 1620 no yes 3543368704 ( 8%) 37647456 ( 0%)
d23_01_006 46028292096 1620 no yes 3542778880 ( 8%) 37695232 ( 0%)
d23_01_007 46028292096 1620 no yes 3543220224 ( 8%) 37539712 ( 0%)
d23_01_008 46028292096 1620 no yes 3540293632 ( 8%) 37548928 ( 0%)
d23_01_009 46028292096 1620 no yes 3544590336 ( 8%) 37547424 ( 0%)
d23_01_010 46028292096 1620 no yes 3542993920 ( 8%) 37865728 ( 0%)
d23_01_011 46028292096 1620 no yes 3542859776 ( 8%) 37889408 ( 0%)
d23_01_012 46028292096 1620 no yes 3542452224 ( 8%) 37721440 ( 0%)
d23_01_013 46028292096 1620 no yes 3542286336 ( 8%) 37797824 ( 0%)
d23_01_014 46028292096 1620 no yes 3543352320 ( 8%) 37647456 ( 0%)
d23_01_015 46028292096 1620 no yes 3542906880 ( 8%) 37776960 ( 0%)
d23_01_016 46028292096 1620 no yes 3540386816 ( 8%) 37521632 ( 0%)
d23_01_017 46028292096 1620 no yes 3543212032 ( 8%) 37568480 ( 0%)
d23_01_018 46028292096 1620 no yes 3542416384 ( 8%) 37467648 ( 0%)
d23_01_019 46028292096 1620 no yes 3542659072 ( 8%) 37865344 ( 0%)
d23_01_020 46028292096 1620 no yes 3542518784 ( 8%) 37623840 ( 0%)
d23_01_021 46028292096 1620 no yes 3543202816 ( 8%) 37630848 ( 0%)
d23_01_022 46028292096 1620 no yes 3544535040 ( 8%) 37723968 ( 0%)
d23_01_023 46028292096 1620 no yes 3543248896 ( 8%) 37542656 ( 0%)
d23_01_024 46028292096 1620 no yes 3541811200 ( 8%) 37775360 ( 0%)
d23_01_025 46028292096 1620 no yes 3544839168 ( 8%) 37887744 ( 0%)
d23_01_026 46028292096 1620 no yes 3542474752 ( 8%) 37820672 ( 0%)
d23_01_027 46028292096 1620 no yes 3542050816 ( 8%) 37847296 ( 0%)
d23_01_028 46028292096 1620 no yes 3540822016 ( 8%) 37578400 ( 0%)
d23_01_029 46028292096 1620 no yes 3542011904 ( 8%) 37423328 ( 0%)
d23_01_030 46028292096 1620 no yes 3542572032 ( 8%) 37751840 ( 0%)
d23_01_031 46028292096 1620 no yes 3541582848 ( 8%) 37648896 ( 0%)
d23_01_032 46028292096 1620 no yes 3542650880 ( 8%) 37715840 ( 0%)
d23_01_033 46028292096 1620 no yes 3542251520 ( 8%) 37598432 ( 0%)
d23_01_034 46028292096 1620 no yes 3542195200 ( 8%) 37582944 ( 0%)
d23_01_035 46028292096 1620 no yes 3541298176 ( 8%) 37694848 ( 0%)
d23_01_036 46028292096 1620 no yes 3541215232 ( 8%) 37869568 ( 0%)
d23_01_037 46028292096 1620 no yes 3542111232 ( 8%) 37601088 ( 0%)
d23_01_038 46028292096 1620 no yes 3541210112 ( 8%) 37474400 ( 0%)
d23_01_039 46028292096 1620 no yes 3540457472 ( 8%) 37654656 ( 0%)
d23_01_040 46028292096 1620 no yes 3541776384 ( 8%) 37645760 ( 0%)
d23_01_041 46028292096 1620 no yes 3542624256 ( 8%) 37798880 ( 0%)
d23_01_042 46028292096 1620 no yes 3541653504 ( 8%) 37595936 ( 0%)
d23_01_043 46028292096 1620 no yes 3540583424 ( 8%) 37751936 ( 0%)
d23_01_044 46028292096 1620 no yes 3542136832 ( 8%) 37793536 ( 0%)
d23_01_045 46028292096 1620 no yes 3543443456 ( 8%) 37683872 ( 0%)
d23_01_046 46028292096 1620 no yes 3540705280 ( 8%) 37896096 ( 0%)
d23_01_047 46028292096 1620 no yes 3541550080 ( 8%) 37577760 ( 0%)
d23_01_048 46028292096 1620 no yes 3542068224 ( 8%) 37724960 ( 0%)
d23_01_049 46028292096 1620 no yes 3544568832 ( 8%) 37687264 ( 0%)
d23_01_050 46028292096 1620 no yes 3543891968 ( 8%) 37737824 ( 0%)
d23_01_051 46028292096 1620 no yes 3541944320 ( 8%) 37787904 ( 0%)
d23_01_052 46028292096 1620 no yes 3542128640 ( 8%) 37960704 ( 0%)
d23_01_053 46028292096 1620 no yes 3542494208 ( 8%) 37823104 ( 0%)
d23_01_054 46028292096 1620 no yes 3541776384 ( 8%) 37652064 ( 0%)
d23_01_055 46028292096 1620 no yes 3543655424 ( 8%) 37802656 ( 0%)
d23_01_056 46028292096 1620 no yes 3541664768 ( 8%) 37694272 ( 0%)
d23_01_057 46028292096 1620 no yes 3542197248 ( 8%) 37798272 ( 0%)
d23_01_058 46028292096 1620 no yes 3543078912 ( 8%) 37740448 ( 0%)
d23_01_059 46028292096 1620 no yes 3544783872 ( 8%) 37741248 ( 0%)
d23_01_060 46028292096 1620 no yes 3542276096 ( 8%) 37818304 ( 0%)
d23_01_061 46028292096 1620 no yes 3543452672 ( 8%) 37727104 ( 0%)
d23_01_062 46028292096 1620 no yes 3543225344 ( 8%) 37754720 ( 0%)
d23_01_063 46028292096 1620 no yes 3543173120 ( 8%) 37685280 ( 0%)
d23_01_064 46028292096 1620 no yes 3541703680 ( 8%) 37711424 ( 0%)
d23_01_065 46028292096 1620 no yes 3541797888 ( 8%) 37836992 ( 0%)
d23_01_066 46028292096 1620 no yes 3542709248 ( 8%) 37780864 ( 0%)
d23_01_067 46028292096 1620 no yes 3542996992 ( 8%) 37798976 ( 0%)
d23_01_068 46028292096 1620 no yes 3542989824 ( 8%) 37672352 ( 0%)
d23_01_069 46028292096 1620 no yes 3542004736 ( 8%) 37688608 ( 0%)
d23_01_070 46028292096 1620 no yes 3541458944 ( 8%) 37648320 ( 0%)
d23_01_071 46028292096 1620 no yes 3542049792 ( 8%) 37874368 ( 0%)
d23_01_072 46028292096 1620 no yes 3541520384 ( 8%) 37650368 ( 0%)
d23_01_073 46028292096 1620 no yes 3542274048 ( 8%) 37759776 ( 0%)
d23_01_074 46028292096 1620 no yes 3541511168 ( 8%) 37569472 ( 0%)
d23_01_075 46028292096 1620 no yes 3544001536 ( 8%) 37685952 ( 0%)
d23_01_076 46028292096 1620 no yes 3543203840 ( 8%) 37690880 ( 0%)
d23_01_077 46028292096 1620 no yes 3541925888 ( 8%) 37710848 ( 0%)
d23_01_078 46028292096 1620 no yes 3543930880 ( 8%) 37588672 ( 0%)
d23_01_079 46028292096 1620 no yes 3541520384 ( 8%) 37626432 ( 0%)
d23_01_080 46028292096 1620 no yes 3541615616 ( 8%) 37796576 ( 0%)
d23_01_081 46028292096 1620 no yes 3542212608 ( 8%) 37773056 ( 0%)
d23_01_082 46028292096 1620 no yes 3541496832 ( 8%) 37863200 ( 0%)
d23_01_083 46028292096 1620 no yes 3541881856 ( 8%) 37822016 ( 0%)
d23_01_084 46028292096 1620 no yes 3543436288 ( 8%) 37838144 ( 0%)
d23_02_001 46028292096 1620 no yes 3543580672 ( 8%) 37784480 ( 0%)
d23_02_002 46028292096 1620 no yes 3541958656 ( 8%) 38029312 ( 0%)
d23_02_003 46028292096 1620 no yes 3542037504 ( 8%) 37781888 ( 0%)
d23_02_004 46028292096 1620 no yes 3541141504 ( 8%) 37535936 ( 0%)
d23_02_005 46028292096 1620 no yes 3541710848 ( 8%) 37585504 ( 0%)
d23_02_006 46028292096 1620 no yes 3542758400 ( 8%) 37699968 ( 0%)
d23_02_007 46028292096 1620 no yes 3541051392 ( 8%) 37609824 ( 0%)
d23_02_008 46028292096 1620 no yes 3541925888 ( 8%) 37791872 ( 0%)
d23_02_009 46028292096 1620 no yes 3542461440 ( 8%) 37854464 ( 0%)
d23_02_010 46028292096 1620 no yes 3544000512 ( 8%) 37642048 ( 0%)
d23_02_011 46028292096 1620 no yes 3542000640 ( 8%) 37811840 ( 0%)
d23_02_012 46028292096 1620 no yes 3543025664 ( 8%) 37802784 ( 0%)
d23_02_013 46028292096 1620 no yes 3541744640 ( 8%) 37776608 ( 0%)
d23_02_014 46028292096 1620 no yes 3542261760 ( 8%) 37699648 ( 0%)
d23_02_015 46028292096 1620 no yes 3542729728 ( 8%) 37690944 ( 0%)
d23_02_016 46028292096 1620 no yes 3543721984 ( 8%) 37657472 ( 0%)
d23_02_017 46028292096 1620 no yes 3540802560 ( 8%) 37531328 ( 0%)
d23_02_018 46028292096 1620 no yes 3542657024 ( 8%) 37860768 ( 0%)
d23_02_019 46028292096 1620 no yes 3543438336 ( 8%) 37573760 ( 0%)
d23_02_020 46028292096 1620 no yes 3543243776 ( 8%) 37662976 ( 0%)
d23_02_021 46028292096 1620 no yes 46028197888 (100%) 32576 ( 0%) *
d23_02_022 46028292096 1620 no yes 3543544832 ( 8%) 37620160 ( 0%)
d23_02_023 46028292096 1620 no yes 3540716544 ( 8%) 37649536 ( 0%)
d23_02_024 46028292096 1620 no yes 3542181888 ( 8%) 37553760 ( 0%)
d23_02_025 46028292096 1620 no yes 3540486144 ( 8%) 37529376 ( 0%)
d23_02_026 46028292096 1620 no yes 3541583872 ( 8%) 37833792 ( 0%)
d23_02_027 46028292096 1620 no yes 3542169600 ( 8%) 37778912 ( 0%)
d23_02_028 46028292096 1620 no yes 3541048320 ( 8%) 37696864 ( 0%)
d23_02_029 46028292096 1620 no yes 3542336512 ( 8%) 37859264 ( 0%)
d23_02_030 46028292096 1620 no yes 3542102016 ( 8%) 38059168 ( 0%)
d23_02_031 46028292096 1620 no yes 3541311488 ( 8%) 37784480 ( 0%)
d23_02_032 46028292096 1620 no yes 3542036480 ( 8%) 37783456 ( 0%)
d23_02_033 46028292096 1620 no yes 3541478400 ( 8%) 37753792 ( 0%)
d23_02_034 46028292096 1620 no yes 3540772864 ( 8%) 37725312 ( 0%)
d23_02_035 46028292096 1620 no yes 3541840896 ( 8%) 37709664 ( 0%)
d23_02_036 46028292096 1620 no yes 3542415360 ( 8%) 37580448 ( 0%)
d23_02_037 46028292096 1620 no yes 3542515712 ( 8%) 37587808 ( 0%)
d23_02_038 46028292096 1620 no yes 3541250048 ( 8%) 37550976 ( 0%)
d23_02_039 46028292096 1620 no yes 3542627328 ( 8%) 37389952 ( 0%)
d23_02_040 46028292096 1620 no yes 3541750784 ( 8%) 37709216 ( 0%)
d23_02_041 46028292096 1620 no yes 3542558720 ( 8%) 37760288 ( 0%)
d23_02_042 46028292096 1620 no yes 3540994048 ( 8%) 37491680 ( 0%)
d23_02_043 46028292096 1620 no yes 3542491136 ( 8%) 37768576 ( 0%)
d23_02_044 46028292096 1620 no yes 3542805504 ( 8%) 37638144 ( 0%)
d23_02_045 46028292096 1620 no yes 3540658176 ( 8%) 37613120 ( 0%)
d23_02_046 46028292096 1620 no yes 3543098368 ( 8%) 37920320 ( 0%)
d23_02_047 46028292096 1620 no yes 3543087104 ( 8%) 37590432 ( 0%)
d23_02_048 46028292096 1620 no yes 3541494784 ( 8%) 37468224 ( 0%)
d23_02_049 46028292096 1620 no yes 3541920768 ( 8%) 37675968 ( 0%)
d23_02_050 46028292096 1620 no yes 3542463488 ( 8%) 37670816 ( 0%)
d23_02_051 46028292096 1620 no yes 3542427648 ( 8%) 37678176 ( 0%)
d23_02_052 46028292096 1620 no yes 3539824640 ( 8%) 37590176 ( 0%)
d23_02_053 46028292096 1620 no yes 3542251520 ( 8%) 37835200 ( 0%)
d23_02_054 46028292096 1620 no yes 3541064704 ( 8%) 37636224 ( 0%)
d23_02_055 46028292096 1620 no yes 3540130816 ( 8%) 37703360 ( 0%)
d23_02_056 46028292096 1620 no yes 3545320448 ( 8%) 37767712 ( 0%)
d23_02_057 46028292096 1620 no yes 3543144448 ( 8%) 37658208 ( 0%)
d23_02_058 46028292096 1620 no yes 3541233664 ( 8%) 37720640 ( 0%)
d23_02_059 46028292096 1620 no yes 3541435392 ( 8%) 37680896 ( 0%)
d23_02_060 46028292096 1620 no yes 3542780928 ( 8%) 37567360 ( 0%)
d23_02_061 46028292096 1620 no yes 3542073344 ( 8%) 37420992 ( 0%)
d23_02_062 46028292096 1620 no yes 3543192576 ( 8%) 37575232 ( 0%)
d23_02_063 46028292096 1620 no yes 3542048768 ( 8%) 37696192 ( 0%)
d23_02_064 46028292096 1620 no yes 3541509120 ( 8%) 37536224 ( 0%)
d23_02_065 46028292096 1620 no yes 3542299648 ( 8%) 37648736 ( 0%)
d23_02_066 46028292096 1620 no yes 3541864448 ( 8%) 37710976 ( 0%)
d23_02_067 46028292096 1620 no yes 3538870272 ( 8%) 37532288 ( 0%)
d23_02_068 46028292096 1620 no yes 3541297152 ( 8%) 37829184 ( 0%)
d23_02_069 46028292096 1620 no yes 3542179840 ( 8%) 37583424 ( 0%)
d23_02_070 46028292096 1620 no yes 3541616640 ( 8%) 37742080 ( 0%)
d23_02_071 46028292096 1620 no yes 3541706752 ( 8%) 37687328 ( 0%)
d23_02_072 46028292096 1620 no yes 3542240256 ( 8%) 37582112 ( 0%)
d23_02_073 46028292096 1620 no yes 3542102016 ( 8%) 37752736 ( 0%)
d23_02_074 46028292096 1620 no yes 3542754304 ( 8%) 37687456 ( 0%)
d23_02_075 46028292096 1620 no yes 3542194176 ( 8%) 37729344 ( 0%)
d23_02_076 46028292096 1620 no yes 3542798336 ( 8%) 37672096 ( 0%)
d23_02_077 46028292096 1620 no yes 3543918592 ( 8%) 37709024 ( 0%)
d23_02_078 46028292096 1620 no yes 3540992000 ( 8%) 37540992 ( 0%)
d23_02_079 46028292096 1620 no yes 3543330816 ( 8%) 37573888 ( 0%)
d23_02_080 46028292096 1620 no yes 3543735296 ( 8%) 37818208 ( 0%)
d23_02_081 46028292096 1620 no yes 3542054912 ( 8%) 37698016 ( 0%)
d23_02_082 46028292096 1620 no yes 3540747264 ( 8%) 37643776 ( 0%)
d23_02_083 46028292096 1620 no yes 3542846464 ( 8%) 37715680 ( 0%)
d23_02_084 46028292096 1620 no yes 3542061056 ( 8%) 37703904 ( 0%)
------------- -------------------- -------------------
(pool total) 7686724780032 591558139904 ( 8%) 6295334976 ( 0%)
============= ==================== ===================
(data) 7686724780032 591558139904 ( 8%) 6295334976 ( 0%)
(metadata) 2516582400 783850240 ( 31%) 37303168 ( 1%)
============= ==================== ===================
(total) 7689241362432 592341990144 ( 8%) 6332638144 ( 0%)
Inode Information
-----------------
Number of used inodes: 30247288
Number of free inodes: 11695752
Number of allocated inodes: 41943040
Maximum number of inodes: 41943040
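On Sven's question about inode fill grade, the "Inode Information" section above works out to roughly 72% of the inode limit in use; a quick check of the arithmetic:

```python
# Inode fill grade computed from the mmdf "Inode Information" section above.
used_inodes = 30_247_288
max_inodes = 41_943_040

pct_used = 100 * used_inodes / max_inodes
print(f"{pct_used:.1f}% of the inode limit in use")
```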
-------------- next part --------------
flag value description
------------------- ------------------------ -----------------------------------
-f 8192 Minimum fragment size in bytes (system pool)
32768 Minimum fragment size in bytes (other pools)
-i 512 Inode size in bytes
-I 32768 Indirect block size in bytes
-m 2 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j cluster Block allocation type
-D nfs4 File locking semantics in effect
-k posix ACL semantics in effect
-n 5000 Estimated number of nodes that will mount file system
-B 262144 Block size (system pool)
1048576 Block size (other pools)
-Q user;group;fileset Quotas accounting enabled
user;group;fileset Quotas enforced
none Default quotas enabled
--perfileset-quota no Per-fileset quota enforcement
--filesetdf no Fileset df enabled?
-V 13.23 (3.5.0.7) File system version
--create-time Fri Dec 6 15:23:28 2013 File system creation time
-z no Is DMAPI enabled?
-L 8388608 Logfile size
-E yes Exact mtime mount option
-S no Suppress atime mount option
-K whenpossible Strict replica allocation option
--fastea yes Fast external attributes enabled?
--encryption no Encryption enabled?
--inode-limit 41943040 Maximum number of inodes
--log-replicas 0 Number of log replicas
--is4KAligned no is4KAligned?
--rapid-repair no rapidRepair enabled?
--write-cache-threshold 0 HAWC Threshold (max 65536)
-P system;sp_1620 Disk storage pools in file system
-d m01_12_093;m01_12_094;m01_12_095;m01_12_096;m02_22_093;m02_22_094;m02_22_095;m02_22_096;m01_12_061;m01_12_062;m01_12_063;m01_12_064;m01_12_065;m01_12_066;m01_12_067;
-d m01_12_068;m02_22_061;m02_22_062;m02_22_063;m02_22_064;m02_22_065;m02_22_066;m02_22_067;m02_22_068;m01_12_081;m01_12_082;m01_12_083;m01_12_084;m01_12_085;m01_12_086;
-d m01_12_087;m01_12_088;m01_12_089;m01_12_090;m01_12_091;m01_12_092;m02_22_081;m02_22_082;m02_22_083;m02_22_084;m02_22_085;m02_22_086;m02_22_087;m02_22_088;m02_22_089;
-d m02_22_090;m02_22_091;m02_22_092;d23_01_001;d23_01_002;d23_01_003;d23_01_004;d23_01_005;d23_01_006;d23_01_007;d23_01_008;d23_01_009;d23_01_010;d23_01_011;d23_01_012;
-d d23_01_013;d23_01_014;d23_01_015;d23_01_016;d23_01_017;d23_01_018;d23_01_019;d23_01_020;d23_01_021;d23_01_022;d23_01_023;d23_01_024;d23_01_025;d23_01_026;d23_01_027;
-d d23_01_028;d23_01_029;d23_01_030;d23_01_031;d23_01_032;d23_01_033;d23_01_034;d23_01_035;d23_01_036;d23_01_037;d23_01_038;d23_01_039;d23_01_040;d23_01_041;d23_01_042;
-d d23_01_043;d23_01_044;d23_01_045;d23_01_046;d23_01_047;d23_01_048;d23_01_049;d23_01_050;d23_01_051;d23_01_052;d23_01_053;d23_01_054;d23_01_055;d23_01_056;d23_01_057;
-d d23_01_058;d23_01_059;d23_01_060;d23_01_061;d23_01_062;d23_01_063;d23_01_064;d23_01_065;d23_01_066;d23_01_067;d23_01_068;d23_01_069;d23_01_070;d23_01_071;d23_01_072;
-d d23_01_073;d23_01_074;d23_01_075;d23_01_076;d23_01_077;d23_01_078;d23_01_079;d23_01_080;d23_01_081;d23_01_082;d23_01_083;d23_01_084;d23_02_001;d23_02_002;d23_02_003;
-d d23_02_004;d23_02_005;d23_02_006;d23_02_007;d23_02_008;d23_02_009;d23_02_010;d23_02_011;d23_02_012;d23_02_013;d23_02_014;d23_02_015;d23_02_016;d23_02_017;d23_02_018;
-d d23_02_019;d23_02_020;d23_02_021;d23_02_022;d23_02_023;d23_02_024;d23_02_025;d23_02_026;d23_02_027;d23_02_028;d23_02_029;d23_02_030;d23_02_031;d23_02_032;d23_02_033;
-d d23_02_034;d23_02_035;d23_02_036;d23_02_037;d23_02_038;d23_02_039;d23_02_040;d23_02_041;d23_02_042;d23_02_043;d23_02_044;d23_02_045;d23_02_046;d23_02_047;d23_02_048;
-d d23_02_049;d23_02_050;d23_02_051;d23_02_052;d23_02_053;d23_02_054;d23_02_055;d23_02_056;d23_02_057;d23_02_058;d23_02_059;d23_02_060;d23_02_061;d23_02_062;d23_02_063;
-d d23_02_064;d23_02_065;d23_02_066;d23_02_067;d23_02_068;d23_02_069;d23_02_070;d23_02_071;d23_02_072;d23_02_073;d23_02_074;d23_02_075;d23_02_076;d23_02_077;d23_02_078;
-d d23_02_079;d23_02_080;d23_02_081;d23_02_082;d23_02_083;d23_02_084 Disks in file system
-A no Automatic mount option
-o nodev,nosuid Additional mount options
-T /gpfsm/dnb03 Default mount point
--mount-priority 0 Mount priority