<div dir="ltr">ok, i think i understand now, the data was already corrupted. the config change i proposed only prevents a potentially known future on the wire corruption, this will not fix something that made it to the disk already. <div><br></div><div>Sven</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Aug 2, 2017 at 11:53 AM Stijn De Weirdt <<a href="mailto:stijn.deweirdt@ugent.be">stijn.deweirdt@ugent.be</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">yes ;)<br>
<br>
the system is in preproduction, so nothing that can't stopped/started in<br>
a few minutes (current setup has only 4 nsds, and no clients).<br>
mmfsck triggers the errors very early during inode replica compare.<br>
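
for what it's worth, this is roughly how we trigger it; a sketch, assuming
the filesystem device is called gpfs0 (ours is named differently):

# unmount on all nodes so mmfsck can run offline
mmumount gpfs0 -a
# read-only check: report problems, don't attempt any repairs
mmfsck gpfs0 -n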


stijn

On 08/02/2017 08:47 PM, Sven Oehme wrote:
> How can you reproduce this so quickly?
> Did you restart all daemons after that?
>
> On Wed, Aug 2, 2017, 11:43 AM Stijn De Weirdt <stijn.deweirdt@ugent.be>
> wrote:
>
>> hi sven,
>>
>>
>>> the very first thing you should check is if you have this setting set:
>> maybe the very first thing to check should be the faq/wiki that has this
>> documented?
>>
>>>
>>> mmlsconfig envVar
>>>
>>> envVar MLX4_POST_SEND_PREFER_BF 0 MLX4_USE_MUTEX 1 MLX5_SHUT_UP_BF 1
>>> MLX5_USE_MUTEX 1
>>>
>>> if that doesn't come back as above, you need to set it:
>>>
>>> mmchconfig envVar="MLX4_POST_SEND_PREFER_BF=0 MLX5_SHUT_UP_BF=1
>>> MLX5_USE_MUTEX=1 MLX4_USE_MUTEX=1"
>> i just set this (it wasn't set before), but the problem is still present.
>>
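>> fwiw, the full sequence i used; a sketch, assuming all nodes can be
>> restarted at once (fine here, with 4 nsds and no clients):
>>
>> mmchconfig envVar="MLX4_POST_SEND_PREFER_BF=0 MLX5_SHUT_UP_BF=1 MLX5_USE_MUTEX=1 MLX4_USE_MUTEX=1"
>> # the env vars are only read at daemon startup, so restart everything
>> mmshutdown -a
>> mmstartup -a
>> # verify the setting is now in place
>> mmlsconfig envVar
>>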
>>>
>>> there was a problem in various versions of the Mellanox FW that was never
>>> completely addressed (bugs were found and fixed, but it was never fully
>>> proven to be resolved). the above environment variables turn on code in
>>> the mellanox driver that prevents this potential code path from being
>>> used to begin with.
>>>
>>> in Spectrum Scale 4.2.4 (not yet released) we added a workaround in Scale
>>> so that the problem can't happen anymore even if you don't set these
>>> variables. until then, the only choice you have is the envVar above
>>> (which, btw, ships as the default on all ESS systems).
>>>
>>> you should also be on the latest available Mellanox FW & drivers, as not
>>> all versions even have the code that is activated by the environment
>>> variables above. i think at a minimum you need to be at 3.4, but i don't
>>> remember the exact version. there have been multiple defects opened
>>> around this area; the last one i remember was:
>> we run mlnx ofed 4.1. the fw is not the latest: we have edr cards from
>> dell, and their fw is a bit behind. i'm trying to convince dell to make
>> a new one. mellanox used to let you build your own, but they don't anymore.
>>
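>> for the record, how i checked what we are on; a sketch (mlx5_0 is just an
>> example device name, yours may differ):
>>
>> # installed MLNX_OFED release
>> ofed_info -s
>> # firmware version of each hca port
>> ibstat | grep -i firmware
>> # or per device
>> ibv_devinfo -d mlx5_0 | grep fw_ver
>>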
>>>
>>> 00154843 : ESS ConnectX-3 performance issue - spinning on
>>> pthread_spin_lock
>>>
>>> you may ask your mellanox representative if they can get you access to
>>> this defect. while it was found on ESS, i.e. on PPC64 and with ConnectX-3
>>> cards, it is a general issue that affects all cards, on intel as well as
>>> Power.
>> ok, thanks for this. maybe such a reference is enough for dell to update
>> their firmware.
>>
>> stijn
>>
>>>
>>> On Wed, Aug 2, 2017 at 8:58 AM Stijn De Weirdt <stijn.deweirdt@ugent.be>
>>> wrote:
>>>
>>>> hi all,
>>>>
>>>> is there any documentation wrt data integrity in spectrum scale?
>>>> assuming a crappy network, does gpfs somehow guarantee that data written
>>>> by a client ends up safe in the nsd gpfs daemon, and similarly from the
>>>> nsd gpfs daemon to disk?
>>>>
>>>> and wrt a crappy network, what about rdma over a crappy network? is it
>>>> the same?
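>>>>
>>>> a crude end-to-end check we have been using (a sketch; fs0 mounted at
>>>> /gpfs/fs0 is a placeholder for our real filesystem):
>>>>
>>>> # write data with a known checksum through gpfs
>>>> dd if=/dev/urandom of=/tmp/probe bs=1M count=1024
>>>> sha256sum /tmp/probe
>>>> cp /tmp/probe /gpfs/fs0/probe
>>>> # remount so the re-read can't be served from the local pagepool
>>>> mmumount fs0 && mmmount fs0
>>>> # should print the identical checksum
>>>> sha256sum /gpfs/fs0/probe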
>>>>
>>>> (we are hunting down a crappy infiniband issue; ibm support says it's a
>>>> network issue, but we see no errors anywhere...)
>>>>
>>>> thanks a lot,
>>>>
>>>> stijn
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss