<div class="moz-cite-prefix">Hi Salvatore,<br>
<br>

Just to add that when the local metadata disk fails or the server goes offline, there will most likely be an I/O interruption/pause whilst the GPFS cluster renegotiates.

The main concept to be aware of (as Paul mentioned) is that when a disk goes offline it will appear down to GPFS. Once you've started the disk again, GPFS will rediscover it and scan the metadata for any missing updates; those updates are then repaired/replicated.
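
For example, checking and restarting a disk looks roughly like this (a minimal sketch; "gpfs1" and "meta01" are hypothetical filesystem/NSD names, so adjust for your cluster):

    # Show the status/availability of every disk in the filesystem;
    # a failed local metadata disk will show availability "down".
    mmlsdisk gpfs1

    # Bring the disk back online and trigger the rescan/repair:
    mmchdisk gpfs1 start -d meta01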
<p> <span class="signame">Laurence Horrocks-Barlow</span> <br>
<span class="sigtitle">Linux Systems Software Engineer</span> <br>
<span class="sigcompany">OCF plc</span> <br>
</p>
<p> <span class="sighead">Tel:</span> <span class="sigvalue">+44
(0)114 257 2200</span> <br>
<span class="sighead">Fax:</span> <span class="sigvalue">+44
(0)114 257 0022</span> <br>
<span class="sighead">Web:</span> <span class="sigvalue"><a
href="http://www.ocf.co.uk">www.ocf.co.uk</a></span> <br>
<span class="sighead">Blog:</span> <span class="sigvalue"><a
href="http://blog.ocf.co.uk">blog.ocf.co.uk</a></span> <br>
<span class="sighead">Twitter:</span> <span class="sigvalue"><a
href="http://twitter.com/#%21/ocfplc">@ocfplc</a></span> <br>
</p>
<span class="sigsmall">
<p> OCF plc is a company registered in England and Wales.
Registered number 4132533, VAT number GB 780 6803 14.
Registered office address: OCF plc, 5 Rotunda Business Centre,
Thorncliffe Park, Chapeltown, Sheffield, S35 2PG. </p>
<p> This message is private and confidential. If you have
received this message in error, please notify us and remove it
from your system. </p>
</span><br>

On 10/10/2014 17:02, Sanchez, Paul wrote:
<p class="MsoNormal">Hi Salvatore, <br>
<br>
We've done this before (non-shared metadata NSDs with GPFS
4.1) and noted these constraints:<br>
<br>

* Filesystem descriptor quorum: since it will be easier for a metadata disk to go offline, it's even more important to have three failure groups, with FusionIO metadata NSDs in two of them and at least a descOnly NSD in the third (see the stanza sketch after this list). You may even want to explore having three full metadata replicas on FusionIO. (Or, if your workload can tolerate it, the third one can be slower but in another GPFS "subnet" so that it isn't used for reads.)

* Make sure to set the correct default metadata replicas in your filesystem, corresponding to the number of metadata failure groups you set up (also shown in the sketch below). When a metadata server goes offline, it will take its metadata disks with it, and you want a replica of the metadata to remain available.

* When a metadata server goes offline and comes back up (after a maintenance reboot, for example), its non-shared metadata disks will be stopped. Until those are brought back into a well-known replicated state, you are at risk of a cluster-wide filesystem unmount if there is a subsequent metadata disk failure. GPFS will continue to work by default, allowing reads and writes against the remaining metadata replica, but you must detect that disks are stopped (e.g. with mmlsdisk) and restart them (e.g. with mmchdisk <fs> start -a); see the second sketch below.
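
To illustrate the first two points, here is a minimal sketch of the NSD stanzas and filesystem creation. All names (devices, NSDs, servers, filesystem) are hypothetical placeholders; failure groups 1 and 2 hold the FusionIO metadata NSDs, and failure group 3 holds only a descriptor disk:

    # nsd.stanza: two FusionIO metadata NSDs plus a descOnly tiebreaker
    %nsd: nsd=meta01 device=/dev/fioa servers=nsdserv1 usage=metadataOnly failureGroup=1 pool=system
    %nsd: nsd=meta02 device=/dev/fioa servers=nsdserv2 usage=metadataOnly failureGroup=2 pool=system
    %nsd: nsd=desc01 device=/dev/sdx servers=nsdserv3 usage=descOnly failureGroup=3
    # ...plus your dataOnly NSDs on the SAN...

    mmcrnsd -F nsd.stanza

    # Two metadata failure groups -> default metadata replicas of 2 (-m 2),
    # with room to move to a third copy later (-M 3):
    mmcrfs gpfs1 -F nsd.stanza -m 2 -M 3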
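
And for the third point, restarting the stopped disks once the server is back (again with the hypothetical filesystem name "gpfs1"):

    # List only the disks that are not up/ready:
    mmlsdisk gpfs1 -e

    # Restart all stopped disks; GPFS rescans the metadata and
    # re-replicates whatever changed while the server was away:
    mmchdisk gpfs1 start -a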

I haven't seen anyone "recommend" running non-shared disks like this, and I wouldn't do it for anything that can't afford to go offline unexpectedly; it also requires a little more operational attention. But it does appear to work.

Thx
Paul Sanchez

From: gpfsug-discuss-bounces@gpfsug.org [mailto:gpfsug-discuss-bounces@gpfsug.org] On Behalf Of Salvatore Di Nardo
Sent: Thursday, October 09, 2014 8:03 AM
To: gpfsug main discussion list
Subject: [gpfsug-discuss] metadata vdisks on fusionio.. doable?
<p class="MsoNormal" style="margin-bottom:12.0pt"><span
style="font-size:10.0pt">Hello everyone,<br>
<br>
Suppose we want to build a new GPFS storage using SAN
attached storages, but instead to put metadata in a shared
storage, we want to use FusionIO PCI cards locally on the
servers to speed up metadata operation(
<a moz-do-not-send="true"
href="http://www.fusionio.com/products/iodrive">http://www.fusionio.com/products/iodrive</a>)
and for reliability, replicate the metadata in all the
servers, will this work in case of server failure?<br>
<br>
To make it more clear: If a server fail i will loose also a
metadata vdisk. Its the replica mechanism its reliable
enough to avoid metadata corruption and loss of data?<br>
<br>
Thanks in advance<br>
Salvatore Di Nardo<br>
<br>
<br>
</span><o:p></o:p></p>