[gpfsug-discuss] metadata vdisks on fusionio.. doable?

Laurence Horrocks- Barlow lhorrocks-barlow at ocf.co.uk
Fri Oct 10 17:48:24 BST 2014


Hi Salvatore,

Just to add that when a local metadata disk fails or its server goes 
offline, there will most likely be an I/O interruption/pause whilst the 
GPFS cluster renegotiates.

The main concept to be aware of (as Paul mentioned) is that when a disk 
goes offline it will appear as down to GPFS. Once you've started the disk 
again, GPFS will rediscover it and scan the metadata for any missing 
updates; those updates are then repaired/replicated.
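As a rough sketch, spotting such a down disk can be scripted by parsing `mmlsdisk` output. The sample output and NSD names below are invented, standing in for a real `mmlsdisk gpfs0` call:

```shell
# Rough sketch: find disks GPFS reports as "down" by parsing `mmlsdisk`
# output. The sample output is invented; the disk names are hypothetical.
mmlsdisk_output='disk         driver   sector     failure holds    holds                            storage
name         type       size       group metadata data  status        availability pool
------------ -------- ------ ----------- -------- ----- ------------- ------------ ------------
fio_nsd1     nsd         512           1 Yes      No    ready         up           system
fio_nsd2     nsd         512           2 Yes      No    ready         down         system'

# Column 8 is the availability column; print the name of each down disk.
down_disks=$(printf '%s\n' "$mmlsdisk_output" | awk '$8 == "down" { print $1 }')
echo "$down_disks"
# A real recovery would then run: mmchdisk gpfs0 start -a
```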

Laurence Horrocks-Barlow
Linux Systems Software Engineer
OCF plc

Tel: +44 (0)114 257 2200
Fax: +44 (0)114 257 0022
Web: www.ocf.co.uk <http://www.ocf.co.uk>
Blog: blog.ocf.co.uk <http://blog.ocf.co.uk>
Twitter: @ocfplc <http://twitter.com/#%21/ocfplc>



On 10/10/2014 17:02, Sanchez, Paul wrote:
>
> Hi Salvatore,
>
> We've done this before (non-shared metadata NSDs with GPFS 4.1) and 
> noted these constraints:
>
> * Filesystem descriptor quorum: since it will be easier for a 
> metadata disk to go offline, it's even more important to have three 
> failure groups, with FusionIO metadata NSDs in two of them and at least 
> a descOnly NSD in the third. You may even want to explore having 
> three full metadata replicas on FusionIO. (Or, if your workload 
> can tolerate it, the third one can be slower but placed in another GPFS 
> "subnet" so that it isn't used for reads.)
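For reference, a hypothetical NSD stanza file sketching that layout (device paths, server names, and NSD names are all invented) might look like:

```
# Two FusionIO metadata NSDs in failure groups 1 and 2, plus a small
# descOnly NSD in failure group 3 to preserve descriptor quorum.
%nsd: device=/dev/fioa nsd=md_nsd1   servers=server1 usage=metadataOnly failureGroup=1 pool=system
%nsd: device=/dev/fiob nsd=md_nsd2   servers=server2 usage=metadataOnly failureGroup=2 pool=system
%nsd: device=/dev/sdx  nsd=desc_nsd3 servers=server3 usage=descOnly     failureGroup=3
```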
>
> * Make sure to set the correct default metadata replicas in your 
> filesystem, corresponding to the number of metadata failure groups you 
> set up. When a metadata server goes offline, it will take the metadata 
> disks with it, and you want a replica of the metadata to be available.
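As an illustration (filesystem, device, and stanza-file names are invented), the replica counts can be set at creation time; -m is the default number of metadata replicas and -M the maximum:

```
# Default 2 metadata replicas (max 3), single data replica (max 2):
mmcrfs gpfs0 -F nsd_stanzas.txt -m 2 -M 3 -r 1 -R 2
```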
>
> * When a metadata server goes offline and comes back up (after a 
> maintenance reboot, for example), the non-shared metadata disks will 
> be stopped. Until those are brought back into a fully replicated 
> state, you are at risk of a cluster-wide filesystem unmount if there 
> is a subsequent metadata disk failure. But GPFS will continue to work, 
> by default, allowing reads and writes against the remaining metadata 
> replica. You must detect that disks are stopped (e.g. with mmlsdisk) and 
> restart them (e.g. with mmchdisk <fs> start -a).
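For example (assuming a filesystem named gpfs0), that check-and-restart cycle might look like:

```
mmlsdisk gpfs0 -e        # show only disks that are not up/ready
mmchdisk gpfs0 start -a  # bring all stopped disks back and trigger repair
mmlsdisk gpfs0 -e        # verify: should now report no offline disks
```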
>
> I haven't seen anyone "recommend" running non-shared disks like this, 
> and I wouldn't do it for anything that can't afford to go offline 
> unexpectedly; it also requires a little more operational attention. But 
> it does appear to work.
>
> Thx
> Paul Sanchez
>
> *From:*gpfsug-discuss-bounces at gpfsug.org 
> [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Salvatore Di 
> Nardo
> *Sent:* Thursday, October 09, 2014 8:03 AM
> *To:* gpfsug main discussion list
> *Subject:* [gpfsug-discuss] metadata vdisks on fusionio.. doable?
>
> Hello everyone,
>
> Suppose we want to build a new GPFS storage system using SAN-attached 
> storage, but instead of putting the metadata on shared storage, we want 
> to use FusionIO PCI cards locally in the servers to speed up metadata 
> operations (http://www.fusionio.com/products/iodrive) and, for 
> reliability, replicate the metadata across all the servers. Will this 
> work in case of a server failure?
>
> To make it clearer: if a server fails, I will also lose a metadata 
> vdisk. Is the replica mechanism reliable enough to avoid metadata 
> corruption and loss of data?
>
> Thanks in advance
> Salvatore Di Nardo
>
>
