[gpfsug-discuss] GPFS - pagepool data protection?

Dean Hildebrand dhildeb at us.ibm.com
Sat Nov 15 20:31:53 GMT 2014


Hi Pavel,

You are more or less right in your description, but the key point I tried
to convey in my first email is that GPFS obeys POSIX. So your question is
answered by looking at how your application performs the write: does it
ask for the data to live only in the pagepool, or on stable storage? By
default, POSIX file creates and writes are unstable, so a plain write
puts the data in the pagepool, and it will be lost if a crash occurs
immediately afterwards. To make the data stable, the application must ask
for stability through POSIX, and there are many ways to do so, including
but not limited to opening with O_SYNC, using direct I/O (DIO), or
issuing some form of fsync after the write.
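
For example, a minimal C sketch of two of those options (the file paths
are placeholders and error checking is omitted for brevity):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char buf[] = "important data";

        /* Option 1: open with O_SYNC -- each write() returns only after
         * the data has reached stable storage. */
        int fd = open("/gpfs/fs1/file1", O_WRONLY | O_CREAT | O_SYNC, 0644);
        write(fd, buf, sizeof(buf));
        close(fd);

        /* Option 2: plain write() followed by fsync() -- the data may sit
         * in the pagepool until fsync() returns. */
        fd = open("/gpfs/fs1/file2", O_WRONLY | O_CREAT, 0644);
        write(fd, buf, sizeof(buf));
        fsync(fd);  /* data (and metadata) are stable once this returns */
        close(fd);

        return 0;
    }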

Dean Hildebrand
IBM Almaden Research Center




From:	Pavel Pokorny <pavel.pokorny at datera.cz>
To:	gpfsug-discuss at gpfsug.org
Date:	11/12/2014 04:21 AM
Subject:	Re: [gpfsug-discuss] GPFS - pagepool data protection?
Sent by:	gpfsug-discuss-bounces at gpfsug.org



Hi,
thanks. As I understand it, the write process to a GPFS filesystem is:

1. The application on a node makes a write call.
2. Token manager work is done to coordinate access to the required byte range.
3. mmfsd gets metadata from the file's metanode.
4. mmfsd acquires a buffer from the page pool.
5. Data is moved from the application data buffer to the page pool buffer.
6. The VSD layer copies data from the page pool to the send pool,
   and so on.

What I want to clarify is step 5, the situation when data has been moved
to the page pool. What happens if the server crashes at this point? Will
GPFS use the journal to get back to a stable state?
Thank you, Pavel

--
Ing. Pavel Pokorný
DATERA s.r.o. | Ovocný trh 580/2 | Praha | Czech Republic
www.datera.cz | Mobil: +420 602 357 194 | E-mail: pavel.pokorny at datera.cz


On Sat, Nov 8, 2014 at 1:00 PM, <gpfsug-discuss-request at gpfsug.org> wrote:

  Message: 1
  Date: Fri, 7 Nov 2014 23:42:06 +0100
  From: Dean Hildebrand <dhildeb at us.ibm.com>
  To: gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
  Subject: Re: [gpfsug-discuss] GPFS - pagepool data protection?


  Hi Pavel,

  GPFS correctly implements POSIX semantics and NFS close-to-open
  semantics. It's a little complicated, but effectively what this means is
  that when the application issues certain calls to ensure data/metadata
  is "stable" (e.g., fsync), it is guaranteed to be stable. It also
  controls ordering between nodes, among many other things. As part of
  making sure data is stable, the GPFS recovery journal is used in a
  variety of instances.

  With VMware ESX using NFS to GPFS, the same thing happens, except the
  situation is even simpler, since every write request has the 'stable'
  flag set, ensuring write-through to the storage system.
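
  In other words, a stable NFS write asks the server to commit the data
  before replying. On the exported GPFS filesystem that is roughly
  equivalent to the following sketch (stable_write is a made-up helper
  name, not a real API; error handling is minimal):

      #include <sys/types.h>
      #include <unistd.h>

      /* What an NFS write with the 'stable' flag effectively requires:
       * write the data, then flush it from the pagepool to disk before
       * replying to the client. */
      ssize_t stable_write(int fd, const void *buf, size_t len, off_t off)
      {
          ssize_t n = pwrite(fd, buf, len, off);
          if (n < 0)
              return n;
          if (fdatasync(fd) < 0)  /* commit to stable storage */
              return -1;
          return n;               /* only now can the reply be sent */
      }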

  Dean Hildebrand
  IBM Almaden Research Center




  From:   Pavel Pokorny <pavel.pokorny at datera.cz>
  To:     gpfsug-discuss at gpfsug.org
  Date:   11/07/2014 03:15 AM
  Subject:        [gpfsug-discuss] GPFS - pagepool data protection?
  Sent by:        gpfsug-discuss-bounces at gpfsug.org



  Hello to all,
  I would like to ask a question about the pagepool and the protection of
  data written through it.
  Is there a possibility of losing data written to GPFS in a situation
  where the data is stored in the pagepool but not yet written to disk?
  I think that for regular filesystem work this can be handled by the
  GPFS journal. But what about using GPFS as an NFS store for VMware
  datastores?
  Thank you for your answers,
  Pavel
  --
  Ing. Pavel Pokorný
  DATERA s.r.o. | Ovocný trh 580/2 | Praha | Czech Republic
  www.datera.cz | Mobil: +420 602 357 194 | E-mail: pavel.pokorny at datera.cz