[gpfsug-discuss] GPFS on ZFS! ... ?

Marc A Kaplan makaplan at us.ibm.com
Mon Jun 13 18:53:41 BST 2016


How do you set the size of a ZFS file that is simulating a GPFS disk?  How 
do you "tell" GPFS about that?

How efficient is this layering, compared to just giving GPFS direct access 
to the same kind of LUNs that ZFS is using?

Hmmm... to partially answer my own question: I do something similar, but 
strictly for testing non-performance-critical GPFS functions.
On any file system one can:

  dd if=/dev/zero of=/fakedisks/d3 count=1 bs=1M seek=3000   # create a sparse ~3GB fake disk for GPFS

Then use a GPFS NSD configuration record (stanza) like this:

%nsd: nsd=d3  device=/fakedisks/d3  usage=dataOnly  pool=xtra  servers=bog-xxx

The fake disk file starts out sparse and will "grow" dynamically as 
GPFS writes to it...
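
To make that concrete, here is a minimal sketch of answering the sizing and 
"telling GPFS" questions above (the pool name "tank" and the filesystem name 
"somefs" are only placeholders, and the exact zvol device path can vary by 
platform):

# Option A: a fixed-size ZFS zvol; its size is set at creation time
# (on Linux it shows up as /dev/zdN, with a /dev/zvol/tank/gpfs_d3 symlink)
zfs create -V 3G tank/gpfs_d3

# Option B: the sparse fake-disk file shown above
dd if=/dev/zero of=/fakedisks/d3 count=1 bs=1M seek=3000

# "Tell" GPFS about the disk: put a %nsd stanza like the one above into a
# file, say /tmp/d3.stanza (for the zvol case, point device= at the zd
# device), then create the NSD and add it to a filesystem
mmcrnsd -F /tmp/d3.stanza
mmadddisk somefs -F /tmp/d3.stanza    # or reference the new NSD in mmcrfs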

But I have no idea how well this will work for a critical "production" 
system...

tx, marc kaplan.



From:   "Allen, Benjamin S." <bsallen at alcf.anl.gov>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   06/13/2016 12:34 PM
Subject:        Re: [gpfsug-discuss] GPFS on ZFS?
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Jaime,

See 
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm.
An example I have for adding /dev/nvme* devices:

* GPFS doesn't know that the /dev/nvme* devices are valid block devices, so 
use the nsddevices user exit script to let it know about them:

cp /usr/lpp/mmfs/samples/nsddevices.sample /var/mmfs/etc/nsddevices

* Edit /var/mmfs/etc/nsddevices and add the following to the Linux section:

if [[ $osName = Linux ]]
then
  # Add discovery of NVMe disks in the Linux environment.
  for dev in $( awk '/nvme/ {print $4}' /proc/partitions )
  do
    echo "$dev generic"
  done
fi

* Copy the edited nsddevices file to the same path on the rest of the nodes:
for host in n01 n02 n03 n04; do
  scp /var/mmfs/etc/nsddevices ${host}:/var/mmfs/etc/nsddevices
done
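
A quick sanity check after distributing the file, assuming your level ships 
mmdevdiscover (it runs the same device discovery GPFS itself uses, including 
/var/mmfs/etc/nsddevices):

# On each NSD server the nvme devices should now be reported as "generic"
/usr/lpp/mmfs/bin/mmdevdiscover | grep nvme

# Once NSDs have been created on them, confirm the NSD-to-device mapping
mmlsnsd -X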


Ben

> On Jun 13, 2016, at 11:26 AM, Jaime Pinto <pinto at scinet.utoronto.ca> wrote:
> 
> Hi Chris
> 
> As I understand it, GPFS likes to 'see' the block devices, even on a
> hardware RAID solution such as DDN's.
> 
> How is that accomplished when you use ZFS for software RAID?
> On page 4 I see this info, and I'm trying to interpret it:
> 
> General Configuration
> ...
> * zvols
> * nsddevices
>  - echo "zdX generic"
> 
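
A rough sketch of what those two bullets likely amount to, assuming zvols 
show up as /dev/zd* block devices on Linux (the pool and volume names below 
are made up):

# Create a fixed-size zvol to serve as an NSD; on Linux it appears as
# /dev/zdN (plus a /dev/zvol/tank/gpfs_nsd0 symlink)
zfs create -V 500G tank/gpfs_nsd0

# In the Linux section of /var/mmfs/etc/nsddevices, report the zd*
# devices to GPFS, analogous to the nvme example above:
for dev in $( awk '$4 ~ /^zd[0-9]+$/ {print $4}' /proc/partitions )
do
  echo "$dev generic"
done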
> 
> Thanks
> Jaime
> 
> Quoting "Hoffman, Christopher P" <cphoffma at lanl.gov>:
> 
>> Hi Jaime,
>> 
>> What in particular would you like explained more? I'd be more than
>> happy to discuss things further.
>> 
>> Chris
>> ________________________________________
>> From: gpfsug-discuss-bounces at spectrumscale.org
>> [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Jaime Pinto
>> [pinto at scinet.utoronto.ca]
>> Sent: Monday, June 13, 2016 10:11
>> To: gpfsug main discussion list
>> Subject: Re: [gpfsug-discuss] GPFS on ZFS?
>> 
>> I just came across this presentation on "GPFS with underlying ZFS
>> block devices", by Christopher Hoffman, Los Alamos National Lab,
>> although some of the implementation details remain unclear.
>> 
>> http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
>> 
>> It would be great to have more details, in particular about the
>> possibility of using GPFS directly on ZFS, rather than only the
>> 'archive' use case described in the presentation.
>> 
>> Thanks
>> Jaime
>> 
>> 
>> 
>> 
>> Quoting "Jaime Pinto" <pinto at scinet.utoronto.ca>:
>> 
>>> Since we cannot get GNR outside of ESS/GSS appliances, is anybody using
>>> ZFS for software RAID on commodity storage?
>>> 
>>> Thanks
>>> Jaime
>>> 
>>> 
>> 
>> 
>> 
>> 
>>          ************************************
>>           TELL US ABOUT YOUR SUCCESS STORIES
>>          http://www.scinethpc.ca/testimonials
>>          ************************************
>> ---
>> Jaime Pinto
>> SciNet HPC Consortium  - Compute/Calcul Canada
>> www.scinet.utoronto.ca - www.computecanada.org
>> University of Toronto
>> 256 McCaul Street, Room 235
>> Toronto, ON, M5T1W5
>> P: 416-978-2755
>> C: 416-505-1477
>> 
>> ----------------------------------------------------------------
>> This message was sent using IMP at SciNet Consortium, University of
>> Toronto.
>> 
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> 
> 
> 
> 
> 
> 
> 
>         ************************************
>          TELL US ABOUT YOUR SUCCESS STORIES
>         http://www.scinethpc.ca/testimonials
>         ************************************
> ---
> Jaime Pinto
> SciNet HPC Consortium  - Compute/Calcul Canada
> www.scinet.utoronto.ca - www.computecanada.org
> University of Toronto
> 256 McCaul Street, Room 235
> Toronto, ON, M5T1W5
> P: 416-978-2755
> C: 416-505-1477
> 
> ----------------------------------------------------------------
> This message was sent using IMP at SciNet Consortium, University of
> Toronto.
> 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



