[gpfsug-discuss] Disabling individual Storage Pools by themselves? How about GPFS Native Raid?

Zachary Giles zgiles at gmail.com
Sat Jun 20 23:40:58 BST 2015


All true. I wasn't trying to knock DDN or say "it can't be done"; it's
just (probably) not very efficient or cost-effective to buy a 12K with
30 drives (as an example).

The new 7700 looks like a really nice base for a small building block;
I had forgotten about them. It's a good box for adding 4U at a time,
and with 60 drives per enclosure, if you saturated it out at ~3
enclosures / 180 drives, you'd have 1PB, which is also a nice round
building block size. :thumbs up:
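
For anyone who wants to sanity-check that "nice round building block"
figure, here's a rough back-of-the-envelope sketch in Python. The 6 TB
NL-SAS drive size is my assumption (not something from this thread),
and it's raw capacity before any GNR/RAID overhead:

    # Rough raw-capacity check for the ~3-enclosure / 180-drive figure above.
    # drive_tb is an assumption (6 TB NL-SAS); swap in whatever you actually buy.
    drives_per_enclosure = 60      # 60-slot 4U enclosure
    enclosures = 3                 # roughly where it saturates with GPFS scatter
    drive_tb = 6                   # assumed drive size, raw (pre-RAID)

    total_drives = drives_per_enclosure * enclosures
    raw_pb = total_drives * drive_tb / 1000.0
    print("%d drives -> ~%.2f PB raw" % (total_drives, raw_pb))
    # 180 drives -> ~1.08 PB raw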

On Sat, Jun 20, 2015 at 5:12 PM, Vic Cornell <viccornell at gmail.com> wrote:
> Just to make sure everybody is up to date on this (I work for DDN, BTW):
>
>> On 19 Jun 2015, at 21:08, Zachary Giles <zgiles at gmail.com> wrote:
>>
>> It's comparable to other "large" controller systems. Take the DDN
>> 10K/12K for example: You don't just buy one more shelf of disks, or 5
>> disks at a time from Walmart. You buy 5, 10, or 20 trays and populate
>> enough disks to either hit your bandwidth or storage size requirement.
>
> With the 12K you can buy 1, 2, 3, 4, 5, 10 or 20.
>
> With the 7700/GS7K you can buy 1, 2, 3, 4 or 5.
>
> The GS7K comes with 2 controllers and 60 disk slots, all in 4U. It saturates (with GPFS scatter) at about 160-180 NL-SAS disks, and you can concatenate as many of them together as you like. I guess the thing with GPFS is that you can pick your ideal building block and then scale with it as far as you like.
>
>> Generally, changing from 5 to 10 to 20 requires support to come
>> on-site and recable it, and generally you either buy half or all of
>> the disk slots' worth of disks.
>
> You can start off with as few as 2 disks in a system. We have lots of people who buy partially populated systems and then sell on capacity to users, buying disks in groups of 10, 20 or more - that's what the flexibility of GPFS is all about, yes?
>
>> The whole system is a building block and you buy
>> N of them to get up to 10-20PB of storage.
>> GSS is the same way: there are a few models and you just buy a packaged one.
>>
>> Technically, you can violate the above constraints, but then it may
>> not work well and you probably can't buy it that way.
>> I'm pretty sure DDN's going to look at you funny if you try to buy a
>> 12K with 30 drives.. :)
>
> Nobody at DDN is going to look at you funny if you say you want to buy something :-). We have as many different procurement strategies as we have customers. If all you can afford with your infrastructure money is 30 drives to get you off the ground and you know that researchers/users will come to you with money for capacity down the line then a 30 drive 12K makes perfect sense.
>
> Most configs with external servers can be made to work. The embedded systems (12KXE, GS7K) are a bit more limited in how you can arrange disks and put services on NSD servers, but that's the tradeoff for the smaller footprint.
>
> Happy to expand on any of this on or offline.
>
> Vic
>
>
>>
>> For 1PB (small), I guess just buy 1 GSS24 with smaller drives to save
>> money. Or buy maybe just 2 NetApp / LSI / Engenio enclosures with
>> built-in RAID, a pair of servers, and forget GNR.
>> Or maybe GSS22? :)
>>
>> From http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS114-098
>> "
>> Current high-density storage Models 24 and 26 remain available
>> Four new base configurations: Model 21s (1 2u JBOD), Model 22s (2 2u
>> JBODs), Model 24 (4 2u JBODs), and Model 26 (6 2u JBODs)
>> 1.2 TB, 2 TB, 3 TB, and 4 TB hard drives available
>> 200 GB and 800 GB SSDs are also available
>> The Model 21s is comprised of 24 SSD drives, and the Model 22s, 24s,
>> 26s is comprised of SSD drives or 1.2 TB hard SAS drives
>> "
>>
>>
>> On Fri, Jun 19, 2015 at 3:17 PM, Simon Thompson (Research Computing -
>> IT Services) <S.J.Thompson at bham.ac.uk> wrote:
>>>
>>> My understanding is that GSS and IBM ESS are sold as pre-configured systems.
>>>
>>> So something like 2x servers with a fixed number of shelves, e.g. a GSS24 comes with 232 drives.
>>>
>>> So whilst that might be a 1PB system (large scale), it's essentially an appliance-type approach and not scalable in the sense that adding more storage to it isn't supported.
>>>
>>> So maybe it's the way it has been productised, and perhaps GNR is technically capable of having more shelves added, but if that isn't a supported route for the product then it's not something that, as a customer, I'd be able to buy.
>>>
>>> Simon
>>> ________________________________________
>>> From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com]
>>> Sent: 19 June 2015 19:45
>>> To: gpfsug main discussion list
>>> Subject: Re: [gpfsug-discuss] Disabling individual Storage Pools by themselves? How about GPFS Native Raid?
>>>
>>> Oops... here is the official statement:
>>>
>>> GPFS Native RAID (GNR) is available on the following:
>>> - IBM Power® 775 Disk Enclosure.
>>> - IBM System x GPFS Storage Server (GSS). GSS is a high-capacity, high-performance storage solution that combines IBM System x servers, storage enclosures and drives, software (including GPFS Native RAID), and networking components. GSS uses a building-block approach to create highly-scalable storage for use in a broad range of application environments.
>>>
>>> I wonder what specifically are the problems you guys see with the "GSS building-block" approach to ... highly-scalable...?
>>>
>>
>>
>>
>> --
>> Zach Giles
>> zgiles at gmail.com
>



-- 
Zach Giles
zgiles at gmail.com


