[gpfsug-discuss] Moving/copying files from one file system to another

Richard Lefebvre richard.lefebvre at calculquebec.ca
Wed Nov 6 15:22:13 GMT 2013


Thank you to all who answered; I will stay with scatter. So now I will
just go with doing a series of mmadddisk and mmdeldisk commands in
sequence. I'm not doing all the adddisks first, since I don't want the
users to think that their disk space has doubled; it is a switch with a
bit of extra capacity.
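
Roughly what I have in mind, one new/old pair at a time (the device and
NSD names below are only placeholders, and I will double-check the exact
options against the docs for our release):

  # confirm the file system really is using scatter allocation
  mmlsfs gpfs0 -j

  # add one new NSD, described in a disk descriptor/stanza file
  mmadddisk gpfs0 -F new_nsd01.stanza

  # remove the matching old NSD; GPFS migrates its data to the remaining disks
  mmdeldisk gpfs0 old_nsd01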

I intend to do a rebalance with the mmdeldisk commands. Since the file
system is 97% full, I have a feeling that if I don't rebalance at each
mmdeldisk, something will choke.
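
So for the delete step I would rather use the rebalance option (again,
to be verified against the manuals for our release), and keep an eye on
per-disk usage as I go:

  # check how full each individual disk is before and after every swap
  mmdf gpfs0

  # delete the old NSD and rebalance existing data across the remaining disks
  mmdeldisk gpfs0 old_nsd01 -r

  # or run a full rebalance of the whole file system once the swaps are done
  mmrestripefs gpfs0 -b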

Richard

On 10/30/2013 01:56 PM, Alex Chekholko wrote:
> On 10/30/13, 5:47 AM, Jonathan Buzzard wrote:
>> On Mon, 2013-10-28 at 11:31 -0400, Richard Lefebvre wrote:
>>
>> [SNIP]
>>
>>> Also, another question: under what conditions is scatter allocation better
>>> than cluster allocation? We currently have a cluster of 650 nodes all
>>> accessing the same 230TB GPFS file system.
>>>
>>
>> Scatter allocation is better in almost all circumstances. Basically by
>> scattering the files to all corners you don't get hotspots where just a
>> small subset of the disks are being hammered by lots of accesses to a
>> handful of files, while the rest of the disks sit idle.
>>
> 
> If you do benchmarks with only a few threads, you will see higher
> performance with 'cluster' allocation.  So if your workload is only a
> few clients accessing the FS in a mostly streaming way, you'd see better
> performance from 'cluster'.
> 
> With 650 nodes, even if each client is doing streaming reads, at the
> filesystem level that would all be interleaved and thus be random reads.
>  But it's tough to do a big enough benchmark to show the difference in
> performance.
> 
> I had a tough time convincing people to use 'scatter' instead of
> 'cluster' even though I think the documentation is clear about the
> difference, and even gives you the sizing parameters (greater than 8
> disks or 8 NSDs? use 'scatter').
> 
> We use 'scatter' now.
> 
> Regards
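
On the scatter vs. cluster point above, for the archives: as far as I
know the block allocation type is chosen when the file system is created
(with -j on mmcrfs) and cannot be changed afterwards. A minimal sketch,
with hypothetical device, stanza file and mount point names:

  # create a file system with scatter block allocation (placeholders throughout)
  mmcrfs gpfs0 -F nsd_stanzas.txt -j scatter -T /gpfs0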



