[gpfsug-discuss] relion software using GPFS storage

Robert Esnouf robert at strubi.ox.ac.uk
Wed Feb 27 12:49:38 GMT 2019



Dear Michael,

There are settings within relion for parallel file systems; you should check that they are enabled if you have Spectrum Scale (SS) underneath.
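
For concreteness, the options I have in mind are relion_refine's parallel disc I/O, particle pooling, scratch-directory and combine-via-disc settings. Below is a minimal Python sketch of how a job might be assembled with them; the paths, pool size and MPI rank count are placeholders, and the exact flag set should be checked against `relion_refine --help` for your version.

# Hedged sketch only: build a relion_refine_mpi command line with the options
# that matter on a parallel file system such as Spectrum Scale. Flag names are
# the ones I believe relion_refine accepts; paths and numbers are placeholders.

def build_refine_command(star_file, output_dir, scratch_dir, mpi_ranks=32):
    """Return an mpirun command line that avoids per-particle small-file traffic."""
    return [
        "mpirun", "-np", str(mpi_ranks),
        "relion_refine_mpi",
        "--i", star_file,
        "--o", output_dir,
        # Leave parallel disc I/O on (i.e. do NOT pass --no_parallel_disc_io),
        # so MPI ranks read their own particles concurrently.
        "--pool", "30",                     # read particles in batches rather than one at a time
        "--dont_combine_weights_via_disc",  # exchange weights over MPI instead of large temp files
        "--scratch_dir", scratch_dir,       # copy particle stacks to node-local scratch first
    ]

if __name__ == "__main__":
    cmd = build_refine_command("Select/job012/particles.star",
                               "Refine3D/job013/run",
                               "/tmp/relion_scratch")
    print(" ".join(cmd))   # inspect, then submit via your scheduler of choice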

Otherwise, check which version of relion is being used and then try to understand the problem being analysed a little more.

If the box size is very small and the internal symmetry is low, then the user may be reading hundreds of thousands of small "picked particle" files for each iteration, opening and closing each file every time.
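
To put rough numbers on why that hurts the NSDs, here is a back-of-the-envelope sketch; the particle count and iteration count are illustrative assumptions, not figures from this job.

# Back-of-the-envelope arithmetic only; all numbers are assumed for illustration.
particles = 300_000        # picked particles, one tiny file each
ops_per_file = 3           # open + read + close
iterations = 25            # iterations in a typical refinement run

per_iteration = particles * ops_per_file
total = per_iteration * iterations
print(f"~{per_iteration:,} file/metadata operations per iteration")
print(f"~{total:,} over the whole run")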

I believe that relion3 has some facility for extracting these small particles from the larger raw images, and that is more SS-friendly. Alternatively, the set of picked particles is often only in the 50 GB range, so staging it to one or more local machines is quite feasible...
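
If staging appeals, the idea is just one bulk copy to node-local scratch before the run starts; a minimal sketch with made-up directory names follows (relion's own --scratch_dir option can do much the same thing from inside the job).

import os
import shutil

# Paths below are placeholders for illustration only.
GPFS_PARTICLES = "/gpfs/project/relion/Extract/job010/Particles"
LOCAL_SCRATCH = os.path.join(os.environ.get("TMPDIR", "/tmp"), "relion_particles")

def stage_particles(src=GPFS_PARTICLES, dst=LOCAL_SCRATCH):
    """Copy the picked-particle set to node-local scratch once, before the job starts."""
    if not os.path.isdir(dst):
        shutil.copytree(src, dst)   # one bulk copy (~50 GB) instead of millions of small reads
    return dst

if __name__ == "__main__":
    scratch = stage_particles()
    print("Point the relion job at", scratch, "rather than", GPFS_PARTICLES)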

Hope one of those suggestions helps.
Regards,
Robert

--

Dr Robert Esnouf 

University Research Lecturer, 
Director of Research Computing BDI, 
Head of Research Computing Core WHG, 
NDM Research Computing Strategy Officer 

Main office: 
Room 10/028, Wellcome Centre for Human Genetics, 
Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK 

Emails: 
robert at strubi.ox.ac.uk / robert at well.ox.ac.uk / robert.esnouf at bdi.ox.ac.uk 

Tel:   (+44)-1865-287783 (WHG); (+44)-1865-743689 (BDI)
 

-----Original Message-----
From: "Michael Holliday" <michael.holliday at crick.ac.uk>
To: gpfsug-discuss at spectrumscale.org
Date: 27/02/19 12:21
Subject: [gpfsug-discuss] relion software using GPFS storage


Hi All,
 
We’ve recently had an issue where a job on our client GPFS cluster caused our main storage to slow down dramatically. The job was running relion using MPI (https://www2.mrc-lmb.cam.ac.uk/relion/index.php?title=Main_Page).
 
It caused waiters across the cluster and made the load spike on the NSDs one at a time. When the spike ended on one NSD, it immediately started on another.
 
There were no obvious errors in the logs and the issues cleared immediately after the job was cancelled. 
 
Has anyone else seen any issues with relion using GPFS storage?
 
Michael
 
Michael Holliday RITTech MBCS
Senior HPC & Research Data Systems Engineer | eMedLab Operations Team
Scientific Computing STP | The Francis Crick Institute
1, Midland Road | London | NW1 1AT | United Kingdom
Tel: 0203 796 3167
 
The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 1 Midland Road London NW1 1AT

