[gpfsug-discuss] Friday MPI fun?

Zachary Giles zgiles at gmail.com
Sat Oct 26 05:13:37 BST 2013


You could do it that way, if you told MPI to bind each task (MPI task, or
non-MPI task in your case) to a processor. By default, most MPIs don't
bind to a specific processor unless the scheduler (which you don't have) or
something else tells them to. OpenMPI, for example, has flags along the
lines of --bind-to-core or --bind-to-socket (the exact spelling varies by
version). Often a scheduler, or cgroups via a scheduler, will pass in a list
of CPUs to bind to, but you could do it manually with the appropriate flags
and files for your MPI distro.
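
For example, a minimal sketch with explicit binding (this is the OpenMPI
1.6-era flag spelling; newer releases spell it "--bind-to core", and
applicationX stands in for your real binary):

  mpirun -np 1 --bind-to-core --report-bindings applicationX

--report-bindings makes each rank print which core it was bound to, which is
handy for checking the pinning actually took. One caveat: separate mpirun
invocations don't know about each other, so each one is likely to pick the
same first core unless you steer them apart (e.g. with an explicit rankfile
or CPU list).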

You could also probably get away with just raw cgroups, depending on your
distro. You would create the right groups by touching a few files and
echoing a few lines into them, then launch with cgexec (from libcgroup) to
get things into those groups. That works pretty well for binding; I've had
decent results with it and recommend it.
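
A rough sketch of that, assuming cgroup v1 with the cpuset controller
mounted at /sys/fs/cgroup/cpuset (mount points vary by distro) and an
illustrative group name applX:

  # create a cpuset group pinned to CPU 2, memory node 0
  cgcreate -g cpuset:/applX
  echo 2 > /sys/fs/cgroup/cpuset/applX/cpuset.cpus
  echo 0 > /sys/fs/cgroup/cpuset/applX/cpuset.mems
  # launch the app inside that group
  cgexec -g cpuset:/applX applicationX

Note that cpuset.mems has to be set before any task can enter the group.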


On Fri, Oct 25, 2013 at 12:09 PM, Chair GPFS UG <chair at gpfsug.org> wrote:

> Allo all,
>
>   I'm attempting to cheat. As per usual, cheating takes more time than
> 'Doing It Properly' - but it is vastly more fun.
>
> So without setting up Grid Engine or Moab etc., I need to pin processes to
> a CPU - i.e. on Linux: taskset blah blah.
> I could write a small housekeeping script which round-robins newly spawned
> processes across CPUs using taskset (roughly as sketched below), but I was
> wondering if OpenMPI could be a good way to go.
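>
> (The taskset route would look roughly like this; the PID and CPU numbers
> are illustrative:)
>
>   taskset -c 2 applicationX &    # launch pinned to CPU 2
>   taskset -cp 3 12345            # re-pin an already-running PID 12345 to CPU 3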
>
> So:
>
> I have a non-MPI application X,
> which is forked by a parent process into its own process group.
> This can occur at any time; however, there will only ever be a maximum of
> N instances of application X.
>
> Using mpirun it appears that you can set off parallel instances of a
> non-MPI application:
>
> mpirun -np 4 applicationX
>
> However, what I really need to do is say:
>   Max slots = N (which I can define in the MPI hostfile).
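>
> (With OpenMPI that could be a hostfile along these lines, where 4 is just
> an illustrative value of N:)
>
>   localhost slots=4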
>
> mpirun -np 1 applicationX
> mpirun -np 1 applicationX (started at a random future time)
> mpirun -np 1 applicationX (started at a random future time)
> mpirun -np 1 applicationX (started at a random future time)
>
> Each instance would be automatically pinned to a CPU - but I'm fairly
> convinced this is not the way MPI works.
> Would it do what I'm after?
>
> Does anyone know of a better way?
>
> Jez
>


-- 
Zach Giles
zgiles at gmail.com