task #15737: slurm - openmpi - (PMIx+libevent+hwloc)

Submitter:  Boud Roukema <boud>
Submitted:  Wed 29 Jul 2020 05:21:39 PM UTC
   
 
Should Start On:  Wed 29 Jul 2020 12:00:00 AM UTC
Should be Finished on:  Wed 29 Jul 2020 12:00:00 AM UTC
Category:  None
Priority:  7 - High
Status:  None
Privacy:  Public
Assigned to:  None
Percent Complete:  0%
Open/Closed:  Open
Effort:  0.00


Fri 14 Aug 2020 10:46:07 PM UTC, comment #7: 

If the problem is just about the C libraries, please try this:

Put a link to these libraries manually under '.local/lib', for example with the command below (run from the top project directory), then re-run the project with the problematic package.


ln -s /lib64/libc.so.6 .local/lib/


In basic.mk we already do this for 'libdl' and 'libpthread' (under the 'low-level-links' target that we define for programs we don't build yet, mostly for macOS tools!). However, we assume they are in '/usr/lib', not '/lib64', so if that is indeed the problem, we should manually check where the C library is installed (very easy with an 'ldd' command on any program!) and use that location for links to the host's C library within the project.
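
For example, a minimal sketch of that check and the corresponding link (the '/lib64' path below is simply what this CentOS host reports; use whatever path 'ldd' prints on your own system):


# Ask the dynamic linker where the host's C library really lives
# ('ls' is just an arbitrary host program to inspect):
ldd /bin/ls | grep libc.so

# Then, from the top project directory, link that exact file into
# the project's library directory:
ln -s /lib64/libc.so.6 .local/lib/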

I hope this temporary work-around fixes the problem until task #15390 is complete ;-).

Mohammad Akhlaghi <makhlaghi>
Group administrator
Fri 14 Aug 2020 09:56:54 AM UTC, comment #6: 

The workaround of using serial mode for the one program whose failure in Maneage on the CentOS system appeared to be related to openmpi worked correctly.

> About the libraries from the host OS, they are installed as part of the GNU C Library. Hopefully when task #15390 is complete, they won't cause a problem any more :-).


That's what I suspected. That sounds like the best long-term solution for this problem.

> Was the package with semi-consistent failures only complaining about these?


Those were the only specific libraries I saw errors about, but most failure incidents gave no reports on which libraries failed, and only gave a segmentation fault of unknown origin.

I'm happy to check this later when #15390 is complete.

Boud Roukema <boud>
Group Member
Thu 13 Aug 2020 12:03:17 AM UTC, comment #5: 

Thanks a lot for the check, Boud. I agree! This is an important issue and I am increasing its priority to "high". Maneage has many advantages on large computers (for example, there is no need for root access), so it would be great if this task could be completed.

About the libraries from the host OS, they are installed as part of the GNU C Library. Hopefully when task #15390 is complete, they won't cause a problem any more :-).

Was the package with semi-consistent failures only complaining about these?

Can you add a test scenario (ideally as a branch from Maneage, with a minimal example showing the problem)? If we can reproduce the problem, we may be able to help in solving it (when we have time) ;-).



Mohammad Akhlaghi <makhlaghi>
Group administrator
Wed 12 Aug 2020 05:43:34 PM UTC, comment #4: 

It seems that more work is still needed to compile openmpi so that it uses libraries fully from within the Maneage subsystem.

I have one package that works fine with the openmpi options that are currently used in Maneage, but another package fails semi-consistently on the same host, within the same overall Maneage system.

Some of the libraries that appear to have been taken from the host system (CentOS, kernel 2.6.32-754.18.2.el6.x86_64), as indicated by the error-tracing messages, are:


    /lib64/libc.so.6
    /lib64/libpthread.so.0
    /usr/lib64/ld-2.17.so
    /usr/lib64/libdl-2.17.so
    /usr/lib64/libm-2.17.so
    /usr/lib64/libutil-2.17.so
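
As a rough way to spot this kind of leakage (a sketch only: 'mpirun' is just one example of a Maneage-built binary, and the 'grep -v' pattern assumes all project libraries live under '.local'):


# List every shared library the Maneage-built mpirun resolves, then
# keep only the ones that are NOT coming from the project's .local:
ldd .local/bin/mpirun | grep -v '\.local/'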


Openmpi is a huge package. I do not intend to try to solve this any time soon. (A workaround is to run this particular program in serial mode, which is an acceptable compromise.)

One possibility, which I might try if there is enough time, would be to update to a more recent upstream Maneage; that alone might solve this.

However, I think that a proper Maneage install of openmpi, and thorough testing on task schedulers like slurm, should be considered a major task.

Boud Roukema <boud>
Group Member
Sat 01 Aug 2020 11:55:03 PM UTC, comment #3: 

Thanks a lot Boud, they are now merged into Maneage as Commit cbd4a41555 and Commit 32f3ba14f6.

Mohammad Akhlaghi <makhlaghi>
Group administrator
Sat 01 Aug 2020 11:04:01 AM UTC, comment #2: 

Commit bbaa6af configures fine for me - see the commit description:

https://codeberg.org/boud/maneage_dev/commit/bbaa6af92186d7b5332062577ae8e8ef33bbc1df

The equivalent changes on a maneage branch that actually runs code with openmpi also worked fine. :)

This uses the internal compile strategy, and is a moderately big (slow) compile, like ghostscript and gnuastro. The compilation of openmpi does not itself seem to be parallelised.

A different but related hack to the main Maneage branch was also needed; this was necessary for testing a git bundle containing only the openmpi_slurm branch:

https://codeberg.org/boud/maneage_dev/commit/4a2b21fa65cc70cca9ab09f5803cd372fb9f96e8

Without this hack, the configure+make cycle had a fatal error at the line


v=$$(git describe --always --long maneage)


in initialize.mk, because the 'maneage' branch was missing.
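
For anyone hitting the same error from a similar single-branch bundle, an alternative (untested here) to patching initialize.mk would be to create the missing local branch by hand before configuring, for example:


git branch maneage openmpi_slurm


so that 'git describe' has a 'maneage' reference to resolve.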

Testing also requires adding openmpi to 'top-level-programs =' in TARGETS.conf, for example as shown below.
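
A sketch of that change (the exact path of TARGETS.conf depends on the Maneage version, and any programs already listed there should of course be kept):


# In TARGETS.conf, add openmpi to the list of top-level programs:
top-level-programs = openmpi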

Boud Roukema <boud>
Group Member
Thu 30 Jul 2020 10:48:11 AM UTC, comment #1: 

It would be great if you can post your experiences here as you experiment with various solutions. I haven't had much time to actually test these yet in Maneage.

One thing that does come to my mind and can probably be helpful is the preparation phase (which is organized in 'top-prepare.mk'). In that phase, the project can gather basic settings of the host (and do any necessary preparatory analysis), and then use those settings to optimize the Make rules of 'top-make.mk' for them.

For example, the 'X.sh' script can manually add a file (as a configuration file) listing the set of independent targets that './project make' should produce for that particular submission (made with 'srun' or 'sbatch').

For example, if you have 1000 jobs and the cluster has 100 computers, you would want each computer to do 10 of the independent jobs. So each 'X.sh' can define a variable listing the final targets for its './project make' command to build (see the sketch below). All the 'X.sh' submissions will then use the same Maneaged software environment and raw datasets, but do their jobs independently. In the end you can add one extra 'X.sh' submission to merge the results of all of them into one final result/paper, for example.
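
A very rough sketch of what one such 'X.sh' could look like is below. Everything in it is hypothetical: the 'job-targets.conf' file name, the 'final-targets' variable and the result names are placeholders for the structure described above, which does not exist in Maneage yet.


#!/bin/bash
#SBATCH --job-name=maneage-chunk-03
#SBATCH --nodes=1

# Hypothetical: tell this submission's './project make' which
# independent targets to build, through a small configuration file
# that the project's Make rules would have to read.
echo "final-targets = result-021.fits result-022.fits result-030.fits" \
     > reproduce/analysis/config/job-targets.conf

# Build only those targets, inside the shared Maneaged environment:
./project make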

I will hopefully start using slurm with Maneage more in the coming months, and will add the low-level structure to facilitate it. But until then please go ahead with testing and post the results here for us to also learn from ;-).

Mohammad Akhlaghi <makhlaghi>
Group administrator
Wed 29 Jul 2020 05:21:39 PM UTC, original submission:  

Parallel processing over possibly non-shared memory, via MPI (the Message Passing Interface, a standard rather than any particular software), is presently allowed for in Maneage using openmpi. How should we compile openmpi for reproducibility?

In practice, openmpi is normally used on a cluster or supercomputer where jobs are submitted to, queued by, and run (or rejected) by a (hopefully free-software) job/user manager such as Slurm: https://slurm.schedmd.com/ .

The computer on which a job runs is (in general) not the one from which the batch job is submitted to Slurm, e.g. with 'srun'.

So roughly speaking, as I understand it:

  • the user uses 'srun' or 'sbatch' to submit a script X.sh to the Slurm daemon on the frontend H (see the sketch after this list);
  • Slurm queues the request, and after some time may choose one or more computers K and try to run X.sh under the user's identity on those computers;
  • the computers K each run X.sh, which can include a Maneage package that compiles and runs a program P, which uses openmpi to ask the host computer and Slurm which cpus/cores/threads it is allowed to use;
  • the interaction between X.sh on K -> openmpi on K (a precompiled library) -> the host K + Slurm on H (and in some sense on K) is done through PMIx (pmi or pmi2), libevent and hwloc;
  • MPI means that data (arrays of bytes :)) can be sent/received among the computers K.
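
For concreteness, the first and third items might look like this in practice (a sketch only; the option values are arbitrary and P stands for whatever MPI program the project builds):


# On the frontend H, queue the batch script X.sh:
sbatch --ntasks=16 X.sh

# Inside X.sh, once it is running on the chosen computers K, the MPI
# program P is then typically launched under the allocation with:
srun ./P
# or, using openmpi's own launcher:
mpirun -n 16 ./P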


So the question is: for reproducibility, how much of the chain openmpi -> (pmi + libevent + hwloc) do we want compiled internally within Maneage, and how much should be left to autotools-style automatic searching of the machine for the preferred default libraries?
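
For reference, openmpi's own configure script already exposes exactly this choice. Below is a hedged sketch using flag names from the openmpi 4.x series (check './configure --help' for the version actually used; this has not been tested inside Maneage, and the --prefix value is only illustrative): the bundled support libraries are built internally, while the Slurm/PMI interface is taken from the host.


# Sketch of an openmpi configure line: internal copies of the support
# libraries, but the host's Slurm PMI library for the scheduler interface.
./configure --prefix="$(pwd)/.local" \
            --with-slurm \
            --with-pmi=/usr \
            --with-hwloc=internal \
            --with-libevent=internal \
            --with-pmix=internal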

There is no point in trying to include Slurm itself in Maneage, because the whole point of Slurm is that the sysadmins managing a cluster use it to automatically manage many users: it is system-level software that the user's script has to interact with.

Official guide: https://slurm.schedmd.com/mpi_guide.html#open_mpi

The official guide doesn't give much in terms of practical, up-to-date experience. Some URLs that seem useful:

https://bugs.schedmd.com/show_bug.cgi?id=5323

https://github.com/open-mpi/ompi/issues/5871

I'm trying some experiments, but any prior experience with this would help speed things up. :)

Boud Roukema <boud>
Group Member

 


No files currently attached

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • makhlaghi (Posted a comment)
  • boud (Submitted the item)

There are 0 votes so far.

     

Follows 1 latest change.

Date        Changed by  Updated Field  Previous Value  Replaced by
2020-08-13  makhlaghi   Priority       5 - Normal      7 - High
