Containers using [MPI](https://docs.sylabs.io/guides/3.5/user-guide/mpi.html) (Message Passing Interface) parallelism can be more complicated, and there are multiple ways to configure Singularity plus MPI.
Here is an example using the **bind** approach. It assumes that you have already compiled and built the executable mpi_hello_world from mpi_hello_world.c, borrowed from [here](https://github.com/mpitutorial/mpitutorial/tree/gh-pages/tutorials/mpi-hello-world).
Starting from a Singularity Ubuntu 20.04 ScienceCloud instance, the instance can be set up with a version of Open MPI that matches the `openmpi/4.1.3` module on ScienceCluster using the following sequence of commands:
```
## Commands to run from a ScienceCloud instance to install OpenMPI version 4.1.3
...
```
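The install commands themselves are elided above. As a minimal sketch, assuming an Ubuntu instance with sudo access (the package names and tarball URL below are the standard upstream ones rather than taken from this guide, so verify the version against the ScienceCluster module):

```
## Hypothetical reconstruction of a from-source Open MPI 4.1.3 install
sudo apt-get update && sudo apt-get install -y build-essential wget
wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.3.tar.gz
tar -xzf openmpi-4.1.3.tar.gz
cd openmpi-4.1.3
./configure --prefix=/usr/local
make -j"$(nproc)"
sudo make install
sudo ldconfig   ## refresh the shared-library cache so mpicc/mpirun find the new libraries
```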
In this mode, the application in the container calls the MPI library on the host (ScienceCluster) for communication between tasks. On a ScienceCloud instance, the container image can be built as follows:
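A typical build invocation might look like the following (the definition file name mpi_hello_world.def is a hypothetical placeholder for the bind-approach .def file):

```
## Build the image from the definition file; singularity build requires root
sudo singularity build mpi_hello_world.sif mpi_hello_world.def
```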
Does the above example work? If not, how would you debug it?
The trade-off in the above "bind" MPI container example is that, while the container is relatively lightweight because MPI is not installed within it, some work is needed outside the container to create the MPI environment and compile the MPI executable that is then copied into the container.
***
For the same example, using a **hybrid** approach, here is a definition file (mpi_hello_world_hybrid.def). The important changes are that Open MPI is downloaded and built within the %post section of the .def file, and the %environment section then sets the OMPI variables to use the Open MPI version within the container:
```
Bootstrap: docker
From: ubuntu:20.04
...
```
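The %post and %environment sections are elided above. Purely as an illustration, modeled on the Sylabs hybrid-approach example rather than the exact file used here (the /opt/ompi prefix and the download URL are assumptions), the key sections might look like:

```
Bootstrap: docker
From: ubuntu:20.04

%files
    mpi_hello_world.c /opt

%environment
    # Use the Open MPI built inside the container at runtime
    export OMPI_DIR=/opt/ompi
    export PATH="$OMPI_DIR/bin:$PATH"
    export LD_LIBRARY_PATH="$OMPI_DIR/lib:$LD_LIBRARY_PATH"

%post
    apt-get update && apt-get install -y wget build-essential
    # Download and build Open MPI from source inside the container
    export OMPI_DIR=/opt/ompi
    wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.3.tar.gz
    tar -xzf openmpi-4.1.3.tar.gz
    cd openmpi-4.1.3
    ./configure --prefix=$OMPI_DIR && make -j4 install
    # Compile the example with the container's own mpicc
    export PATH=$OMPI_DIR/bin:$PATH
    cd /opt && mpicc -o mpi_hello_world mpi_hello_world.c
```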
***
### Specifying --mem 2999M would work, but the safest approach is not to respecify the
### Slurm resources in the srun command (by default, srun has the full job resources available):
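The srun line itself is cut off above; it would presumably invoke the containerized executable with no extra resource flags, e.g. something like the following (the container and binary names are hypothetical):

```
srun singularity exec mpi_hello_world.sif /opt/mpi_hello_world
```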