Verified commit a061ac9c, authored by Darren Reed, committed by GitLab UZH
Update container_info.md to include both bind and hybrid example

@@ -84,3 +84,67 @@ srun --exclusive --ntasks=4 --cpus-per-task=1 --nodes=1 --mem 3000M singularity
echo 'finished'
```
***
For the same example, using a **hybrid** approach, here is a definition file. The important change is that the `%environment` section sets the OMPI variables so that the Open MPI installation inside the container is used:
```
Bootstrap: docker
From: ubuntu:20.04
%files
./mpi-hello-world/mpi_hello_world.c /opt
./mpi-hello-world/makefile /opt
%environment
export OMPI_DIR=/opt/ompi
export SINGULARITY_OMPI_DIR=$OMPI_DIR
export SINGULARITYENV_APPEND_PATH=$OMPI_DIR/bin
export SINGULARITYENV_APPEND_LD_LIBRARY_PATH=$OMPI_DIR/lib
%post
echo "Installing base packages..."
apt-get update -y && . /etc/environment
apt-get install wget gcc bash g++ make libsysfs2 libsysfs-dev libevent-dev libpmix-dev -y
echo "Installing Open MPI"
export OMPI_DIR=/opt/ompi
export OMPI_VERSION=4.1.3
export OMPI_URL="https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-$OMPI_VERSION.tar.bz2"
mkdir -p /tmp/ompi
mkdir -p /opt
## Download
cd /tmp/ompi && wget -O openmpi-$OMPI_VERSION.tar.bz2 $OMPI_URL && tar -xjf openmpi-$OMPI_VERSION.tar.bz2
# Compile and install
cd /tmp/ompi/openmpi-$OMPI_VERSION && ./configure --enable-orterun-prefix-by-default --with-pmix=/usr/lib/x86_64-linux-gnu/pmix/ --prefix=$OMPI_DIR && make install
# Set env variables so we can compile our application
export PATH=$OMPI_DIR/bin:$PATH
export LD_LIBRARY_PATH=$OMPI_DIR/lib:$LD_LIBRARY_PATH
export MANPATH=$OMPI_DIR/share/man:$MANPATH
echo "Compiling the MPI application..."
cd /opt
make
```
In addition to using the hybrid approach, the mpi_hello_world application is compiled in the `%post` section of the .def file with `make`. This requires the `%files` section to include both the source file (`mpi_hello_world.c`) and the makefile.
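Before the Slurm script below can be used, the definition file has to be built into a container image. As a minimal sketch, assuming the definition file is saved as `hellompihybrid.def` (the filename is an assumption) and the image name matches the path used in the Slurm script:
```
# Build the image from the definition file (requires root on the build machine)
sudo singularity build hellompihybrid hellompihybrid.def

# Or, without root, if fakeroot is enabled on the build host:
singularity build --fakeroot hellompihybrid hellompihybrid.def
```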
The corresponding Slurm script:
```
#!/bin/bash
#SBATCH --ntasks=4 ## number of MPI tasks
#SBATCH --cpus-per-task=1 ## number of cores per task
#SBATCH --time=00:01:00
#SBATCH --mem 3000M
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
module load openmpi/4.1.3
module load singularityce
echo 'start'
srun --exclusive --ntasks=4 --cpus-per-task=1 --nodes=1 --mem 3000M singularity exec ./hellompihybrid /opt/mpi_hello_world
echo 'finished'
```
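The job is then submitted as usual; the script filename below is only an example:
```
sbatch hellompihybrid.slurm
```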
Further testing of the MPI container could include a multi-node Slurm script and an MPI program that exercises more MPI features (such as sending messages between tasks). A sketch of a possible multi-node script follows.
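As a starting point for the multi-node case, here is a minimal sketch. It is untested and assumes the same image name (`./hellompihybrid`) and application path as above; the node and task counts are only illustrative:
```
#!/bin/bash
#SBATCH --ntasks=8              ## total number of MPI tasks
#SBATCH --ntasks-per-node=4     ## four tasks on each of two nodes
#SBATCH --nodes=2
#SBATCH --cpus-per-task=1
#SBATCH --time=00:05:00
#SBATCH --mem-per-cpu=3000M

module load openmpi/4.1.3
module load singularityce

echo 'start'
srun --ntasks=8 --cpus-per-task=1 singularity exec ./hellompihybrid /opt/mpi_hello_world
echo 'finished'
```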