## Singularity (i.e., a Docker-style workflow that operates on shared cluster systems)
Singularity is another way to build your environment, allowing even greater portability across systems at different universities or computing centers.
### First, some background:
#### What is a container?
#### How does a container differ from a virtual machine, or from a virtual environment (such as conda or mamba)?
For answers to these questions, see this [short introduction to container concepts from the Pawsey Supercomputing Centre](https://pawseysc.github.io/singularity-containers/11-containers-intro/index.html).
### When should you use a container? Some possibilities:
- When you are using ScienceCluster but need to build or install an application with many or complicated system dependencies (Ubuntu packages).
- When you have an application that you wish to build and run on multiple systems.
- When someone has distributed an application as a Docker or Singularity container image (see the example below).
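For the last case, a published Docker image can be converted into a Singularity image file with `singularity pull`; the image used below is only an example:

```
# download a Docker Hub image and convert it to a .sif file
singularity pull docker://ubuntu:22.04
```

This produces a file such as `ubuntu_22.04.sif`, which you can then use with `singularity exec` or `singularity run`.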
A full tutorial on using Singularity on ScienceCluster can be found [**here**](https://docs.s3it.uzh.ch/cluster/singularity_tutorial/).
The Singularity documentation includes a guide to the definition file, which you use to build your own custom Singularity container.
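As a rough orientation (this sketch is not taken from the tutorial; the base image and package are arbitrary examples), a definition file consists of a header that selects the base image plus optional sections such as `%post`, `%environment`, and `%runscript`:

```
Bootstrap: docker
From: ubuntu:22.04

%post
    # runs once at build time, as root inside the container
    apt-get update && apt-get install -y --no-install-recommends build-essential

%environment
    # set every time the container runs
    export LC_ALL=C

%runscript
    # default action for "singularity run"
    echo "Hello from the container"
```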
Containers using [MPI](https://docs.sylabs.io/guides/3.5/user-guide/mpi.html) (Message Passing Interface) parallelism can be more complicated, and there are multiple ways to configure Singularity plus MPI.
Here is an example using the **bind** approach; a definition file could look like the one below. It assumes that you have already compiled and built the executable `mpi_hello_world` from `mpi_hello_world.c`, borrowed from [here](https://github.com/mpitutorial/mpitutorial/tree/gh-pages/tutorials/mpi-hello-world):
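(The sketch below is illustrative rather than the exact file from the tutorial; it assumes `mpi_hello_world` sits in the build directory and was compiled on a compatible host, e.g. with `mpicc -o mpi_hello_world mpi_hello_world.c`.)

```
Bootstrap: docker
From: ubuntu:22.04

%files
    mpi_hello_world /opt/mpi_hello_world

%environment
    export PATH=/opt:$PATH
```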
In this mode, the application in the container calls the MPI library on the host (ScienceCluster) for communication between tasks. A slurm script to run this container using 4 MPI tasks on a single node could look like this:
```
#!/bin/bash
#SBATCH --ntasks=4        ## number of MPI tasks
#SBATCH --cpus-per-task=1 ## number of cores per task

# The image name is an example; you may also need to bind-mount the host MPI installation.
srun singularity exec hellompi.sif mpi_hello_world
```

An alternative is the **hybrid** approach, in which Open MPI is installed inside the container and the application can be compiled there during the build. The `%post` section of such a definition file (here called `hellompi_hybrid.def`) would contain steps along these lines:

```
# OMPI_DIR, OMPI_VERSION, and OMPI_URL are set earlier in %post (version and paths are examples)
# Download Open MPI
cd /tmp/ompi && wget -O openmpi-$OMPI_VERSION.tar.bz2 $OMPI_URL && tar -xjf openmpi-$OMPI_VERSION.tar.bz2
# Compile and install
cd /tmp/ompi/openmpi-$OMPI_VERSION && ./configure --enable-orterun-prefix-by-default --with-pmix=/usr/lib/x86_64-linux-gnu/pmix/ --prefix=$OMPI_DIR && make install
# Set env variables so we can compile our application
export PATH=$OMPI_DIR/bin:$PATH
export LD_LIBRARY_PATH=$OMPI_DIR/lib:$LD_LIBRARY_PATH
```
Notice that in this approach Open MPI is installed directly into the container. The files `mpi_hello_world.c`, `makefile`, and `hellompi_hybrid.def` would be in the current working directory. Because building from a definition file requires root privileges (sudo), you would need to create the container on a ScienceCloud virtual machine (we have images with Singularity pre-installed) or on your own machine. The command to build the container image would then be, for example:
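```
# building from a definition file requires root; the output image name is your choice
sudo singularity build hellompi_hybrid.sif hellompi_hybrid.def
```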