My notes on running Singularity with Chapel
BootStrap: docker
From: ubuntu:20.04
%runscript
/bin/bash /opt/runscript.sh "$@"
%files
./runscript.sh /opt/
../../external-files/intel-mpi-mkl-v0 /opt/intel-install
./modules.yaml /etc/spack/
%post -c /bin/bash
apt update && apt -y upgrade
export LC_ALL=C
DEBIAN_FRONTEND=noninteractive apt install -y build-essential git wget curl \
cmake python3 python gfortran cpio libibverbs-dev \
python3-distutils
export MODULEPATH=
mkdir -p /opt
cd /opt
git clone -b develop --single-branch https://github.com/spack/spack.git
. /opt/spack/share/spack/setup-env.sh
spack install environment-modules~X
. $(spack location -i environment-modules)/init/bash
. /opt/spack/share/spack/setup-env.sh
mkdir -p /etc/spack
spack compiler find --scope system
# Installing multiple Intel packages in Spack turns out to be
# clunky, so we do it manually and then fix with a packages.yaml
# file
cd /opt/intel-install
tar xvfz l_mkl_2020.4.304.tgz
cd l_mkl_2020.4.304
cp ../silent.cfg .
./install.sh --silent silent.cfg
cd /opt/intel-install
tar xvfz l_mpi_2019.10.317.tgz
cd l_mpi_2019.10.317
cp ../silent.cfg .
./install.sh --silent silent.cfg
mv /opt/intel-install/packages.yaml /etc/spack
spack install intel-mpi
spack install intel-mkl
spack install gsl@2.6 %gcc
spack install hdf5@1.10.7 +hl +mpi %gcc ^intel-mpi
spack install fftw@3.3.9 %gcc +mpi +openmp ^intel-mpi
spack install sqlite@3.35.4
echo 'export PS1="[${CONTAINER_NAME}-${CONTAINER_VERSION}] \A \W$ "' >> /.singularity.d/env/999-env.sh
mkdir -p /opt/env
touch /opt/env/00-modules.sh
# Do a spack load here to set some specific environment variables that might
# not be in the module file.
echo spack load intel-mpi intel-mkl > /opt/env/00-modules.sh
echo >> /opt/env/00-modules.sh
spack module tcl loads intel-mpi intel-mkl gsl hdf5 fftw sqlite >> /opt/env/00-modules.sh
# Clean up
spack clean -sdf
rm -rf /opt/intel-install
%environment
export LC_ALL=C
export SPACK_ROOT=/opt/spack
export CONTAINER_NAME=base
export CONTAINER_VERSION=0.0
BootStrap: localimage
From: ../base/base.sif
%files
../../external-files/chapels/chapel-1.24.1.tar.gz /chapel/
%post -c /bin/bash
apt-get install -y llvm-11-dev llvm-11 llvm-11-tools clang-11 libclang-11-dev libedit-dev python3-venv
. /opt/spack/share/spack/setup-env.sh
spack load intel-mpi
# Chapel
# Start by building local versions
cd /chapel
tar xvfz chapel-1.24.1.tar.gz
cd chapel-1.24.1
. util/setchplenv.bash
export CHPL_TARGET_CPU=ivybridge
export CHPL_LAUNCHER=none
export CHPL_LLVM=system
export CHPL_COMM=none
make
#-------------------------
# Temporarily turn off LLVM
unset CHPL_LLVM
unset MPICC
export MPI_CC=mpiicc
export I_MPI_CC=gcc
export CHPL_COMM=gasnet
export CHPL_COMM_SUBSTRATE=ibv
export CHPL_GASNET_SEGMENT=fast
make
export CHPL_COMM=gasnet
export CHPL_COMM_SUBSTRATE=mpi
unset CHPL_GASNET_SEGMENT
make
#----------------------------------
# Turn on LLVM
export CHPL_LLVM=system
export I_MPI_CC=clang-11
export CHPL_COMM=gasnet
export CHPL_COMM_SUBSTRATE=ibv
export CHPL_GASNET_SEGMENT=fast
make
export CHPL_COMM=gasnet
export CHPL_COMM_SUBSTRATE=mpi
unset CHPL_GASNET_SEGMENT
make
#---------------------------
make test-venv
# Create locales
apt install -y locales
locale-gen "en_US.UTF-8"
cat << 'EOF' > /opt/env/01-chapel.sh
#!/bin/bash
. /chapel/chapel-1.24.1/util/setchplenv.bash > /dev/null 2>&1
export CHPL_TARGET_CPU=ivybridge
export CHPL_LLVM=system
export CHPL_LAUNCHER=none
export GASNET_PHYSMEM_NOPROBE=1
export GASNET_MPI_THREAD=multiple
export CHPL_MKL_INCLUDE='-I${MKLROOT}/include'
export CHPL_MPI_FLAGS='-lmpi -lpthread -I${I_MPI_ROOT}/intel64/include'
export CHPL_MKL_LINK='-L${MKLROOT}/lib/intel64 --ldflags "-Wl,--no-as-needed" -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl'
export CHPL_MPI_LINK='-L${I_MPI_ROOT}/intel64/lib -L${I_MPI_ROOT}/intel64/lib/release_mt'
export CHPL_WARN_FLAGS=--ccflags='-Wno-incompatible-pointer-types'
# Helper functions to switch between communication configurations.
# Note that "local" shadows the bash builtin of the same name.
local() {
export CHPL_COMM=none
}
export -f local
gasnet-mpi() {
export CHPL_COMM=gasnet
export CHPL_COMM_SUBSTRATE=mpi
unset CHPL_GASNET_SEGMENT
}
export -f gasnet-mpi
gasnet-ibv() {
export CHPL_COMM=gasnet
export CHPL_COMM_SUBSTRATE=ibv
export CHPL_GASNET_SEGMENT=fast
}
export -f gasnet-ibv
npch() {
chpl "$@" ${CHPL_MKL_INCLUDE} ${CHPL_MPI_FLAGS} ${CHPL_MKL_LINK} ${CHPL_MPI_LINK} ${CHPL_WARN_FLAGS}
}
export -f npch
EOF
%environment
export LC_ALL=C
export SPACK_ROOT=/opt/spack
export CONTAINER_NAME=chapel
export CONTAINER_VERSION=1.24.1
modules:
  enable:
    - tcl
  tcl:
    all:
      environment:
        set:
          '${PACKAGE}_ROOT': '${PREFIX}'
  prefix_inspections:
    bin:
      - PATH
    man:
      - MANPATH
    share/man:
      - MANPATH
    share/aclocal:
      - ACLOCAL_PATH
    lib:
      - LIBRARY_PATH
      - LD_LIBRARY_PATH
    lib64:
      - LIBRARY_PATH
      - LD_LIBRARY_PATH
    include:
      - CPATH
      - C_INCLUDE_PATH
      - INCLUDE_PATH
    lib/pkgconfig:
      - PKG_CONFIG_PATH
    lib64/pkgconfig:
      - PKG_CONFIG_PATH
    '':
      - CMAKE_PREFIX_PATH

Install notes etc

My Singularity definition files live in ~/singularity/containers/. There are two files: base.def, which builds my base image (including all of my development tools), and a second definition file that builds a Chapel-specific image on top of it. I do most of my package installs with Spack.

These are somewhat rough notes; they could be cleaned up further for a more automated build.
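For reference, here is the directory layout these notes assume. This is my reconstruction from the relative paths in the %files sections of the definition files; the chapel/ directory and chapel.def name are placeholders for wherever the Chapel definition actually lives:

~/singularity/
    containers/
        base/
            base.def
            base.sif
            modules.yaml
            runscript.sh
        chapel/
            chapel.def
    external-files/
        chapels/
            chapel-1.24.1.tar.gz
        intel-mpi-mkl-v0/
            l_mkl_2020.4.304.tgz
            l_mpi_2019.10.317.tgz
            packages.yaml
            silent.cfg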

Base Image

These images are also built to run on an InfiniBand system, so I’ve included the libibverbs package.

You’ll need to have the modules.yaml and runscript.sh files in the same directory as the base.def file. The build also copies in the tarballs for Intel MPI and MKL, plus a few files needed to install them.

For completeness, the files in the external-files/intel-mpi-mkl-v0/ directory are :

  • l_mkl_2020.4.304.tgz
  • l_mpi_2019.10.317.tgz
  • packages.yaml
  • silent.cfg

packages.yaml tells Spack about these pre-installed Intel packages; the file is:

packages:
  intel-mpi:
    externals:
      - spec: "intel-mpi@2019.10.317 arch=linux-ubuntu20.04-skylake"
        prefix: /opt/intel
        buildable: False
  intel-mkl:
    externals:
      - spec: "intel-mkl@2020.4.304 arch=linux-ubuntu20.04-skylake threads=openmp"
        prefix: /opt/intel
        buildable: False
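With these externals registered, the spack install intel-mpi and spack install intel-mkl steps in base.def simply record the pre-installed packages rather than rebuilding them. A quick check that Spack picked them up (a sketch, to be run inside the container):

spack find --paths intel-mpi intel-mkl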

silent.cfg lets the Intel installers run without user intervention. That file is:

ACCEPT_EULA=accept
ARCH_SELECTED=INTEL64
COMPONENTS=ALL
CONTINUE_WITH_INSTALLDIR_OVERWRITE=yes
CONTINUE_WITH_OPTIONAL_ERROR=yes
PSET_INSTALL_DIR=/opt/intel
PSET_MODE=install
SIGNING_ENABLED=no

I chose to do these with an Ubuntu 20.04 base, for no other reason than that I am familiar with it.

To get MPI support, I needed to pick an MPI library for the container itself. I ended up using Intel MPI since it appears to have PMI support built in (necessary for easy interaction with SLURM). Using Open MPI or MPICH would have required me to figure out how to include the necessary libraries with all of this.

The base image is useful for validating that MPI works inside the container under SLURM, etc.
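For example, a smoke test along these lines (a sketch: IMB-MPI1 is the Intel MPI Benchmarks binary that ships with Intel MPI, assuming it ends up on the PATH via the module files; the srun flags depend on your SLURM setup):

srun -N 2 -n 2 singularity run base.sif hostname
srun -N 2 -n 2 singularity run base.sif IMB-MPI1 PingPong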

Installed packages include:

  • Intel MPI
  • Intel MKL
  • FFTW
  • GSL
  • HDF5

Chapel Image

The Chapel image starts from the base image (so set your directory paths appropriately). I build the runtime for CHPL_COMM=none (local) as well as for gasnet with the mpi and ibv conduits (the mpi conduit for my laptop, ibv for our cluster).
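Inside the container, the helper functions defined in /opt/env/01-chapel.sh (local, gasnet-mpi, gasnet-ibv) switch between these configurations before compiling. A sketch of typical use, where hello.chpl is a placeholder program and my understanding is that with CHPL_LAUNCHER=none the multi-locale binary is started directly (e.g. by mpirun) and still needs the -nl flag:

local                        # CHPL_COMM=none: single-locale
chpl hello.chpl -o hello
./hello

gasnet-mpi                   # multi-locale over the GASNet mpi conduit
chpl hello.chpl -o hello
mpirun -np 2 ./hello -nl 2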

Build Commands

Nothing special: sudo singularity build <name>.sif <name>.def

Running

You’ll notice that I set up a runscript in the base definition; this lets me set up my environment the way I want (especially Spack, which I couldn’t figure out how to have Singularity initialize on its own).

The runscript lets me either drop into a shell or run a command directly in the container. I tend to develop in a shell, and then run codes directly (see below).
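In other words (a sketch, with chapel.sif as a placeholder image name):

singularity run chapel.sif                   # no arguments: drops into a bash shell
singularity run chapel.sif chpl --version    # with arguments: runs that command and exits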

On my laptop, I run with singularity run --cleanenv --no-home --bind <tmpdir>:/home/$USER,$PWD:/data --pwd '/data' <commands>

  • --cleanenv : doesn’t pass the host environment into the container. This is useful to avoid, e.g., modules in the container picking up things from outside.
  • --no-home : again, nice to keep things clean.
  • binds : $HOME inside the container is mapped to a temporary directory to avoid polluting a run directory with e.g. .bash_history files. The current directory is mounted under /data, which is where the container starts (--pwd). A full invocation is sketched below.
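Putting it together, a typical laptop invocation might look like this (a sketch; /tmp/chplhome and chapel.sif are placeholder names):

mkdir -p /tmp/chplhome
singularity run --cleanenv --no-home \
    --bind /tmp/chplhome:/home/$USER,$PWD:/data --pwd /data \
    chapel.sif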

For MPI jobs, I do mpirun -np <num> singularity run ...

Running on the cluster requires a few modifications:

  • --cleanenv doesn’t seem to work, because SLURM appears to pass some information through environment variables. To keep host modules from infecting the container, I instead override specific environment variables inside the container by setting the appropriate SINGULARITYENV_<name> variables on the host.
  • Intel MPI needs to find the host’s PMI library. I bind its location to /host/lib64 and point I_MPI_PMI_LIBRARY at it (for Intel MPI, it looks like you need the PMI-2 variant).
  • Bind /sys from the host to /sys in the container. On our cluster, we often end up with partial nodes; Chapel uses hwloc to figure out the system topology, and hwloc appears to read /sys.
  • I set the home directory to a scratch directory visible to all nodes, instead of a local tmp directory.

I run an MPI job on the cluster with

  • srun -n <nloc> singularity run ...
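Expanded with the binds and environment settings described above, that looks roughly like this (a sketch; the PMI library path, the scratch location, mycode, and chapel.sif are all placeholders, and I believe the -nl flag is still needed since CHPL_LAUNCHER=none):

export SINGULARITYENV_I_MPI_PMI_LIBRARY=/host/lib64/libpmi2.so
srun -n 4 singularity run \
    --bind /usr/lib64:/host/lib64,/sys:/sys,/scratch/$USER/home:/home/$USER,$PWD:/data \
    --pwd /data chapel.sif ./mycode -nl 4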

Notes for various systems

Laptop

  • Using gasnet+mpi conduit; required GASNET_MPI_THREAD=multiple
  • gasnet+mpi on Intel MPI with the container seems to require setting I_MPI_OFI_PROVIDER=sockets
  • Intel MPI on my laptop seems to do a certain amount of CPU pinning to avoid some oversubscription.
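So a gasnet+mpi run on the laptop might be preceded by something like this (a sketch; GASNET_MPI_THREAD is already set inside the container by /opt/env/01-chapel.sh, so only the OFI provider needs to be injected from outside):

export SINGULARITYENV_I_MPI_OFI_PROVIDER=sockets
mpirun -np 2 singularity run --cleanenv --no-home \
    --bind /tmp/chplhome:/home/$USER,$PWD:/data --pwd /data \
    chapel.sif ./hello -nl 2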

Cluster

  • Need to bind /sys from the host to the container.
  • Mount the location of the PMI library and set I_MPI_PMI_LIBRARY
  • I_MPI_DEBUG=4 is a useful debug setting to see what is happening.
  • I don’t seem to need to set I_MPI_OFI_PROVIDER
  • I need to set MLX5_SINGLE_THREADED=0 to allow Chapel to run.
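In addition to the PMI library setting shown earlier, the last two items translate to host-side exports along these lines (a sketch; whether they go through SINGULARITYENV_ or are set in a container shell is a matter of convenience):

export SINGULARITYENV_MLX5_SINGLE_THREADED=0
export SINGULARITYENV_I_MPI_DEBUG=4    # optional: verbose Intel MPI startup information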
#!/usr/bin/env bash
set -eo pipefail
# Runscript for singularity
# This attempts to figure out what kind of command is
# being run and will set things up appropriately
#
# Initialize variables and other setup
. /opt/spack/share/spack/setup-env.sh
. $(spack location -i environment-modules)/init/bash
. /opt/spack/share/spack/setup-env.sh
for f in /opt/env/*.sh; do . $f; done
COMM=$1
if [[ ! -z ${RUNSCRIPT_DEBUG} ]]; then
    echo "Inside the runscript"
    echo "Command string : $*"
    echo "Command: $1"
fi
if [[ -z ${COMM} ]]; then
    /bin/bash
else
    "$@"
fi
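To debug the runscript itself, set RUNSCRIPT_DEBUG to any non-empty value and it echoes the command it is about to run; injecting it via SINGULARITYENV_ works even with --cleanenv, e.g.:

SINGULARITYENV_RUNSCRIPT_DEBUG=1 singularity run chapel.sif chpl --version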