Dependencies
To compile the code you need to have the following dependencies installed:

- CMake (version >= 3.16; verify by running `cmake --version`).
- GCC (version >= 8.3.1; verify by running `g++ --version`), LLVM (tested on version >= 11; verify by running `clang++ --version`), or the Intel C++ compiler (version >= 19.1; verify by running `icx --version`).
- To compile for NVIDIA GPUs, you also need the CUDA toolkit (version >= 11.0; verify by running `nvcc --version`).
- To compile for AMD GPUs, you also need the ROCm libraries and the HIP compiler/runtime (verify by running `hipcc --version`).
- MPI (e.g., OpenMPI, MPICH, etc.; verify by running `mpicxx --version`) for multi-node simulations.
- HDF5 for data output (verify by running `h5c++ --version`).
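If you want to check all of these in one go, here is a quick sanity-check sketch using the same verification commands as above (add or drop tools to match your toolchain, e.g., `clang++`, `icx`, or `hipcc`):

```sh
# print the first line of each tool's version output, or report it missing
for tool in cmake g++ nvcc mpicxx h5c++; do
  if command -v "$tool" >/dev/null; then
    printf '%-8s %s\n' "$tool" "$("$tool" --version | head -1)"
  else
    printf '%-8s not found\n' "$tool"
  fi
done
```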
CUDA compatibility

Note that different versions of CUDA are compatible with different host-compiler versions (i.e., gcc, llvm, etc.). Please refer to the following curated list for the compatibility matrix.
All the other third-party dependencies, such as Kokkos and ADIOS2, are included in the repository as submodules and can be compiled automatically when you run `cmake` (although we recommend installing ADIOS2 externally, as it can take a while to compile).
In addition to this section, we also provide more detailed instructions on how to set up the dependencies, as well as submit scripts for the most widely used clusters, in the following section.
Note
To play with the code with all the dependencies already installed in a containerized environment, please refer to the section on Docker.
Preinstalling third-party libraries
To speed up the compilation process, it is often beneficial to precompile and install the third-party libraries, and use those during the build process: either by setting the appropriate environment variables, by working within a conda/spack environment, or by using environment modules. Alternatively, of course, you can use the libraries provided by your system package manager (`pacman`, `apt`, `brew`, `nix`, ...) or the cluster's module system.
Warning
If the system you're working on already has MPI or HDF5 installed (either through environment modules or any package manager), it is highly recommended to use those libraries instead of building your own. The instructions provided here for these two are a last resort.
Spack (recommended)
Spack is essentially a package manager for HPC systems which allows you to install all the dependencies locally, optionally cross-compiling them against already available libraries. If spack is not already available on your system (or cluster), you can simply download it (preferably to your home directory) with:

git clone -c feature.manyFiles=true --depth=2 https://github.com/spack/spack.git

and add the following to your `.bashrc` or `.zshrc` (or analogous) startup file:

. spack/share/spack/setup-env.sh

to activate `spack` on shell login.
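To verify that spack is available after the next login, you can run, for example:

```sh
spack --version   # prints the spack version if setup-env.sh was sourced
spack arch        # shows the platform/OS/CPU target spack detected
```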
Identifying pre-installed compilers/packages
Since spack compiles everything from source, it is recommended to reuse as many of the already pre-installed packages as possible. In particular, you can use the already installed compilers or big libraries, such as MPI and HDF5. To make spack aware of their existence, you can simply run:
# to add compilers
spack compiler add
# and to add all libraries
spack external find
Note
If your machine is using environment modules, you may need to first load the compilers/libraries you need before running the commands above, e.g.:
module load gcc/13
module load openmpi/5
If for some reason spack does not find the local package you need, you may want to add it manually by modifying the `$HOME/.spack/packages.yaml` file as follows (example for locally installed `cuda` and `openmpi`):
packages:
  # ...
  cuda:
    buildable: false
    externals:
      - prefix: /opt/cuda/
        spec: cuda@12.8
  openmpi:
    buildable: false
    externals:
      - prefix: /usr/
        spec: openmpi@5.0.6
Then you can run, e.g., `spack spec cuda` to check whether it finds the package: an `[e]` at the front indicates that it found the external package. If so, you can "install" it in spack by using `spack install cuda` or `spack install openmpi`.
To check which packages `spack` has found, simply run `spack find`; to check the compilers, run `spack compilers`.
Note
It is strongly recommended to use the pre-installed MPI, CUDA, and other big libraries instead of installing them via `spack`, since these can be specifically configured for the machine you're running on.
Setting up spack environment
After that, it is recommended to create a spack environment and install all the other libraries within it. To do so, first create & activate the environment by running:
spack env create entity-env
spack env activate entity-env
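To see which environment is currently active (or to leave it when you're done), you can use:

```sh
spack env status      # shows the active environment, if any
spack env deactivate  # leaves the current environment
```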
Whenever you activate this environment, spack will automatically add the libraries installed within it to the `PATH`, so that `cmake` can identify them when compiling `Entity`. Within the environment you may now install all the necessary libraries. Below we present the commands you may need to run for each of them. Make sure to first check which dependencies `spack` will use to compile a library before actually installing it. For that, you can run, e.g.,
spack spec kokkos <OPTIONS>
which will show all the dependencies it will use. In front of each dependency, you'll see one of the following:

- `[e]`: an external (locally installed) package,
- `[+]`: a package already installed within spack,
- `[-]`: a package that will be downloaded and built during the installation.
Once you're satisfied, you may run `spack install --add kokkos <OPTIONS>` to actually perform the installation (within the environment).
It is highly recommended to use the HDF5 already installed on the cluster and find it via `spack` as described above. Nonetheless, you may also install it via `spack` using the following command:

spack install --add hdf5 +cxx

Optionally, add the `-mpi` flag to disable the MPI support.
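As with any package, you can preview the concretized spec before installing; for instance, for a serial (no-MPI) build (`~mpi` is an equivalent spelling for disabling a variant):

```sh
# preview the dependencies of a serial, C++-enabled HDF5 before installing
spack spec hdf5 +cxx ~mpi
```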
Because we rely on HDF5 together with ADIOS2, it is recommended to have `hdf5` installed externally (and to make sure `spack` sees that installation by running `spack spec hdf5`). You can then install ADIOS2 (in a `spack` environment) using the following command:

spack install --add adios2 +hdf5 +pic

Optionally, add the `-mpi` option to disable the MPI support (HDF5 will also have to be serial for that to work).
For Kokkos, you will always use the settings `+pic +aggressive_vectorization` on top of the architecture-specific settings. For example, to compile with CUDA support for the Ampere80 architecture (an A100 card), you can do:
spack install --add kokkos +pic +aggressive_vectorization +cuda +wrapper cuda_arch=80
And, again, before running this command, make sure to first run it as `spack spec ...` (instead of `spack install --add`) with the same options, just to make sure spack will use the external CUDA rather than installing its own.
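Similarly, for AMD GPUs you would swap the CUDA variants for the ROCm ones; a sketch for an MI250X card (`gfx90a`; substitute your own `amdgpu_target`):

```sh
# preview first, then install within the environment
spack spec kokkos +pic +aggressive_vectorization +rocm amdgpu_target=gfx90a
spack install --add kokkos +pic +aggressive_vectorization +rocm amdgpu_target=gfx90a
```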
Note
Sometimes `spack` might not recognize the CPU architecture properly, especially when compiling on a node different from the one where the code will be running (e.g., a login node vs. a compute node). In that case, when compiling `kokkos`, you may also need to provide the option `target=zen2` (or another target CPU architecture). This might fail on the first try, since by default `spack` does not allow manual architecture specification; in that case, first reconfigure spack using `spack config add concretizer:targets:host_compatible:false`, and then try again.
spack info
To see all the available configuration options for a given package, simply run `spack info <PACKAGE>`.
Using an explicit compiler
You can instruct `spack` to use a specific compiler which it has identified (find out by running `spack compilers`) by adding the following flag (example for `clang`):
spack install <PACKAGE> <OPTIONS> %clang
Garbage collection
Simply uninstalling a package may leave behind build caches, which often take up a lot of space. To get rid of these, you may run `spack gc`, which will try its best to delete all these caches as well as all the unused packages.
Anaconda
If you want to have ADIOS2 with serial HDF5 support (i.e., without MPI) installed in a conda environment, we provide a shell script `conda-entity-nompi.sh`, which installs the proper compiler, the `hdf5` library, and ADIOS2. Run the script via:

source conda-entity-nompi.sh

This also `pip`-installs the `nt2.py` package for post-processing. With this configuration, the Kokkos library will be built in-tree.
Building dependencies from source
The form below allows you to generate the appropriate build scripts and, optionally, the environment modules for the libraries you want to compile and install.
OpenMPI

Prerequisites:

- Make sure to have a host (CPU) compiler such as GCC or LLVM (if necessary, load it using `module load`).
- If using CUDA, make sure that `$CUDA_HOME` points to the CUDA install path (if necessary, load the cudatoolkit using `module load`).

Note: If using environment modules, add the mentioned `module load ...` commands to the new modulefile created at step #4.
Procedure:
1. Download the OpenMPI source code:

   git clone https://github.com/open-mpi/ompi.git
   cd ompi

2. Run the script below to configure (a sketch is given after this list).

3. Compile & install with:

   make -j
   make install

4. Optionally, if using environment modules, create a modulefile with the following content; change the `<MPI_INSTALL_DIR>` and add `module load`s for the appropriate compilers as needed.
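The exact configure script is produced by the form above; as a rough sketch for step 2 (assuming a CUDA-aware build, with `<MPI_INSTALL_DIR>` as a placeholder; omit `--with-cuda` for CPU-only builds):

```sh
# a git checkout needs autogen.pl to generate ./configure first
./autogen.pl
./configure --prefix=<MPI_INSTALL_DIR> \
            --with-cuda=$CUDA_HOME   # CUDA-aware MPI only; omit otherwise
```

And a minimal sketch of the modulefile from step 4, written from the shell (the modulefile location is an assumption; adapt to your module system):

```sh
mkdir -p $HOME/modulefiles/openmpi
cat > $HOME/modulefiles/openmpi/5.0 <<'EOF'
#%Module1.0
set          prefix          <MPI_INSTALL_DIR>
setenv       MPI_HOME        $prefix
prepend-path PATH            $prefix/bin
prepend-path LD_LIBRARY_PATH $prefix/lib
EOF
module use $HOME/modulefiles   # make the new modulefile visible
```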
HDF5

Prerequisites:

- Make sure to have a host (CPU) compiler such as GCC or LLVM (if necessary, load it using `module load`).
- If using MPI, make sure that `$MPI_HOME` points to its install directory (if necessary, load it with `module load`).

Note: If using environment modules, add the mentioned `module load ...` commands to the new modulefile created at the last step.
Procedure:
1. Download the latest HDF5 source code (below is an example for `1.14.6`) into a temporary directory:

   mkdir hdf5src
   cd hdf5src
   wget https://github.com/HDFGroup/hdf5/releases/download/hdf5_1.14.6/hdf5-1.14.6.tar.gz
   tar xvf hdf5-1.14.6.tar.gz

2. Download the latest dependencies into the same (`hdf5src`) temporary directory (do not extract them):

   - HDF5 plugins (e.g., 1.14.6):
     wget https://github.com/HDFGroup/hdf5_plugins/releases/download/hdf5-1.14.6/hdf5_plugins-1.14.tar.gz
   - ZLIB (e.g., 1.3.1):
     wget https://github.com/madler/zlib/releases/download/v1.3.1/zlib-1.3.1.tar.gz
   - ZLIBNG (e.g., 2.2.4):
     wget https://github.com/zlib-ng/zlib-ng/archive/refs/tags/2.2.4.tar.gz
   - LIBAEC (e.g., 1.1.3):
     wget https://github.com/MathisRosenhauer/libaec/releases/download/v1.1.3/libaec-1.1.3.tar.gz

3. Copy the three `.cmake` scripts from the uncompressed HDF5 directory to the temporary directory:

   cp hdf5-1.14.6/config/cmake/scripts/*.cmake .

4. In `HDF5options.cmake`, uncomment the following line:

   set (ADD_BUILD_OPTIONS "${ADD_BUILD_OPTIONS} -DBUILD_TESTING:BOOL=OFF")

5. In `CTestScript.cmake`, uncomment the following line (or, if it is not present, simply add it below `cmake_minimum_required`):

   set (LOCAL_SKIP_TEST "TRUE")

6. From the same temporary directory, run the build command generated by the form above (a sketch is given after this list).

7. Optionally, if using environment modules, create a modulefile with the following content; change the `<HDF5_INSTALL_DIR>` and add `module load`s for the appropriate compiler/MPI as needed.
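For step 6, the exact command comes from the form above; a minimal sketch, assuming the standard `ctest`-driven HDF5 build via the `HDF5config.cmake` script copied in step 3 (`<HDF5_INSTALL_DIR>` is a placeholder):

```sh
# configures, builds, and packages HDF5 in one go; inspect hdf5.log on failure
ctest -S HDF5config.cmake,BUILD_GENERATOR=Unix,INSTALLDIR=<HDF5_INSTALL_DIR> \
      -C Release -V -O hdf5.log
```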
ADIOS2

Prerequisites:

- Make sure to have a host (CPU) compiler such as GCC or LLVM (if necessary, load it using `module load`).
- Also make sure to have HDF5 installed; check that `$HDF5_ROOT` properly points to the install directory (if necessary, load it with `module load`).
- If using MPI, make sure that `$MPI_HOME` points to its install directory (if necessary, load it with `module load`).

Note: If using environment modules, add the mentioned `module load ...` commands to the new modulefile created at step #4.
Procedure:
1. Download the ADIOS2 source code:

   git clone https://github.com/ornladios/ADIOS2.git
   cd ADIOS2

2. Run the script below to configure (a sketch is given after this list).

3. Compile & install with:

   cmake --build build -j
   cmake --install build

4. Optionally, if using environment modules, create a modulefile with the following content; change the `<ADIOS2_INSTALL_DIR>` and add `module load`s for the appropriate compiler/HDF5/MPI as needed.
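For step 2, a minimal configure sketch (flags follow ADIOS2's documented CMake options; `<ADIOS2_INSTALL_DIR>` is a placeholder, and the `ON`/`OFF` values depend on the configuration you picked in the form):

```sh
cmake -B build \
  -D CMAKE_BUILD_TYPE=Release \
  -D CMAKE_INSTALL_PREFIX=<ADIOS2_INSTALL_DIR> \
  -D ADIOS2_USE_HDF5=ON \
  -D ADIOS2_USE_MPI=ON \
  -D ADIOS2_USE_Fortran=OFF \
  -D BUILD_TESTING=OFF
```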
Kokkos

Prerequisites:

- Make sure to have a host (CPU) compiler such as GCC or LLVM (if necessary, load it using `module load`).
- If using CUDA, make sure that `$CUDA_HOME` points to the CUDA install path (if necessary, load the cudatoolkit using `module load`).
- If using ROCm/HIP, make sure to have `hipcc`, and set the `CC` and `CXX` variables to `hipcc` (if necessary, load the HIP SDK using `module load`).

Note: If using environment modules, add the mentioned `module load ...` commands to the new modulefile created at step #4.
Procedure:
1. Download the Kokkos source code:

   git clone -b master https://github.com/kokkos/kokkos.git
   cd kokkos

2. Run the script below to configure (a sketch is given after this list).

3. Compile & install with:

   cmake --build build -j
   cmake --install build

4. Optionally, if using environment modules, create a modulefile with the following content; change the `<KOKKOS_INSTALL_DIR>` and add `module load`s for the appropriate host compiler/CUDA/HIP/SYCL as needed.
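For step 2, a minimal sketch for an NVIDIA A100 (Ampere80) build, mirroring the spack options above (flags follow Kokkos' documented CMake options; `<KOKKOS_INSTALL_DIR>` is a placeholder):

```sh
cmake -B build \
  -D CMAKE_BUILD_TYPE=Release \
  -D CMAKE_INSTALL_PREFIX=<KOKKOS_INSTALL_DIR> \
  -D CMAKE_CXX_STANDARD=17 \
  -D CMAKE_POSITION_INDEPENDENT_CODE=ON \
  -D Kokkos_ENABLE_CUDA=ON \
  -D Kokkos_ARCH_AMPERE80=ON \
  -D Kokkos_ENABLE_AGGRESSIVE_VECTORIZATION=ON
# when compiling with nvcc, Kokkos may also need its wrapper as the C++ compiler:
#   -D CMAKE_CXX_COMPILER=$(pwd)/bin/nvcc_wrapper
```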
Note
We also provide a command-line tool called `ntt-dploy`, which can be used for the same purpose.
Nix
On systems with the `nix` package manager, you can quickly set up a development environment with all the dependencies installed using a `nix-shell` (from the root directory of the code):

nix-shell dev/nix --arg hdf5 true --arg mpi true --arg gpu \"HIP\" --arg arch \"amd_gfx1100\"
# you can inspect the default settings with
head dev/nix/shell.nix