MPI and Spack based HPC Cluster

This artifact sets up a ready-to-use HPC cluster in the cloud with a master-worker setup. All nodes come pre-installed with MPICH, OpenMPI, Spack, and Lmod (the Lua-based environment module system) for running and managing MPI-based applications. Users can optionally enable a shared NFS directory for seamless data access across nodes. Example Jupyter notebooks are included to help users get started.
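As a quick sanity check once the cluster is up, you might load one of the MPI stacks through Lmod and launch a trivial job across the nodes. The following is a minimal sketch; the module name, hostfile path, and process count are assumptions that depend on your deployment.

# List the environment modules available on the image (Lmod)
module avail

# Load an MPI implementation; the module name is an assumption
module load openmpi

# Confirm Spack is available and see which packages it manages
spack find

# Run a trivial MPI job across the nodes; the hostfile path and
# process count are placeholders for illustration
mpirun -np 4 --hostfile ~/hosts hostname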


Launch on Chameleon

Launching this artifact will open it within Chameleon’s shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.

Download Archive

Download an archive containing the files of this artifact.

Download with git

Clone the git repository for this artifact, and check out this version's commit.

git clone https://github.com/rohanbabbar04/MPI-Spack-Experiment-Artifact.git
cd MPI-Spack-Experiment-Artifact
git checkout 9a26104baecc5775edf4f49ec90789322c79bc34
Feedback

Submit feedback through GitHub issues.
