This guide is for advanced users who want to configure and manage their own Slurm deployment on Instant Clusters. If you’re looking for a pre-configured solution, see Slurm Clusters.
This tutorial demonstrates how to configure Brightnode Instant Clusters with Slurm to manage and schedule distributed workloads across multiple Bnodes. Slurm is a popular open-source job scheduler that provides a framework for job management, scheduling, and resource allocation in high-performance computing environments. By running Slurm on Brightnode’s high-speed networking infrastructure, you can efficiently manage complex workloads across multiple GPUs. Follow the steps below to deploy a cluster and start running distributed Slurm workloads.

Requirements

  • You’ve created a Brightnode account and funded it with sufficient credits.
  • You have basic familiarity with Linux command line.
  • You’re comfortable working with Bnodes and understand the basics of Slurm.

Step 1: Deploy an Instant Cluster

  1. Open the Instant Clusters page on the Brightnode web interface.
  2. Click Create Cluster.
  3. Use the UI to name and configure your cluster. For this walkthrough, keep Bnode Count at 2 and select the option for 16x H100 SXM GPUs. Keep the Bnode Template at its default setting (Brightnode PyTorch).
  4. Click Deploy Cluster. You should be redirected to the Instant Clusters page after a few seconds.

Step 2: Clone the demo and install Slurm packages on each Bnode

To connect to a Bnode:
  1. On the Instant Clusters page, click on the cluster you created to expand the list of Bnodes.
  2. Click on a Bnode, for example CLUSTERNAME-bnode-0, to expand the Bnode.
On each Bnode:
  1. Click Connect, then click Web Terminal.
  2. In the terminal that opens, run this command to clone the Slurm demo files into the Bnode’s main directory:
    git clone https://github.com/pandyamarut/slurm_example.git && cd slurm_example
    
  3. Run this command to install the Slurm and MUNGE packages:
    apt update && apt install -y slurm-wlm slurm-client munge
    
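Before moving on, you can optionally confirm that the packages installed correctly by checking their versions:
    sinfo -V    # prints the installed Slurm version
    slurmd -V   # confirms the compute daemon binary is present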

Step 3: Overview of Slurm demo scripts

The repository contains several essential scripts for setting up Slurm. Let’s examine what each script does (a sketch of the configuration the first two generate follows this list):
  • create_gres_conf.sh: Generates the Slurm Generic Resource (GRES) configuration file that defines GPU resources for each Bnode.
  • create_slurm_conf.sh: Creates the main Slurm configuration file with cluster settings, Bnode definitions, and partition setup.
  • install.sh: The primary installation script that sets up MUNGE authentication, configures Slurm, and prepares the environment.
  • test_batch.sh: A sample Slurm job script for testing cluster functionality.
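
For reference, the output of these scripts looks roughly like the sketch below. The hostnames, IP addresses, partition name, and per-Bnode GPU counts are illustrative (based on the two-Bnode, 16x H100 deployment from Step 1); the files the scripts actually generate may differ:
    # gres.conf -- declares 8 GPUs per Bnode as generic resources (illustrative)
    NodeName=bnode-0 Name=gpu File=/dev/nvidia[0-7]
    NodeName=bnode-1 Name=gpu File=/dev/nvidia[0-7]

    # slurm.conf -- Bnode definitions and a default partition (illustrative excerpt)
    NodeName=bnode-0 NodeAddr=10.65.0.2 Gres=gpu:8 State=UNKNOWN
    NodeName=bnode-1 NodeAddr=10.65.0.3 Gres=gpu:8 State=UNKNOWN
    PartitionName=main Nodes=bnode-[0-1] Default=YES MaxTime=INFINITE State=UP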

Step 4: Run the installation script on each Bnode

Now run the installation script on each Bnode, replacing [MUNGE_SECRET_KEY] with a secure random string (like a password). The secret key is used for authentication between Bnodes and must be identical across all Bnodes in your cluster.
./install.sh "[MUNGE_SECRET_KEY]" bnode-0 bnode-1 10.65.0.2 10.65.0.3
This script automates the process of configuring a two-Bnode Slurm cluster with GPU support, handling everything from system dependencies to authentication and resource configuration. It performs the necessary setup for both the primary (control) Bnode and the secondary (compute/worker) Bnode.
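If you need a secure random string, one simple way to generate one (assuming OpenSSL is available, as it is in most Linux images) is:
    openssl rand -base64 32
Whatever value you use, pass the identical string to install.sh on every Bnode; MUNGE authentication between Bnodes fails if the keys don’t match.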

Step 5: Start Slurm services

If you’re not sure which Bnode is the primary, run echo $HOSTNAME in the web terminal of each Bnode and look for bnode-0.
  1. On the primary Bnode (bnode-0), start the Slurm controller service:
    slurmctld -D
    
  2. Use the web interface to open a second terminal on the primary Bnode, then start the compute service there as well:
    slurmd -D
    
  3. On the secondary Bnode (bnode-1), start the compute service:
    slurmd -D
    
After running these commands, you should see output indicating that the services have started successfully. The -D flag keeps the services running in the foreground, so each command needs its own terminal.
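To confirm that the controller is reachable, you can run Slurm’s built-in check from any Bnode:
    scontrol ping
This should report that slurmctld at bnode-0 is UP.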

Step 6: Test your Slurm cluster

  1. Run this command on the primary Bnode (bnode-0) to check the status of your Bnodes:
    sinfo
    
    You should see output showing both Bnodes in your cluster, with a state of “idle” if everything is working correctly (a sample is shown after this list).
  2. Run this command to test GPU availability across both Bnodes:
    srun --nodes=2 --gres=gpu:1 nvidia-smi -L

    This command should list all GPUs across both Bnodes.
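
As a reference point, healthy sinfo output looks something like the following; the partition name and time limit depend on the slurm.conf generated in Step 4:
    PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
    main*        up   infinite      2   idle bnode-[0-1]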

Step 7: Submit the Slurm job script

Run the following command on the primary Bnode (bnode-0) to submit the test job script and confirm that your cluster is working properly:
sbatch test_batch.sh
Check the output file created by the test (test_simple_[JOBID].out) and look for the hostnames of both Bnodes. This confirms that the job ran successfully across the cluster.
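For reference, a minimal batch script along these lines might look like the sketch below; the actual test_batch.sh in the repository may use different directives, but a placeholder like %j is what produces the [JOBID] suffix in the output filename:
    #!/bin/bash
    #SBATCH --job-name=test_simple
    #SBATCH --nodes=2                     # request both Bnodes
    #SBATCH --output=test_simple_%j.out   # %j expands to the job ID

    # Print the hostname of each allocated Bnode
    srun hostname
You can check on a queued or running job with squeue.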

Step 8: Clean up

If you no longer need your cluster, make sure you return to the Instant Clusters page and delete your cluster to avoid incurring extra charges.
You can monitor your cluster usage and spending with the Billing Explorer at the bottom of the Billing page, under the Cluster tab.

Next steps

Now that you’ve successfully deployed and tested a Slurm cluster on Brightnode, you can:
  • Adapt your own distributed workloads to run using Slurm job scripts.
  • Scale your cluster by adjusting the number of Bnodes to handle larger models or datasets.
  • Try different frameworks like Axolotl for fine-tuning large language models.
  • Optimize performance by experimenting with different distributed training strategies.