Network volumes offer persistent storage that exists independently of your compute resources. Your data is retained even when your Bnodes are terminated or your Serverless workers are scaled to zero. You can use them to share data and maintain datasets across multiple machines and Brightnode products.

Network volumes are backed by high-performance NVMe SSDs connected via high-speed networks. Transfer speeds typically range from 200-400 MB/s, with peak speeds of up to 10 GB/s depending on location and network conditions.

When to use network volumes

Consider using network volumes when you need:
  • Persistent data that outlives compute resources: Your data remains accessible even after Bnodes are terminated or Serverless workers stop.
  • Shareable storage: Share data across multiple Bnodes or Serverless endpoints by attaching the same network volume.
  • Portable storage: Move your working environment and data between different compute resources.
  • Efficient data management: Store frequently used models or large datasets to avoid re-downloading them for each new Bnode or worker, saving time, bandwidth, and reducing cold start times.

Pricing

Network volumes are billed hourly, at a rate of $0.07 per GB per month for the first 1 TB and $0.05 per GB per month for storage beyond that.
If your account lacks sufficient funds to cover storage costs, your network volume may be terminated. Once terminated, the disk space is immediately freed for other users, and Brightnode cannot recover lost data. Ensure your account remains funded to prevent data loss.
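
For example, a 1.5 TB volume (treating 1 TB as 1,000 GB) costs 1,000 GB × $0.07 + 500 GB × $0.05 = $95 per month, accrued hourly at roughly $0.13 per hour.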

Create a network volume

Network volume size can be increased later, but cannot be decreased.
You can create a network volume using the web console or the REST API. To create one using the web console:
  1. Navigate to the Storage page in the Brightnode console.
  2. Click New Network Volume.
  3. Select a datacenter for your volume. Datacenter location does not affect pricing, but determines which GPU types and endpoints your network volume can be used with.
  4. Provide a descriptive name for your volume (e.g., “project-alpha-data” or “shared-models”).
  5. Specify the desired size for the volume in gigabytes (GB).
  6. Click Create Network Volume.
You can edit and delete your network volumes using the Storage page.

Network volumes for Serverless

When attached to a Serverless endpoint, a network volume is mounted at /Brightnode-volume within the worker environment. This allows all workers on that endpoint to access shared data.

Attach to an endpoint

To enable workers on an endpoint to use a network volume:
  1. Navigate to the Serverless section of the Brightnode console.
  2. Select an existing endpoint and click Manage, then select Edit Endpoint.
  3. In the endpoint configuration menu, scroll down and expand the Advanced section.
  4. Click Network Volume and select the network volume you want to attach to the endpoint.
  5. Configure any other fields as needed, then select Save Endpoint.
Data from the network volume will be accessible to all workers for that endpoint from the /Brightnode-volume directory. Use this path to read and write shared data in your handler function.
Writing to the same network volume from multiple endpoints or workers simultaneously may result in conflicts or data corruption. Ensure your application logic handles concurrent access appropriately for write operations.
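
Workers can treat the volume like any local directory. As a minimal sketch, the Python handler below reads shared configuration from the volume and writes results under worker-unique filenames to sidestep concurrent writes; the models/config.json and outputs/ paths are illustrative rather than a required layout, and registering the handler with the Serverless SDK is omitted:

import json
import os
import uuid

VOLUME = "/Brightnode-volume"  # mount path for network volumes on Serverless

def handler(event):
    # Read shared data that every worker on the endpoint can see.
    with open(os.path.join(VOLUME, "models", "config.json")) as f:
        config = json.load(f)

    # Write results under a unique name so concurrent workers never
    # touch the same file.
    out_path = os.path.join(VOLUME, "outputs", f"{uuid.uuid4()}.json")
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    with open(out_path, "w") as f:
        json.dump({"input": event, "model": config.get("name")}, f)
    return {"output_file": out_path}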

Benefits for Serverless

Using network volumes with Serverless provides several advantages:
  • Reduced cold starts: Store large models or datasets on a network volume so workers can access them quickly without downloading on each cold start.
  • Cost efficiency: Network volume storage costs less than frequently re-downloading large files.
  • Simplified data management: Centralize your datasets and models for easier updates and management across multiple workers and endpoints.
If you use network volumes with your Serverless endpoint, your deployments will be constrained to the datacenter where the volume is located. This may impact GPU availability and failover options.

Network volumes for Bnodes

When attached to a Bnode, a network volume replaces the Bnode’s default volume disk and is typically mounted at /workspace.
Network volumes are only available for Bnodes in the Secure Cloud. For more information, see Bnode types.

Attach to a Bnode

Network volumes must be attached during Bnode deployment. They cannot be attached to a previously-deployed Bnode, nor can they be detached later without deleting the Bnode. To deploy a Bnode with a network volume attached:
  1. Navigate to the Bnodes section of the Brightnode console.
  2. Select Deploy.
  3. Select Network Volume and choose the network volume you want to attach from the dropdown list.
  4. Select a GPU type. The system will automatically show which Bnodes are available to use with the selected network volume.
  5. Select a Bnode Template.
  6. If you wish to change where the volume mounts, select Edit Template and adjust the Volume Mount Path.
  7. Configure any other fields as needed, then select Deploy On-Demand.
Data from the network volume will be accessible to the Bnode from the volume mount path (default: /workspace). Use this directory to upload, download, and manipulate data that you want to share with other Bnodes.

Share data between Bnodes

You can attach a network volume to multiple Bnodes, allowing them to share data seamlessly. Multiple Bnodes can read files from the same volume concurrently, but you should avoid writing to the same file simultaneously to prevent conflicts or data corruption.
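
One simple pattern for safer writes is to write to a temporary file on the same volume and then rename it into place. The Python sketch below assumes the volume is mounted at /workspace and that the volume's filesystem provides atomic renames via os.replace (true for POSIX filesystems, but worth verifying on networked storage); the dataset/manifest.json path is illustrative:

import os
import tempfile

SHARED = "/workspace"  # default network volume mount path on Bnodes

def atomic_write(path, data):
    # Write to a temp file in the destination directory, then rename it
    # into place so readers never observe a half-written file.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise

atomic_write(os.path.join(SHARED, "dataset", "manifest.json"), b'{"version": 2}')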

Network volumes for Instant Clusters

Network volumes for Instant Clusters work the same way as they do for Bnodes. They must be attached during cluster creation, and by default are mounted at /workspace within each Bnode in the cluster.

Attach to an Instant Cluster

To attach a network volume to an Instant Cluster:
  1. Navigate to the Instant Clusters section of the Brightnode console.
  2. Click Create Cluster.
  3. Click Network Volume and select the network volume you want to attach to the cluster.
  4. Configure any other fields as needed, then click Deploy Cluster.

S3-compatible API

Brightnode provides an S3-compatible API that allows you to access and manage files on your network volumes directly, without needing to launch a Bnode or run a Serverless worker for file management. This is particularly useful for:
  • Uploading large datasets or models before launching compute resources.
  • Managing files remotely without maintaining an active connection.
  • Automating data workflows using standard S3 tools and libraries.
  • Reducing costs by avoiding the need to keep compute resources running for file management.
  • Pre-populating volumes to reduce worker initialization time and improve cold start performance.
The S3-compatible API supports standard S3 operations including file uploads, downloads, listing, and deletion. You can use it with popular tools like the AWS CLI and Boto3 (Python).
The S3-compatible API is currently available for network volumes in the following datacenters: EUR-IS-1, EU-RO-1, EU-CZ-1, US-KS-2, US-CA-2.
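
As a sketch of what this looks like with Boto3, the snippet below uploads a model and then lists the volume's contents. The endpoint URL format, credential placeholders, and volume ID are illustrative assumptions; use the S3 endpoint and API keys shown for your volume in the Brightnode console:

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3api-eu-ro-1.brightnode.io",  # hypothetical datacenter endpoint
    aws_access_key_id="YOUR_S3_API_KEY",
    aws_secret_access_key="YOUR_S3_API_SECRET",
    region_name="eu-ro-1",
)

# Upload a model before launching compute resources.
s3.upload_file("model.safetensors", "your-volume-id", "models/model.safetensors")

# List what's currently stored on the volume.
for obj in s3.list_objects_v2(Bucket="your-volume-id").get("Contents", []):
    print(obj["Key"], obj["Size"])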

Migrate files

You can migrate files between network volumes (including between datacenters) using the following methods:

Using brightnodectl

The simplest way to migrate files between network volumes is to use brightnodectl send and receive on two running Bnodes. Before you begin, you’ll need:
  • A source network volume containing the data you want to migrate.
  • A destination network volume (which can be empty or contain existing data).

Step 1: Deploy Bnodes with network volumes attached

Deploy two Bnodes using the default Brightnode PyTorch template. Each Bnode should have one network volume attached.
  1. Deploy the first Bnode in the source datacenter and attach the source network volume.
  2. Deploy the second Bnode in the target datacenter and attach the target network volume.
  3. Start the web terminal in both Bnodes.

Step 2: Open the source volume

Using your source Bnode’s web terminal, navigate to the network volume directory (usually /workspace):
cd /workspace

Step 3: Start the transfer

Use brightnodectl send to start the transfer. To transfer the entire volume:
brightnodectl send *
You can also pass specific files or directories instead of *.

Step 4: Copy the receive command

After running the send command, copy the receive command from the output. It will look something like this:
brightnodectl receive 8338-galileo-collect-fidel

Step 5: Open the destination volume

Using your destination Bnode’s web terminal, navigate to the network volume directory (usually /workspace):
cd /workspace

Step 6: Receive your files

Paste and run the receive command you copied earlier:
brightnodectl receive 8338-galileo-collect-fidel
The transfer will begin and show progress as it copies files from the source to the destination volume.

Using rsync over SSH

For faster transfers and better reliability on large migrations, you can use rsync over SSH between two running Bnodes. Before you begin, you’ll need:
  • A network volume in the source datacenter containing the data you want to migrate.
  • A network volume in the target datacenter (which can be empty or contain existing data).

Step 1: Deploy Bnodes with network volumes attached

Deploy two Bnodes using the default Brightnode PyTorch template. Each Bnode should have one network volume attached.
  1. Deploy the first Bnode in the source datacenter and attach the source network volume.
  2. Deploy the second Bnode in the target datacenter and attach the target network volume.
  3. Start the web terminal in both Bnodes.

Step 2: Set up SSH keys on the source Bnode

On the source Bnode, install required packages and generate an SSH key pair:
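# Install packages, generate a passwordless ed25519 SSH key, and print its public key to copy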
apt update && apt install -y vim rsync && \
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N "" -q && \
cat ~/.ssh/id_ed25519.pub
Copy the public key that appears in the terminal output.

Step 3: Configure the destination Bnode

On the destination Bnode, install required packages and add the source Bnode’s public key to authorized_keys:
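# Install packages, print the rsync command to run on the source Bnode (built from this
# Bnode's public IP and SSH port), then open authorized_keys for editing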
apt update && apt install -y vim rsync && \
ip=$(printenv BRIGHTNODE_PUBLIC_IP) && \
port=$(printenv BRIGHTNODE_TCP_PORT_22) && \
echo "rsync -avzP --inplace -e \"ssh -p $port\" /workspace/ root@$ip:/workspace" && \
vi ~/.ssh/authorized_keys
In the editor that opens, paste the public key you copied from the source Bnode, then save and exit (press Esc, type :wq, and press Enter).

The command above also prints the rsync command you’ll need to run on the source Bnode. Copy it for the next step.

Step 4: Run the rsync command

On the source Bnode, run the rsync command from the previous step. If you didn’t copy it, you can construct it manually using the destination Bnode’s IP address and port number.
# Replace DESTINATION_PORT and DESTINATION_IP with values from the destination Bnode
rsync -avzP --inplace -e "ssh -p DESTINATION_PORT" /workspace/ root@DESTINATION_IP:/workspace

# Example:
rsync -avzP --inplace -e "ssh -p 18598" /workspace/ root@157.66.254.13:/workspace
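Here, -a preserves permissions and timestamps, -z compresses data in transit, and -P shows progress while keeping partially transferred files so an interrupted run can resume.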
The rsync command displays progress as it transfers files. Depending on the size of your data, this may take some time.

Step 5: Verify the transfer

After the rsync command completes, verify the data transfer by checking disk usage on both Bnodes:
du -sh /workspace
The destination Bnode should show similar disk usage to the source Bnode if all files transferred successfully.
You can run the rsync command multiple times if the transfer is interrupted. The -P and --inplace flags keep partially transferred files, so rsync resumes from where it left off rather than starting over.