Brightnode console

The web interface for managing your compute resources, account, teams, and billing.

Serverless

A pay-as-you-go compute service that automatically scales with demand, designed for production AI/ML applications.

Bnode

A dedicated GPU or CPU instance for containerized AI/ML workloads, such as training models, running inference, or other compute-intensive tasks.

Public Endpoint

An AI model API hosted by Brightnode that you can access directly without deploying your own infrastructure.

Instant Cluster

A managed compute cluster with high-speed networking for multi-Bnode distributed workloads like training large AI models.

Network volume

Persistent storage that exists independently of your other compute resources and can be attached to multiple Bnodes or Serverless endpoints to share data between machines.

S3-compatible API

A storage interface compatible with Amazon S3 for uploading, downloading, and managing files in your network volumes.
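Because the interface is S3-compatible, standard S3 tooling can target a network volume by overriding the endpoint URL. A minimal sketch using the AWS CLI; the endpoint URL, volume ID, and credential placeholders below are illustrative assumptions, not real Brightnode values:

```shell
# Configure credentials for the S3-compatible API
# (placeholder values; substitute your own key pair).
aws configure set aws_access_key_id <your-access-key-id>
aws configure set aws_secret_access_key <your-secret-access-key>

# Upload a file to a network volume, pointing the CLI at the
# S3-compatible endpoint instead of Amazon S3 (endpoint is a placeholder).
aws s3 cp ./model.safetensors s3://<volume-id>/models/ \
  --endpoint-url https://s3api.<datacenter>.example.com

# List the volume's contents through the same endpoint.
aws s3 ls s3://<volume-id>/ \
  --endpoint-url https://s3api.<datacenter>.example.com
```

The `--endpoint-url` flag is the standard AWS CLI mechanism for directing S3 operations at any S3-compatible service.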

Brightnode Hub

A repository for discovering, deploying, and sharing preconfigured AI projects optimized for Brightnode.

Container

A Docker-based environment that packages your code, dependencies, and runtime into a portable unit that runs consistently across machines.
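For illustration, a minimal container image definition for an AI/ML workload; the base image tag and script name are placeholder assumptions, not a Brightnode requirement:

```dockerfile
# Placeholder base image; choose one matching your framework and CUDA needs.
FROM pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime

WORKDIR /app

# Install dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code into the image.
COPY . .

# Hypothetical entrypoint script for the workload.
CMD ["python", "handler.py"]
```

Building this with `docker build` produces a portable image containing the code, dependencies, and runtime together, which is what lets the same workload run consistently on any machine.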

Data center

Physical facilities where Brightnode’s GPU and CPU hardware is located. Your choice of data center can affect latency, available GPU types, and pricing.

Machine

The physical server hardware within a data center that hosts your workloads. Each machine contains CPUs, GPUs, memory, and storage.