# Multi-Node Setup
This is where Banyan earns its keep. Your `banyan.yaml` doesn't change; you just have more servers running Agents.
## Architecture
```
            +-----------+
            |  Engine   | (control plane)
            |  + store  |
            |  + gRPC   |
            +-----+-----+
                  |
      +-----------+-----------+
      |     gRPC (:50051)     |
      |                       |
+-----+------+          +-----+------+
|  Worker 1  |          |  Worker 2  |
|   Agent    |          |   Agent    |
| containerd |          | containerd |
+------------+          +------------+
```

The Engine orchestrates. Workers run containers. All communication happens over gRPC with password authentication.
## Prerequisites
Install the appropriate binaries on each server. See Installation.
- Engine node: `banyan-engine`, `banyan-cli` (embedded BadgerDB store, no external dependency by default)
- Worker nodes: `banyan-agent`, containerd, nerdctl
- Deploy machine: `banyan-cli` (can be the engine node or any other machine)
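A quick way to confirm a node has what it needs before continuing is to check that the binaries for its role are on the PATH:

```bash
# On a worker node: all three should print a path
command -v banyan-agent containerd nerdctl

# On the engine node
command -v banyan-engine banyan-cli
```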
## 1. Start the Engine
On your Engine server (e.g., 192.168.1.10):
```bash
sudo banyan-engine init
sudo banyan-engine start
```

During init, you'll set a cluster password. All agents and CLI clients must use the same password.
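You can confirm locally that the Engine is listening before configuring any remote machines (assumes `ss` from iproute2 is installed and you kept the default port):

```bash
# Should show a LISTEN socket on :50051 owned by the engine process
sudo ss -tlnp | grep 50051
```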
The Engine starts a gRPC server on port 50051 by default. Verify from another machine:
```bash
# On the deploy machine, configure the CLI to point at the engine
sudo banyan-cli init
# Enter: 192.168.1.10 for host, 50051 for port, and the cluster password

banyan-cli status
```

## 2. Start the Agents
On Worker 1 (192.168.1.11):
```bash
sudo banyan-agent init
# Enter: 192.168.1.10 for engine host, 50051 for port, and the cluster password

sudo banyan-agent start --node-name worker-1
```

On Worker 2 (192.168.1.12):

```bash
sudo banyan-agent init
sudo banyan-agent start --node-name worker-2
```

Each Agent connects to the Engine via gRPC, registers, and starts a heartbeat.
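If you want an Agent to survive reboots, a minimal systemd unit is one option. This is a sketch, not something Banyan ships: it assumes the binary lives at /usr/local/bin/banyan-agent and that `banyan-agent start` runs in the foreground.

```bash
sudo tee /etc/systemd/system/banyan-agent.service > /dev/null <<'EOF'
[Unit]
Description=Banyan Agent
After=network-online.target containerd.service
Wants=network-online.target

[Service]
# Assumed install path and foreground behaviour -- adjust for your setup
ExecStart=/usr/local/bin/banyan-agent start --node-name worker-1
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now banyan-agent
```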
## 3. Verify the cluster
```bash
banyan-cli status
```

```
Banyan Cluster - Status
========================================
Engine: RUNNING
Connection: 192.168.1.10:50051

Agents: 2
  - worker-1 (status: ready, last seen: 2s ago)
  - worker-2 (status: ready, last seen: 3s ago)

Deployments: 0
========================================
```

## 4. Deploy
The same manifest from the Quickstart works here without changes. Banyan distributes replicas across workers automatically.
```yaml
name: my-app

services:
  web:
    build: ./web
    ports:
      - "80:80"
    depends_on:
      - api

  api:
    build: ./api
    deploy:
      replicas: 3
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=my-app-db-0
      - DB_PORT=5432
    depends_on:
      - db

  db:
    image: postgres:15-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=banyan
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=app
```

```bash
banyan-cli deploy -f banyan.yaml
```

The CLI connects to the Engine using the host and port configured during `banyan-cli init`. Banyan distributes 5 containers across 2 workers using round-robin:
| Worker 1 | Worker 2 |
|---|---|
| my-app-web-0 | my-app-api-0 |
| my-app-api-1 | my-app-api-2 |
| my-app-db-0 | |
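To make the placement concrete, here is a tiny shell sketch that reproduces the round-robin assignment in the table above. The container ordering is an assumption for illustration only; Banyan's internal scheduling order isn't documented here.

```bash
#!/usr/bin/env bash
# Illustrative round-robin placement of 5 containers over 2 workers
workers=(worker-1 worker-2)
containers=(my-app-web-0 my-app-api-0 my-app-api-1 my-app-api-2 my-app-db-0)

for i in "${!containers[@]}"; do
  echo "${containers[$i]} -> ${workers[$((i % ${#workers[@]}))]}"
done
```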
## 5. Check containers on workers
SSH into each worker and list running containers:
```bash
sudo nerdctl ps
```
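If you'd rather not log in to each worker by hand, a small loop from the deploy machine does the same thing (the IPs are the example addresses from this page; assumes your user can run sudo on the workers):

```bash
for host in 192.168.1.11 192.168.1.12; do
  echo "== $host =="
  ssh "$host" sudo nerdctl ps
done
```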
## Deploying from a remote machine

You don't need to run deploy from the Engine node. Any machine with `banyan-cli` can deploy as long as it can reach the Engine's gRPC port:
```bash
# First configure the CLI (run once)
sudo banyan-cli init
# Enter the engine host, port, and password

# Then deploy
banyan-cli deploy -f banyan.yaml
```

## Adding more workers
- Install `banyan-agent`, containerd, and nerdctl on the new server.
- Run `sudo banyan-agent init` (enter engine host, port, and password).
- Run `sudo banyan-agent start --node-name worker-3`.
The new worker appears in `banyan-cli status` within seconds. Future deployments include it automatically.
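If you want to watch the new worker come up, re-running status on an interval is enough (assumes `watch` is installed):

```bash
# Refresh cluster status every 2 seconds until worker-3 shows as ready
watch -n 2 banyan-cli status
```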
## Firewall requirements
| Port | Protocol | Direction | Purpose |
|---|---|---|---|
| 50051 | TCP | Agents/CLI → Engine | gRPC (all control plane communication) |
| 50052 | TCP | Engine → Agents | gRPC (log streaming) |
| 5000 | TCP | Agents → Engine | OCI registry (image distribution) |
Workers don’t need to communicate with each other directly.
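A quick way to verify the rules before deploying, assuming `nc` (netcat) is installed and using the example addresses from this page:

```bash
# From a worker or the deploy machine -> Engine
nc -zv 192.168.1.10 50051   # gRPC control plane
nc -zv 192.168.1.10 5000    # OCI registry (image distribution)

# From the Engine -> a worker
nc -zv 192.168.1.11 50052   # log streaming
```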