Self-Hosting
Operating VOLT on your own infrastructure: app deployment, cluster setup, storage, and runbooks.
A self-hosted VOLT deployment has two operational layers:
- the main application deployment that handles users, teams, APIs, auth, and the browser experience,
- and the cluster deployment that handles storage, runtime execution, notebooks, trajectory preprocessing, plugin jobs, and remote operations.
Both layers must be operational for VOLT to function end-to-end.
What the app layer needs
At minimum, the main application expects:
- a Linux host,
- Node.js,
- MongoDB,
- Redis,
- MinIO,
- and a reverse proxy such as nginx in front of the server and client.
Those services support the workspace layer: auth, metadata, object storage, queue state, notifications, and the HTTP or WebSocket surfaces the UI talks to.
What the cluster layer needs
Each team cluster runs the daemon plus the local services it depends on, and it needs working Docker support for containers and notebooks. A healthy cluster deployment requires:
- valid cloud connection settings,
- a working daemon password and enrollment flow,
- MinIO, MongoDB, and Redis connectivity,
- and enough free memory and disk to handle trajectory processing and analysis workloads.
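A quick preflight on a prospective cluster host catches most capacity and Docker problems before enrollment:

```bash
# Memory and disk headroom for trajectory processing and analysis
free -h
df -h /

# Docker must be installed, running, and reachable by the daemon's user
docker info --format '{{.ServerVersion}}'
```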
Minimum requirements
| Component | Minimum | Recommended |
|---|---|---|
| OS | Ubuntu 20.04+ / Debian 11+ | Ubuntu 22.04 LTS |
| Node.js | 18.x | 22.x |
| RAM | 4 GB | 8 GB+ |
| Disk | 20 GB | Sized to your trajectory and artifact volume |
| Docker | 20.x+ | Latest stable |
Supporting services
| Service | Purpose | Default Port |
|---|---|---|
| MongoDB | Structured metadata and result projections | 27017 |
| Redis | Queues, cache, and runtime state | 6379 |
| MinIO | Trajectory dumps, models, plugins, previews, and assets | 9000 |
| Guacamole (optional) | Browser-accessible remote desktop relay support | 4822 |
Quick start for the app stack
For a local or first-pass deployment, Docker Compose is the simplest way to bring up the supporting services.
```yaml
version: "3.8"
services:
  mongo:
    image: mongo:7
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: volt
      MONGO_INITDB_ROOT_PASSWORD: changeme
    volumes:
      - volt-mongo-data:/data/db
  redis:
    image: redis:7
    ports:
      - "6379:6379"
    command: --requirepass changeme
  minio:
    image: minio/minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: voltadmin
      MINIO_ROOT_PASSWORD: changeme
    volumes:
      - volt-minio-data:/data
    command: server /data --console-address ":9001"
volumes:
  volt-mongo-data:
  volt-minio-data:
```

Then start the stack:

```bash
docker compose up -d
```
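Before moving on, confirm each supporting service answers on its default port. These checks are a minimal sketch using the credentials from the Compose file above; adjust hosts, ports, and passwords to match your deployment.

```bash
# MongoDB: expect "{ ok: 1 }" from the ping command
docker compose exec mongo mongosh "mongodb://volt:changeme@localhost:27017/admin" --eval "db.runCommand({ ping: 1 })"

# Redis: expect PONG
docker compose exec redis redis-cli -a changeme ping

# MinIO: the liveness endpoint returns HTTP 200 when the server is up
curl -sf -o /dev/null -w "%{http_code}\n" http://localhost:9000/minio/health/live
```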
Building the main application

```bash
git clone https://github.com/VoltLabs-Research/Volt.git
cd Volt
cp server/.env.example server/.env
cp client/.env.example client/.env
```

Configure the environment files with the MongoDB, Redis, MinIO, server URL, client URL, secret-key, and SSH encryption settings.
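As a rough sketch, a filled-in server/.env tends to look like the following. The key names other than SECRET_KEY and SSH_ENCRYPTION_KEY are hypothetical here; use the names actually present in server/.env.example. A common way to generate the two secrets is `openssl rand -hex 32`.

```bash
# Illustrative sketch only -- key names besides SECRET_KEY and SSH_ENCRYPTION_KEY
# are hypothetical; copy the real names from server/.env.example.
MONGO_URI=mongodb://volt:changeme@localhost:27017/volt
REDIS_URL=redis://:changeme@localhost:6379
MINIO_ENDPOINT=localhost:9000
SERVER_URL=https://your-domain.com
CLIENT_URL=https://your-domain.com
SECRET_KEY=<output of: openssl rand -hex 32>
SSH_ENCRYPTION_KEY=<output of: openssl rand -hex 32>
```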
Two values are operationally critical:
- `SECRET_KEY` signs sensitive server-side tokens.
- `SSH_ENCRYPTION_KEY` protects stored SSH credentials used by import flows.
Then build:

```bash
cd server && npm install && npm run build
cd ../client && npm install && npm run build
```

The server listens on port 8000 by default.
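Once the build works by hand, most operators wrap the server in a process supervisor so it survives reboots. The unit below is a sketch under assumptions: the install path and the npm start script are placeholders, so match ExecStart to whatever your build actually runs.

```ini
# /etc/systemd/system/volt-server.service
# Sketch only: WorkingDirectory and ExecStart are assumptions -- adjust to your layout.
[Unit]
Description=VOLT application server
After=network-online.target

[Service]
WorkingDirectory=/opt/Volt/server
EnvironmentFile=/opt/Volt/server/.env
ExecStart=/usr/bin/npm run start
Restart=on-failure

[Install]
WantedBy=multi-user.target
```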
Reverse proxying
You need a reverse proxy in front of the app for three reasons: serving the built client, routing API traffic, and preserving WebSocket support for the real-time layer.
```nginx
server {
    listen 443 ssl http2;
    server_name your-domain.com;

    ssl_certificate     /etc/ssl/certs/your-domain.crt;
    ssl_certificate_key /etc/ssl/private/your-domain.key;

    location / {
        root /path/to/Volt/client/dist;
        try_files $uri $uri/ /index.html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 0;
    }

    location /socket.io/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
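To confirm the proxy preserves WebSocket upgrades, request one by hand and look for a 101 Switching Protocols response. This is a sketch assuming a current Socket.IO version at the default path shown above; `--http1.1` matters because the upgrade handshake is an HTTP/1.1 mechanism.

```bash
# Expect "HTTP/1.1 101 Switching Protocols" in the response headers
curl -i -N --http1.1 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  "https://your-domain.com/socket.io/?EIO=4&transport=websocket"
```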
Cluster deployment
Once the app is online, the next step is connecting a team cluster. The installer bootstraps Docker support, writes daemon configuration, and waits for the daemon to come online and begin heartbeating.
A healthy app without a healthy cluster has no execution target: trajectory uploads, analysis runs, notebooks, container operations, and SSH imports all fail.
Verification
Verify the system as a workflow rather than as isolated services:
- open the app and confirm sign-in works,
- connect a cluster and wait for a clean heartbeat,
- upload a small trajectory,
- make sure it reaches a completed state,
- run a lightweight analysis,
- open the result and verify the full round-trip.
Recommended runbooks
Cluster provisioning runbook
When you connect a new cluster, verify the full bootstrap path instead of stopping at "the installer finished":
- confirm the install command completed without obvious errors,
- verify the local services are up,
- check that the daemon reached its startup log cleanly,
- confirm the team cluster moved from waiting to connected,
- confirm a fresh heartbeat is visible from the server side.
If the cluster never leaves the waiting state, the issue is often not the installer itself but the daemon's ability to reach the control plane afterward.
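From the cluster host, a couple of reachability checks separate installer problems from network problems; volt.example.com is a placeholder for your actual control-plane URL.

```bash
# DNS resolution for the control-plane host
getent hosts volt.example.com

# TLS handshake and HTTP reachability; any HTTP status proves the path works
curl -sv https://volt.example.com/ -o /dev/null 2>&1 | grep -E "Connected to|HTTP/"
```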
Analysis end-to-end runbook
After provisioning, run a small analysis on a small trajectory. This exercises upload, storage, queueing, daemon connectivity, plugin execution, result processing, and UI visibility in one check.
Notebook runbook
For notebook validation, do not stop at "session created." A new Jupyter session can exist while still warming up. The real check is:
- create or open a notebook,
- wait for the runtime to become ready,
- load the proxied Jupyter UI successfully,
- and run a simple cell inside the live session.
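With direct access to the runtime host, Jupyter's REST status endpoint doubles as a readiness probe. The port and token here are assumptions for this sketch; use whatever the session actually exposes.

```bash
# Returns JSON with "started" and "last_activity" once the server is ready;
# port 8888 and the token are assumptions, not VOLT defaults
curl -s "http://localhost:8888/api/status?token=<session-token>"
```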
Remote service runbook
If your deployment uses containerized services, VNC-capable images, or internal tools exposed through VOLT, validate at least one proxied service path from end to end. That confirms the exposure registry, reverse channel, and relay behavior are all healthy.
Common failure points
- The app loads, but no cluster ever reaches connected.
- The daemon starts, but cannot reach MongoDB, Redis, or MinIO.
- Native runtime dependencies fail, which breaks parsing, GLB generation, or preview rasterization.
- Jupyter startup fails because of port availability or Docker issues.
- The reverse proxy is correct for HTTP but not for WebSockets.
- A build exists for the app, but the daemon deployment was never completed.
Troubleshooting by symptom
Cluster stays in waiting-for-connection
Bootstrap completed but the daemon never reached a healthy control-plane connection. Check daemon logs, VoltCloud URL settings, and network reachability between the machine and the server.
Cluster becomes disconnected after working earlier
The local services may still run, but the daemon has stopped sending heartbeats or the reverse channel has dropped. Check daemon health and network path.
Jobs remain queued or delayed
A healthy queue can stall if the daemon is under memory pressure. Workers intentionally delay or requeue work when the process is stressed.
Jupyter stays in starting
Common causes: no free host ports, Docker problems, incorrect public-base-path settings, or a runtime container that has not reached readiness.
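Each of those causes can be ruled out quickly from the cluster host:

```bash
# Host ports already bound (port exhaustion shows up here)
ss -tlnp

# Container states; look for restart loops or early exits
docker ps -a

# Recent logs from a suspect runtime container (substitute the real name)
docker logs --tail 50 <container-name>
```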
VNC or proxied services never appear
Check the container state, the expected labels, and whether the daemon's exposure registry is publishing snapshots. This is typically a discovery issue.
Update succeeds but the cluster never comes back
Managed update flows depend on the installation mode and on the daemon being able to replace itself. If the deployment was not created in the expected image-driven way, the control plane accepts the request while the runtime-side update path fails.
Delete gets stuck halfway
Cluster deletion spans the control plane and the runtime host. If remote cleanup does not complete, the control-plane object may be removed while local resources remain, or vice versa.
Deployment hygiene
- Keep secrets and `.env` files out of version control.
- Maintain an operator checklist covering cluster connection, upload, analysis, and notebook validation.
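A minimal guard for the environment files this guide creates:

```
# .gitignore entries keeping local environment files out of the repository
server/.env
client/.env
```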
Storage layout and buckets
MinIO is not just a generic object store in this architecture. Different buckets carry different kinds of runtime artifacts.
| Bucket | Content |
|---|---|
| volt-models | Generated 3D models and GLB assets |
| volt-rasterizer | PNG previews and raster outputs |
| volt-plugins | Plugin payloads and many plugin result files |
| volt-dumps | Stored trajectory dumps |
| volt-whiteboards | Whiteboard state and assets |
| volt-avatars | Public avatar images |
| volt-chat | Chat attachments |
| volt-latex-assets | LaTeX document assets |
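To confirm the buckets exist and track their growth, the MinIO client (mc) is convenient; the alias and credentials below assume the Compose defaults from the quick start.

```bash
# Register the deployment under a local alias (credentials from the quick start)
mc alias set volt http://localhost:9000 voltadmin changeme

# List buckets, then check how much space trajectory dumps consume
mc ls volt
mc du volt/volt-dumps
```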