# Production Deployment
This guide covers a single-node StacyVM deployment suitable for an internal service, staging, or a small production installation. The default production path uses the Docker provider because it works on the broadest set of hosts; Firecracker and PRoot require extra host setup and should be validated on the target platform before rollout.

## Requirements
- Linux host with Docker installed when using the Docker provider.
- A persistent data directory, normally `/var/lib/stacyvm`.
- A generated API key with at least 32 bytes of entropy.
- TLS and public ingress handled by a reverse proxy or load balancer in front of StacyVM.
- Explicit `server.cors_allowed_origins` for every browser origin that may call the API.
- Health checks wired to the API endpoints listed below.
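One way to satisfy the entropy requirement is to generate the key with OpenSSL (assuming `openssl` is available on the host):

```shell
# Generate a 32-byte key, printed as 64 hex characters.
STACYVM_API_KEY="$(openssl rand -hex 32)"
echo "${#STACYVM_API_KEY}"   # prints 64
```

Store the result in your secret manager or environment file rather than in the checked-in config.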
## Configuration

Configuration is resolved in order: `./stacyvm.yaml`, then `~/.stacyvm/config.yaml`, then `STACYVM_` environment variables. In production, prefer a checked-in baseline config plus environment variables or secret files for secrets and environment-specific values. Worker credentials can be mounted through `auth.worker_token_file` and `auth.worker_signing_key_file`; the loader rejects configs that set both the inline secret and its file reference.
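A minimal sketch of the file-based worker credential wiring; the nested YAML shape of the dotted key names is an assumption, and the secret paths are examples:

```yaml
# stacyvm.yaml excerpt (illustrative nesting of the dotted keys above)
auth:
  worker_token_file: /run/secrets/stacyvm-worker-token
  worker_signing_key_file: /run/secrets/stacyvm-worker-signing-key
  # Do not also set the inline secrets here: the loader rejects configs
  # that set both a secret and its file reference.
```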
Before starting a single-node staging or production host, lint the final config with the same environment variables the service will use.
Run `stacyvm doctor --production` after linting when you also want live host checks for Docker, Firecracker, PRoot, database directories, and installed binaries.
## Health and Metrics
Use these endpoints for load balancers and monitors:

| Endpoint | Purpose |
|---|---|
| `GET /api/v1/live` | Process liveness. Use this for simple restart checks. |
| `GET /api/v1/ready` | Readiness. Use this before routing traffic after deploys. |
| `GET /api/v1/health` | Dependency and provider health summary. |
| `GET /api/v1/metrics/prometheus` | Prometheus metrics scrape endpoint. |
Send an `X-API-Key: <api-key>` header with requests to protected API endpoints. Keep health probes scoped to your private network if they bypass auth at an upstream proxy.
After a deploy, run the smoke script against the live host.
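If you need a minimal stand-in, the following sketch probes only the endpoints documented above; the base URL, API-key handling, and acceptable status codes are deployment-specific assumptions:

```python
import urllib.request

# Endpoints from the health table above.
PROBE_PATHS = ["/api/v1/live", "/api/v1/ready", "/api/v1/health"]

def probe_urls(base):
    """Build the full probe URLs for a deployment's base address."""
    return [base.rstrip("/") + path for path in PROBE_PATHS]

def run_smoke(base, api_key=None, timeout=5):
    """Return {url: HTTP status} for each probe endpoint."""
    results = {}
    for url in probe_urls(base):
        req = urllib.request.Request(url)
        if api_key:
            # Health probes may bypass auth at the proxy; send a key if yours do not.
            req.add_header("X-API-Key", api_key)
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            results[url] = resp.status
    return results

if __name__ == "__main__":
    print(run_smoke("http://127.0.0.1:7423"))
```

Gate traffic cutover on `/api/v1/ready` returning 200, matching the readiness guidance above.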
## Docker Compose
The files in `deploy/` provide a production-oriented Compose starting point:
- `deploy/docker-compose.yml` starts StacyVM and Traefik for live previews.
- `deploy/stacyvm.production.yaml` enables auth, explicit CORS origins, rate limiting, sandbox caps, queueing, JSON logs, and persistent SQLite state.
- `deploy/.env.example` lists the environment variables expected by the Compose file.
- `deploy/stacyvm.env.example` is the systemd environment file template.
Set distinct `STACYVM_API_KEY` and `STACYVM_ADMIN_API_KEY` values in production. Admin routes live under `/api/v1/admin/*` and should be restricted to operator networks where possible. Replace the example `server.cors_allowed_origins` value with the exact public console/API origins for your deployment; do not expose browser clients with wildcard CORS.
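An illustrative CORS override along those lines; the nested YAML shape of the dotted key and the origin are examples, not real values:

```yaml
# stacyvm.production.yaml excerpt (illustrative)
server:
  cors_allowed_origins:
    - "https://console.example.com"   # exact browser origins only; never "*"
```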
See admin-control-plane for admin dashboard setup, quota operations, diagnostics, audit export, and audit retention notes. See security-governance for the production admin hardening checklist and OIDC/SSO integration plan. The production config keeps 90 days of admin audit history with `auth.admin_audit_retention: "2160h"` and disables admin fallback with `auth.admin_fallback_enabled: false`.
Live preview URLs are formed from the exposed sandbox port, for example port 3000, by requesting Traefik with `Host: 3000-<sandbox-id>.<preview-domain>`.
## systemd
Use `deploy/stacyvm.service` when running the binary directly on a Linux host.
Copy the environment template to `/etc/stacyvm/stacyvm.env` and set real `STACYVM_AUTH_API_KEY` and `STACYVM_AUTH_ADMIN_API_KEY` values. Then enable the service with `systemctl enable --now stacyvm`.
The unit sets `WorkingDirectory=/etc/stacyvm` so StacyVM can load `/etc/stacyvm/stacyvm.yaml` through its current `./stacyvm.yaml` lookup path while keeping persistent database state in `/var/lib/stacyvm`.
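An illustrative skeleton of such a unit; the shipped file is `deploy/stacyvm.service`, and the `ExecStart` path and subcommand here are assumptions:

```ini
[Unit]
Description=StacyVM sandbox service
After=network-online.target docker.service
Wants=network-online.target

[Service]
# Load secrets and environment-specific values from the env file.
EnvironmentFile=/etc/stacyvm/stacyvm.env
# Run from /etc/stacyvm so the ./stacyvm.yaml lookup finds /etc/stacyvm/stacyvm.yaml.
WorkingDirectory=/etc/stacyvm
# Binary path and subcommand are assumptions; check deploy/stacyvm.service.
ExecStart=/usr/local/bin/stacyvm serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```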
## Reverse Proxy
Terminate TLS before StacyVM. A typical proxy should:

- Forward API traffic to `http://127.0.0.1:7423`.
- Preserve `X-API-Key` headers.
- Route live preview hostnames such as `3000-sb-<id>.<preview-domain>` to Traefik when using Docker live previews.
- Restrict admin and metrics endpoints to trusted networks.
Set `server.preview_domain` or `STACYVM_SERVER_PREVIEW_DOMAIN` to the domain that resolves preview subdomains to your proxy.
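A minimal Nginx sketch of those routing rules; the hostnames are examples, TLS directives are omitted, and the Traefik listen port is an assumption:

```nginx
# API traffic to StacyVM. Client request headers such as X-API-Key
# are passed through to the upstream by default.
server {
    listen 443 ssl;
    server_name api.example.com;
    location / {
        proxy_pass http://127.0.0.1:7423;
        proxy_set_header Host $host;
    }
}

# Preview hostnames (e.g. 3000-sb-<id>.preview.example.com) to Traefik.
# Port 8080 is an assumption; use your Traefik entrypoint.
server {
    listen 443 ssl;
    server_name ~^\d+-sb-.+\.preview\.example\.com$;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```

Keep the admin and metrics locations behind an allowlist (for example Nginx `allow`/`deny` rules) per the list above.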
## Backups
The default store is SQLite at `/var/lib/stacyvm/stacyvm.db`. Prefer the built-in `stacyvm db backup` command over copying the file, because it uses SQLite's online backup path and validates the output.
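Scheduling it can be as simple as a cron entry; this fragment is illustrative, and where the backup lands depends on the command's defaults:

```
# /etc/cron.d/stacyvm-backup (illustrative)
# Nightly online backup at 03:00, using the same config the service uses.
0 3 * * * root cd /etc/stacyvm && stacyvm db backup
```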
## Upgrades
Before changing binaries or images, rehearse the upgrade with `stacyvm upgrade rehearse` against the exact config and database path the service uses. Add `--include-doctor` on the target host when you also want live provider checks before the upgrade.
Upgrade flow:
- Check the release notes for config or API changes.
- Run `stacyvm upgrade rehearse` and resolve any failing checks.
- Back up `/var/lib/stacyvm/stacyvm.db` with `stacyvm db backup`.
- Replace the binary or update `STACYVM_IMAGE`.
- Restart the service.
- Confirm `GET /api/v1/ready` succeeds before routing traffic.
- If the upgrade fails, stop StacyVM and restore the pre-upgrade backup with `stacyvm db restore --yes`.
## Support Bundles
For support requests, generate a redacted bundle instead of sharing raw config, logs, or environment output. The bundle includes `/api/v1/diagnostics` output; secret-shaped keys, API keys, bearer tokens, and URLs with embedded credentials are redacted before the file is written.
## Provider Notes
Docker is the safest default for broad deployment compatibility. For stronger isolation, run Docker with gVisor (`runtime: "runsc"`) or Kata after validating the runtime on the host.
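A hedged sketch of what selecting the gVisor runtime might look like in config; the `provider.docker` nesting is an assumption, as this guide only documents the `runtime: "runsc"` setting itself:

```yaml
# stacyvm.yaml excerpt (illustrative nesting)
provider:
  docker:
    runtime: "runsc"   # gVisor; validate on the host before rollout
```

Whichever runtime you choose, run it through the runtime-conformance checklist below before exposing it to tenants.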
Firecracker requires Linux/KVM, a kernel image, rootfs images, networking setup, and the stacyvm-agent binary available to the runtime. Keep Firecracker disabled in shared templates until a host conformance check passes.
PRoot requires a real rootfs with the binaries your sandboxes need. Use it for restricted environments where Docker and KVM are unavailable, and validate memory/disk limits against the host because PRoot enforcement is not equivalent to VM isolation.
Use runtime-conformance as the signoff checklist for Docker, gVisor, Kata, Firecracker, PRoot, E2B, and custom providers.
