Sandboxes
Hosts provision isolated sandboxes. Connect a host, verify connectivity, and understand how token revocation affects sandbox placement.
Manage sandboxes
Connect a host
- Create or copy a bootstrap token from the control plane.
- Run the installer command on the target host.
- Wait for the host to appear in the console.
The host agent uses the bootstrap token only for enrollment. After successful enrollment, the control plane issues stream credentials for normal operation.
Connected hosts provision sandboxes on demand for isolated AI-agent code execution and interactive runtime access.
Sandbox placement is cluster-scoped: within the selected cluster, placement considers host connectivity, routing rules, tags, traits, and runtime health.
Every tenant has a default BYO cluster for self-hosted infrastructure. Additional clusters can be BYO or GlobalStacks-managed. BYO clusters are the current self-hosted agent path and are the only clusters that accept host enrollment and provisioning tokens. Managed and cloud-backed clusters can provide the same sandbox abstraction later without exposing host provisioning.
When a host has traits that cannot be safely auto-detected, configure them on the agent with `AGENT_TRAITS`, for example:

```
AGENT_TRAITS=public-ingress,private-network,trusted
```

Configured traits are reported alongside auto-detected traits such as `docker`, `amd64`, or `arm64`.
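The merge of configured and auto-detected traits can be sketched as below. Only the `AGENT_TRAITS` variable name comes from this page; the `effective_traits` helper and the detection set are illustrative, not the agent's actual code:

```python
import os

def effective_traits(detected: set) -> set:
    """Union operator-configured AGENT_TRAITS with auto-detected traits."""
    raw = os.environ.get("AGENT_TRAITS", "")
    configured = {t.strip() for t in raw.split(",") if t.strip()}
    return detected | configured

# Configured traits are reported alongside auto-detected ones.
os.environ["AGENT_TRAITS"] = "public-ingress,private-network,trusted"
print(sorted(effective_traits({"docker", "amd64"})))
# ['amd64', 'docker', 'private-network', 'public-ingress', 'trusted']
```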
Confirm connectivity
After install, verify:
- the host is listed as connected
- the host overview reports runtime telemetry
- the host logs view shows live agent operational events
- terminal sessions open successfully
- new sandboxes can be placed on the host when routing policy allows it
If terminal access fails, verify the host is still connected before debugging the terminal session itself.
Create and inspect sandboxes
You can create sandboxes directly without starting from a repository or deployment artifact.
Typical placement inputs:
- cluster ID
- explicit host ID
- required tags
- required traits
- ownership policy
Sandbox source inputs:
- omit `image` and `blueprint` to use the control-plane default image
- set `image` to use an explicit image reference
- set `blueprint` to use an active blueprint ID or unique blueprint name
Blueprint-based sandbox creation uses the blueprint’s immutable internal `artifact_ref` and records the source blueprint on the sandbox. `image` and `blueprint` are mutually exclusive.
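The source-input rules above can be sketched as a small resolver. This is illustrative only; `resolve_source` and `SandboxSource` are assumed names, not the actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SandboxSource:
    kind: str                  # "default", "image", or "blueprint"
    ref: Optional[str] = None  # image reference or blueprint ID/name

def resolve_source(image: Optional[str], blueprint: Optional[str]) -> SandboxSource:
    """image and blueprint are mutually exclusive; omitting both
    falls back to the control-plane default image."""
    if image and blueprint:
        raise ValueError("image and blueprint are mutually exclusive")
    if image:
        return SandboxSource("image", image)
    if blueprint:
        return SandboxSource("blueprint", blueprint)
    return SandboxSource("default")
```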
The console and CLI both expose sandbox creation and inspection. Sandbox detail pages include runtime activity when available.
Runtime references use the container ID as the control-plane handle. When the runtime also reports a container name, it is retained as display metadata for operators.
Create and use blueprints
Blueprints make sandbox sources reusable.
Image-backed blueprints are created from external image references, optionally through a registry connection. External registries are read-only sources. GlobalStacks stores created blueprints in the embedded internal OCI registry and uses the internal digest reference for later sandbox creation. Agents receive short-lived scoped pull credentials over the control-plane stream when they need to pull a blueprint artifact.
Create a registry connection for a private source:
```
gstacks registry create ghcr --provider ghcr --url https://ghcr.io --username robot --credential-secret-ref secret://registry/ghcr
```

Create an image-backed blueprint:

```
gstacks blueprint create ghcr.io/acme/dev:latest --name dev-ready --registry reg_123
```

Create a sandbox from that blueprint:

```
gstacks sandbox create --cluster-id cls_123 --blueprint dev-ready
```

Create a sandbox from that blueprint and clone a repository into it:

```
gstacks sandbox create --cluster-id cls_123 \
  --blueprint dev-ready \
  --repo-url https://github.com/example/app.git \
  --repo-branch main \
  --repo-path /workspace/app
```

Create a blueprint from an existing sandbox:

```
gstacks sandbox blueprint sbx_123 --name captured-devbox
```

Sandbox blueprint capture is dispatched to the connected source host. The blueprint stays `building` until the agent reports success, then becomes `active` with an internal artifact reference. Capture failures mark the blueprint `build_failed`.
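The capture lifecycle described above amounts to a small state transition. A sketch, using the status names from this section and an assumed `agent_report` value:

```python
def next_blueprint_status(current: str, agent_report: str) -> str:
    """A captured blueprint stays 'building' until the source host's
    agent reports back; success activates it, failure marks build_failed."""
    if current != "building":
        return current          # active / build_failed are terminal here
    if agent_report == "success":
        return "active"
    if agent_report == "failure":
        return "build_failed"
    return "building"           # no report yet: keep waiting on the agent
```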
Repository bootstrap status is exposed on the sandbox metadata while the agent clones the repository inside the sandbox container.
Blueprint repository storage can run against the local filesystem for development or an S3-compatible bucket for durable production storage. With S3 configured, blueprint manifests, blobs, and upload state survive control-plane restarts and local cache rebuilds. Agents cache blueprint images by immutable digest, can warm cache entries before first use, and report cache status so placement can prefer a warm host when capacity and policy allow it.
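Warm-cache preference could look roughly like this. It is a sketch: the `cached_digests` field and candidate shape are assumptions, and real placement also weighs capacity and policy:

```python
def prefer_warm_host(candidates: list, digest: str):
    """Among policy-eligible hosts, prefer one whose agent already
    caches the blueprint's immutable digest; otherwise take any candidate."""
    warm = [h for h in candidates if digest in h.get("cached_digests", set())]
    pool = warm or candidates
    return pool[0]["id"] if pool else None
```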
Attach sandboxes to networks
Runtime networks let sandboxes communicate across hosts through stable DNS aliases.
Attach a sandbox to a network with an alias and exposed port:
```
gstacks network attach dev <sandbox-id> --alias api --port 8080
```

After attachment, other allowed network members can target:

```
api.dev.gstacks:8080
```

The sandbox detail page shows attached networks and published aliases. Network membership and policy are managed from the top-level Networks page.
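The alias-to-address mapping is mechanical. A sketch, where the `.gstacks` zone suffix is taken from the example above and the helper name is hypothetical:

```python
def network_address(alias: str, network: str, port: int,
                    zone: str = "gstacks") -> str:
    """Stable DNS target that other allowed network members can reach."""
    return f"{alias}.{network}.{zone}:{port}"

print(network_address("api", "dev", 8080))  # api.dev.gstacks:8080
```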
Existing sandboxes may need to be restarted or reattached before their container DNS settings point at the host agent.
Network diagnostics run through connected agents with typed control messages. They do not use a shell on the control-plane server.
Deprovision and archive sandboxes
Deprovisioning a sandbox removes its active runtime workload from the connected host and then archives the sandbox record instead of removing it from inventory.
After deprovisioning:
- the sandbox status moves to `archived`
- runtime workloads are removed through the connected host agent
- the sandbox record remains visible for later inspection
- sandbox activity history remains available
If the host is disconnected and runtime cleanup cannot be confirmed, deprovisioning is rejected instead of silently orphaning the workload.
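A sketch of that guard, assuming a simple dict-shaped sandbox record (illustrative only, not the control plane's actual data model):

```python
def deprovision(sandbox: dict, host_connected: bool) -> dict:
    """Archive the sandbox record only after runtime cleanup can be
    confirmed through the connected host agent; otherwise reject."""
    if not host_connected:
        raise RuntimeError("host disconnected: runtime cleanup unconfirmed")
    sandbox["runtime"] = None       # workload removed via the host agent
    sandbox["status"] = "archived"  # record stays visible in inventory
    return sandbox
```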
Container rediscovery after agent reinstall
Managed sandboxes are tracked by durable identities written to Docker labels, so a newly installed or reinstalled host agent can rediscover existing workloads and reconnect them to the same sandbox record.
Rediscovery uses this precedence:
- `sandbox_id` from the managed runtime labels
- fallback runtime metadata when sandbox identity is unavailable
When rediscovery succeeds:
- the sandbox is rebound to the reporting host
- runtime metadata is refreshed with `container` (ID) and `container_name` (display metadata)
- ownership transfer is recorded if another host previously managed the sandbox
When rediscovery is rejected:
- archived sandboxes are not reattached
- missing sandboxes are ignored
- a `sandbox.rediscovery.ignored` infrastructure event is recorded with a reason (`archived` or `not_found`)
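The accept/reject rules above can be sketched as one decision function. The label key and record shapes are assumptions; only the statuses and reasons come from this section:

```python
def rediscover(labels: dict, inventory: dict):
    """Decide what a (re)installed agent's container report does.
    Returns ('rebind', sandbox) or ('ignore', reason)."""
    sandbox = inventory.get(labels.get("sandbox_id"))
    if sandbox is None:
        return ("ignore", "not_found")  # missing sandboxes are ignored
    if sandbox["status"] == "archived":
        return ("ignore", "archived")   # archived sandboxes not reattached
    return ("rebind", sandbox)          # rebind to the reporting host
```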
Operator runbook for rediscovery mismatches:
- Open the sandbox activity stream and confirm whether `sandbox.rediscovery.ignored` was emitted.
- Verify the current sandbox status is still `archived` (or the record is deleted, as indicated by `deleted_at`).
- Confirm the host labels include the durable sandbox identity fields.
- Recreate the sandbox if policy requires a fresh managed workload.
Automated rediscovery behavior is validated in tests using fake Docker client seams and fake control-plane stream inputs, so CI does not require a real Docker daemon.
Revoke access
Revoke the bootstrap token when the host should no longer have control-plane access.
After revocation:
- the host record moves to `no_token`
- active sessions are disconnected
- future reconnect attempts are rejected
- new sandboxes are not placed on that host
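The post-revocation state changes above can be sketched as a toy model; the field names are illustrative, not the real host record schema:

```python
def revoke_host_token(host: dict) -> dict:
    """Apply the documented effects of bootstrap-token revocation."""
    host["status"] = "no_token"        # host record moves to no_token
    host["sessions"] = []              # active sessions are disconnected
    host["accepts_reconnect"] = False  # future reconnect attempts rejected
    host["accepts_placement"] = False  # no new sandboxes placed here
    return host
```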
If a revoked host still appears connected after the cleanup window, treat that as a product issue.
Terminal sessions
Terminal access depends on:
- an authenticated browser session
- a connected host
- a valid terminal session created by the control plane
When the backend closes a terminal session, the UI should immediately show the session as closed.
Sandbox terminal access is proxied through the connected host agent, not through a shell on the control-plane server.