This page walks through bringing up bap-engine on your laptop,
registering a product, provisioning an Engine for a user, and
admitting a request. By the end you’ll have a working
orchestrator + Engine pair you can hit with curl.
Prerequisites
- Docker with Compose v2.
- Git.
- An Anthropic API key and an OpenAI API key (the orchestrator
forwards these to engine containers).
- A GitHub PAT with read access to
septemberai/engine (only needed
at image build time).
Step 1 — Clone and configure
cd ~/work
git clone git@github.com:septemberai/bap-engine.git
cd bap-engine
Create .env from the template:
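A sketch of the copy step — the template name .env.example is an assumption; check the repo for the actual filename:

```shell
# Copy the env template into place (template name assumed to be .env.example)
cp .env.example .env
```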
Open .env and set, at minimum:
# Database (matches docker-compose's postgres service)
ORCH_DATABASE_URL=postgresql://orch_user:orch_local_dev@postgres:5432/orchestrator
# Encryption + admin
# Note: docker compose does not expand $( ) inside .env files. Run
#   python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
# in a shell and paste the printed key as a literal value below.
ORCH_MASTER_KEY=paste-generated-fernet-key-here
ORCH_ADMIN_KEY=dev-admin-key
# Engine image
ORCH_ENGINE_IMAGE=september-engine:2.3.0
ORCH_ENGINE_BACKEND=docker
ORCH_ENGINE_ENV_PASSTHROUGH=LLM_API_KEY,OPENAI_API_KEY,ANTHROPIC_API_KEY,GEMINI_API_KEY
# LLM provider keys (forwarded to engines)
LLM_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
# Build-time only
GITHUB_TOKEN=ghp_...
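ORCH_MASTER_KEY must be a literal Fernet key: 32 random bytes, urlsafe-base64 encoded. If the cryptography package isn’t installed, the standard library alone can produce a key in the same format (a sketch):

```shell
# Print a value suitable for ORCH_MASTER_KEY. Same format as
# cryptography's Fernet.generate_key(): 32 random bytes, urlsafe base64.
python3 -c "import os, base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())"
```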
For the full list of variables see
Environment variables.
Step 2 — Build the engine image
The orchestrator launches engine containers from a local image. Build
it once:
cd ~/work/engine # the engine repo, not bap-engine
docker build -t september-engine:2.3.0 --target prod .
cd -
Your local Docker now has september-engine:2.3.0, which matches
ORCH_ENGINE_IMAGE above.
Step 3 — Start the orchestrator
From the bap-engine repo root:
docker compose up postgres orchestrator
The first run pulls Postgres, builds the orchestrator image, applies
migrations, and starts both. You’ll see:
postgres-1 | database system is ready to accept connections
orchestrator-1 | INFO: Uvicorn running on http://0.0.0.0:8000
orchestrator-1 | INFO: Application startup complete.
Leave it running. Open a second terminal.
Step 4 — Confirm the orchestrator is up
curl -fsS http://localhost:8000/health
You should get a 200 response.
/health is the only endpoint that doesn’t require auth.
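If you’re scripting the bring-up, a small retry loop avoids racing the orchestrator’s startup. A sketch; the 30-second budget is arbitrary:

```shell
# Poll a URL until it answers with success, for up to ~30 seconds.
wait_for_health() {
  url=$1
  i=0
  while [ "$i" -lt 30 ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up waiting for $url" >&2
  return 1
}

# Usage: wait_for_health http://localhost:8000/health
```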
Step 5 — Register a product
The orchestrator is multi-product. Before you can provision engines,
register a product. This call requires X-Admin-Key (the
ORCH_ADMIN_KEY you set in step 1).
curl -X POST http://localhost:8000/products/register \
-H "X-Admin-Key: dev-admin-key" \
-H "Content-Type: application/json" \
-d '{
"slug": "demo",
"display_name": "Demo product",
"policy": {
"max_engines": 10,
"rate_limit_rpm": 60
}
}'
Response:
{
"product_id": "5c2f...",
"slug": "demo",
"platform_api_key": "pk-sept-aBcDeFg..."
}
Save the platform_api_key. It’s the only time the orchestrator
returns it. Set it as an env var for convenience:
export PLATFORM_KEY=pk-sept-aBcDeFg...
Step 6 — Admit a user
POST /engines/{user_id}/admit is the primary entry point. It
checks policy, auto-provisions if allowed, and returns the engine
URL + key.
curl -X POST http://localhost:8000/engines/demo-user-001/admit \
-H "X-Platform-Key: $PLATFORM_KEY" \
-H "Content-Type: application/json" \
-d '{
"auto_provision": true,
"auto_wake": true
}'
Response:
{
"admitted": true,
"engine": {
"engine_id": "...",
"user_id": "demo-user-001",
"status": "running",
"url": "http://engine-...:8000",
"api_key": "sk-sept-aBcDeFgHi...",
"engine_version": "september-engine:2.3.0",
"created_at": "2026-04-27T...",
"last_health_at": "2026-04-27T..."
}
}
The orchestrator just spun up a fresh Engine container, applied
migrations to its brain, generated an API key, and returned the
endpoint. All in a few seconds.
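Rather than copying the URL and key by hand, you can pull them out of the admit response. A sketch using only the Python standard library, run here against a sample response shaped like the one above (field names taken from it):

```shell
# In practice, capture the real response: RESPONSE=$(curl ... /admit ...)
RESPONSE='{"admitted": true, "engine": {"url": "http://engine-abc:8000", "api_key": "sk-sept-example"}}'

# Extract the engine endpoint and key from the JSON
ENGINE_URL=$(printf '%s' "$RESPONSE" | python3 -c "import sys, json; print(json.load(sys.stdin)['engine']['url'])")
ENGINE_KEY=$(printf '%s' "$RESPONSE" | python3 -c "import sys, json; print(json.load(sys.stdin)['engine']['api_key'])")

echo "$ENGINE_URL"
```

These are the same two values Step 7 exports by hand.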
Step 7 — Talk to the Engine
The product (you, in this example) calls the Engine directly.
The orchestrator stays out of the data path.
export ENGINE_URL=http://localhost:9001 # or whatever port came back
export ENGINE_KEY=sk-sept-aBcDeFgHi...
curl -N -X POST "$ENGINE_URL/execute" \
-H "X-Engine-Key: $ENGINE_KEY" \
-H "Content-Type: application/json" \
-d '{"message": "Say hello.", "task_id": "demo-001"}'
You should see the SSE stream from the Engine. It worked.
If the engine container is bound to 127.0.0.1:9001 only (the default
in docker-compose.yml), the URL above works from your host machine.
From other containers in the same Docker network, use the engine’s
container hostname.
Step 8 — Observe the fleet
curl -fsS http://localhost:8000/status \
-H "X-Platform-Key: $PLATFORM_KEY"
{
"total": 1,
"by_status": {"running": 1},
"health": "ok",
"unhealthy_count": 0
}
curl -fsS http://localhost:8000/engines \
-H "X-Platform-Key: $PLATFORM_KEY"
Lists the user’s engine.
Step 9 — Shut it down
# stop the engine
curl -X POST http://localhost:8000/engines/demo-user-001/stop \
-H "X-Platform-Key: $PLATFORM_KEY"
# destroy it (removes container + brain volume + registry row)
curl -X DELETE http://localhost:8000/engines/demo-user-001 \
-H "X-Platform-Key: $PLATFORM_KEY"
# tear down the orchestrator
docker compose down
To wipe the Postgres state too:
docker compose down --volumes
What you’ve just done
You’ve run the full path a real product takes. The orchestrator
stood up a fresh Engine, gave you a URL + key, and watched its
health. The Engine ran an SSE turn against the LLM. When you were
done, you tore it down cleanly.
Everything that lives in production lives in this loop. Production
deploys add: TLS upstream of the orchestrator, a real secret manager,
a backup strategy for the Postgres + brain volumes, and observability.
Where to go next