This page brings the Engine up in production from a clean infrastructure slate. It assumes Docker (or any OCI-compatible runtime) and a place to mount a persistent volume for the brain database.
0. Prerequisites
- Container runtime with persistent volume support.
- HTTPS-terminating reverse proxy upstream (Caddy, nginx, ALB, Cloudflare).
- Outbound HTTPS to your LLM provider(s).
- Credentials set up:
  - LLM provider API key.
  - OpenAI API key for embeddings.
  - `AD_ENCRYPTION_KEY` (a Fernet key; see step 3).
  - `ENGINE_KEY_HASH` for the Engine's own auth.
1. Build or pull the image
From the engine repo, build the image and tag it with the version from the `pyproject.toml` version field.
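One way to derive that tag is to read the version field out of `pyproject.toml` before building. This is a sketch; the image name `engine` is an assumption, so substitute your registry path:

```shell
# Extract the version string from pyproject.toml's `version = "..."` line.
pyproject_version() {
  sed -n 's/^version[[:space:]]*=[[:space:]]*"\([^"]*\)".*/\1/p' "$1" | head -n1
}

# Usage, from the engine repo root (image name "engine" is an assumption):
#   docker build -t "engine:$(pyproject_version pyproject.toml)" .
```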
2. Provision a volume for the brain
The brain SQLite file lives at `/data/brain.sqlite` inside the container.
Mount a persistent volume there.
| Platform | How |
|---|---|
| Docker Compose | `volumes: ["engine_data:/data"]` |
| Kubernetes | PVC mounted at `/data` |
| AWS Fargate | EFS access point at `/data` |
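For Docker Compose, the table's first row expands to something like this sketch (the image name and port mapping are assumptions, not confirmed by this page):

```yaml
services:
  engine:
    image: engine:latest     # image name assumed; use your tag from step 1
    ports:
      - "8000:8000"          # port assumed from the reverse-proxy step
    volumes:
      - engine_data:/data    # brain.sqlite persists across restarts
volumes:
  engine_data:
```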
3. Generate the Asset Directory encryption key
If you'll use any MCP connectors, generate a Fernet key and set it as
`AD_ENCRYPTION_KEY`. Do not commit it. Do not rotate it without a
re-encrypt step; connections encrypted under the old key will not decrypt.
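A Fernet key is 32 random bytes, urlsafe-base64 encoded. A sketch using `openssl` (any equivalent generator works):

```shell
# 32 random bytes, base64-encoded, then mapped to the urlsafe alphabet.
# Result is a 44-character Fernet key. Store it in your secret manager.
AD_ENCRYPTION_KEY="$(openssl rand -base64 32 | tr '+/' '-_')"
echo "AD_ENCRYPTION_KEY=${AD_ENCRYPTION_KEY}"
```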
4. Generate the Engine API key hash
Generate a strong API key and its SHA-256 hash. Store only the hash as
`ENGINE_KEY_HASH` in the Engine. The plaintext never lives in the
Engine's environment.
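A sketch of the generate-and-hash step (assumes `openssl` and coreutils `sha256sum`; on macOS substitute `shasum -a 256`):

```shell
# Generate a strong random API key. Hand the plaintext to clients out-of-band;
# it must never appear in the Engine's environment.
ENGINE_KEY="$(openssl rand -hex 32)"

# Derive the SHA-256 hash the Engine stores as ENGINE_KEY_HASH.
ENGINE_KEY_HASH="$(printf '%s' "$ENGINE_KEY" | sha256sum | cut -d' ' -f1)"
echo "ENGINE_KEY_HASH=${ENGINE_KEY_HASH}"
```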
5. Configure environment
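The variables this page names can be collected into an env file along these lines. Only `ENGINE_KEY_HASH` and `AD_ENCRYPTION_KEY` are confirmed names; the provider key names are assumptions for your LLM and embeddings providers:

```shell
# .env sketch for the minimum production configuration.
ENGINE_KEY_HASH=<sha256-of-your-api-key>
AD_ENCRYPTION_KEY=<fernet-key>         # only if you use MCP connectors
OPENAI_API_KEY=<embeddings-key>        # variable name assumed
ANTHROPIC_API_KEY=<llm-provider-key>   # variable name assumed; use your provider's
```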
Minimum for production: `ENGINE_KEY_HASH`, `AD_ENCRYPTION_KEY` (if you use MCP connectors), your LLM provider API key, and the OpenAI API key for embeddings.

6. Run migrations

Migrations run automatically on Engine startup. Bring the Engine up once and watch the logs to confirm migrations applied cleanly.

7. Bring up the service
Start the Engine in the foreground first, hit `/health`, and confirm it
returns `{"status":"ok",...}`; then daemonize.
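That first health check is worth scripting so deploys can block on it. A sketch that polls `/health` until it answers, assuming `curl` and the port from the reverse-proxy step:

```shell
# Poll a health URL until it answers 2xx, or fail after a timeout (seconds).
wait_for_health() {
  url="$1"; timeout="${2:-30}"
  deadline=$(( $(date +%s) + timeout ))
  until curl -fsS "$url" >/dev/null 2>&1; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1   # timed out; Engine never came healthy
    fi
    sleep 1
  done
}

# Usage: wait_for_health http://localhost:8000/health 60 && echo "engine is up"
```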
8. Wire up your reverse proxy
Point your HTTPS-terminating proxy at `localhost:8000` (or wherever the
Engine listens). Required:

- TLS termination.
- Pass through the `X-Engine-Key` header.
- Long timeouts on `/execute` (SSE streams can run minutes).
- No buffering of the response body (otherwise clients won't see streaming).
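As one concrete sketch, an nginx server block satisfying these requirements might look like the following. The hostname, certificate paths, and upstream address are assumptions; note nginx forwards custom request headers such as `X-Engine-Key` upstream by default:

```nginx
server {
    listen 443 ssl;
    server_name engine.example.com;           # hostname assumed
    ssl_certificate     /etc/ssl/engine.pem;  # paths assumed
    ssl_certificate_key /etc/ssl/engine.key;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }

    location /execute {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_buffering off;       # let SSE chunks reach the client unbuffered
        proxy_read_timeout 3600s;  # streams can run for minutes
    }
}
```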
9. Schedule backups
The brain database holds the user's entire memory. Back it up. Options:

- Volume snapshots. Daily snapshot of the volume holding `/data`. Cheapest. Recovery is point-in-time.
- `/memory/export`. Periodic API export to S3 (or equivalent). Restorable to a fresh Engine via `/memory/import`.
- SQLite `.backup`. Run `sqlite3 brain.sqlite ".backup brain-$(date +%s).db"` inside the container on a schedule. Cheap and incremental-friendly.
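The `.backup` option can be wrapped in a small script for cron. This sketch assumes the `sqlite3` CLI is present in the container; `.backup` takes a consistent snapshot even while the Engine is writing:

```shell
# Online backup of a SQLite database via the .backup command.
backup_brain() {
  db="$1"; dest="$2"
  mkdir -p "$(dirname "$dest")"
  sqlite3 "$db" ".backup '$dest'"
}

# Cron inside the container, e.g.:
#   backup_brain /data/brain.sqlite "/data/backups/brain-$(date +%s).db"
```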
10. Configure observability
- Health probe: `GET /health` every 10s.
- Log shipping: configure the container to ship stdout/stderr to your logging stack.
- Metrics: see Observability for what to scrape.
- Alerts: see SLO definitions for the alerts that matter.
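On Kubernetes, the 10s health probe maps directly onto a liveness probe. A sketch, with the port assumed from the reverse-proxy step:

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8000
  periodSeconds: 10
  failureThreshold: 3
```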
You’re live
A production Engine is running. Next steps:

- Set up upgrade rollouts.
- Plan rollback before you need it.
- Wire alerts to your incident tooling and brief the on-call team.

