
The Engine ships with a web_search platform tool that the agent can call to look up information on the open web. It uses Brave Search by default, with Tavily as fallback.

When the agent uses it

The agent reaches for web_search when:
  • The question requires recent information (events, prices, releases) the model wouldn’t know from training.
  • The question involves a niche topic or named entity not well-covered by general training.
  • The user explicitly asks the agent to look something up.
The agent doesn’t use it for general knowledge questions. The model answers those directly without burning a search.

Configuration

Two environment variables:
| Name | Required | Default | Purpose |
|------|----------|---------|---------|
| `BRAVE_API_KEY` | for Brave path | (none) | Brave Search API key. Free tier: 2000 queries/month. |
| `TAVILY_API_KEY` | for fallback | (none) | Tavily API key. Free tier: 1000 queries/month. |
If both are unset, the web_search tool is registered but returns an error. The agent will surface “I can’t search the web — the search tool isn’t configured.”

What the tool returns

A search returns a list of results:
```json
[
  {
    "title": "...",
    "url": "...",
    "snippet": "...",
    "published_at": "..."
  },
  ...
]
```
The agent reads the snippets, decides which result is relevant, and optionally follows up by reading the full page (via a separate web_fetch tool, if configured).

Quotas

Brave’s free tier gives 2000 queries/month. For an active agent, expect that to last 1–2 weeks. Options:
  • Upgrade Brave’s tier.
  • Cache aggressively at the application layer (same query within an hour returns the cached result).
  • Use Tavily as primary instead of fallback.
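The application-layer caching option above can be sketched as a simple TTL cache keyed on the normalized query. This is a minimal sketch, not the Engine's implementation; real code would also bound the cache size and probably persist it.

```python
import time

CACHE_TTL = 3600  # one hour, per the suggestion above
_cache: dict[str, tuple[float, list]] = {}

def cached_search(query: str, search_fn, now=time.time) -> list:
    """Return the cached result for a repeated query within the TTL;
    otherwise call through to search_fn and cache the result."""
    key = query.strip().lower()  # normalize so "Foo" and "foo " collide
    hit = _cache.get(key)
    if hit and now() - hit[0] < CACHE_TTL:
        return hit[1]  # cache hit: no quota spent
    results = search_fn(query)
    _cache[key] = (now(), results)
    return results
```

Every hit inside the window saves one query against the monthly quota.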

Pitfalls

  • Searches that should have been memory lookups. If the agent is searching for the user’s own data, that’s a sign retrieval from the brain isn’t surfacing what it should. Look at memory writes during the relevant past sessions.
  • Searches that should have been a known URL fetch. “Find the Anthropic docs” → search → “Read the first result” is wasteful. Add a small docs_search tool that goes to known doc sites directly.
  • Cascading searches. The agent searches, reads the first result, searches again from the result. Five searches deep, you’ve spent significant tokens. Cap the depth in the agent’s system prompt.
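The depth cap belongs in the system prompt, but a harness-side backstop is cheap insurance if the model ignores it. The sketch below is an assumption about how you might enforce it; `guarded_search` and the per-task `state` dict are hypothetical, not Engine APIs.

```python
MAX_SEARCHES_PER_TASK = 3  # assumed budget; tune to your workload

def guarded_search(query: str, search_fn, state: dict) -> dict:
    """Harness-side backstop for the prompt-level depth cap:
    refuse further searches once the per-task budget is spent."""
    used = state.get("searches", 0)
    if used >= MAX_SEARCHES_PER_TASK:
        return {"error": "search budget exhausted for this task"}
    state["searches"] = used + 1
    return {"results": search_fn(query)}
```

Returning an explicit error (rather than silently dropping the call) lets the agent explain to the user why it stopped searching.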

See also