- Code models — `preview`, `duce`, and `magma` all handle code. Pick the tier that matches your latency/quality budget.
- Sandboxed execution — run generated code in an isolated environment and capture stdout, stderr, artifacts, and exit codes.
- Agent tools — plug Geoff into OpenClaw, OpenCode, Claude Code, or Geoff Agent and let a model drive multi-step tasks against your repo or a fresh sandbox.
1. One-shot code generation
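As a minimal sketch of what a one-shot request might look like — the base URL, model name, and OpenAI-style message format here are assumptions, not confirmed API details:

```python
import json

# Placeholder endpoint -- substitute your real base URL.
BASE_URL = "https://api.geoff.example/v1/chat/completions"

def build_request(prompt: str, model: str = "duce") -> dict:
    """Build the JSON body for a single chat-completion call
    (assumed OpenAI-compatible shape)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a coding assistant."},
            {"role": "user", "content": prompt},
        ],
    }

body = build_request("Write a Python function that reverses a string.")
print(json.dumps(body, indent=2))
```

POST that body to the chat endpoint with your usual HTTP client; swap `model` for `preview` or `magma` as needed.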
Use the chat endpoint for snippets, refactors, tests, or reviews.

2. Execute code in a sandbox
Run untrusted or model-generated code without spinning up your own runtime. Python, JavaScript, TypeScript, Rust, Go, and shell are supported.
Use `sandbox-create` to provision, then `sandbox-exec` to run commands against the same filesystem.
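To illustrate the result shape — stdout, stderr, and exit code — here is a local stand-in that runs a snippet in a subprocess. This is not the Geoff sandbox API, just a sketch of the fields an execution call returns:

```python
import subprocess
import sys

def run_snippet(code: str, timeout: float = 10.0) -> dict:
    """Run a Python snippet in a subprocess and capture the same
    fields a sandboxed execution reports."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return {
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "exit_code": proc.returncode,
    }

result = run_snippet("print(2 + 2)")
print(result)  # → {'stdout': '4\n', 'stderr': '', 'exit_code': 0}
```

The real sandbox adds isolation and artifact capture on top of this, but consuming the result looks the same: check `exit_code`, then read `stdout`/`stderr`.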
3. Generate → execute → iterate
A minimal self-correcting loop: ask a model for code, run it, feed failures back.
4. Drive an agent against your repo
For real codebases, skip reinventing the harness. Point any of these agents at Geoff as the backend and let it plan, edit, and test:

Claude Code
Run Anthropic’s CLI agent against Geoff models.
OpenCode
Open-source coding agent that works with any OpenAI-compatible endpoint.
OpenClaw
Lightweight terminal agent for repo-scoped tasks.
Geoff Agent
First-party agent tuned for the Geoff stack.
Pick a model
| Model | Good for |
|---|---|
| `preview` | Autocomplete, quick scaffolds, linting explanations. |
| `duce` | Day-to-day coding: feature work, tests, refactors. |
| `magma` | Long-context work — whole-repo edits, migrations. |