FastAPI is a natural fit for AI systems in Python. GreatCTO auto-detects both: it adds the ai-system archetype overlay, wires ai-system-specific gates, and runs 34 specialist agents around your existing FastAPI workflow.
GreatCTO reads your pyproject.toml / requirements.txt and detects the fastapi stack plus the ai-system archetype from signals: imports, file structure, env vars, and README hints.
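The detection pass can be pictured as a simple signal scan over the manifest text. This is a hypothetical sketch; `detect_stack` and its signal lists are illustrative, not GreatCTO's actual implementation:

```python
# Hypothetical sketch of manifest-signal detection; not GreatCTO's real code.
def detect_stack(manifest_text: str) -> dict[str, bool]:
    signals = {
        "fastapi": ("fastapi", "uvicorn", "starlette"),
        "ai-system": ("openai", "torch", "transformers", "OPENAI_API_KEY"),
    }
    # A label matches when any of its signal strings appears in the manifest.
    return {label: any(s in manifest_text for s in keys)
            for label, keys in signals.items()}

manifest = 'dependencies = ["fastapi>=0.110", "torch>=2.2"]'
print(detect_stack(manifest))  # {'fastapi': True, 'ai-system': True}
```

In practice the real detector also weighs file structure and env vars, which is why a bare requirements.txt is enough to get a sensible default.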
It attaches the ai-system archetype overlay: EU AI Act + GDPR + OWASP LLM gates, training-data lineage, and model card requirements. Override if your specifics differ; the defaults are sensible for FastAPI-style projects.
qa-engineer runs mypy / ruff / pytest --cov; security-officer scans for SQL injection patterns common in ORMs (SQLAlchemy, Django ORM); performance-engineer profiles async patterns for inference latency.
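The injection pattern the security-officer flags boils down to string-interpolated SQL; parameterized queries are the fix. A minimal stdlib illustration, with sqlite3 standing in for your ORM's driver:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.execute("INSERT INTO users VALUES (1, 'ada')")

name = "ada' OR '1'='1"
# Unsafe: f-string interpolation lets the input rewrite the query.
unsafe = con.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()
# Safe: a placeholder keeps the input as data, never as SQL.
safe = con.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()
print(unsafe, safe)  # [(1,)] []
```

The unsafe variant matches every row because the input closed the string literal and injected `OR '1'='1'`; the parameterized variant treats the whole string as a literal name and matches nothing.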
Bugs you've hit before in other FastAPI projects (connection-pool exhaustion, ORM N+1 queries, retry storms): the agents' Step 0 includes the prior detection order, so known failure modes are checked first. MTTR drops 94% on second occurrence (methodology).
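One of those prior bugs, the retry storm, is typically fixed with exponential backoff plus jitter so clients stop retrying in lockstep. A generic sketch, not GreatCTO-specific:

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0) -> list[float]:
    # Full-jitter exponential backoff: each delay is drawn uniformly from
    # [0, min(cap, base * 2**n)], which desynchronizes retrying clients.
    return [random.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

for delay in backoff_delays(4):
    print(f"retry after {delay:.2f}s")
```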
$ cd my-fastapi-app && npx great-cto init
✓ scanning manifests… found pyproject.toml
✓ stack: fastapi (Python)
✓ archetype: ai-system
✓ overlay: applied
✓ 34 agents ready
$ /start "add model inference endpoint"
▸ architect drafting ARCH-ai-system.md…
▸ pm decomposing into beads tasks…
⚐ gate:plan — your approval needed
Approve → 3 senior-devs run in parallel worktrees → 5 reviewers fan out → gate:ship → deploy. One real run, walked stage by stage: /proof.
No black-box "AI does it all" loop. GreatCTO is a deterministic state machine: 8 stages, 22 nodes, 2 human gates. Every node maps to a real agent on GitHub. Inspect the state machine →
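The gated, deterministic progression can be sketched as a plain lookup table. The stage names below are hypothetical stand-ins; the real graph has 8 stages and 22 nodes:

```python
# Hypothetical gated pipeline; illustrates the shape, not GreatCTO's real graph.
STAGES = ["init", "plan", "gate:plan", "build", "review", "gate:ship", "deploy", "done"]
HUMAN_GATES = {"gate:plan", "gate:ship"}

def advance(stage: str, approved: bool = False) -> str:
    # A human gate blocks until approved; every other transition is deterministic.
    if stage in HUMAN_GATES and not approved:
        return stage
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]

print(advance("gate:plan"))                 # gate:plan (blocked)
print(advance("gate:plan", approved=True))  # build
```

Because every transition is a table lookup, the run is replayable and auditable: the same inputs and approvals always produce the same path.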
$ npx great-cto init
Free, MIT, runs locally. Works in Claude Code, Cursor, OpenAI Codex CLI, Aider, and Continue.