Phoenix (Elixir) is a workable choice for an AI system. GreatCTO auto-detects both: it adds the ai-system archetype overlay, wires ai-system-specific gates, and runs 34 specialist agents around your existing Phoenix workflow.
GreatCTO reads your mix.exs and detects the phoenix stack plus the ai-system archetype from signals: dependencies, file structure, env vars, README hints.
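As a rough sketch, here is the kind of mix.exs deps block that would carry both signals. The specific detection heuristics are an assumption on our part, and every dependency besides :phoenix and :ecto_sql is illustrative, not a list GreatCTO documents:

```elixir
# mix.exs (hypothetical) — deps that plausibly trigger detection
defp deps do
  [
    {:phoenix, "~> 1.7"},      # framework dep → stack: phoenix
    {:ecto_sql, "~> 3.10"},
    {:openai_ex, "~> 0.5"}     # LLM client dep → ai-system signal (assumed heuristic)
  ]
end
```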
Attaches the ai-system archetype overlay: EU AI Act + GDPR + OWASP LLM gates, training-data lineage, model card requirements. Override if your specifics differ; the defaults are sensible for Phoenix-style projects.
qa-engineer runs mix test --cover / mix credo --strict; security-officer audits Ecto query injection + Phoenix CSRF; performance-engineer profiles the BEAM scheduler + the Ecto connection pool.
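To make the injection audit concrete, here is a minimal sketch of the pattern it flags versus the safe form. The module and function names are hypothetical; only the Ecto calls are real API:

```elixir
defmodule MyApp.Accounts do
  import Ecto.Query

  # Unsafe: interpolating user input straight into raw SQL —
  # this is the shape an injection audit flags.
  def find_user_unsafe(repo, email) do
    Ecto.Adapters.SQL.query!(repo, "SELECT * FROM users WHERE email = '#{email}'")
  end

  # Safe: the ^pin operator binds email as a query parameter,
  # so the adapter escapes it instead of splicing it into SQL.
  def find_user(repo, email) do
    repo.one(from u in "users", where: u.email == ^email, select: %{id: u.id})
  end
end
```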
For bugs you've hit before in other Phoenix projects (connection-pool exhaustion, N+1 queries, retry storms), the agent's Step 0 includes the prior detection order. MTTR drops 94% on second occurrence (methodology).
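The N+1 case is worth a sketch, since Ecto never lazy-loads and the pattern is easy to write by hand. Repo, Post, and :author are assumed schema and repo modules, not part of the original:

```elixir
import Ecto.Query

# N+1: one query for posts, then one extra query per post for its author.
posts = Repo.all(Post)
authors = Enum.map(posts, fn p -> Repo.get!(User, p.author_id) end)

# Fix: preload the association — two queries total, regardless of post count.
posts = Repo.all(from p in Post, preload: [:author])
```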
$ cd my-phoenix-app && npx great-cto init
✓ scanning manifests… found manifest
✓ stack: phoenix (Elixir)
✓ archetype: ai-system
⚠ archetype + stack combo is unusual — review overlay manually
✓ 34 agents ready

$ /start "add model inference endpoint"
▸ architect drafting ARCH-ai-system.md…
▸ pm decomposing into beads tasks…
⚐ gate:plan — your approval needed
Approve → 3 senior-devs run in parallel worktrees → 5 reviewers fan out → gate:ship → deploy. One real run, walked stage-by-stage: /proof.
Phoenix (Elixir) is not a typical fit for an AI system. The archetype overlay still attaches, but you may want to override its defaults more aggressively. Check the ai-system archetype page for the typical stack list and decide whether yours is a right-tool, right-archetype case.
No black-box "AI does it all" loop. GreatCTO is a deterministic state machine — 8 stages, 22 nodes, 2 human gates. Every node maps to a real agent on GitHub. Inspect the state machine →
$ npx great-cto init
Free, MIT, runs locally. Works in Claude Code, Cursor, OpenAI Codex CLI, Aider, and Continue.