For teams using AI to build software
AI makes writing code effortless. That's the problem. Upfront is a set of slash commands for Claude Code that make the thinking process explicit, challenging, and auditable. Every command forces humans to engage — not rubber-stamp.
$ /feature checkout-timeout-fix
Phase 1: Intent
What problem does this solve?
Not what it builds — what problem goes away?
Push back on vague answers.
Refuse to move on until the thinking is substantive.
Every step forces thinking. Every transition produces an artifact.
The AI never leads with suggestions. It asks an open question, waits for your answer, then fills the gaps you missed. "Do you approve this?" produces a yes. "Walk me through what happens when two of these run at the same time" produces understanding.
AI writes the tests. You write the critical code. Concurrency, security, core business logic — the parts where understanding matters most. Tagged [human-writes] in git so you know which code your team actually understands.
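The tagging mechanism isn't specified beyond the `[human-writes]` marker; a minimal sketch, assuming the tag is left as a source comment on human-authored sections (the file and function names here are illustrative, not part of Upfront):

```shell
# Illustrative source file carrying a [human-writes] marker (hypothetical path).
mkdir -p src
cat > src/inventory.py <<'EOF'
# [human-writes] concurrency-critical: humans own this section
def reserve(order_id):
    ...
EOF

# Auditing which code the team wrote by hand is then a plain grep.
grep -rn '\[human-writes\]' src
```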
Every phase produces a record: what was decided, why, what was rejected, what was skipped. The spec is the audit trail of the thinking, not just the conclusions. A reviewer can tell in 30 seconds if real thinking happened.
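The record format itself isn't shown on this page; a hypothetical sketch of what one phase's artifact might capture (every field name below is illustrative, not Upfront's actual schema):

```markdown
## Phase 1: Intent (checkout-timeout-fix)
Decided: retry once with jittered backoff, then surface the error.
Why: timeouts cluster during deploys; silent retry loops masked them.
Rejected: raising the timeout to 60s (hides the underlying contention).
Skipped: load-test phase (covered by last quarter's benchmark).
```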
After all phases complete, an adversarial agent tries to break the result along five axes: correctness, concurrency, boundaries, tests, and security. It fixes obvious issues, asks about judgment calls, and flags design concerns.
Every build runs in an isolated git worktree. Main stays clean. Failed builds? Delete the worktree. No stash dance, no half-done code in your working tree.
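Git's built-in worktree support is all this mechanic needs; a self-contained sketch using a throwaway repo (branch and directory names are illustrative):

```shell
set -e

# Throwaway repo so the sketch is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "init"

# Each build gets its own worktree on its own branch; main stays untouched.
git worktree add -q ../build-checkout-timeout-fix -b checkout-timeout-fix

# A failed build is discarded by deleting the worktree -- no stash dance.
git worktree remove --force ../build-checkout-timeout-fix
git branch -q -D checkout-timeout-fix
git worktree list
```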
Markdown files in .claude/commands/. Copy them into any project. No build step, no SaaS, no API keys, no lock-in. The commands are instructions, not software.
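Because the commands are plain Markdown files, "installing" them is a file copy; a sketch (the directory layout mirrors the description above, the file name and contents are illustrative):

```shell
# Stand in for a checkout of Upfront with one command file.
mkdir -p upfront/.claude/commands project/.claude
printf '# /feature\nPhase 1: Intent\n' > upfront/.claude/commands/feature.md

# Copying the directory into another project is the whole install.
cp -r upfront/.claude/commands project/.claude/
ls project/.claude/commands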
"The thinking is the product, not the code."
Code is generated, reviewed, tested, and eventually deleted. What survives is the team's understanding of their systems. Upfront exists to protect that understanding.