SpecifyThat is live — and it makes context files, not specs
The last update on SpecifyThat said it just needed to work again. It works. It's live.
But the more important thing that shipped isn't the status badge — it's clarity about what the tool actually makes.
The clearest signal came from trying to explain it to someone. A spec is a document you write before you build. A context file is a document your AI reads while it builds. One sits in Notion; the other gets used. I'd been calling it the wrong thing for months.
What shipped
- The output is a context file, not a spec. SpecifyThat now generates the file your AI coding tool reads at the start of every session — stack, architecture, conventions, non-goals — not a document for humans. That distinction matters. Specs sit in Notion. Context files get used.
- v2 flow: describe once, AI fills the rest. You write a sentence or two about your project. AI works through all 13 questions behind the scenes and comes back with a full review screen. Everything pre-filled. You adjust what's wrong — most people change nothing — then generate.
- Works for every major AI coding tool. Cursor, Claude Code, Windsurf, Bolt, Copilot. Each one expects its context in a different file at a different path. The generated file works for all of them. /how-it-works now explains where to drop it for each.
- Multi-part support. If your project has distinct modules or you want to spec a backend and frontend separately, you can run the flow multiple times in one session and get a file per part.
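As a rough sketch of the "drop it where your tool looks" step: the snippet below copies one generated context file into the default locations each tool is commonly documented to read. The file names here are my assumptions from each tool's public docs, not something SpecifyThat guarantees, and the file body is only an illustrative shape; /how-it-works is the authoritative list.

```python
# Hypothetical sketch: per-tool file names are assumptions from public docs,
# and the context body is illustrative, not SpecifyThat's exact output.
from pathlib import Path

CONTEXT = """\
# Project context (illustrative shape)
Stack: ...
Architecture: ...
Conventions: ...
Non-goals: ...
"""

# Where each tool conventionally looks for project-level instructions:
TARGETS = [
    "CLAUDE.md",                        # Claude Code
    ".cursorrules",                     # Cursor (legacy single-file form)
    ".windsurfrules",                   # Windsurf
    ".github/copilot-instructions.md",  # GitHub Copilot
]

for target in TARGETS:
    path = Path(target)
    path.parent.mkdir(parents=True, exist_ok=True)  # .github/ may not exist yet
    path.write_text(CONTEXT)
```

Bolt's convention I'm less sure of, so it's omitted here; check the tool's own docs before relying on any of these paths.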
Those four changes shipped in the same push. They felt like separate things right up until I noticed they were all solving the same problem: getting from a vague idea to a file your AI can actually use, with as little friction as possible.
Why
The old framing — "generate a spec" — was wrong. Nobody pastes a spec into Cursor. They paste context. Renaming it didn't change the output much, but it changed who the tool is for and how it gets used: solo builders who want to start coding in ten minutes, not write documentation.
The v2 flow exists because the original interview — question by question, one at a time — felt like homework. The point was always to skip the planning step, not to disguise it as a chat.
I built this for me first. I skip the planning step too. Now I don't have to.
What's next
Three things in the backlog, in rough priority order:
AI confidence review. After generating, a second AI pass reads the spec against the original project description and surfaces 1-3 gaps — things that are ambiguous, missing, or contradictory. You answer them or skip. The idea came from manually asking a model "review this until you're 95% sure you could build it" — that step should be automatic. Planning to use Claude Opus 4.6 for this, which is built for exactly this kind of sustained critical review.
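To make the idea concrete, here is a minimal sketch of how that second pass could be framed as a prompt. The function name, wording, and gap format are my assumptions for illustration, not SpecifyThat's actual implementation; the real version would send this to the review model and parse its answer.

```python
# Hypothetical sketch of the confidence-review prompt; names and wording
# are assumptions, not SpecifyThat's code.
def build_review_prompt(description: str, context_file: str, max_gaps: int = 3) -> str:
    """Ask a second-pass model to read the generated context file against
    the original project description and surface gaps."""
    return (
        "You are reviewing a context file for an AI coding tool.\n\n"
        f"Original project description:\n{description}\n\n"
        f"Generated context file:\n{context_file}\n\n"
        f"List 1-{max_gaps} gaps: anything ambiguous, missing, or contradictory.\n"
        "Keep reviewing until you are 95% sure you could build this project. "
        "If there are no gaps, reply with NONE."
    )

prompt = build_review_prompt(
    "A habit tracker PWA",
    "# Context\nStack: SvelteKit\nNon-goals: native apps",
)
```

The "95% sure" phrasing mirrors the manual step described above; the user then answers the returned gaps or skips them.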
UI/design constraints. One new question in the interview: "Any UI/design constraints AI should know?" — mobile-first vs desktop, dark mode requirement, component library already in use. Not a brand exercise; just enough context so AI doesn't make arbitrary stack choices. Users with a brand.md can skip it.
Brand discovery mode. A separate interview mode that generates a brand.md instead of a context file — vibe, tone, personality, emotional direction. Parked until there's user signal that people want it, but the interview pattern fits naturally.
None of those ship before there's signal. The rate limit data will tell me when to add billing. Usage will tell me what to build next. The issues are parked until then.
Right now it just needs to be used.
Follow the build.
Get notified when the next tool drops. No newsletters. Just launches.