Chapter 21: /speckit.plan — Implementation Planning
Learning Objectives
By the end of this chapter, you will be able to:
- Explain what /speckit.plan does and how it transforms specifications into implementation plans
- Identify the inputs (spec.md) and outputs (plan.md, data-model.md, contracts/, research.md, quickstart.md)
- Understand specification analysis: how requirements, user stories, and acceptance criteria drive the plan
- Apply constitutional compliance to ensure plans align with project constitution
- Translate business requirements into technical decisions with rationale
- Navigate the plan template structure: technology decisions, data models, API contracts, phase gates, file creation order
- Generate an implementation plan for a feature through a hands-on tutorial
- Validate plans against phase gates (Simplicity, Anti-Abstraction, Integration-First)
- Use quickstart validation scenarios and research documents effectively
What /speckit.plan Does
/speckit.plan is the second command in the Spec Kit workflow. It transforms a feature specification—the WHAT and WHY—into an implementation plan: the HOW. The plan bridges the gap between "what users need" and "what we will build."
The Transformation
Input: specs/[branch-name]/spec.md (from /speckit.specify)
Output:
- plan.md — Implementation plan with phases, technology decisions, and file creation order
- data-model.md — Entity definitions, field types, relationships
- contracts/ — API contracts (OpenAPI, AsyncAPI, or similar) with request/response schemas
- research.md — Optional: library comparisons, technology evaluations, performance benchmarks
- quickstart.md — Key scenarios for rapid validation and smoke testing
The command does not generate code. It generates design—the blueprint that /speckit.tasks will break into executable work units.
Inputs: The Feature Specification
/speckit.plan reads spec.md as its primary input. It extracts:
Requirements
- Functional requirements (FR-001, FR-002, ...): What the system must do
- Acceptance criteria (AC-001, AC-002, ...): Testable conditions for "done"
- User stories: Who wants what and why
- Edge cases: Boundary conditions and error paths
- Non-goals: What we are explicitly not building
How the Plan Uses Each
| Spec Section | Plan Usage |
|---|---|
| Functional Requirements | Drives API design, data model, and component breakdown |
| Acceptance Criteria | Becomes validation scenarios in quickstart.md; informs test design |
| User Stories | Informs UX flow and API sequencing |
| Edge Cases | Drives error handling design, validation rules, and contract definitions |
| Non-Goals | Constrains scope; prevents plan from including out-of-scope work |
If the spec is vague, the plan will be vague. If the spec has [NEEDS CLARIFICATION] markers, the plan may make assumptions—document them so they can be revisited.
Outputs: The Plan Artifacts
plan.md
The central artifact. It contains:
- Architecture overview: High-level structure (components, layers, boundaries)
- Technology decisions: What we use and why (libraries, frameworks, patterns)
- Implementation phases: Ordered stages (e.g., Phase 1: Contracts, Phase 2: Data Layer, Phase 3: API, Phase 4: Integration)
- File creation order: Which files to create first (contracts → tests → source)
- Phase gates: Checkpoints (Simplicity, Anti-Abstraction, Integration-First) that must pass before proceeding
- Dependencies: What blocks what; sequencing constraints
data-model.md
Entity definitions for the feature:
- Entities: ChatRoom, Message, Participant, etc.
- Fields: Name, type, constraints, nullable
- Relationships: One-to-many, many-to-many
- Indexes: For query performance
This document informs database schema, ORM models, and API response shapes.
contracts/
API contracts in machine-readable format (OpenAPI, AsyncAPI, etc.):
- Endpoints: Path, method, request/response schema
- Events: For real-time features (e.g., message.created, user.joined)
- Schemas: Reusable types (Message, Room, User)
Contracts enable contract-first development: tests and clients can be built against the contract before implementation exists.
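Contract-first also means response shapes can be asserted before any server exists. A minimal hand-rolled check for the Room schema is sketched below (the field names mirror the OpenAPI excerpt shown later in this chapter; in practice a validator generated from the contract would replace this):

```typescript
// Response-shape check for a hypothetical Room schema.
// Field names and limits follow the OpenAPI excerpt in this chapter;
// a generated validator would normally do this work.
interface RoomResponse {
  id: string;
  name: string;
  description?: string;
}

function isValidRoomResponse(body: unknown): body is RoomResponse {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  if (typeof b.id !== "string") return false;
  if (typeof b.name !== "string" || b.name.length > 100) return false;
  if (b.description !== undefined &&
      (typeof b.description !== "string" || b.description.length > 500)) return false;
  return true;
}
```

The same assertion can run against a stubbed response today and the real API later, which is exactly what makes parallel client/server work possible.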
research.md (Optional)
When the plan involves technology choices (e.g., "which WebSocket library?"), research.md documents:
- Library comparisons: Option A vs. B vs. C
- Criteria: Performance, bundle size, maintenance, community
- Recommendation: What we chose and why
- Benchmarks: If relevant (latency, throughput)
Research reduces "we should have chosen X" regret by making decisions explicit and documented.
quickstart.md
Key scenarios for rapid validation:
- Scenario 1: Create room, send message, receive message (happy path)
- Scenario 2: Non-participant cannot access messages (authorization)
- Scenario 3: 50 concurrent users send messages (load)
These scenarios become smoke tests or manual validation checklists. They answer: "How do I quickly verify this works?"
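Scenario 1 can be scripted even before the real stack exists. Below is a sketch against an in-memory stand-in (the `InMemoryChat` class is hypothetical; only the numbered steps come from the scenario itself):

```typescript
// Happy-path smoke scenario scripted against an in-memory stand-in.
// InMemoryChat is a placeholder for the real client; the steps
// (join, send, receive) follow quickstart.md Scenario 1.
class InMemoryChat {
  private participants = new Set<string>();
  private inboxes = new Map<string, string[]>();

  join(user: string): void {
    this.participants.add(user);
    this.inboxes.set(user, []);
  }

  send(from: string, content: string): void {
    if (!this.participants.has(from)) throw new Error("not a participant");
    for (const user of this.participants) {
      if (user !== from) this.inboxes.get(user)!.push(content);
    }
  }

  received(user: string): string[] {
    return this.inboxes.get(user) ?? [];
  }
}

function runHappyPath(): boolean {
  const room = new InMemoryChat();  // step 1: create room
  room.join("userA");               // step 2
  room.join("userB");               // step 3
  room.send("userA", "Hello");      // step 4
  const ok1 = room.received("userB").includes("Hello");   // step 5
  room.send("userB", "Hi back");    // step 6
  const ok2 = room.received("userA").includes("Hi back"); // step 7
  return ok1 && ok2;
}
```

Swapping the stand-in for the real client later turns the same script into an automated smoke test.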
Specification Analysis
/speckit.plan analyzes the specification through several lenses.
Requirement Extraction
The command parses structured content:
- FR-001, FR-002 → Functional requirements
- AC-001, AC-002 → Acceptance criteria
- User stories in "As a... I want... so that..." format
It builds an internal model: what must exist, what must be testable, what flows exist.
User Journey Mapping
User stories imply flows:
- "Create room" → Room creation API, Room entity
- "Send message" → Message API, Message entity, delivery mechanism
- "See messages in real time" → Real-time delivery (WebSocket, SSE, or polling)
The plan connects each user story to technical components.
Edge Case Handling
Edge cases from the spec become:
- Validation rules (max message length, empty message rejection)
- Error responses (404, 403, 429)
- Boundary logic (room at capacity, user offline)
The plan explicitly addresses each edge case so implementation doesn't guess.
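These rules are small enough to express directly. A TypeScript sketch follows, with limits taken from this chapter's example spec (max 4000 characters per message, max 100 participants per room); the error-code names are illustrative:

```typescript
// Edge cases from the spec expressed as explicit, testable validation rules.
// MAX_CONTENT matches the Message content constraint (4000 chars);
// ROOM_CAPACITY matches the max-100-participants constraint.
const MAX_CONTENT = 4000;
const ROOM_CAPACITY = 100;

type ValidationError = "EMPTY_MESSAGE" | "MESSAGE_TOO_LONG" | "ROOM_FULL";

function validateMessage(content: string): ValidationError | null {
  if (content.trim().length === 0) return "EMPTY_MESSAGE";    // → 400
  if (content.length > MAX_CONTENT) return "MESSAGE_TOO_LONG"; // → 400
  return null;
}

function validateJoin(participantCount: number): ValidationError | null {
  return participantCount >= ROOM_CAPACITY ? "ROOM_FULL" : null; // → 403/429 per contract
}
```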
Constitutional Compliance
If your project has a constitution (see Part V), /speckit.plan checks the plan against it.
What Is Checked
- Library-First Principle: Does the plan prefer existing libraries over custom implementation?
- Simplicity Gate: Are we avoiding unnecessary abstractions?
- Anti-Abstraction Gate: Do we have 3+ use cases before introducing an abstraction?
- Integration-First Testing: Are we testing through the real API, not just unit tests?
- CLI Interface Mandate: If applicable, does the plan include CLI for operations?
How Compliance Is Enforced
- Pre-generation: The plan template includes constitution references. AI (or rules) are instructed to align with them.
- Post-generation: plan.md includes a compliance checklist: "Does this design violate any constitutional principle?"
- Phase gates: The Simplicity, Anti-Abstraction, and Integration-First gates are explicit checkpoints. The plan cannot proceed without passing them.
When the Plan Violates the Constitution
If the generated plan violates a constitutional principle:
- The plan should flag it (e.g., "WARNING: Proposed abstraction has only 1 use case; violates Anti-Abstraction Gate")
- You must either: (a) revise the plan to comply, or (b) document an exception with justification
- Exceptions should be rare and reviewed
Technical Translation
The plan translates business requirements into technical decisions.
From "Messages delivered within 2 seconds" to...
- Real-time mechanism: WebSocket, Server-Sent Events, or long polling?
- Scaling: Single server vs. distributed (Redis pub/sub, message queue)?
- Client: How does the frontend receive updates?
The plan makes these decisions and documents rationale. It does not leave them to implementation-time guesswork.
From "Only participants can see messages" to...
- Authorization model: Check room membership on every message fetch
- Data model: Participant entity linked to Room and User
- API: 403 when non-participant requests messages
The plan specifies the authorization approach so implementation is consistent.
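That consistency is easiest to achieve when the membership check is a single shared function used by both the REST route and the WebSocket handler. A sketch (the entity shape is illustrative):

```typescript
// One membership check shared by REST and WebSocket paths, so
// "only participants can see messages" is enforced identically everywhere.
interface Participant {
  roomId: string;
  userId: string;
}

function canReadMessages(
  userId: string,
  roomId: string,
  participants: Participant[]
): boolean {
  return participants.some(p => p.roomId === roomId && p.userId === userId);
}

// An Express-style route would gate the fetch with something like:
//   if (!canReadMessages(userId, roomId, participants)) return res.status(403).end();
```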
From "50 concurrent users per room" to...
- Load target: Plan includes this as a non-functional requirement
- Testing: quickstart.md includes a 50-user scenario
- Architecture: Single WebSocket server may suffice; document if scaling is needed later
The Plan Template Structure
1. Technology Decisions with Rationale
Every significant technology choice is documented:
## Technology Decisions
### TD-001: WebSocket for Real-Time Delivery
- **Choice**: Socket.io (Node.js) / native WebSocket (browser)
- **Alternatives**: Server-Sent Events, long polling
- **Rationale**: WebSocket provides bidirectional, low-latency communication. Socket.io adds reconnection and fallback. For 50 users/room, single-server is sufficient.
- **Constitution**: Library-First satisfied (Socket.io is established library)
2. Data Models with Field Definitions
## Data Model
### ChatRoom
| Field | Type | Constraints |
|-------|------|-------------|
| id | UUID | PK, generated |
| name | string | required, max 100 chars |
| description | string | optional, max 500 chars |
| created_at | timestamp | required |
| created_by | UUID | FK → User |
### Message
| Field | Type | Constraints |
|-------|------|-------------|
| id | UUID | PK, generated |
| room_id | UUID | FK → ChatRoom |
| sender_id | UUID | FK → User |
| content | string | required, max 4000 chars |
| created_at | timestamp | required |
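The tables above translate almost mechanically into code. A TypeScript sketch (UUIDs and timestamps are represented as `string`/`Date`; the `makeMessage` factory is a hypothetical helper that enforces the content constraint):

```typescript
import { randomUUID } from "node:crypto";

// Direct translation of the ChatRoom and Message tables above.
// Comments mirror the Constraints column.
interface ChatRoom {
  id: string;           // UUID, PK, generated
  name: string;         // required, max 100 chars
  description?: string; // optional, max 500 chars
  createdAt: Date;      // required
  createdBy: string;    // UUID, FK → User
}

interface Message {
  id: string;       // UUID, PK, generated
  roomId: string;   // UUID, FK → ChatRoom
  senderId: string; // UUID, FK → User
  content: string;  // required, max 4000 chars
  createdAt: Date;  // required
}

// Hypothetical factory that enforces the content constraint at creation time.
function makeMessage(roomId: string, senderId: string, content: string): Message {
  if (content.length === 0 || content.length > 4000) throw new Error("invalid content");
  return { id: randomUUID(), roomId, senderId, content, createdAt: new Date() };
}
```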
3. API Contracts with Request/Response Schemas
# contracts/chat-api.yaml (OpenAPI excerpt)
paths:
/rooms:
post:
summary: Create chat room
requestBody:
content:
application/json:
schema:
type: object
required: [name]
properties:
name: { type: string, maxLength: 100 }
description: { type: string, maxLength: 500 }
responses:
'201':
description: Room created
content:
application/json:
schema: { $ref: '#/components/schemas/Room' }
'400':
description: Validation error
4. Phase Gates
## Phase Gates
### Simplicity Gate
Before implementation, verify:
- [ ] No abstractions without 3+ use cases
- [ ] Using existing libraries where possible
- [ ] No speculative "future-proofing"
### Anti-Abstraction Gate
- [ ] No generic repository if only one entity type
- [ ] No message bus if only one consumer
- [ ] No plugin system if only one plugin
### Integration-First Gate
- [ ] Tests hit real API (not just mocks)
- [ ] Contract tests validate API shape
- [ ] Quickstart scenarios are automated or manually runnable
5. File Creation Order
## File Creation Order
1. contracts/chat-api.yaml
2. contracts/events.yaml
3. data-model.md (refined from plan)
4. tests/contract/chat-api.test.ts
5. src/entities/room.ts
6. src/entities/message.ts
7. src/repositories/room-repository.ts
8. src/repositories/message-repository.ts
9. src/services/chat-service.ts
10. src/api/chat-routes.ts
11. src/websocket/chat-handler.ts
12. tests/integration/chat-flow.test.ts
Order matters: contracts first (so tests can be written), then entities, repositories, services, API. Dependencies flow downward.
Tutorial: Generate an Implementation Plan for "Real-Time Chat"
This tutorial continues from Chapter 19. You have specs/004-real-time-chat/spec.md and want to generate the implementation plan.
Prerequisites
- Completed spec from Chapter 19 tutorial
- Spec Kit installed
- Project constitution (if your project has one) in specs/global/constitution.md or similar
Step 1: Run /speckit.plan with Technology Preferences
From your project root:
/speckit.plan
Feature: 004-real-time-chat
Technology preferences:
- Backend: Node.js with Express (existing stack)
- Real-time: Prefer Socket.io (we use it elsewhere)
- Database: PostgreSQL (existing)
- Testing: Jest, Supertest for API tests
Constitution: Align with specs/global/constitution.md
What happens:
- Spec Kit reads specs/004-real-time-chat/spec.md
- Optionally reads specs/global/constitution.md
- Analyzes requirements, user stories, acceptance criteria
- Generates plan.md, data-model.md, contracts/, research.md (if tech choices needed), quickstart.md
- Writes all artifacts to specs/004-real-time-chat/
Step 2: Review Generated Plan
Open specs/004-real-time-chat/plan.md. Review:
Architecture Overview:
- Does it match your mental model?
- Are components clearly bounded?
- Are there any surprises?
Technology Decisions:
- Are choices justified?
- Do they align with your stack?
- Any alternatives you'd prefer?
Implementation Phases:
- Is the order logical?
- Are dependencies correct?
- Can any phases run in parallel?
Phase Gates:
- Can you pass Simplicity, Anti-Abstraction, Integration-First?
- Any violations to fix?
Step 3: Validate Against Phase Gates
Work through each gate:
Simplicity Gate:
- Are we using Socket.io (existing library) or building custom WebSocket handling? ✓ Library
- Are we adding a message queue for 50 users? Probably not—single server suffices. ✓ No over-engineering
- Are we building a generic "realtime service" for one feature? If yes, flag it. ✗ Violation
Anti-Abstraction Gate:
- Generic repository for Room and Message? Only 2 entities—consider simple data access. May need to simplify.
- Event bus for message delivery? Only one consumer (WebSocket handler). ✗ Violation—use direct call.
Integration-First Gate:
- Do we have contract tests? Plan should include them. ✓
- Do we test through real API? Plan should specify. ✓
- Quickstart scenarios? quickstart.md should have them. ✓
Fix any violations. Update the plan. Document exceptions if justified.
Step 4: Review Data Models and Contracts
data-model.md:
- Are all entities from the spec represented? (Room, Message, Participant)
- Are field types correct? (UUID, string, timestamp)
- Are constraints from spec captured? (max 4000 chars, max 100 participants)
contracts/:
- Do endpoints cover all user stories? (create room, send message, list messages, list participants)
- Are request/response schemas complete?
- Are error responses (400, 403, 404, 429) defined?
- For real-time: Are events defined? (message.created, user.joined, user.left)
Refine as needed. The plan is a living document until implementation starts.
Quickstart Validation
quickstart.md provides key scenarios for rapid validation. Use it to:
- Smoke test: After implementation, run through scenarios manually or automate them
- Acceptance check: Each scenario maps to acceptance criteria
- Demo: Show stakeholders "it works" without deep testing
Example: Condensed plan.md Output
Below is a condensed example of what a generated plan.md might look like for the real-time chat feature:
# Implementation Plan: Real-Time Chat
## Architecture Overview
Components: API (Express), WebSocket (Socket.io), ChatService, RoomRepository,
MessageRepository, ParticipantRepository. Data flow: REST for CRUD, WebSocket
for real-time. Boundaries: Auth from existing system; DB is PostgreSQL.
## Technology Decisions
- TD-001: Socket.io for real-time. Rationale: Existing in stack, reconnection,
fallback. Alternatives: ws (lighter), SSE (one-way). Chosen: Socket.io.
- TD-002: PostgreSQL. Rationale: Existing. No new DB.
- TD-003: Repository pattern. Rationale: Decouples service from DB. AC-003.
## Data Model
ChatRoom(id, name, description, created_at, created_by)
Message(id, room_id, sender_id, content, created_at)
Participant(id, room_id, user_id, joined_at)
## Implementation Phases
Phase 1: Contracts + entities (contracts/, entities/)
Phase 2: Repositories (room, message, participant)
Phase 3: ChatService + API routes
Phase 4: WebSocket handler + integration tests
## Phase Gates
Simplicity: ✓ Library-first (Socket.io). No custom WebSocket.
Anti-Abstraction: ✓ No generic repo (only 2 entity types; simple repos).
Integration-First: ✓ Contract tests + integration tests in plan.
Example quickstart.md Content
# Quickstart: Real-Time Chat
## Scenario 1: Happy Path
1. Create room "Test Room"
2. User A joins room
3. User B joins room
4. User A sends "Hello"
5. User B receives "Hello" within 2 seconds
6. User B sends "Hi back"
7. User A receives "Hi back" within 2 seconds
**Validates**: AC-001, AC-002
## Scenario 2: Authorization
1. User A creates room
2. User C (not participant) requests GET /rooms/{id}/messages
3. Expect 403 Forbidden
**Validates**: AC-003
## Scenario 3: Load
1. Create room
2. 50 users join
3. Each sends 10 messages over 1 minute
4. All messages delivered; no errors
5. No message loss
**Validates**: AC-004, FR-005
Research Documents
When the plan involves technology choices, research.md documents the evaluation.
When research.md Is Generated
- New library/framework selection (e.g., "which WebSocket library?")
- Performance-critical decisions (e.g., "in-memory vs. Redis for pub/sub?")
- Multiple viable alternatives with tradeoffs
research.md Structure
# Research: Real-Time Delivery for Chat
## Options Evaluated
### Option A: Socket.io
- Pros: Mature, reconnection, fallback to polling, we use it elsewhere
- Cons: Heavier than raw WebSocket
- Verdict: Recommended (consistency, features)
### Option B: ws (raw WebSocket)
- Pros: Lightweight, minimal
- Cons: No reconnection, no fallback, more code
- Verdict: Viable for simple cases
### Option C: Server-Sent Events
- Pros: Simpler, HTTP-based
- Cons: One-way only (server→client); need separate channel for client→server
- Verdict: Not suitable for chat (bidirectional needed)
## Decision
Use Socket.io. Aligns with existing stack, handles reconnection, sufficient for 50 users/room.
Plan Quality Checklist
Before considering the plan complete:
- All spec requirements map to plan components
- Data model covers all entities implied by spec
- Contracts cover all API surface and events
- Phase gates pass (or exceptions documented)
- File creation order respects dependencies
- Quickstart scenarios cover key acceptance criteria
- Technology decisions have rationale
- Constitution compliance verified
Integration with the SDD Pipeline
/speckit.plan sits between specification and task generation:
spec.md (WHAT/WHY) → /speckit.plan → plan.md, data-model.md, contracts/ (HOW: design) → /speckit.tasks → tasks.md (next)
The plan is the bridge. It takes the abstract "what" and makes it concrete enough for task breakdown. Without a plan, tasks would either be too high-level ("implement chat") or too scattered, with files created in arbitrary order.
Handoff to Task Generation
When you run /speckit.tasks, it will:
- Read plan.md (required)
- Read data-model.md, contracts/, research.md (optional but recommended)
- Derive tasks from: contracts, entities, phases, quickstart scenarios
- Produce tasks.md with atomic, ordered, traceable tasks
A well-structured plan produces a well-structured task list. Ambiguous phases produce ambiguous tasks.
Plan Template Deep Dive
Architecture Overview Section
The architecture overview should answer:
- What are the main components? (API, services, repositories, real-time layer)
- How do they connect? (dependency direction, data flow)
- What are the boundaries? (what is in scope, what touches existing systems)
Example for real-time chat:
## Architecture Overview
### Components
- **API Layer**: Express routes for room CRUD, message send (REST)
- **WebSocket Layer**: Socket.io for real-time message delivery
- **Chat Service**: Business logic (room creation, message validation, participant management)
- **Repositories**: RoomRepository, MessageRepository, ParticipantRepository
- **Database**: PostgreSQL (rooms, messages, participants tables)
### Data Flow
1. Client creates room via POST /rooms
2. Client connects WebSocket, joins room
3. Client sends message via WebSocket or POST /rooms/:id/messages
4. Server broadcasts to room participants via WebSocket
5. Messages persisted to DB for history (if retention applies)
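Step 4 is the heart of the real-time layer. With Socket.io the broadcast would typically be `io.to(roomId).emit(...)`; reduced to its essentials, though, it is a fan-out over open connections, which can be sketched and tested as a pure function:

```typescript
// Step 4 as a pure fan-out: deliver a new message to every participant
// that has an open connection. A Socket.io handler would achieve the same
// effect with io.to(roomId).emit("message.created", payload).
type Send = (event: string, payload: unknown) => void;

function broadcastMessage(
  connections: Map<string, Send>, // userId → send function for that socket
  participantIds: string[],
  payload: { roomId: string; senderId: string; content: string }
): number {
  let delivered = 0;
  for (const userId of participantIds) {
    const send = connections.get(userId);
    if (send) {
      send("message.created", payload);
      delivered++;
    }
  }
  return delivered; // participants without an open connection are skipped
}
```

Whether the sender receives its own echo is a contract decision the plan should record explicitly.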
Implementation Phases Section
Phases should be ordered by dependency. Typically:
- Phase 1: Foundation — Data model, contracts, repository interfaces
- Phase 2: Core Logic — Services, business rules
- Phase 3: API — Routes, WebSocket handlers
- Phase 4: Integration — End-to-end tests, quickstart validation
Each phase has deliverables. The next phase cannot start until the previous is complete (or has stable interfaces).
Dependency Graph
Some plans include a dependency graph:
contracts/ → entities → repositories → services → API → tests
Each stage depends only on the stages to its left: contracts have no dependencies, entities depend on contracts, repositories on entities, and so on.
This visual helps when parallelizing work: contracts can be done first; entities depend on contracts; etc.
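The same graph can drive an automated check: does a proposed file-creation order respect every dependency? A sketch, with stage names taken from the graph above:

```typescript
// Validate that a proposed creation order respects the dependency graph:
// every stage's dependencies must appear earlier in the order.
const deps: Record<string, string[]> = {
  contracts: [],
  entities: ["contracts"],
  repositories: ["entities"],
  services: ["repositories"],
  api: ["services"],
  tests: ["api"],
};

function respectsDependencies(order: string[]): boolean {
  const position = new Map(order.map((name, i) => [name, i]));
  return order.every(name =>
    (deps[name] ?? []).every(dep =>
      position.has(dep) && position.get(dep)! < position.get(name)!
    )
  );
}
```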
Handling Plan Iteration
Plans evolve. When you discover a gap or a better approach:
- Update the plan: Edit plan.md, data-model.md, or contracts/
- Document the change: Add to a Changelog section or commit message
- Re-run tasks if needed: If the plan changed significantly, regenerate tasks.md
- Preserve traceability: Ensure tasks still map to plan phases and spec requirements
Avoid "plan drift"—where implementation diverges from the plan without updating it. The plan should remain the source of truth until implementation is complete.
Plan Review Checklist
Before passing the plan to /speckit.tasks, verify:
- Completeness: Every spec requirement has a plan counterpart
- Consistency: Data model matches contracts; contracts match API design
- Constitution: Phase gates pass; no undocumented violations
- Clarity: A new team member could implement from the plan alone
- Testability: Quickstart scenarios exist; integration test approach is clear
- Order: File creation order is correct; no circular dependencies
Common Mistakes
Mistake 1: Skipping the Plan Phase
The Error: Going from spec directly to tasks or code.
Why It's Wrong: You lose architecture. Tasks become disconnected. You discover "we need a message queue" mid-implementation.
The Fix: Always run /speckit.plan. The 5–10 minutes of planning saves hours of rework.
Mistake 2: Plan That Repeats the Spec
The Error: Plan is just a restatement of requirements with no technical design.
Why It's Wrong: The plan should add HOW. If it doesn't, tasks will make ad-hoc technical decisions.
The Fix: Plan must include: technology choices, data model, contracts, file order, phase gates.
Mistake 3: Ignoring Phase Gates
The Error: Plan includes abstractions with one use case, or skips integration tests.
Why It's Wrong: Phase gates exist to prevent over-engineering and ensure testability. Ignoring them defeats the purpose.
The Fix: Work through each gate. Fix violations or document exceptions with justification.
Mistake 4: Contracts After Implementation
The Error: Building API first, then writing OpenAPI from the code.
Why It's Wrong: Contract-first enables parallel work (frontend and backend can develop against the contract) and ensures the API is designed, not discovered.
The Fix: Contracts are created in the plan phase. Implementation follows the contract.
Mistake 5: Vague Technology Rationale
The Error: "We'll use Redis" with no explanation.
Why It's Wrong: Future readers (and your future self) won't know why. Reconsideration becomes guesswork.
The Fix: Every significant choice: what, alternatives, why this one.
Frequently Asked Questions
Q: Can I run /speckit.plan without a constitution?
A: Yes. The command will skip constitutional compliance checks. Plan generation proceeds without them.
Q: What if the plan violates a phase gate?
A: Either revise the plan to comply, or document an exception with justification. Exceptions should be rare and reviewed.
Q: Should contracts be OpenAPI, AsyncAPI, or something else?
A: Use what your stack supports. OpenAPI for REST. AsyncAPI for events. The format matters less than having a machine-readable contract.
Q: How detailed should data-model.md be?
A: Enough to implement. Field names, types, constraints, relationships. It can be a table or a more formal schema. The plan phase should produce something usable.
Q: When do I need research.md?
A: When there are technology choices with multiple viable options. Document the evaluation so the decision is traceable.
Try With AI
Prompt 1: Plan Generation
"I have a specification at specs/004-real-time-chat/spec.md. Run /speckit.plan to generate an implementation plan. Use Node.js, Express, Socket.io, PostgreSQL. Show me the plan.md structure and explain how each section was derived from the spec. Identify any assumptions the plan made."
Prompt 2: Phase Gate Validation
"Review my plan at specs/[branch-name]/plan.md. For each phase gate (Simplicity, Anti-Abstraction, Integration-First), check: (1) Does the plan pass? (2) Are there any violations? (3) If we need an exception, what justification would we document? List specific line references."
Prompt 3: Contract Completeness
"Compare my spec (specs/[branch-name]/spec.md) with my API contracts (specs/[branch-name]/contracts/). For each functional requirement and user story, is there a corresponding endpoint or event? List any gaps. Suggest the contract additions needed."
Prompt 4: Research Document
"My plan proposes using [Library X] for [purpose]. Generate a research.md section that: (1) Lists 2–3 alternatives, (2) Compares them on criteria relevant to our use case, (3) Recommends one with rationale. Format for inclusion in specs/[branch-name]/research.md."
Practice Exercises
Exercise 1: Plan from Spec
Take the real-time chat spec from Chapter 19 (or create a minimal spec for "user profile update"). Run /speckit.plan. Review the output. For each plan section (architecture, data model, contracts, phases), trace back to the spec: which requirement or acceptance criterion drove it? Document the traceability in a table.
Expected outcome: A traceability matrix showing spec → plan mapping.
Exercise 2: Phase Gate Audit
Take an existing implementation plan (from a past project or sample). Audit it against the three phase gates. For each gate: (1) Does the plan pass? (2) If not, what changes would bring it into compliance? (3) Are there any justified exceptions? Write a 1-page audit report.
Expected outcome: Phase gate audit with specific recommendations.
Exercise 3: Contract-First Workflow
Generate a plan that includes API contracts. Before implementing, write a contract test that validates the API shape (e.g., using OpenAPI and a contract testing library). Implement the API to satisfy the contract. Reflect: How did contract-first change your implementation approach? What would you do differently?
Expected outcome: A contract test, a passing implementation, and a brief reflection.
Key Takeaways
- /speckit.plan transforms specification (WHAT/WHY) into implementation plan (HOW). It produces plan.md, data-model.md, contracts/, research.md, and quickstart.md.
- Specification analysis drives the plan: requirements → components, user stories → flows, edge cases → validation and error handling. A clear spec produces a clear plan.
- Constitutional compliance ensures the plan aligns with project principles. Phase gates (Simplicity, Anti-Abstraction, Integration-First) must pass before implementation.
- Technical translation converts business requirements into technology decisions, data models, and API contracts. Rationale is documented for every significant choice.
- File creation order matters: contracts → tests → source. Contract-first enables parallel development and ensures the API is designed, not discovered.
- Quickstart and research support validation and decision-making. quickstart.md provides smoke test scenarios; research.md documents technology evaluations.
Quick Reference: /speckit.plan
| Aspect | Detail |
|---|---|
| Input | spec.md (required) |
| Output | plan.md, data-model.md, contracts/, research.md, quickstart.md |
| Analysis | Requirements, user stories, acceptance criteria, edge cases |
| Compliance | Constitution, phase gates (Simplicity, Anti-Abstraction, Integration-First) |
| Key sections | Architecture, tech decisions, data model, contracts, phases, file order |
| Contract-first | Contracts before implementation |
| When to run | After spec is complete and clarified |
| Review time | 10–20 min; validate phase gates |
Chapter Quiz
- What are the five output artifacts of /speckit.plan? What is the purpose of each?
- How does the plan use functional requirements vs. acceptance criteria differently?
- What are the three phase gates, and what does each prevent?
- Why should API contracts be created before implementation? What problem does contract-first solve?
- What is constitutional compliance in the context of planning? How is it enforced?
- Describe the file creation order for a typical API feature. Why is this order important?
- When is research.md generated? What should it contain?
- How does quickstart.md support validation? Give an example scenario that maps to an acceptance criterion.
Case Study: Plan for "Export Data as CSV"
Given a spec for CSV export (see Chapter 19 case study), the plan might look like:
Technology decisions:
- TD-001: Use existing CSV library (e.g., csv-stringify for Node) — Library-First
- TD-002: No new tables — export reads from existing User/Order data
- TD-003: Streaming for large exports — avoid memory spike for 10k rows
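TD-003's streaming decision can be sketched with a generator that yields one CSV row at a time, so a 10,000-row export never builds the full file in memory. The sketch below is illustrative (a library such as csv-stringify handles this more robustly); the quoting follows RFC 4180-style escaping:

```typescript
// Streaming CSV generation: yield rows one at a time instead of
// materializing the whole export. Quoting covers commas, quotes, newlines.
function csvField(value: string): string {
  return /[",\n]/.test(value) ? `"${value.replace(/"/g, '""')}"` : value;
}

function* csvRows(
  header: string[],
  records: Iterable<Record<string, string>>
): Generator<string> {
  yield header.map(csvField).join(",");
  for (const record of records) {
    yield header.map(col => csvField(record[col] ?? "")).join(",");
  }
}
```

In the route handler, each yielded row would be written to the HTTP response stream as it is produced.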
Data model: No new entities. Export uses existing User, Order (or equivalent). Plan documents which tables/columns are included.
Contracts:
- GET /api/export/csv?from=...&to=...&limit=10000
- Response: 200 (CSV file), 400 (invalid params), 429 (rate limit)
Phases:
- Contract + export service interface
- Implement streaming CSV generation
- Add route, auth, rate limit
- Integration test
Phase gates: Simplicity ✓ (no new abstractions). Anti-Abstraction ✓ (one export type). Integration-First ✓ (test via real API).
This plan is minimal—no over-engineering. It matches the spec's scope.
Plan Anti-Patterns to Avoid
Anti-Pattern 1: Plan as Spec Restatement
Bad: Plan just repeats "Users can create rooms" with no technical design.
Good: Plan says "POST /rooms, RoomRepository.create(), ChatService.createRoom(), validation rules..."
The plan adds HOW. If it doesn't, it's not a plan.
Anti-Pattern 2: Technology Without Rationale
Bad: "We'll use Redis." (No explanation.)
Good: "We'll use Redis for pub/sub. Alternatives: in-memory (doesn't scale across instances), RabbitMQ (overkill for 50 users). Redis fits our scale and we use it elsewhere."
Every significant choice needs rationale.
Anti-Pattern 3: Skipping Phase Gates
Bad: Plan includes a generic repository for 2 entities because "we might add more later."
Good: Either justify the exception ("Team standard; we use it everywhere") or use simple repositories. Don't silently violate gates.
Anti-Pattern 4: Contracts After Code
Bad: "We'll implement the API first, then generate OpenAPI from the code."
Good: Contracts in the plan phase. Code implements the contract. Contract-first.
Anti-Pattern 5: Vague File Order
Bad: "Implement the backend."
Good: "1. contracts/chat-api.yaml 2. src/entities/room.ts 3. src/repositories/room-repository.ts ..." (Explicit, ordered.)
Appendix: Plan Template Structure
Use this as a reference when creating or customizing plan templates:
# Implementation Plan: {{FEATURE_NAME}}
## Metadata
- Feature: {{FEATURE_NUMBER}}
- Branch: {{BRANCH_NAME}}
- Created: {{DATE}}
- Spec: spec.md
## Architecture Overview
- Components
- Data flow
- Boundaries
## Technology Decisions
- TD-001: [Choice] — Rationale, alternatives
- TD-002: ...
## Data Model
- Entity definitions (table format)
- Relationships
- Indexes
## API Contracts
- Reference: contracts/*.yaml
- Endpoints summary
- Events summary
## Implementation Phases
- Phase 1: [Name] — Deliverables
- Phase 2: ...
- Phase 3: ...
## File Creation Order
1. contracts/...
2. src/entities/...
3. src/repositories/...
4. ...
## Phase Gates
### Simplicity Gate
### Anti-Abstraction Gate
### Integration-First Gate
## Dependencies
- External systems
- Blocking work
## Changelog
| Date | Change |
|------|--------|
Troubleshooting /speckit.plan
"Spec not found"
Ensure specs/[branch-name]/spec.md exists. Run /speckit.specify first if you haven't created the spec.
"Constitution not found"
If your project uses a constitution, ensure it's at the expected path (e.g., specs/global/constitution.md). The command may proceed without it, but compliance checks will be skipped.
"Plan is too generic"
The plan quality depends on spec quality. If the spec is vague, the plan will be vague. Add more detail to the spec: specific requirements, edge cases, acceptance criteria. Re-run the command.
"Technology choices don't match our stack"
Pass technology preferences explicitly when running the command. The plan should respect "Node.js, Express, PostgreSQL" if you specify them. If the generated plan ignores them, you may need to customize the plan template or add project rules.
"Contracts are incomplete"
Review the spec. Every user story that involves an API should have a corresponding endpoint or event. If the plan missed one, add it manually to contracts/ and update plan.md to reference it.
"Phase gates fail but we have good reasons"
Document an exception. Add a section to plan.md:
## Phase Gate Exceptions
### Anti-Abstraction Gate: Generic Repository
- **Violation**: Using generic repository for Room and Message (only 2 entities)
- **Justification**: Team standard; we use this pattern everywhere for consistency
- **Approved by**: [Name/Date]
Exceptions should be rare and reviewed. Don't use them to bypass gates without thought.
Advanced: Multi-Phase Planning
For large features, you may split planning into phases:
Phase 1 Plan: High-level architecture, technology decisions, core data model. Get stakeholder sign-off.
Phase 2 Plan: Detailed contracts, file order, phase gates. Ready for task generation.
This approach allows early validation of architecture before investing in detailed design. Use when the feature is complex or has significant technology risk.