
Chapter 22: /speckit.tasks — Task Generation


Learning Objectives

By the end of this chapter, you will be able to:

  • Explain what /speckit.tasks does and how it derives tasks from plans
  • Identify inputs (plan.md, data-model.md, contracts/, research.md) and output (tasks.md)
  • Understand task derivation: converting contracts, entities, and scenarios into specific tasks
  • Apply parallelization: marking independent tasks [P] and identifying safe parallel groups
  • Define task anatomy: what makes a task atomic, testable, and traceable
  • Generate tasks for a feature through a hands-on tutorial
  • Execute the first task with AI and validate the checkpoint pattern
  • Trace the complete pipeline: specify → plan → tasks → implement
  • Compare traditional vs. SDD workflow time using benchmark ranges and assumptions
  • Apply task execution strategies: sequential, parallel, agent-per-task

What /speckit.tasks Does

/speckit.tasks is the third command in the Spec Kit workflow. It transforms an implementation plan—the HOW—into an executable task list: the WHAT TO DO, in order. Tasks are atomic work units that you or an AI agent can implement one at a time, with clear acceptance criteria and traceability to the spec and plan.

The Transformation

Input:

  • plan.md (required) — Implementation plan with phases, file order, and design decisions
  • data-model.md (optional) — Entity definitions
  • contracts/ (optional) — API contracts
  • research.md (optional) — Technology research

Output:

  • tasks.md — Ordered list of atomic tasks with IDs, dependencies, acceptance criteria, and traceability

The command does not implement anything. It produces the task list that guides implementation. Each task is designed to be completable in 15–30 minutes, with a single acceptance criterion and a verifiable output.


Inputs: What the Command Reads

plan.md (Required)

The primary input. The command extracts:

  • Implementation phases: Each phase becomes a group of tasks
  • File creation order: Each file or logical unit becomes one or more tasks
  • Phase gates: May generate a "validate phase gate" task before implementation
  • Dependencies: Used to order tasks and identify blocking relationships

data-model.md (Optional)

Entity definitions drive tasks:

  • Each entity → Task: "Create [Entity] entity with fields X, Y, Z"
  • Relationships → May affect task order (e.g., Message depends on Room)
  • Indexes → Task: "Add index on [field] for [purpose]"

contracts/ (Optional)

API contracts drive tasks:

  • Each endpoint → Task: "Implement POST /rooms" or "Add route for room creation"
  • Each event → Task: "Implement message.created event emission"
  • Schemas → Inform task acceptance criteria (e.g., "Response matches OpenAPI schema")

research.md (Optional)

Research informs tasks when technology choices affect implementation:

  • Library selection → Task: "Install and configure [library]"
  • Configuration → Task: "Add [config] for [purpose]"

Including these optional inputs produces more precise, traceable tasks.


Output: tasks.md Structure

Task Format

Each task in tasks.md follows a consistent format:

## Task T-001: Create ChatRoom entity
- **Depends on**: —
- **Duration**: 15 min
- **Traceability**: data-model.md (ChatRoom), plan Phase 1
- **Acceptance**: ChatRoom entity exists with id, name, description, created_at, created_by
- **Output**: src/entities/room.ts
- **[P]**: Can run in parallel with T-002

Key Fields

| Field | Purpose |
|-------|---------|
| ID | Unique identifier (T-001, T-002, ...) for reference |
| Depends on | Task IDs that must complete first |
| Duration | Estimated time (15–30 min typical) |
| Traceability | Links to spec (AC-001), plan (Phase 1), data-model, contracts |
| Acceptance | Testable criterion for "done" |
| Output | File or artifact produced |
| [P] | Optional: marks task as parallelizable |
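Because the format is consistent, a task block is easy to parse for custom tooling (progress dashboards, task pickers). A minimal sketch; the regexes assume the exact field layout shown above, which real tasks.md output may not match:

```typescript
// Parse one "### Task T-xxx: Title" block plus its "- **Field**: value" bullets.
function parseTaskBlock(block: string): Record<string, string> {
  const fields: Record<string, string> = {};
  const title = block.match(/^#+\s*Task\s+(T-\d+):\s*(.+)$/m);
  if (title) {
    fields.id = title[1];
    fields.title = title[2].trim();
  }
  // Each bullet like "- **Duration**: 15 min" becomes a key/value pair.
  for (const m of block.matchAll(/^-\s*\*\*(.+?)\*\*:\s*(.+)$/gm)) {
    fields[m[1]] = m[2].trim();
  }
  return fields;
}

const task = parseTaskBlock(`## Task T-001: Create ChatRoom entity
- **Depends on**: —
- **Duration**: 15 min
- **Output**: src/entities/room.ts`);
console.log(task.id, task.Duration); // "T-001" "15 min"
```

A parser like this is also a cheap completeness check: iterate over all blocks and flag any task missing an Acceptance or Traceability field.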

Task Ordering

Tasks are ordered by dependency. Tasks with no dependencies come first. Tasks that depend on T-001 come after T-001. The command builds a dependency graph and outputs tasks in topological order.
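The ordering step amounts to a topological sort of the dependency graph. A minimal sketch of how such an ordering can be computed, with illustrative task IDs (not Spec Kit's actual internals):

```typescript
// Kahn's algorithm: emit task IDs so every task appears after its dependencies.
type Task = { id: string; deps: string[] };

function topoOrder(tasks: Task[]): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const t of tasks) {
    indegree.set(t.id, t.deps.length);
    for (const d of t.deps) {
      dependents.set(d, [...(dependents.get(d) ?? []), t.id]);
    }
  }
  // Seed the queue with tasks that have no dependencies ("Depends on: —").
  const queue = tasks.filter((t) => t.deps.length === 0).map((t) => t.id);
  const order: string[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const next of dependents.get(id) ?? []) {
      indegree.set(next, indegree.get(next)! - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  // Leftover tasks mean a cycle: fix the plan before tasks can be ordered.
  if (order.length !== tasks.length) throw new Error("Circular dependency in task graph");
  return order;
}

const order = topoOrder([
  { id: "T-004", deps: ["T-001"] },
  { id: "T-001", deps: [] },
  { id: "T-002", deps: [] },
]);
console.log(order); // ["T-001", "T-002", "T-004"]
```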


Task Derivation

How does the command convert plan artifacts into tasks?

From Contracts to Tasks

For each endpoint in the contract:

  • Create → Task: "Implement POST [path]" with acceptance "Returns 201, body matches schema"
  • Read → Task: "Implement GET [path]" with acceptance "Returns 200, body matches schema"
  • Update → Task: "Implement PATCH [path]"
  • Delete → Task: "Implement DELETE [path]"

For each event:

  • Task: "Emit [event] when [condition]" with acceptance "Event payload matches schema"

From Entities to Tasks

For each entity in data-model.md:

  • Task: "Create [Entity] entity" with fields from the model
  • If migrations are used: Task: "Create migration for [Entity] table"

From Phases to Tasks

Plan phases often map to task groups:

  • Phase 1: Foundation → Tasks for contracts, entities, migrations
  • Phase 2: Repositories → Tasks for each repository
  • Phase 3: Services → Tasks for service methods
  • Phase 4: API → Tasks for routes, WebSocket handlers
  • Phase 5: Integration → Tasks for integration tests, quickstart validation

From Quickstart to Tasks

Each quickstart scenario can become a validation task:

  • Task: "Implement Scenario 1: Happy path" or "Add integration test for Scenario 1"

Parallelization

Not all tasks must run sequentially. Independent tasks can run in parallel.

Marking Parallelizable Tasks [P]

Tasks that have no dependency on each other are marked [P]:

## Task T-001: Create ChatRoom entity
- **[P]**: Can run in parallel with T-002, T-003

## Task T-002: Create Message entity
- **[P]**: Can run in parallel with T-001, T-003

## Task T-003: Create Participant entity
- **[P]**: Can run in parallel with T-001, T-002

T-001, T-002, T-003 all have "Depends on: —". They can be implemented simultaneously.

Safe Parallel Groups

A parallel group is a set of tasks that can all run at once:

Group 1 (parallel): T-001, T-002, T-003  (entities)
Group 2 (parallel): T-004, T-005 (repositories, after entities)
Group 3 (sequential): T-006 (service depends on both repos)

The task list may include a "Parallelization Summary" section:

## Parallelization Summary

- **Group 1**: T-001, T-002, T-003 (no deps) — 3 tasks, ~45 min if parallel
- **Group 2**: T-004, T-005 (after Group 1) — 2 tasks, ~30 min if parallel
- **Sequential**: T-006, T-007, ... (depend on Group 2)
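Parallel groups fall out of the same dependency graph: each wave contains the tasks whose dependencies were all completed in earlier waves. A minimal sketch, with illustrative IDs:

```typescript
type Task = { id: string; deps: string[] };

// Partition tasks into waves: wave N holds tasks whose dependencies all
// completed in waves 0..N-1. Tasks within one wave are candidates for [P]
// (assuming their outputs don't collide).
function parallelGroups(tasks: Task[]): string[][] {
  const done = new Set<string>();
  let remaining = [...tasks];
  const groups: string[][] = [];
  while (remaining.length > 0) {
    const ready = remaining.filter((t) => t.deps.every((d) => done.has(d)));
    if (ready.length === 0) throw new Error("Circular dependency");
    groups.push(ready.map((t) => t.id));
    for (const t of ready) done.add(t.id);
    remaining = remaining.filter((t) => !done.has(t.id));
  }
  return groups;
}

const groups = parallelGroups([
  { id: "T-001", deps: [] },
  { id: "T-002", deps: [] },
  { id: "T-003", deps: [] },
  { id: "T-004", deps: ["T-001"] },
  { id: "T-005", deps: ["T-002"] },
  { id: "T-006", deps: ["T-004", "T-005"] },
]);
console.log(groups);
// [["T-001","T-002","T-003"], ["T-004","T-005"], ["T-006"]]
```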

When NOT to Parallelize

  • Shared mutable state: If two tasks modify the same file in conflicting ways, run sequentially
  • Unclear boundaries: If task boundaries are fuzzy, parallel work may cause merge conflicts
  • Human review: When using the checkpoint pattern, parallel tasks may complete out of order—ensure your review process handles that

Task Anatomy: What Makes a Good Task

Atomic

A task does one thing. It is not "Implement chat feature" but "Create ChatRoom entity with fields id, name, description, created_at, created_by."

  • Too small: "Add id field to ChatRoom" (trivial)
  • Too large: "Implement all chat APIs and WebSocket" (multi-hour)
  • Just right: "Create ChatRoom entity" (15–30 min)

Testable

The task has a clear acceptance criterion. You can verify completion without ambiguity.

  • Bad: "Work on chat" (untestable)
  • Bad: "Make it work" (vague)
  • Good: "ChatRoom entity exists; all fields from data-model present; unit test passes"

Traceable

The task links back to the spec and plan. When you complete T-001, you know it satisfies part of Phase 1 and supports FR-001.

  • Traceability: plan Phase 1, data-model ChatRoom, FR-001 (room creation)

Produces Verifiable Output

The task produces something concrete: a file, a test, a migration.

  • Output: src/entities/room.ts
  • Output: tests/unit/room.test.ts

You can open the file, run the test, and confirm the task is done.
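For a task like T-001, "verifiable output" can be checked mechanically. A hypothetical sketch of the entity and a creation helper, following the chapter's example data model (the helper name and validation details are assumptions, not Spec Kit output):

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical ChatRoom entity matching T-001's acceptance criterion.
interface ChatRoom {
  id: string;           // UUID
  name: string;         // max 100 chars
  description?: string; // max 500 chars, optional
  created_at: Date;
  created_by: string;   // UUID of the creating user
}

function createChatRoom(name: string, createdBy: string, description?: string): ChatRoom {
  if (name.length === 0 || name.length > 100) throw new Error("name must be 1-100 chars");
  if (description !== undefined && description.length > 500) throw new Error("description too long");
  return { id: randomUUID(), name, description, created_at: new Date(), created_by: createdBy };
}

// Verifying the acceptance criterion: all fields from the data model are present.
const room = createChatRoom("general", "user-123");
console.log(room.name, room.created_by); // "general" "user-123"
```

The unit test for this task would assert exactly what the acceptance criterion states: fields present, constraints enforced.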


Tutorial: Generate Tasks for "Real-Time Chat"

This tutorial continues from Chapters 19 and 20. You have spec.md and plan.md (plus data-model.md, contracts/, quickstart.md) for the real-time chat feature.

Prerequisites

  • Completed spec (Chapter 19)
  • Completed plan (Chapter 20)
  • All plan artifacts in specs/004-real-time-chat/

Step 1: Run /speckit.tasks

From your project root:

/speckit.tasks

Feature: 004-real-time-chat
Inputs: plan.md, data-model.md, contracts/

What happens:

  1. Spec Kit reads plan.md, data-model.md, contracts/
  2. Extracts phases, entities, endpoints, events
  3. Derives tasks for each
  4. Orders tasks by dependency
  5. Marks parallelizable tasks [P]
  6. Writes tasks.md to specs/004-real-time-chat/

Step 2: Review Task List for Completeness

Open specs/004-real-time-chat/tasks.md. Check:

Coverage:

  • Does every plan phase have corresponding tasks?
  • Does every entity have a "create entity" task?
  • Does every endpoint have an "implement" task?
  • Are quickstart scenarios covered (integration test tasks)?

Order:

  • Do dependencies make sense? (Entities before repositories, repositories before services, etc.)
  • Are there any circular dependencies? (There shouldn't be.)

Quality:

  • Is each task atomic? (One thing, 15–30 min)
  • Does each have a clear acceptance criterion?
  • Is each traceable to spec/plan?

Add or refine tasks as needed. The generated list is a draft.

Step 3: Identify Parallelizable Tasks

Look for tasks with "Depends on: —" or that only depend on the same completed group. These can run in parallel.

Example:

  • T-001 (ChatRoom entity), T-002 (Message entity), T-003 (Participant entity) — all parallel
  • T-004 (RoomRepository), T-005 (MessageRepository) — both depend on entities; can run parallel after T-001, T-002, T-003

Note the parallel groups. If you have multiple developers or AI agents, they can work on different tasks in the same group simultaneously.

Step 4: Execute First Task with AI

Choose T-001 (or the first task in your list). Execute it with your AI coding assistant.

Prompt:

"I'm implementing Task T-001 from specs/004-real-time-chat/tasks.md. The task is: Create ChatRoom entity. Acceptance: ChatRoom entity exists with id, name, description, created_at, created_by. Output: src/entities/room.ts. Use the data model from specs/004-real-time-chat/data-model.md. Create the entity and a unit test."

Checkpoint:

  1. AI produces the file(s)
  2. You review: Does it match the data model? Does the test pass?
  3. You approve: "Looks good"
  4. You commit: git add src/entities/room.ts tests/unit/room.test.ts && git commit -m "feat(004): T-001 ChatRoom entity"
  5. You direct: "What's the next task?"

This is the checkpoint pattern: you stay in control. Each task is a checkpoint. You review before proceeding.


The Complete Pipeline: Specify → Plan → Tasks → Implement

The full SDD pipeline in action:

1. /speckit.specify
Input: "Real-time chat: users create rooms, send messages, see them in real time"
Output: spec.md (problem, user stories, requirements, acceptance criteria, edge cases, non-goals)

2. /speckit.plan
Input: spec.md
Output: plan.md, data-model.md, contracts/, research.md, quickstart.md

3. /speckit.tasks
Input: plan.md, data-model.md, contracts/
Output: tasks.md (T-001, T-002, ... T-N)

4. Implementation
Input: tasks.md
Process: For each task, implement with AI or manually; checkpoint; commit; next task
Output: Working feature, tested, committed

Each step consumes the output of the previous. The pipeline is linear and traceable.


Example: Full tasks.md Output

Below is an expanded example of a generated tasks.md for the real-time chat feature:

# Tasks: Real-Time Chat

## Metadata
- Feature: 004
- Branch: 004-real-time-chat
- Plan: plan.md
- Generated: 2026-03-09

## Task List

### Task T-001: Create ChatRoom entity
- **Depends on**: —
- **Duration**: 15 min
- **Traceability**: data-model.md (ChatRoom), plan Phase 1, FR-001
- **Acceptance**: ChatRoom entity with id (UUID), name (string, max 100), description (string, max 500, optional), created_at (timestamp), created_by (UUID). Unit test passes.
- **Output**: src/entities/room.ts, tests/unit/room.test.ts
- **[P]**: Parallel with T-002, T-003

### Task T-002: Create Message entity
- **Depends on**: —
- **Duration**: 15 min
- **Traceability**: data-model.md (Message), plan Phase 1, FR-002
- **Acceptance**: Message entity with id, room_id, sender_id, content (max 4000), created_at. Unit test passes.
- **Output**: src/entities/message.ts, tests/unit/message.test.ts
- **[P]**: Parallel with T-001, T-003

### Task T-003: Create Participant entity
- **Depends on**: —
- **Duration**: 15 min
- **Traceability**: data-model.md (Participant), plan Phase 1, FR-006
- **Acceptance**: Participant entity with id, room_id, user_id, joined_at. Unit test passes.
- **Output**: src/entities/participant.ts, tests/unit/participant.test.ts
- **[P]**: Parallel with T-001, T-002

### Task T-004: Create RoomRepository
- **Depends on**: T-001
- **Duration**: 20 min
- **Traceability**: plan Phase 2, contracts/chat-api.yaml
- **Acceptance**: RoomRepository with create, findById, findByUser. Tests use test DB or mocks.
- **Output**: src/repositories/room-repository.ts, tests/unit/room-repository.test.ts
- **[P]**: Parallel with T-005, T-006 (after T-001, T-002, T-003)

... (T-005 through T-012 follow similar structure)

## Parallelization Summary

- **Group 1** (0 deps): T-001, T-002, T-003 — ~45 min sequential, ~15 min parallel
- **Group 2** (after Group 1): T-004, T-005, T-006 — ~60 min sequential, ~20 min parallel
- **Group 3** (after Group 2): T-007 (ChatService) — ~30 min
- **Group 4** (after T-007): T-008, T-009, T-010 — ~55 min sequential, ~25 min parallel
- **Group 5** (after Group 4): T-011, T-012 — integration tests

**Total estimate**: ~4–5 hours sequential; ~2.5–3 hours with parallelization

Example: Full End-to-End from One-Sentence Idea to Task List

Input (One Sentence)

"Add real-time chat. Users create rooms, send messages, see them in real time. 50 users per room, 2-second delivery."

After /speckit.specify

spec.md with:

  • Problem, user stories, FR-001..FR-007, AC-001..AC-005
  • Edge cases, non-goals, completeness checklist

After /speckit.plan

plan.md with:

  • Architecture: API, WebSocket, ChatService, Repositories
  • Tech: Socket.io, PostgreSQL, Express
  • Data model: ChatRoom, Message, Participant
  • Contracts: POST /rooms, GET /rooms/:id/messages, WebSocket events
  • Phases, file order, phase gates

data-model.md, contracts/, quickstart.md

After /speckit.tasks

tasks.md with:

## Task T-001: Create ChatRoom entity
- Depends on: —
- Duration: 15 min
- Traceability: data-model, Phase 1
- Acceptance: Entity with id, name, description, created_at, created_by
- Output: src/entities/room.ts
- [P]

## Task T-002: Create Message entity
- Depends on: —
- Duration: 15 min
- [P]

## Task T-003: Create Participant entity
- Depends on: —
- Duration: 15 min
- [P]

## Task T-004: Create RoomRepository
- Depends on: T-001
- Duration: 20 min
- [P] with T-005

## Task T-005: Create MessageRepository
- Depends on: T-002
- Duration: 20 min
- [P] with T-004

## Task T-006: Create ParticipantRepository
- Depends on: T-003
- Duration: 15 min

## Task T-007: Implement ChatService
- Depends on: T-004, T-005, T-006
- Duration: 30 min

## Task T-008: Implement POST /rooms
- Depends on: T-007
- Duration: 15 min

## Task T-009: Implement GET /rooms/:id/messages
- Depends on: T-007
- Duration: 15 min

## Task T-010: Implement WebSocket message handler
- Depends on: T-007
- Duration: 25 min

## Task T-011: Integration test — happy path
- Depends on: T-008, T-009, T-010
- Duration: 25 min

## Task T-012: Integration test — authorization
- Depends on: T-011
- Duration: 15 min

You now have 12 atomic tasks. Execute them in order (or parallelize where marked). Each takes 15–30 minutes. Total: roughly 4–6 hours of implementation, with clear checkpoints.


Traditional vs. SDD Workflow Time Comparison

Traditional Workflow (No SDD)

  1. Discuss feature (1–2 hours): Meetings, back-and-forth, vague requirements
  2. Write tickets (1–2 hours): Break into Jira/Linear tickets; often high-level
  3. Design (2–3 hours): Architecture decisions, data model, API design—sometimes ad-hoc during implementation
  4. Implement (6–8 hours): Code, with frequent "wait, what did we agree on?" moments
  5. Rework (2–4 hours): Fix misalignments, add missing edge cases, refactor

Total: ~12–20 hours, with rework and ambiguity

SDD Workflow (With Spec Kit)

  1. Specify (15 min): Run /speckit.specify, refine spec, resolve ambiguities
  2. Plan (10 min): Run /speckit.plan, review plan, validate phase gates
  3. Tasks (5 min): Run /speckit.tasks, review task list
  4. Implement (4–6 hours): Execute tasks one by one with AI; checkpoint each

Total (illustrative benchmark): ~15 minutes of setup + 4–6 hours implementation for a well-scoped feature with mature templates and tooling. Treat this as a benchmark, not a universal guarantee.

Why SDD Is Faster

  • Clarity upfront: Spec and plan eliminate "what did we mean?" during implementation
  • AI efficiency: AI has full context (spec, plan, contracts). It generates correct code faster
  • Checkpoints: Small tasks mean small mistakes. Fix early, not after 8 hours of wrong direction
  • Traceability: Every line of code traces to a requirement. No orphan code, no speculative features

Task Execution Strategies

Sequential (Default)

Execute tasks in order. T-001, then T-002, then T-003, ... Simple, predictable. Best when working alone or with one AI agent.

Parallel (Multiple Agents)

When tasks are marked [P], assign them to different agents or developers:

  • Agent A: T-001, T-004, T-008
  • Agent B: T-002, T-005, T-009
  • Agent C: T-003, T-006, T-010

Coordinate at sync points (e.g., after Group 1 completes, before Group 3). Merge results. Run integration tests.

Agent-Per-Task

Use a single AI agent, but treat each task as a separate session:

  • Session 1: "Implement T-001. Here's tasks.md and data-model.md."
  • Session 2: "Implement T-002. T-001 is done. Here's tasks.md."
  • ...

Each session has focused context. No context window bloat from 12 tasks at once.

Hybrid

  • You do tasks that require judgment (e.g., phase gate validation, integration test design)
  • AI does mechanical tasks (entities, repositories, routes)
  • Checkpoint after each task regardless of who did it

Checkpoint Pattern in Detail

The checkpoint pattern keeps you in control when AI implements tasks.

The Loop

1. Agent: "I've completed T-001. Created src/entities/room.ts and tests/unit/room.test.ts. Test passes."
2. You: Review the code. Run tests. Check against acceptance criterion.
3. You: APPROVE → "Looks good. Commit."
4. You: "What's next?"
5. Agent: "T-002 is next. Create Message entity. Should I proceed?"
6. You: "Yes."
7. Agent: Implements T-002...
8. (repeat)

Why Checkpoints Matter

Without checkpoints: Agent does T-001 through T-012 in one go. You get a large PR. You find issues in T-003 that affect T-007. Rework is painful. You don't know what's solid and what's not.

With checkpoints: You verify each task before the next. If T-003 is wrong, you fix it before T-007 builds on it. Each commit is a known-good state. You can stop and resume anytime.

Your Role at Each Checkpoint

  1. Review: Does the output match the acceptance criterion?
  2. Test: Run the tests. Do they pass?
  3. Approve or reject: Approve → commit. Reject → agent fixes, you review again.
  4. Direct: You say "next task." Agent doesn't autonomously continue.

Integration with the SDD Pipeline

/speckit.tasks is the final command before implementation:

spec.md  →  plan.md  →  tasks.md  →  Implementation
(WHAT/WHY)   (HOW)        (WHAT TO DO)

The task list is the handoff to implementation. Whether you implement manually or with AI, tasks.md is your guide. Each task is a unit of work with a clear start and end.

Handoff to Implementation

When implementing:

  1. Load tasks.md (and optionally spec.md, plan.md, contracts/ for context)
  2. Execute tasks in order (or parallelize where marked)
  3. At each task: implement, verify acceptance criterion, checkpoint, commit
  4. When all tasks complete: run full test suite, validate quickstart scenarios, merge

The implementation phase is "execute tasks" — no ad-hoc design decisions. The plan already made them.


Task Refinement and Iteration

Tasks are not set in stone. As you implement, you may discover:

  • Task too large: Split T-007 into T-007a (create) and T-007b (update). Update tasks.md.
  • Missing task: You need "Add database migration" — add T-006a. Update dependencies.
  • Wrong order: T-008 should come before T-007. Adjust dependencies and reorder.

When you change tasks.md:

  1. Document the change (changelog or commit message)
  2. Ensure dependencies still make sense
  3. Update traceability if the task map changed

Troubleshooting /speckit.tasks

"Plan not found"

Ensure specs/[branch-name]/plan.md exists. Run /speckit.plan first.

"Tasks are too high-level"

The plan may be vague. Add more detail to plan.md: specific file paths, method signatures, or phase deliverables. Re-run /speckit.tasks. Alternatively, add tasks manually for the missing granularity.

"Tasks are too granular"

Merge small tasks. "Add id field" and "Add name field" can become "Create ChatRoom entity with all fields." Edit tasks.md to consolidate. Fewer tasks mean fewer checkpoints—use judgment.

"Circular dependency"

If the plan has a circular dependency (A depends on B, B depends on A), tasks will fail to order. Fix the plan. Break the cycle by introducing an interface or splitting a component.

"Parallel tasks conflict"

Two [P] tasks might both edit the same file. If so, remove [P] from one or split the work so they touch different files. Parallelization assumes independent outputs.
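A quick way to catch this before execution is to compare the declared outputs of all [P] tasks; any shared file means one of them should lose the marker. A minimal sketch with illustrative task data:

```typescript
type Task = { id: string; parallel: boolean; outputs: string[] };

// Flag pairs of [P] tasks that declare the same output file.
function parallelConflicts(tasks: Task[]): [string, string][] {
  const conflicts: [string, string][] = [];
  const parallel = tasks.filter((t) => t.parallel);
  for (let i = 0; i < parallel.length; i++) {
    for (let j = i + 1; j < parallel.length; j++) {
      const shared = parallel[i].outputs.some((f) => parallel[j].outputs.includes(f));
      if (shared) conflicts.push([parallel[i].id, parallel[j].id]);
    }
  }
  return conflicts;
}

const conflicts = parallelConflicts([
  { id: "T-001", parallel: true, outputs: ["src/entities/room.ts"] },
  { id: "T-002", parallel: true, outputs: ["src/entities/message.ts"] },
  { id: "T-009", parallel: true, outputs: ["src/routes/rooms.ts"] },
  { id: "T-010", parallel: true, outputs: ["src/routes/rooms.ts"] }, // same file!
]);
console.log(conflicts); // [["T-009", "T-010"]]
```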

"AI ignores task acceptance criterion"

When prompting AI to implement a task, include the acceptance criterion explicitly: "Acceptance: ChatRoom entity exists with id, name, description, created_at, created_by. Verify before claiming done." Reinforce the criterion in your review.


Case Study: Task Execution with Checkpoints

A developer implements the real-time chat feature using the checkpoint pattern:

T-001: AI creates ChatRoom entity. Developer reviews: fields match data-model? Test passes? Commit. ✓

T-002: AI creates Message entity. Review. Commit. ✓

T-003: AI creates Participant entity. Review. Developer notices: "We need a unique constraint on (room_id, user_id)." Rejects. AI adds constraint. Review. Commit. ✓

T-004: AI creates RoomRepository. Review. Developer notices: "findByUser is missing—we need it for room list." Rejects. AI adds method. Review. Commit. ✓

T-007: AI creates ChatService. Review. Developer notices: "Message validation (max 4000 chars) is missing." Rejects. AI adds validation. Review. Commit. ✓

Lesson: Checkpoints caught 3 issues before they propagated. Without checkpoints, these would have been discovered in integration testing or production. Fixing early is cheaper.


Task Anti-Patterns to Avoid

Anti-Pattern 1: "Implement the Feature"

Bad: One task: "Implement real-time chat." (4+ hours, no checkpoint.)

Good: 12 tasks of 15–30 min each. Each has acceptance criterion and output.

Anti-Pattern 2: No Traceability

Bad: Task "Create room entity" with no link to spec or plan.

Good: "Traceability: data-model ChatRoom, plan Phase 1, FR-001."

When requirements change, traceability tells you what to update.

Anti-Pattern 3: Acceptance Criteria That Can't Be Verified

Bad: "Code is clean." "Implementation is correct."

Good: "ChatRoom entity exists. All fields from data-model. Unit test passes."

You must be able to verify completion without subjective judgment.

Anti-Pattern 4: Ignoring Dependencies

Bad: Implementing ChatService before RoomRepository exists.

Good: Follow task order. T-007 depends on T-004, T-005, T-006. Complete those first.

Anti-Pattern 5: Batch Implementation Without Checkpoints

Bad: "Do tasks T-001 through T-006, then show me."

Good: T-001 → review → commit. T-002 → review → commit. Checkpoint each.

Batch implementation defeats the checkpoint pattern. Errors compound.


Appendix: tasks.md Template Structure

Use this as a reference when creating or customizing task output:

# Tasks: {{FEATURE_NAME}}

## Metadata
- Feature: {{FEATURE_NUMBER}}
- Branch: {{BRANCH_NAME}}
- Plan: plan.md
- Generated: {{DATE}}

## Task List

### Task T-001: [Title]
- **Depends on**: —
- **Duration**: 15 min
- **Traceability**: data-model.md (Entity), plan Phase 1, FR-001
- **Acceptance**: [Testable criterion]
- **Output**: path/to/file.ts
- **[P]**: Can run in parallel with T-002, T-003

### Task T-002: [Title]
- **Depends on**: T-001
- **Duration**: 20 min
- ...

## Parallelization Summary

- Group 1: T-001, T-002, T-003 (no deps)
- Group 2: T-004, T-005 (after Group 1)
- Sequential: T-006, T-007, ...

## Changelog

| Date | Change |
|------|--------|

Advanced: Task Estimation and Velocity

For planning sprints or iterations:

  • Count tasks: e.g., 12 tasks
  • Average duration: 15–30 min → assume 20 min
  • Sequential estimate: 12 × 20 min = 4 hours
  • With parallelization: Group 1 (3 tasks) = 20 min; Group 2 (2 tasks) = 20 min; etc. Total may drop to ~3 hours for a single implementer, or ~2 hours with 2 parallel agents

Use historical data: If your previous features averaged 25 min per task, adjust. Track actual vs. estimated to refine.
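The "with parallelization" number is a critical-path estimate: the wall-clock floor equals the longest chain of dependent durations, while the sequential estimate is the plain sum. A sketch of both calculations, with illustrative durations:

```typescript
type Task = { id: string; deps: string[]; minutes: number };

// Sequential time is the sum of durations; the parallel floor is the
// longest dependency chain (each task finishes after its latest dependency).
function estimates(tasks: Task[]): { sequential: number; criticalPath: number } {
  const byId = new Map(tasks.map((t) => [t.id, t]));
  const memo = new Map<string, number>();
  const finish = (id: string): number => {
    if (memo.has(id)) return memo.get(id)!;
    const t = byId.get(id)!;
    const start = Math.max(0, ...t.deps.map(finish));
    const result = start + t.minutes;
    memo.set(id, result);
    return result;
  };
  const sequential = tasks.reduce((sum, t) => sum + t.minutes, 0);
  const criticalPath = Math.max(...tasks.map((t) => finish(t.id)));
  return { sequential, criticalPath };
}

const { sequential, criticalPath } = estimates([
  { id: "T-001", deps: [], minutes: 15 },
  { id: "T-002", deps: [], minutes: 15 },
  { id: "T-004", deps: ["T-001"], minutes: 20 },
  { id: "T-005", deps: ["T-002"], minutes: 20 },
  { id: "T-007", deps: ["T-004", "T-005"], minutes: 30 },
]);
console.log(sequential, criticalPath); // 100 sequential vs 65 on the critical path
```

The critical path is the floor even with unlimited agents; adding more parallel workers beyond the width of each wave buys nothing.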


Common Mistakes

Mistake 1: Tasks Too Large

The Error: "Task T-001: Implement entire chat feature" (4 hours)

Why It's Wrong: No checkpoint. No atomic verification. Hard to fix if wrong.

The Fix: Break into 15–30 min tasks. "Create entity," "Create repository," "Implement route."

Mistake 2: No Acceptance Criteria

The Error: Task says "Work on ChatRoom" with no definition of done.

Why It's Wrong: You can't verify completion. AI might do too much or too little.

The Fix: Every task has "Acceptance: [testable criterion]."

Mistake 3: Skipping Traceability

The Error: Task has no link to spec or plan.

Why It's Wrong: When requirements change, you don't know which tasks are affected. Orphan tasks accumulate.

The Fix: Every task has "Traceability: plan Phase X, FR-001, AC-002."

Mistake 4: Ignoring Dependencies

The Error: Implementing T-007 (ChatService) before T-004 (RoomRepository).

Why It's Wrong: ChatService needs RoomRepository. You'll have to stub or redo.

The Fix: Follow the task order. Dependencies exist for a reason.

Mistake 5: No Checkpoints

The Error: Letting AI run through all tasks without review.

Why It's Wrong: Errors compound. You discover problems late. Rework is expensive.

The Fix: Checkpoint after every task. Review, approve, commit, then next.


Frequently Asked Questions

Q: Can I run /speckit.tasks without data-model or contracts?
A: Yes. plan.md is required. The command will derive tasks from the plan alone. Including data-model and contracts produces more precise tasks.

Q: What if a task takes longer than 30 minutes?
A: Split it. "T-007: Implement ChatService" might become T-007a (create) and T-007b (update). Update tasks.md.

Q: What if two [P] tasks conflict (same file)?
A: Remove [P] from one or split the work. Parallel tasks should touch different files.

Q: When should I use parallel execution?
A: When you have multiple agents or developers and tasks with no dependencies. Sync at dependency boundaries.

Q: Do I have to checkpoint every task?
A: The checkpoint pattern is recommended because it catches errors early. For very small tasks you might batch 2–3, but batching increases the risk that an early mistake propagates.


Try With AI

Prompt 1: Task Generation

"I have a plan at specs/004-real-time-chat/plan.md, data-model.md, and contracts/. Run /speckit.tasks to generate tasks.md. Show me the task list and explain how each task was derived from the plan. Identify which tasks can run in parallel."

Prompt 2: Task Completeness Review

"Review my tasks.md at specs/[branch-name]/tasks.md. For each plan phase and each contract endpoint, is there a corresponding task? List any gaps. Suggest additional tasks if needed."

Prompt 3: Execute Task with Context

"Implement Task T-001 from specs/004-real-time-chat/tasks.md. I'll provide the full context: [paste tasks.md excerpt, data-model excerpt]. Create the ChatRoom entity and unit test. When done, show me the output and confirm the acceptance criterion is met."

Prompt 4: Pipeline Summary

"I've completed the SDD pipeline for real-time chat: specify → plan → tasks. Summarize the full flow: What did each command produce? How many tasks did we get? What's the estimated implementation time? How does this compare to a traditional workflow without SDD?"


Practice Exercises

Exercise 1: Generate and Review Tasks

Take the real-time chat plan from Chapter 20 (or use a plan from your project). Run /speckit.tasks. Review the output. For each task, verify: (1) Is it atomic? (2) Does it have acceptance criteria? (3) Is it traceable? Create a 1-page review with any improvements.

Expected outcome: A task list review with specific refinements.

Exercise 2: Execute Three Tasks with Checkpoints

Execute the first three tasks (or three tasks of your choice) using the checkpoint pattern. For each: implement with AI, review, approve or reject, commit. Document: How long did each take? Did you catch any issues at checkpoint? How did the checkpoint pattern change your experience?

Expected outcome: Three completed tasks, committed, with a brief reflection.

Exercise 3: Parallelization Plan

From your tasks.md, identify all parallel groups. Create a "parallel execution plan": If you had 2 AI agents, how would you assign tasks? What are the sync points? Draw a simple timeline showing when each task would complete. Estimate total time (sequential vs. parallel).

Expected outcome: A parallelization plan with timeline and time comparison.


Key Takeaways

  1. /speckit.tasks transforms plan (HOW) into executable tasks (WHAT TO DO). It reads plan.md, data-model.md, contracts/ and produces tasks.md with atomic, ordered, traceable tasks.

  2. Task derivation converts contracts (endpoints, events), entities, phases, and quickstart scenarios into specific tasks. Each task has ID, dependencies, acceptance criterion, and output.

  3. Parallelization marks independent tasks [P]. Safe parallel groups allow multiple agents or developers to work simultaneously. Sync at dependency boundaries.

  4. Task anatomy: Atomic (one thing), testable (clear acceptance), traceable (links to spec/plan), verifiable output (file, test). 15–30 min per task.

  5. Checkpoint pattern: Review each task before proceeding. Approve → commit → next. Keeps you in control. Prevents compound errors.

  6. Pipeline efficiency: In mature teams, specify → plan → tasks can take ~15 minutes for straightforward features. Traditional workflows may take much longer due to ambiguity and rework; actual time varies by domain complexity, approval requirements, and team readiness.


Summary: The Complete Spec Kit Pipeline

Part VII introduced three commands. Here is the complete workflow in one place:

| Step | Command | Input | Output | Time |
|------|---------|-------|--------|------|
| 1 | /speckit.specify | Feature description | spec.md, branch, directory | ~5 min |
| 2 | Review & refine | spec.md | Resolved spec (no [NEEDS CLARIFICATION]) | ~10 min |
| 3 | /speckit.plan | spec.md | plan.md, data-model.md, contracts/, etc. | ~5 min |
| 4 | Review & validate | plan.md | Plan passing phase gates | ~10 min |
| 5 | /speckit.tasks | plan.md, data-model, contracts | tasks.md | ~2 min |
| 6 | Review tasks | tasks.md | Refined task list | ~5 min |
| 7 | Implement | tasks.md | Working feature, committed | 4–6 hours |

Total setup: ~35–40 minutes. Total to done: ~5–7 hours including implementation.

Without Spec Kit: often 12+ hours with ambiguity, rework, and ad-hoc design. The pipeline sharply reduces those costs.


Quick Reference: /speckit.tasks

| Aspect | Detail |
|--------|--------|
| Input | plan.md (required); data-model.md, contracts/, research.md (optional) |
| Output | tasks.md |
| Task properties | Atomic, testable, traceable, 15–30 min |
| Derivation | Contracts → endpoints/events; entities → create tasks; phases → groups |
| Parallelization | [P] on independent tasks; parallel groups |
| Checkpoint pattern | Review each task → approve → commit → next |
| When to run | After plan is complete |
| Pipeline time | specify + plan + tasks ≈ 15 min |

Chapter Quiz

  1. What are the required and optional inputs to /speckit.tasks? What is the output?

  2. How does the command derive tasks from (a) contracts, (b) entities, (c) plan phases?

  3. What does [P] mean on a task? When can two tasks run in parallel?

  4. What are the four properties of a good task (atomic, testable, traceable, and what else)?

  5. Describe the checkpoint pattern. Why is it important when using AI for implementation?

  6. What is the complete SDD pipeline? What does each command produce?

  7. Compare traditional vs. SDD workflow time. Why is SDD faster despite the upfront spec/plan/tasks steps?

  8. Name three task execution strategies. When would you use each?