Chapter 2: The Power Inversion


Learning Objectives

By the end of this chapter, you will be able to:

  • Describe the "power inversion" — the structural shift from code-as-truth to specification-as-truth
  • Map the four core artifacts in an SDD system and explain each artifact's role
  • Compare the traditional SDLC to the AI-native SDLC step by step
  • Practice the specification-first workflow by building a feature from spec to code
  • Explain why code becomes "disposable" in a spec-driven world and what that means in practice

From Code Serving Specifications to Specifications Serving Code

For decades, the power structure in software development looked like this:

Traditional Power Structure:

Code ← Source of truth

Documentation ← Serves code (updated after the fact)

Requirements ← Initial guide (abandoned quickly)

Business Intent ← Starting point (often lost)

Specifications served code. They were the scaffolding we built and discarded once the "real work" of coding began. We wrote PRDs to guide development, created design docs to inform implementation, drew diagrams to visualize architecture. But these were always subordinate to the code. Code was truth. Everything else was, at best, good intentions.

Spec-Driven Development inverts this power structure:

SDD Power Structure:

Specification ← Source of truth

Implementation Plan ← Architecture derived from spec

Tasks ← Execution units derived from plan

Code ← Generated artifact (serves the spec)

In SDD, specifications don't serve code — code serves specifications. The Product Requirements Document isn't a guide for implementation; it's the source that generates implementation. Technical plans aren't documents that inform coding; they're precise definitions that produce code.

This isn't an incremental improvement. It's a fundamental rethinking of what drives development.


The Four Core Artifacts

Every SDD system is organized around four artifacts, each with a specific role:

1. Specification (Source of Truth)

The specification defines what the system does and why. It contains:

  • Problem statement and user context
  • User journeys with concrete scenarios
  • Functional requirements with acceptance criteria
  • Non-functional requirements (performance, security, scalability)
  • Constraints (what the system must NOT do)
  • Edge cases and error handling expectations

The specification is written in natural language (structured Markdown) and is readable by both humans and AI agents.

# Feature: Password Reset

## Problem
Users who forget their passwords cannot access their accounts.
The current process requires contacting support, which takes 24-48 hours.

## User Journey
1. User clicks "Forgot Password" on the login page
2. User enters their registered email address
3. System validates the email exists in the database
4. System generates a cryptographically secure reset token (UUID v4)
5. System sends an email with a reset link containing the token
6. User clicks the link (valid for 1 hour)
7. User enters and confirms a new password
8. System validates password strength (min 12 chars, 1 uppercase, 1 number, 1 special)
9. System invalidates the token and all existing sessions
10. User is redirected to login with a success message

## Constraints
- Reset tokens expire after exactly 1 hour
- Each token can be used exactly once
- A user can request at most 3 resets per hour (rate limiting)
- Password history: cannot reuse any of the last 5 passwords
- The reset email must not reveal whether the email exists in the system
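The password-strength rule in step 8 of the journey is precise enough to implement directly. A minimal sketch (the function name and structure are our own, not part of the spec):

```python
import re

def is_strong_password(password: str) -> bool:
    """Check the journey's step-8 rules: minimum 12 characters,
    at least 1 uppercase letter, 1 digit, and 1 special character."""
    return (
        len(password) >= 12
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )
```

Because the spec states the rule exactly, the validator's test cases fall straight out of the acceptance criteria rather than being invented after the fact.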

2. Implementation Plan (Architecture)

The implementation plan translates the specification into technical decisions. It maps what to how:

# Implementation Plan: Password Reset

## Technology Decisions
- Token storage: Redis with 1-hour TTL (auto-expiration)
- Email delivery: SendGrid API (transactional template)
- Password hashing: bcrypt with cost factor 12
- Rate limiting: Redis-based sliding window counter

## Data Model Changes
- New table: `password_reset_tokens`
  - `id` UUID PRIMARY KEY
  - `user_id` UUID REFERENCES users(id)
  - `token_hash` VARCHAR(255) — store hash, not raw token
  - `expires_at` TIMESTAMP
  - `used_at` TIMESTAMP NULLABLE
  - `created_at` TIMESTAMP

## API Contract
- POST /auth/forgot-password
- Body: { "email": "string" }
- Response: 200 OK (always, to prevent email enumeration)
- POST /auth/reset-password
- Body: { "token": "string", "password": "string" }
- Response: 200 OK or 400 Bad Request
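The plan's two security-sensitive decisions — always return 200, and store a hash of the token rather than the token itself — can be sketched together. This is a minimal in-memory illustration: the dictionaries stand in for the plan's Redis store and SendGrid dispatch, and SHA-256 stands in for whatever token-hashing scheme the implementation settles on; all names here are hypothetical.

```python
import hashlib
import uuid

# In-memory stand-ins for the plan's Redis store and SendGrid API (hypothetical).
USERS = {"alice@example.com": "user-1"}
TOKEN_STORE = {}   # token_hash -> user_id; only the hash is persisted
SENT_EMAILS = []   # records (email, raw_token) that would go out via SendGrid

def forgot_password(email: str) -> int:
    """Sketch of POST /auth/forgot-password. Returns 200 whether or not
    the email exists, so the response cannot be used for enumeration."""
    user_id = USERS.get(email)
    if user_id is not None:
        token = str(uuid.uuid4())  # raw token appears only in the email
        token_hash = hashlib.sha256(token.encode()).hexdigest()
        TOKEN_STORE[token_hash] = user_id  # store the hash, never the raw token
        SENT_EMAILS.append((email, token))
    return 200  # identical status either way (anti-enumeration constraint)
```

Note how both behaviors trace directly to lines in the specification ("must not reveal whether the email exists") and the plan ("store hash, not raw token") — the code adds nothing the upstream artifacts don't require.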

3. Tasks (Execution Units)

Tasks are atomic, implementable units derived from the plan:

# Tasks: Password Reset

- [x] Create `password_reset_tokens` migration
- [x] Implement token generation service (UUID v4, bcrypt hash)
- [ ] Implement POST /auth/forgot-password endpoint
  - [ ] Email validation
  - [ ] Rate limiting check (3/hour per email)
  - [ ] Token generation and storage
  - [ ] Email dispatch via SendGrid
  - [ ] Always return 200 (security requirement)
- [ ] Implement POST /auth/reset-password endpoint
  - [ ] Token lookup and validation
  - [ ] Password strength validation
  - [ ] Password history check
  - [ ] Session invalidation
  - [ ] Token consumption (mark as used)
- [ ] Write integration tests for complete flow
- [ ] Write rate limiting tests

4. Code (Generated Artifact)

Code is the output of the process, not the input. It is generated from tasks, validated against the specification's acceptance criteria, and — critically — regenerable.

If the specification changes, you don't patch the code. You regenerate it from the updated specification. If a better framework emerges, you don't migrate — you regenerate with the new technology choice in the implementation plan.

Expert Insight: The phrase "code is disposable" sounds radical, but it's already true in practice for many artifacts. CSS generated from design tokens is disposable. API clients generated from OpenAPI specs are disposable. Database migrations generated from schema definitions are disposable. SDD extends this principle to application code itself.


The Lifecycle Comparison

Traditional SDLC

Requirements  →   Design   →   Implementation   →    Testing     →  Deployment
     ↓              ↓                ↓                   ↓               ↓
(abandoned)      (stale)     (source of truth)   (after the fact)    (manual)
The traditional SDLC treats each phase as a handoff. Requirements are "done" when design begins. Design is "done" when coding begins. Each phase produces artifacts that are consulted once and then abandoned.

AI-Native SDLC

Intent → Specification → Implementation Plan → AI Code Generation → Validation → Deployment
  ↑            ↑                   ↑                    │                │             │
  └────────────┴───────────────────┴───── feedback ─────┴────────────────┴─────────────┘
                                   (continuous loop)

The AI-native SDLC is a continuous loop. Specifications are never "done" — they evolve as the team learns. Implementation plans update when technology decisions change. Code is regenerated whenever the upstream artifacts change.

Key differences:

| Aspect | Traditional | AI-Native SDD |
|---|---|---|
| Source of truth | Code | Specification |
| Documentation | After the fact | Before and during |
| Testing | Written after code | Derived from spec, written before code |
| Maintenance | Edit code directly | Edit spec, regenerate code |
| Onboarding | Read the codebase | Read the specifications |
| AI role | Code autocomplete | Feature implementation from spec |
| Knowledge preservation | In developers' heads | In specification repository |
| Pivoting cost | High (rewrite code) | Low (revise spec, regenerate) |

What "Disposable Code" Really Means

The idea that code is disposable often provokes resistance. "But I spent months writing this code!" The resistance is understandable but misplaced.

Disposable doesn't mean worthless. It means regenerable. When your specifications are complete and precise, the code that implements them can be reproduced at any time. This has profound implications:

Technology Migrations Become Specification Updates

Traditional migration from Express.js to Fastify:

  1. Read every route handler and understand the business logic
  2. Rewrite each handler in the new framework
  3. Update middleware, error handling, testing infrastructure
  4. Test exhaustively
  5. Debug differences between old and new behavior
  6. Timeline: weeks to months

SDD migration from Express.js to Fastify:

  1. Update implementation plan: change "Framework: Express.js" to "Framework: Fastify"
  2. Regenerate code from specification
  3. Validate against acceptance criteria
  4. Timeline: hours to days

Refactoring Becomes Specification Refinement

Instead of restructuring code (which risks changing behavior), you restructure the specification (which defines behavior) and regenerate. The generated code is correct by construction — it implements the spec you wrote, using the architecture you chose.

Debugging Becomes Specification Analysis

When a bug appears, the question changes:

  • Traditional: "What's wrong with the code?"
  • SDD: "What's missing from the specification?"

Often, the "bug" is actually a missing specification — an edge case or business rule that was never explicitly defined.


Tutorial: Your First Power Inversion

In this tutorial, you'll practice the SDD workflow end-to-end for a small feature.

The Feature: User Profile API

Step 1: Write the Specification

Create a file called spec-user-profile.md:

# Feature: User Profile API

## Problem
Users need to view and update their profile information.
Currently there is no API endpoint for profile management.

## User Stories
- As an authenticated user, I can view my profile (name, email, bio, avatar URL)
- As an authenticated user, I can update my name and bio
- As an authenticated user, I cannot change my email through this endpoint (email changes require verification)
- As any user, I cannot access another user's profile through this endpoint

## Acceptance Criteria
- GET /users/me returns 200 with profile object
- PATCH /users/me accepts partial updates (name, bio only)
- PATCH /users/me with email field returns 400 with error "Email changes not permitted through this endpoint"
- All endpoints require valid JWT authentication
- Unauthenticated requests return 401
- Name must be 1-100 characters
- Bio must be 0-500 characters
- Response includes: id, name, email, bio, avatar_url, created_at, updated_at

## Non-Functional Requirements
- Response time < 100ms at p95
- Profile data cached for 5 minutes (invalidated on update)
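Before handing this spec to an AI assistant, it helps to see how directly the acceptance criteria translate into code. A minimal validation sketch for the PATCH body (function name and error strings follow the spec where it gives them; everything else is our own):

```python
def validate_profile_patch(patch: dict) -> list[str]:
    """Validate a PATCH /users/me body against the acceptance criteria:
    only name and bio may change; name 1-100 chars; bio 0-500 chars;
    email changes are rejected with the spec's exact error message."""
    errors = []
    if "email" in patch:
        errors.append("Email changes not permitted through this endpoint")
    unknown = set(patch) - {"name", "bio", "email"}
    if unknown:
        errors.append(f"Unknown fields: {sorted(unknown)}")
    if "name" in patch and not (1 <= len(patch["name"]) <= 100):
        errors.append("Name must be 1-100 characters")
    if "bio" in patch and len(patch["bio"]) > 500:
        errors.append("Bio must be 0-500 characters")
    return errors  # an empty list means the patch is acceptable
```

Every branch in this function maps to one line of the acceptance criteria — which is exactly the traceability the Step 5 validation prompt will check for.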

Step 2: Generate the Implementation Plan

Give your AI assistant this prompt:

"Here is a feature specification. Generate an implementation plan that includes: (1) technology decisions with rationale, (2) data model, (3) API contract with request/response schemas, (4) caching strategy, (5) test plan. Follow the specification exactly — do not add features not in the spec."

Step 3: Generate Tasks

From the implementation plan, derive a task list:

"Based on this implementation plan, generate an ordered task list. Each task should be atomic (completable in one step), testable (has a clear done condition), and traceable (links back to a specific requirement in the spec)."

Step 4: Generate Code

For each task, ask your AI to implement it:

"Implement task 1: [task description]. Follow the implementation plan exactly. Reference the specification for acceptance criteria. Write the test first, then the implementation."

Step 5: Validate Against Spec

After implementation, validate:

"Review the implemented code against the original specification. For each acceptance criterion, confirm whether it is met. List any acceptance criteria that are not yet satisfied."


Try With AI

Prompt 1: Practice the Inversion

"I want to practice the SDD power inversion. Give me a simple feature idea (something a typical web app would need). First, let me try to implement it with a vague one-sentence description. Then, help me write a proper specification for the same feature. Finally, implement it from the specification. We'll compare the two results to see the difference the specification makes."

Prompt 2: Trace Decisions to Spec

"Here is an implementation plan for a feature. For each technical decision in the plan, trace it back to a specific requirement in the specification. If any decision cannot be traced, flag it as an 'orphan decision' that needs specification support. Spec: [paste spec]. Plan: [paste plan]."

Prompt 3: The Regeneration Test

"I have a specification for a feature currently implemented in Python/Flask. Without changing the specification, regenerate the implementation in TypeScript/Express. The acceptance criteria and behavior should be identical — only the language and framework should change. This tests whether my specification is truly implementation-independent."


Practice Exercises

Exercise 1: Artifact Mapping

Take any feature in a project you've worked on. Create the four SDD artifacts (specification, implementation plan, tasks, code outline) retroactively. Note which information existed somewhere in the project vs. which was never documented.

Expected outcome: You will find that most projects have code and fragments of specifications, but rarely have explicit implementation plans or traceable task breakdowns.

Exercise 2: The Disposability Test

Choose a small module from an existing project. Write a complete specification for its current behavior (reverse-engineer the spec from the code). Then give the specification to an AI agent and ask it to reimplement the module from scratch without seeing the original code.

Compare: Does the regenerated code implement the same behavior? If not, what was missing from your specification?

Expected outcome: Gaps in your specification will become visible as behavioral differences between the original and regenerated code.

Exercise 3: Lifecycle Mapping

Draw two diagrams: (1) how a feature currently moves through your team's process, and (2) how it would move through an SDD process. Identify where time is spent in each, and where the bottlenecks are.

Expected outcome: You will typically find that traditional processes spend 20% of time on specification and 80% on implementation/debugging, while SDD targets 50% on specification and 50% on validation.


Key Takeaways

  1. The power inversion restructures the relationship between artifacts: specifications become the source of truth, and code becomes the generated output that serves the specification.

  2. Four core artifacts define an SDD system: Specification (what/why), Implementation Plan (how), Tasks (execution units), and Code (generated artifact).

  3. Code becomes disposable — not worthless, but regenerable. This transforms migrations, refactoring, and debugging from code-level activities into specification-level activities.

  4. The AI-native SDLC is a continuous loop, not a linear handoff. Specifications evolve, plans adapt, and code is regenerated as the team's understanding deepens.

  5. The bottleneck shifts from implementation speed to specification precision. The developers who specify best produce the best systems.


Chapter Quiz

  1. In SDD, what is the "source of truth"? How does this differ from traditional development?

  2. Name the four core artifacts in an SDD system and explain each one's purpose in a single sentence.

  3. What does "code is disposable" mean in practice? Give a concrete example.

  4. How does a technology migration differ between traditional development and SDD?

  5. Why is the AI-native SDLC described as a "continuous loop" rather than a linear process?

  6. A specification says "Users can upload profile photos (JPEG or PNG, max 5MB)." An AI agent adds WebP support. Is this correct behavior? Why or why not?

  7. What is an "orphan decision" in the context of implementation plans?

  8. How does the SDD approach change the role of debugging?