Part IX: Validation Systems

How to Verify AI-Generated Code Automatically and Close the Loop Between Specification and Implementation

AI-generated code must be verified automatically. Validation is what closes the loop between specification and implementation. Without it, Spec-Driven Development is just hope—you specify, AI generates, and you pray it works. This part teaches three testing strategies that derive directly from specifications and turn SDD into a closed, verifiable system.

Spec-Driven Development rests on a simple promise: specifications are the source of truth. But a specification without verification is merely a wish list. Validation is the mechanism that proves the implementation satisfies the spec. It answers the question: "Did we build what we said we would build?"

This part answers three critical questions:

  1. How do tests derive from specifications? — Tests should flow from specs, not from implementation. When you change the spec, you know exactly which tests to update. When tests fail, you know which requirement is violated.

  2. How do we validate API contracts? — Microservices, APIs, and integrations require formal contracts. Contract testing ensures providers and consumers stay compatible without expensive integration environments.

  3. How do we test invariants, not just examples? — Example-based tests verify specific inputs. Property-based tests verify that any valid input satisfies the specification's invariants—catching edge cases that humans never think to write.

What You Will Learn

Chapter 26: Spec-Driven Testing

You will learn the fundamental principle: tests derive from specifications, not from implementation. You will explore four test types in SDD—unit, integration, contract, and end-to-end—and how each maps to specification elements: acceptance criteria → integration tests, edge cases → unit tests, API contracts → contract tests, user journeys → e2e tests. You will apply test-first in SDD (Article III of the constitution): write tests before implementation. A hands-on tutorial walks you through generating a complete test suite from a feature specification: parsing the spec for testable criteria, then writing contract tests from the API spec, integration tests from acceptance criteria, and e2e tests from user journeys. You will run tests in the Red phase (expect failures), implement until Green, and establish traceability so every test links to a specification requirement. You will measure test coverage as specification coverage, not code coverage. Tools covered: Vitest, Playwright, Jest, Pytest, Cucumber.
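The acceptance-criterion-to-test mapping and the traceability link can be sketched as follows. This is a minimal illustration, not code from the book: the requirement ID `REQ-BM-003`, the `BookmarkStore` class, and the criterion text are all hypothetical, and embedding the spec ID in the test name and docstring is one common traceability convention.

```python
# Hypothetical acceptance criterion from a feature spec:
#   REQ-BM-003: "Deleting a bookmark removes it from the user's list."
# The spec ID in the test name and docstring links the test back to the
# requirement, so a failing test names the violated requirement directly.

class BookmarkStore:
    """Minimal in-memory stand-in for the system under test."""
    def __init__(self):
        self._bookmarks = {}

    def add(self, user_id, url):
        self._bookmarks.setdefault(user_id, []).append(url)

    def delete(self, user_id, url):
        self._bookmarks.get(user_id, []).remove(url)

    def list(self, user_id):
        return list(self._bookmarks.get(user_id, []))


def test_req_bm_003_delete_removes_bookmark():
    """REQ-BM-003: deleting a bookmark removes it from the user's list."""
    store = BookmarkStore()
    store.add("u1", "https://example.com")
    store.delete("u1", "https://example.com")
    assert "https://example.com" not in store.list("u1")


test_req_bm_003_delete_removes_bookmark()
print("REQ-BM-003 verified")
```

In the Red phase this test is written (and fails) before `BookmarkStore.delete` exists; implementation proceeds until it passes, and when REQ-BM-003 changes in the spec, the ID tells you exactly which test to update.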

Chapter 27: Contract Testing

You will learn what contract testing is: validating API behavior against a formal specification. You will distinguish consumer-driven contracts (consumers define expectations) from provider-driven contracts (providers publish schemas). You will master OpenAPI contract testing: validating responses against schema, detecting breaking changes, and generating mock servers from contracts. Tools covered: Pact, Specmatic, Prism, Dredd. You will apply contract testing in microservices to ensure service compatibility. A tutorial guides you through setting up contract testing for a running project: writing an OpenAPI specification, generating contract tests, running against a live API, detecting a breaking change, and fixing the violation. You will integrate contract testing into CI/CD pipelines and understand contract evolution: versioning and backwards compatibility. You will learn the resolution workflow when contracts break.

Chapter 28: Property-Based Testing

You will learn what property-based testing is: testing invariants instead of examples. You will contrast properties ("user IDs are always unique") with examples ("user 1 has ID abc"). You will identify when to use property-based testing in SDD: data model invariants, API idempotency, state machine transitions, mathematical properties. You will derive properties from specifications: constraints ("bookmarks are always user-scoped") → property test; NFRs ("response time < 200ms for any valid input") → property test; data models ("email format is always valid") → property test. Tools covered: fast-check (JS/TS), Hypothesis (Python), QuickCheck (Haskell). A tutorial walks you through writing property-based tests for a running project: identifying invariants from specs, writing generators for test data, writing property assertions, and analyzing shrinking on failures. You will combine property-based tests with spec-driven tests and learn common properties: roundtrip, idempotency, commutativity, invariant preservation. You will see when property-based testing catches bugs that example tests miss.
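The roundtrip property named above can be sketched with nothing but the standard library: generate many random inputs and assert the invariant holds for each. The `encode`/`decode` pair is hypothetical, and real frameworks like Hypothesis and fast-check add smarter generators and shrinking (automatically minimizing a failing input), but the core loop is the same.

```python
import random
import string

def encode(s: str) -> str:
    """Hypothetical serializer under test: string → hex."""
    return s.encode("utf-8").hex()

def decode(h: str) -> str:
    """Hypothetical deserializer: hex → string."""
    return bytes.fromhex(h).decode("utf-8")

def random_string(rng: random.Random) -> str:
    """Crude generator; real frameworks provide richer ones."""
    length = rng.randint(0, 50)
    return "".join(rng.choice(string.printable) for _ in range(length))

rng = random.Random(0)  # seeded so failures are reproducible
for _ in range(200):
    s = random_string(rng)
    # Roundtrip property: decode(encode(x)) == x for ANY valid input,
    # not just the hand-picked examples a unit test would cover.
    assert decode(encode(s)) == s

print("roundtrip property held for 200 random inputs")
```

An example-based test would check a handful of friendly strings; the random generator routinely produces inputs with quotes, newlines, and control characters that a human author would never think to write down.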

The Connection

The three chapters form a validation progression:

  1. Chapter 26 establishes how tests derive from specifications—the mapping from spec elements to test types and the test-first workflow that closes the SDD loop.

  2. Chapter 27 establishes how to validate interfaces—contract testing for APIs and microservices, ensuring compatibility without full integration environments.

  3. Chapter 28 establishes how to test invariants—property-based testing that verifies specification properties hold for any valid input, not just hand-picked examples.

Together, they transform SDD from a specification-to-code pipeline into a verified pipeline: specifications generate tests, tests validate implementation, and the loop closes. AI-generated code is no longer trusted on faith—it is proven against the spec.


Next: Chapter 26 — Spec-Driven Testing