Chapter 24: Reusable Intelligence Patterns


Learning Objectives

By the end of this chapter, you will be able to:

  • Define Reusable Intelligence (RI) and explain why every SDD project produces two outputs
  • Distinguish and apply the five types of reusable intelligence: skills, subagents, ADRs, PHRs, and intelligence templates
  • Design intelligence using the P+Q+P pattern: Persona, Questions, Principles
  • Differentiate horizontal intelligence (cross-project) from vertical intelligence (domain-specific)
  • Extract reusable intelligence from a completed feature through a hands-on tutorial
  • Apply the intelligence maturity model: ad-hoc → documented → templated → automated
  • Create a skill, ADR, and PHR from real project work

What Is Reusable Intelligence?

Reusable Intelligence (RI) is the capture of patterns, decisions, and effective prompts so they accelerate future work. In traditional development, knowledge lives in people's heads or scattered documentation. In SDD with AI, knowledge must be externalized so that both humans and AI can apply it consistently.

Every Spec-Driven Development project produces two outputs:

  1. The product code — The feature, the API, the UI. What the user sees and uses.
  2. The reusable intelligence — The skills, ADRs, PHRs, and templates that make the next project faster.

The second output is often overlooked. Teams ship code and move on. But the patterns that worked—the prompts that produced good results, the architectural decisions that paid off, the workflows that reduced rework—are lost. Reusable Intelligence is the practice of capturing and reusing those patterns.

The Compounding Effect

Project 1: Build feature A. Learn patterns. Ship code. (No RI capture)
Project 2: Build feature B. Relearn patterns. Ship code. (No RI capture)
Project 3: Build feature C. Relearn again. Ship code.

vs.

Project 1: Build feature A. Extract skills, ADRs, PHRs. Ship code + RI.
Project 2: Build feature B. Use RI from Project 1. Extract new RI. Ship code + RI.
Project 3: Build feature C. Use RI from 1 and 2. Extract new RI. Ship code + RI.

With RI capture, each project builds on the last. Without it, each project starts from zero.


Types of Reusable Intelligence

Five primary types of RI serve different purposes at different scales.

1. Skills (SKILL.md)

Purpose: Encode 2–4 key decisions with human-guided execution.

When to use: Recurring workflows that benefit from structured guidance but don't need full autonomy.

Format: SKILL.md file with clear instructions, decision points, and examples.

Example: A "Create Cursor Rule" skill guides the user through creating a .cursor/rules file with the right structure. The human runs the skill; the AI follows the steps.

Location: agents/skills/ or $CODEX_HOME/skills/ (for cross-project reuse)

2. Subagents

Purpose: Encode 5+ decisions with autonomous execution.

When to use: Complex, multi-step workflows that can run with minimal human intervention.

Format: AGENTS.md entry or dedicated agent configuration with persona, capabilities, and constraints.

Example: A "Spec-to-Plan" subagent reads a spec and produces a full implementation plan. It makes many decisions (technology, data model, phases) autonomously.

Location: agents/subagents/ or referenced in AGENTS.md

3. Architectural Decision Records (ADRs)

Purpose: Capture why decisions were made.

When to use: Any significant architectural or design decision that future developers (or AI) need to understand.

Format: Markdown document with Context, Decision, Rationale, Consequences.

Example: "ADR 0002: Use WebSockets for real-time chat" — documents why WebSockets over Server-Sent Events or polling.

Location: memory/adr/

4. Prompt History Records (PHRs)

Purpose: Capture effective prompts that produced good results.

When to use: When a prompt consistently yields high-quality output and you want to reuse it.

Format: Markdown with the prompt, context, outcome, and when to use.

Example: "PHR-001: Generate OpenAPI from data model" — the exact prompt that produced a correct contract.

Location: memory/phr/ or memory/context/

5. Intelligence Templates

Purpose: Reusable specification and plan templates.

When to use: When features share structure (e.g., CRUD APIs, event-driven flows).

Format: Template files with placeholders and instructions.

Example: specs/templates/crud-api-spec-template.md — template for any CRUD feature with standard sections.

Location: specs/templates/, agents/templates/


The P+Q+P Pattern for Designing Intelligence

The P+Q+P pattern structures how you design any reusable intelligence artifact: skills, subagents, or agent instructions.

Persona: Who Is the Agent?

Define the agent's identity and expertise.

Bad: "You are helpful."

Good: "You are a backend architect specializing in REST APIs and PostgreSQL. You prefer simplicity over abstraction. You always consider security and performance."

Why it matters: Persona shapes tone, depth, and default assumptions. A "security specialist" agent will surface different concerns than a "rapid prototyping" agent.

Questions: What to Ask Before Acting

Define what the agent should clarify before proceeding.

Bad: (No questions — agent guesses)

Good:

  • "What is the expected request volume?"
  • "Is this endpoint authenticated?"
  • "What error format does the client expect?"

Why it matters: Questions prevent wrong assumptions. They force the human (or upstream agent) to provide context that would otherwise be guessed incorrectly.

Principles: Rules for Execution

Define the rules the agent must follow.

Bad: "Write good code."

Good:

  • "Validate all inputs at the boundary."
  • "No business logic in controllers."
  • "Every endpoint has an integration test."

Why it matters: Principles constrain output. They ensure consistency across sessions and prevent common mistakes.

P+Q+P in Practice

Example: API Design Skill

# API Design Skill

## Persona
You are an API architect who designs RESTful APIs for production systems.
You prioritize clarity, consistency, and backward compatibility.

## Questions (Ask Before Designing)
1. What is the primary consumer of this API? (web, mobile, internal service?)
2. What are the rate limits or scalability requirements?
3. Is pagination required? What are typical list sizes?
4. What error format does the client expect?

## Principles
1. Use nouns for resources, HTTP verbs for actions
2. Return 201 for creation, 204 for successful delete
3. Use consistent error format: { "error": { "code": "...", "message": "..." } }
4. Document all endpoints in OpenAPI 3.0
5. No breaking changes without versioning

Horizontal vs. Vertical Intelligence

Horizontal Intelligence

Horizontal intelligence applies across projects and domains. It is generic enough to be reused anywhere.

| Pattern | Applies To | Example |
|---|---|---|
| Testing | All projects | "Write unit tests with Arrange-Act-Assert" |
| Security | All projects | "Never log passwords; validate inputs" |
| API design | All REST APIs | "Use consistent error format" |
| Git workflow | All repos | "Commit messages: type(scope): description" |
| Spec structure | All SDD projects | "Spec must have AC for every FR" |

Location: Often in shared skill repositories, .cursor/rules that apply globally, or cross-project ADRs.

Vertical Intelligence

Vertical intelligence is domain-specific. It applies within a domain (fintech, healthcare, e-commerce) but not universally.

| Pattern | Domain | Example |
|---|---|---|
| PCI compliance | Fintech | "Never store raw card numbers" |
| HIPAA | Healthcare | "Audit all PHI access" |
| Cart flows | E-commerce | "Support guest checkout, merge on login" |
| Real-time chat | Collaboration | "Message ordering, delivery receipts" |

Location: Project-specific skills, domain ADRs, memory/ in domain projects.

When to Use Each

  • Horizontal: Create once, reuse everywhere. Invest in quality; it pays off across many projects.
  • Vertical: Create when starting a domain project. Refine as you build. Share within the domain team.

Intelligence Acceleration: How Skills Compound

Skills compound when they build on each other.

Level 1: Single Skill

You create a skill for "Write unit tests." Every time you need tests, you use it. Saves time.

Level 2: Skill Chain

You have:

  • Skill A: "Generate spec from description"
  • Skill B: "Generate plan from spec"
  • Skill C: "Generate tasks from plan"

Using A → B → C creates a pipeline. Each skill feeds the next.
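A chain like this can be sketched as ordinary function composition. The three step functions below are hypothetical stand-ins for what would really be prompts or agent invocations; only the pipeline shape is the point:

```python
# Toy sketch of the A -> B -> C skill chain. The step functions are
# hypothetical placeholders for real skill invocations.
def generate_spec(description):  # Skill A
    return f"spec({description})"

def generate_plan(spec):         # Skill B
    return f"plan({spec})"

def generate_tasks(plan):        # Skill C
    return f"tasks({plan})"

def pipeline(description, steps=(generate_spec, generate_plan, generate_tasks)):
    artifact = description
    for step in steps:           # each skill feeds the next
        artifact = step(artifact)
    return artifact

print(pipeline("user registration"))  # tasks(plan(spec(user registration)))
```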

Level 3: Skill + ADR + PHR

You have:

  • Skill: "Design REST endpoint"
  • ADR: "Why we use this error format"
  • PHR: "Prompt that generates correct OpenAPI from our conventions"

The skill tells you what to do. The ADR tells you why. The PHR gives you the exact prompt that works. Together, they produce consistent, high-quality output faster.

Level 4: Domain Stack

In a domain (e.g., fintech), you accumulate:

  • Skills: PCI-safe validation, audit logging, idempotency
  • ADRs: Why we use event sourcing, why we chose this payment provider
  • PHRs: Prompts for compliance checks, reconciliation reports
  • Templates: Spec template for payment flows, plan template for financial features

New features in the domain reuse the stack. Onboarding is faster. Consistency is higher.


Tutorial: Extract Reusable Intelligence from a Completed Feature

This tutorial walks you through extracting RI from a feature you've already built. We'll use "user registration" as the example.

Prerequisites

  • A completed feature (spec, plan, implementation)
  • Access to the project's memory/ and agents/ directories

Step 1: Identify Recurring Patterns

Review the feature end-to-end. Ask:

  1. What decisions did we make? (technology, structure, conventions)
  2. What prompts worked well? (exact prompts that produced good output)
  3. What would we do again? (patterns worth reusing)
  4. What would we do differently? (learnings for next time)

Example for user registration:

  • Decisions: JWT for auth, bcrypt for passwords, email validation with regex + DNS check
  • Prompts that worked: "Generate registration endpoint from this spec and contract"
  • Patterns: Validation at boundary, error format, test structure
  • Learnings: Should have added rate limiting earlier

Step 2: Create an ADR for the Architectural Decision

For each significant decision, create an ADR.

Example: memory/adr/0003-registration-auth-flow.md

# ADR 0003: User Registration and Authentication Flow

## Status
Accepted

## Context
We need user registration with secure password handling and session management.
Options: JWT vs session cookies, bcrypt vs Argon2, email verification timing.

## Decision
- JWT (RS256) for tokens: access 15 min, refresh 7 days
- bcrypt cost 12 for password hashing
- Email verification: send link, allow 24h to verify; user can log in with unverified email but with limited access
- Rate limiting: 5 failed attempts per IP per 15 min

## Rationale
- JWT: Stateless, works across services, team familiarity
- bcrypt: Battle-tested, cost 12 balances security and performance
- Email verification: Reduces spam; 24h window balances UX and security
- Rate limiting: Prevents brute force; 5 attempts is industry standard

## Consequences
- Must manage token refresh flow in client
- Must store refresh tokens (DB or Redis) for revocation
- Must implement rate limiting middleware
- Unverified users need "limited access" logic in authorization
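The rate-limiting decision above (5 failed attempts per IP per 15 minutes) can be sketched as a sliding-window check. This is a minimal in-memory illustration with hypothetical names; a production middleware would share state via Redis or similar:

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Sliding-window limiter per ADR 0003: 5 failed attempts per IP per 15 min.
    In-memory sketch only; real deployments need shared, persistent state."""

    def __init__(self, max_attempts=5, window_seconds=15 * 60):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.failures = defaultdict(deque)  # ip -> timestamps of failed attempts

    def record_failure(self, ip, now=None):
        self.failures[ip].append(now if now is not None else time.time())

    def is_blocked(self, ip, now=None):
        now = now if now is not None else time.time()
        attempts = self.failures[ip]
        while attempts and now - attempts[0] > self.window:
            attempts.popleft()               # drop attempts outside the window
        return len(attempts) >= self.max_attempts

limiter = LoginRateLimiter()
for _ in range(5):
    limiter.record_failure("203.0.113.7", now=1000.0)
print(limiter.is_blocked("203.0.113.7", now=1000.0))            # True
print(limiter.is_blocked("203.0.113.7", now=1000.0 + 16 * 60))  # False (window expired)
```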

Step 3: Create a Skill from the Pattern

Identify a workflow that recurs. Create a skill.

Example: agents/skills/user-registration-skill/SKILL.md

# User Registration Implementation Skill

## Persona
You are a backend developer implementing user registration for a production system.
You prioritize security, validation, and testability.

## Questions (Ask Before Implementing)
1. What fields are required? (email, password, name, etc.)
2. What validation rules? (password strength, email format)
3. Is email verification required? When?
4. What's the rate limiting policy?
5. What error format does the API use?

## Principles
1. Validate all inputs at the boundary (controller/service entry)
2. Hash passwords with bcrypt cost 12; never log or return passwords
3. Use parameterized queries; no SQL injection
4. Return 201 for success, 400 for validation errors, 409 for duplicate email
5. Write unit tests for validation, integration tests for full flow
6. Follow ADR 0003 for auth flow decisions

## Steps
1. Read spec and data model
2. Create User entity with required fields
3. Create migration
4. Implement RegistrationService with validation
5. Implement POST /register endpoint
6. Add rate limiting middleware
7. Write tests
8. Update API contract
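Principles 1 and 4 of the skill (validate at the boundary, return 400 with field-level errors) might look like the sketch below. The function name and simplified email regex are assumptions for illustration, not the skill's prescribed implementation:

```python
import re

# Simplified email check for illustration; not a full RFC 5322 validator.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_registration(payload):
    """Boundary validation sketch: (201, {}) on success,
    (400, field-level errors) on failure."""
    errors = {}
    email = payload.get("email", "")
    password = payload.get("password", "")
    if not EMAIL_RE.match(email):
        errors["email"] = "invalid format"
    if len(password) < 12:
        errors["password"] = "must be at least 12 characters"
    return (400, errors) if errors else (201, {})

print(validate_registration({"email": "bad", "password": "short"}))
# (400, {'email': 'invalid format', 'password': 'must be at least 12 characters'})
```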

Step 4: Save a PHR for the Most Effective Prompt

When a prompt produced excellent results, save it.

Example: memory/phr/phr-001-registration-endpoint.md

# PHR-001: Generate Registration Endpoint from Spec

## When to Use
When implementing a user registration endpoint and you have spec.md and contracts/register.yaml.

## Context Required
- spec.md (feature spec)
- contracts/register.yaml (API contract)
- data-model.md (User entity)
- memory/constitution.md (project conventions)

## Prompt

Implement the user registration endpoint per specs/001-user-registration/spec.md.

Requirements:

  • Follow the contract in contracts/register.yaml exactly
  • Use the User entity from data-model.md
  • Apply validation: email format (RFC 5322), password min 12 chars, 1 upper, 1 lower, 1 digit, 1 special
  • Hash password with bcrypt cost 12
  • Return 201 with user (no password) on success
  • Return 400 with field-level errors on validation failure
  • Return 409 if email already exists
  • Write unit test for validation, integration test for full flow

Reference memory/constitution.md for error format and coding standards.


## Outcome
- Produced correct endpoint matching contract
- Validation logic correct
- Tests passed on first run
- Error format consistent with project

## Variations
- For different validation rules: adjust the "Apply validation" line
- For different error format: reference project's api-conventions.md
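The password rule named in the PHR's prompt (min 12 chars, 1 upper, 1 lower, 1 digit, 1 special) can be sketched as a small predicate. The helper name is hypothetical:

```python
import re

def password_meets_policy(pw):
    """Check the PHR's stated policy: min 12 chars, at least one
    uppercase, one lowercase, one digit, and one special character."""
    return (len(pw) >= 12
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"\d", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

print(password_meets_policy("Tr0ub4dor&xyz"))  # True
print(password_meets_policy("alllowercase"))   # False (no upper, digit, special)
```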

Step 5: Update the Intelligence Maturity

Reflect on where your project sits:

| Level | State | Next Step |
|---|---|---|
| Ad-hoc | No capture | Document one ADR, one PHR |
| Documented | ADRs, PHRs exist | Create first skill |
| Templated | Skills, templates | Chain skills; create subagent |
| Automated | Subagents, pipelines | Refine; share across projects |

The Intelligence Maturity Model

Level 1: Ad-hoc

  • No systematic capture
  • Knowledge in people's heads
  • Each project starts fresh
  • High variability in output quality

Transition: Document one significant decision (ADR) and one effective prompt (PHR).

Level 2: Documented

  • ADRs for key decisions
  • PHRs for effective prompts
  • Some templates (spec, plan)
  • Still human-driven; AI uses docs as reference

Transition: Create first skill for a recurring workflow.

Level 3: Templated

  • Skills for common workflows
  • Templates for specs, plans, contracts
  • Consistent structure across features
  • AI follows templates; human guides

Transition: Chain skills; consider subagent for multi-step flows.

Level 4: Automated

  • Subagents for complex flows
  • Pipelines: specify → plan → tasks → implement
  • Minimal human intervention for routine work
  • Human focuses on review, exceptions, strategy

Transition: Refine based on feedback; share horizontal intelligence across projects.


Creating a Skill: Deep Dive

Skill Anatomy

A well-structured skill has:

  1. Title and description — What it does, when to use it
  2. Persona — Who the agent is (P+Q+P)
  3. Questions — What to ask before acting (P+Q+P)
  4. Principles — Rules for execution (P+Q+P)
  5. Steps — Ordered actions (optional but helpful)
  6. Examples — Input/output samples (optional)

Skill Template

# [Skill Name]

## Description
[One paragraph: what this skill does, when to use it]

## Persona
[Who the agent is; expertise; default stance]

## Questions (Ask Before Acting)
1. [Question 1]
2. [Question 2]
3. [Question 3]

## Principles
1. [Rule 1]
2. [Rule 2]
3. [Rule 3]

## Steps
1. [Step 1]
2. [Step 2]
3. [Step 3]

## Examples
### Input
[Example input]

### Output
[Example output]

Skill vs. Subagent: When to Use Which

| Criterion | Skill | Subagent |
|---|---|---|
| Decisions | 2–4 | 5+ |
| Human involvement | High (human runs it) | Low (autonomous) |
| Complexity | Single workflow | Multi-step pipeline |
| Reuse scope | Often project-specific | Can be cross-project |
| Format | SKILL.md | AGENTS.md + config |

Creating an ADR: Deep Dive

ADR Template

# ADR [number]: [Title]

## Status
[Proposed | Accepted | Deprecated | Superseded by ADR-XXX]

## Context
[What is the issue? What forces are at play?]

## Decision
[What did we decide?]

## Rationale
[Why did we decide this?]

## Consequences
[What are the implications? Positive and negative.]

When to Write an ADR

  • Technology choices (database, framework, auth)
  • Architectural patterns (layering, event sourcing)
  • API design decisions (REST vs GraphQL, error format)
  • Process decisions (branching strategy, deployment)
  • Security or compliance decisions

When NOT to Write an ADR

  • Trivial choices ("we use 4 spaces for indentation")
  • Temporary decisions
  • Decisions that are fully documented elsewhere

Creating a PHR: Deep Dive

PHR Template

# PHR-[number]: [Short Title]

## When to Use
[Under what conditions does this prompt work well?]

## Context Required
[What files, specs, or info must be loaded?]

## Prompt
[The exact prompt—copy-paste ready]

## Outcome
[What did it produce? Quality? Any fixes needed?]

## Variations
[How to adapt for similar but different cases]

What Makes a Good PHR

  • Specific: The prompt is exact, not vague
  • Context-aware: Lists what must be loaded
  • Reproducible: Someone else can get similar results
  • Honest: Notes limitations and when it didn't work

Horizontal Intelligence: Cross-Project Skills

Some skills are valuable across all your projects. Consider creating a shared skills repository.

Example: Testing Skill (Horizontal)

# Unit Test Generation Skill

## Persona
You are a test engineer who writes clear, maintainable unit tests.
You use Arrange-Act-Assert. You mock external dependencies.

## Questions
1. What testing framework? (Jest, pytest, etc.)
2. What's the expected behavior? (from spec or docstring)
3. What are the edge cases?
4. What should be mocked?

## Principles
1. One assertion focus per test (or one logical unit)
2. Test behavior, not implementation
3. Descriptive test names: "should return 400 when email is invalid"
4. No flaky tests (no randomness, no time dependence unless tested)
5. Fast tests (no real DB, no real network)

This skill applies to any project. Put it in a shared location (e.g., $CODEX_HOME/skills/testing/).
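A test produced by this skill might look like the following. The function under test and its name are invented for illustration; the point is the Arrange-Act-Assert structure and the descriptive test name from principle 3:

```python
def is_valid_email(value):
    # Toy function under test, assumed for illustration only.
    return "@" in value and "." in value.split("@")[-1]

def test_should_reject_email_without_domain_dot():
    # Arrange
    email = "user@localhost"
    # Act
    result = is_valid_email(email)
    # Assert
    assert result is False

test_should_reject_email_without_domain_dot()
print("ok")
```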

Example: API Error Format (Horizontal)

# Consistent API Error Format

## Standard Format
{
  "error": {
    "code": "ERR_XXX",
    "message": "Human-readable message",
    "details": {}  // optional, for validation errors
  }
}

## HTTP Status Mapping
- 400: Validation error (ERR_VALIDATION)
- 401: Unauthorized (ERR_UNAUTHORIZED)
- 403: Forbidden (ERR_FORBIDDEN)
- 404: Not found (ERR_NOT_FOUND)
- 409: Conflict (ERR_CONFLICT)
- 500: Server error (ERR_INTERNAL)

This is a principle that can be referenced in any API project.
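As a sketch of how a service might apply this principle, the mapping and envelope can live in one small helper. Names are hypothetical; any framework-specific wiring is omitted:

```python
# Status-to-code mapping from the skill above.
STATUS_TO_ERROR_CODE = {
    400: "ERR_VALIDATION",
    401: "ERR_UNAUTHORIZED",
    403: "ERR_FORBIDDEN",
    404: "ERR_NOT_FOUND",
    409: "ERR_CONFLICT",
    500: "ERR_INTERNAL",
}

def error_response(status, message, details=None):
    """Build the standard error envelope for a given HTTP status."""
    body = {"error": {"code": STATUS_TO_ERROR_CODE.get(status, "ERR_INTERNAL"),
                      "message": message}}
    if details is not None:
        body["error"]["details"] = details
    return status, body

print(error_response(409, "email already exists"))
# (409, {'error': {'code': 'ERR_CONFLICT', 'message': 'email already exists'}})
```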


Vertical Intelligence: Domain-Specific Patterns

Example: Fintech — Payment Idempotency

# Payment Idempotency Skill

## Persona
You implement payment flows for a fintech application.
You prioritize correctness, auditability, and idempotency.

## Questions
1. What idempotency key does the client send? (header? body?)
2. What's the idempotency window? (24h? 7 days?)
3. What payment provider? (Stripe, Adyen, etc.)
4. What's the reconciliation process?

## Principles
1. Same idempotency key + same params = same result (no double charge)
2. Store idempotency key with result; return cached result on replay
3. Log all payment attempts (success and failure) for audit
4. Never expose raw card data; use tokens
5. Follow PCI DSS: no card data in logs, memory, or errors
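Principles 1 and 2 (same key + same params = same result; return the cached result on replay) can be sketched with an in-memory store. Class and method names are hypothetical; a real implementation would persist keys with a TTL matching the idempotency window:

```python
import hashlib
import json

class IdempotentCharger:
    """In-memory sketch of idempotent charging: replaying the same
    idempotency key with the same params returns the stored result."""

    def __init__(self, charge_fn):
        self.charge_fn = charge_fn   # calls the payment provider
        self.results = {}            # (key, params_hash) -> stored result

    def charge(self, idempotency_key, params):
        params_hash = hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()).hexdigest()
        cache_key = (idempotency_key, params_hash)
        if cache_key in self.results:     # replay: return cached result
            return self.results[cache_key]
        result = self.charge_fn(params)   # first attempt: charge exactly once
        self.results[cache_key] = result
        return result

calls = []
charger = IdempotentCharger(lambda p: calls.append(p) or {"status": "charged", **p})
charger.charge("key-1", {"amount": 500})
charger.charge("key-1", {"amount": 500})  # replayed; provider not called again
print(len(calls))  # 1
```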

Example: Healthcare — PHI Handling

# PHI Access Skill

## Persona
You implement features that touch Protected Health Information (PHI).
You prioritize HIPAA compliance and minimal access.

## Questions
1. What PHI is involved? (name, DOB, diagnosis, etc.)
2. Who needs access? (role-based?)
3. What's the audit requirement?
4. Is data at rest encrypted?

## Principles
1. Access only what's needed for the operation
2. Log all PHI access: who, when, what
3. No PHI in logs (mask or hash)
4. Use parameterized queries; no PHI in URLs
5. Session timeout for inactive users
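Principle 3 (no PHI in logs; mask or hash) could be applied by scrubbing log records before they are emitted. The field list below is an assumption for illustration; a real system would derive it from its data classification:

```python
# Hypothetical PHI field list; in practice this comes from data classification.
PHI_FIELDS = {"name", "dob", "ssn", "diagnosis"}

def mask_phi(record):
    """Return a copy of a log record with PHI fields masked."""
    return {k: ("***" if k in PHI_FIELDS else v) for k, v in record.items()}

print(mask_phi({"user_id": 42, "name": "Ada", "diagnosis": "flu"}))
# {'user_id': 42, 'name': '***', 'diagnosis': '***'}
```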

Try With AI

Prompt 1: Extract ADR from Discussion

"We just decided to use WebSockets for real-time updates instead of polling. The reasons were: lower latency, less server load, better UX. Our alternative was Server-Sent Events. Help me write an ADR (memory/adr/000X-websockets.md) capturing this decision. Use the standard ADR format."

Prompt 2: Create Skill from Workflow

"I repeatedly do this workflow: (1) read a spec, (2) generate an OpenAPI contract, (3) implement the endpoint. Create a skill (SKILL.md) that guides an AI through this. Use the P+Q+P pattern. Include the questions I should answer before we start."

Prompt 3: Save Effective Prompt as PHR

"This prompt worked really well: [paste your prompt]. The output was [describe]. Create a PHR (memory/phr/phr-XXX.md) with: when to use, context required, the exact prompt, outcome, and one variation for a similar case."

Prompt 4: Intelligence Maturity Assessment

"Review my project's memory/ and agents/ directories. Assess where we are on the intelligence maturity model (ad-hoc, documented, templated, automated). List 3 concrete steps to move to the next level. Prioritize by impact."


Practice Exercises

Exercise 1: Extract RI from a Completed Feature

Choose a feature you've built (or use the user registration example). Extract: (1) one ADR for a significant decision, (2) one PHR for an effective prompt, (3) one skill for a recurring pattern. Write all three artifacts. Reflect: What was easy? What was hard? What would you do differently next time?

Expected outcome: Three RI artifacts (ADR, PHR, skill) plus a brief reflection.

Exercise 2: Design a Horizontal Skill

Create a skill that applies to any project. Examples: "Write integration test," "Design REST endpoint," "Create database migration." Use the P+Q+P pattern. Include at least 3 questions and 5 principles. Test it: Use the skill with AI on a small task. Did it improve output?

Expected outcome: A horizontal skill (SKILL.md) and a short test report.

Exercise 3: Intelligence Maturity Roadmap

For your current project (or a hypothetical one), create a 4-step roadmap to move from your current maturity level to "automated." For each step: (1) what you'll create, (2) who will use it, (3) how you'll measure success. Be specific.

Expected outcome: A one-page roadmap with 4 concrete steps.


Key Takeaways

  1. Reusable Intelligence (RI) is the capture of patterns, decisions, and effective prompts so they accelerate future work. Every SDD project produces two outputs: product code and reusable intelligence.

  2. Five types of RI: Skills (2–4 decisions, human-guided), subagents (5+ decisions, autonomous), ADRs (why), PHRs (effective prompts), intelligence templates (reusable structure).

  3. P+Q+P pattern: Persona (who the agent is), Questions (what to ask before acting), Principles (rules for execution). Use it to design any RI artifact.

  4. Horizontal vs. vertical: Horizontal intelligence applies across projects (testing, security, API design). Vertical intelligence is domain-specific (fintech, healthcare, e-commerce). Both compound over time.

  5. Intelligence maturity model: Ad-hoc → documented (ADRs, PHRs) → templated (skills, templates) → automated (subagents, pipelines). Each level builds on the previous.

  6. Extraction process: Identify patterns → create ADR for decisions → create skill for workflow → save PHR for effective prompts. Start with one of each; iterate.


Chapter Quiz

  1. What are the two outputs of every SDD project? Why does the second one matter?

  2. What is the difference between a skill and a subagent? When would you use each?

  3. What does the P+Q+P pattern stand for? Give an example of each component for an "API design" skill.

  4. What is horizontal intelligence? What is vertical intelligence? Give one example of each.

  5. What are the four levels of the intelligence maturity model? What characterizes each?

  6. What should an ADR contain? When should you write one vs. when should you skip it?

  7. What makes a good PHR (Prompt History Record)? What sections should it have?

  8. How do skills compound? Describe the progression from single skill to domain stack.