# Appendix A: Spec Templates

This appendix provides complete, copy-pasteable templates for Spec-Driven Development. Each template is a ready-to-use Markdown document with placeholders in [BRACKETS] indicating what to fill in. Use these as starting points for your specifications, plans, and governance documents.


---

## 1. Feature Specification Template (feature-spec.md)

**Location**: `specs/[branch-name]/spec.md`

# Feature: [FEATURE_NAME]

## Metadata
- **Feature ID**: [FEATURE_NUMBER]
- **Branch**: [BRANCH_NAME]
- **Created**: [DATE]
- **Status**: [Draft | In Review | Approved]
- **Owner**: [NAME_OR_TEAM]

---

## Problem Statement

[1-3 sentences describing the problem this feature solves. Focus on WHO experiences the pain and WHAT the impact is of not solving it.]

- **Who**: [User role or persona experiencing the problem]
- **Pain**: [What they experience — be specific]
- **Impact**: [Consequence of not solving — support costs, churn, security risk, etc.]

**Anti-pattern check**: Ensure this describes the PROBLEM, not the solution. "We need X" is a solution; "Users cannot Y without Z" is a problem.

---

## User Stories

### US-1: [Story Title]
**As a** [user role], **I want** [capability], **so that** [benefit].

**Acceptance Criteria**:
- **AC-1.1**: Given [precondition], When [action], Then [expected result]
- **AC-1.2**: Given [precondition], When [action], Then [expected result]

### US-2: [Story Title]
**As a** [user role], **I want** [capability], **so that** [benefit].

**Acceptance Criteria**:
- **AC-2.1**: Given [precondition], When [action], Then [expected result]

[Add more user stories as needed. Each must have testable acceptance criteria.]

---

## Functional Requirements

### FR-1: [Capability Name]
[Precise description of the capability. Include validation rules, data flow, and behavior. Avoid "appropriate" or "properly" — be specific.]

### FR-2: [Capability Name]
[Description]

### FR-3: [Capability Name]
[Description]

[Continue for all capabilities. Each FR should be implementable without guessing.]

---

## Non-Functional Requirements

### NFR-1: [Attribute] — [Measurable Target]
[Description. Include specific numbers: latency in ms, throughput, availability %.]

### NFR-2: [Attribute] — [Measurable Target]
[Description]

### NFR-3: [Attribute] — [Measurable Target]
[Description]

**Common NFR categories**: Latency, Throughput, Availability, Security (token entropy, encryption), Scalability, Bundle size.

---

## Edge Cases

| ID | Condition | Expected Behavior |
|----|-----------|-------------------|
| EC-1 | [Boundary or error condition] | [Exact behavior] |
| EC-2 | [Empty input, invalid format, etc.] | [Exact behavior] |
| EC-3 | [Concurrent access, race condition] | [Exact behavior] |
| EC-4 | [Token/link expiry, rate limit] | [Exact behavior] |
| EC-5 | [Double-submit, back button] | [Exact behavior] |

[Add all boundary conditions, error paths, and unusual inputs. AI tends to implement happy paths well; edge cases are where bugs hide.]

---

## Constraints

- **C-1**: [What the system must NOT do — hard limit]
- **C-2**: [Inviolable rule]
- **C-3**: [Scope boundary]
- **C-4**: [Technical constraint]
- **C-5**: [Compliance or policy constraint]

[Constraints prevent scope creep and enforce architectural boundaries. Be explicit about what is out of scope.]

---

## Non-Goals

- [Explicitly NOT building this — prevents AI from adding "related" features]
- [Feature or capability we are deliberately excluding]
- [Future consideration we are deferring]

---

## Dependencies

### Internal
- **[Service/Module Name]**: [What is required — e.g., `getUserByEmail(email)`, `invalidateSession(userId)`]
- **[Service/Module Name]**: [What is required]

### External
- **[External Service]**: [What is required — API, SLA, format]
- **[Third-Party System]**: [What is required]

---

## Observability Requirements

### Logs
- **[Event name]**: [What to log — include fields, exclude PII. Use hashed identifiers for sensitive data.]
- **[Event name]**: [What to log]

### Metrics
- `[metric_name]_total`: [Description — counter]
- `[metric_name]_seconds`: [Description — histogram]
- `[metric_name]_rate`: [Description — gauge]

### Alerts
- **[Condition]**: [Action — e.g., "If >50% rate-limited in 5 min, page on-call"]
- **[Condition]**: [Action]

---

## Security Requirements

- **SEC-1**: [Authentication requirement]
- **SEC-2**: [Authorization requirement]
- **SEC-3**: [Data protection — encryption, hashing]
- **SEC-4**: [Token handling — entropy, expiry, storage]
- **SEC-5**: [Logging — no sensitive data in logs]
- **SEC-6**: [Audit — what to record]

---

## Completeness Checklist

Before implementation, verify:

- [ ] No `[NEEDS CLARIFICATION]` markers remain
- [ ] Problem statement describes WHY, not WHAT
- [ ] Every user story has testable acceptance criteria
- [ ] Functional requirements are precise (no "appropriate," "properly")
- [ ] Non-functional requirements are measurable
- [ ] Edge cases cover boundaries, errors, empty inputs
- [ ] Constraints explicitly state what NOT to do
- [ ] Non-goals listed to prevent scope creep
- [ ] Dependencies listed (internal and external)
- [ ] Observability: logs, metrics, alerts defined
- [ ] Security requirements cover auth, data, audit

---

## 2. Implementation Plan Template (plan.md)

**Location**: `specs/[branch-name]/plan.md`

# Implementation Plan: [FEATURE_NAME]

## Metadata
- **Feature**: [FEATURE_NUMBER]
- **Branch**: [BRANCH_NAME]
- **Created**: [DATE]
- **Spec**: spec.md
- **Constitution**: [PATH_TO_CONSTITUTION or N/A]

---

## Technology Decisions

### TD-001: [Technology Choice]
- **Choice**: [Specific technology — e.g., Socket.io, PostgreSQL, Vitest]
- **Alternatives Considered**: [Option A, Option B]
- **Rationale**: [Why this choice — performance, consistency, team standard]
- **Constitution Alignment**: [Library-First / Simplicity / etc. — how it satisfies]

### TD-002: [Technology Choice]
- **Choice**: [Technology]
- **Alternatives Considered**: [Alternatives]
- **Rationale**: [Rationale]
- **Constitution Alignment**: [Alignment]

[Every significant technology choice must have documented rationale.]

---

## Data Model

### [Entity Name]
| Field | Type | Constraints |
|-------|------|--------------|
| id | [UUID \| string \| int] | PK, [generated \| required] |
| [field_name] | [type] | [required \| optional, max N chars, FK → Entity] |
| created_at | timestamp | required |
| updated_at | timestamp | required |

### [Entity Name]
| Field | Type | Constraints |
|-------|------|-------------|
| [field_name] | [type] | [constraints] |

[Define all entities with fields, types, and constraints.]

**Relationships**:
- [Entity A] 1 — N [Entity B]
- [Entity B] N — 1 [Entity C]

**Indexes**:
- [index_name]: [columns] — [purpose]

---

## API Contracts

**Reference**: `contracts/[api-name].yaml` (OpenAPI) or `contracts/events.yaml` (AsyncAPI)

### Endpoints Summary
| Method | Path | Purpose |
|--------|------|---------|
| POST | [path] | [Purpose] |
| GET | [path] | [Purpose] |
| PUT | [path] | [Purpose] |
| DELETE | [path] | [Purpose] |

### Request/Response Schemas
[Reference schemas from contracts/ or define key shapes]

### Error Responses
| Code | Condition | Response Shape |
|------|-----------|----------------|
| 400 | Validation error | `{ errors: [{ field, message }] }` |
| 401 | Unauthenticated | `{ error: "Unauthorized" }` |
| 403 | Forbidden | `{ error: "Forbidden" }` |
| 404 | Not found | `{ error: "Not found" }` |
| 429 | Rate limited | `{ error: "Rate limited", retryAfter: N }` |
| 500 | Server error | `{ error: "Internal error" }` |

---

## Phase Gates

### Simplicity Gate (Article VII)
- [ ] Using ≤3 projects?
- [ ] No future-proofing or speculative features?
- [ ] YAGNI — no "might need" additions?

### Anti-Abstraction Gate (Article VIII)
- [ ] Using framework features directly (no unnecessary wrappers)?
- [ ] Single model representation (no DTO + Entity + API model duplication without cause)?
- [ ] No "might switch X someday" abstractions?

### Integration-First Gate (Article IX)
- [ ] Contracts defined before implementation?
- [ ] Contract tests written?
- [ ] Integration tests use real DB (e.g., Testcontainers)?
- [ ] E2E tests for critical paths?

**Complexity Tracking**: [If any gate fails, document justification and approval here. Otherwise: "All gates passed."]

---

## File Creation Order

1. `contracts/[api-name].yaml` — API contract
2. `contracts/events.yaml` — [If async events]
3. `data-model.md` — [Refined from plan]
4. `tests/contract/[api]-contract.test.[ts|js|py]` — Contract tests
5. `src/entities/[entity].ts` — Entity definitions
6. `src/repositories/[entity]-repository.ts` — Data access
7. `src/services/[feature]-service.ts` — Business logic
8. `src/api/[feature]-routes.ts` — API routes
9. `tests/integration/[feature]-flow.test.[ts|js|py]` — Integration tests
10. `tests/e2e/[feature]-e2e.test.[ts|js|py]` — E2E tests

[Adjust order for your stack. Dependencies flow: contracts → tests → source.]

---

## Test Strategy

| Test Type | Scope | Tools | When to Run |
|-----------|-------|-------|-------------|
| Contract | API shape matches OpenAPI | [Pact \| Dredd \| custom] | Before implementation |
| Integration | Real DB, real services | [Vitest \| Jest \| Pytest] + Testcontainers | Every commit |
| E2E | Full user flow | [Playwright \| Cypress] | PR merge |
| Property-based | Invariants, edge cases | [fast-check \| Hypothesis] | CI |
| Performance | Latency, throughput | [k6 \| Artillery] | Release gate |

---

## Complexity Tracking

[Document any phase gate exceptions or justified complexity here. Include: violation, rationale, approved by, date.]

| Item | Violation | Justification | Approved |
|------|-----------|---------------|----------|
| [If none, leave empty] | | | |

---

## 3. Constraint Document Template (constraints.md)

**Location**: `specs/global/constraints.md` or `constraints/[domain]-constraints.md`

# Constraints: [DOMAIN_OR_PROJECT]

## Metadata
- **Scope**: [Project-wide | Feature-specific | Domain]
- **Effective**: [DATE]
- **Review**: [DATE or "On change"]

---

## Architecture Constraints

### AC-1: [Constraint Name]
[Description of the structural rule. E.g., "Controllers must not contain business logic."]

**Rationale**: [Why this constraint exists]
**Enforcement**: [How to verify — code review, linter, architecture tests]

### AC-2: [Layered Architecture]
[Description — e.g., "API → Service → Repository. No cross-layer skipping."]

### AC-3: [Dependency Direction]
[Description — e.g., "Domain must not depend on infrastructure."]

### AC-4: [Project/Module Limits]
[Description — e.g., "Maximum 3 projects for initial implementation."]

---

## Security Constraints

### SC-1: [Authentication]
[Rule — e.g., "All API endpoints except health check require authentication."]

### SC-2: [Authorization]
[Rule — e.g., "Resource access must check ownership or role."]

### SC-3: [Data Protection]
[Rule — e.g., "Passwords hashed with bcrypt cost 12. Tokens hashed before storage."]

### SC-4: [Sensitive Data in Logs]
[Rule — e.g., "No PII, passwords, or tokens in logs. Use hashed identifiers."]

### SC-5: [Input Validation]
[Rule — e.g., "All user input validated and sanitized. Use parameterized queries."]

### SC-6: [HTTPS]
[Rule — e.g., "All external endpoints HTTPS only."]

---

## Performance Constraints

### PC-1: [Latency]
[Rule — e.g., "p95 API latency < 200ms for read endpoints."]

### PC-2: [Throughput]
[Rule — e.g., "Support N concurrent users per instance."]

### PC-3: [Database]
[Rule — e.g., "No N+1 queries. Use batch loading."]

### PC-4: [Bundle Size]
[Rule — e.g., "Initial JS bundle < 200KB gzipped."]

### PC-5: [Caching]
[Rule — e.g., "Cache immutable data. TTL documented."]

---

## Data Constraints

### DC-1: [Data Model]
[Rule — e.g., "Soft delete only. No hard deletes of user data."]

### DC-2: [Retention]
[Rule — e.g., "Audit logs retained 90 days."]

### DC-3: [Consistency]
[Rule — e.g., "Eventual consistency acceptable for [X]. Strong consistency for [Y]."]

### DC-4: [PII Handling]
[Rule — e.g., "PII encrypted at rest. Access logged."]

---

## Compliance Constraints

### CC-1: [Regulation]
[Rule — e.g., "GDPR: User data export and deletion within 30 days."]

### CC-2: [Audit]
[Rule — e.g., "All sensitive operations logged with userId, timestamp, action."]

### CC-3: [Data Residency]
[Rule — e.g., "EU user data stored in EU region only."]

---

## Constraint Violation Process

1. **Detection**: [How violations are found — review, tooling]
2. **Exception**: Document in [COMPLEXITY.md | ADR] with rationale and approval
3. **Review**: Exceptions reviewed [quarterly | per-release]

---

## 4. Test Plan Template (test-plan.md)

**Location**: `specs/[branch-name]/test-plan.md` or `tests/TEST-PLAN.md`

# Test Plan: [FEATURE_NAME]

## Metadata
- **Feature**: [FEATURE_NUMBER]
- **Created**: [DATE]
- **Spec**: spec.md
- **Plan**: plan.md

---

## Test Strategy Summary

| Test Type | Purpose | Tools | Coverage Target |
|-----------|---------|-------|-----------------|
| Contract | API shape matches spec | [Tool] | 100% endpoints |
| Integration | Real components work together | [Tool] | Critical paths |
| E2E | User flows work end-to-end | [Tool] | Happy path + key errors |
| Property-based | Invariants hold for all inputs | [Tool] | Core logic |
| Performance | Meets NFRs | [Tool] | Latency, throughput |

---

## Contract Tests

**Purpose**: Validate API shape matches OpenAPI/AsyncAPI before implementation.

### Scope
- [ ] All endpoints defined in contracts/
- [ ] Request schema validation
- [ ] Response schema validation
- [ ] Error response shapes (400, 401, 403, 404, 429, 500)

### Implementation

- **Tool**: [Pact | Dredd | Prism | custom OpenAPI validator]
- **Command**: [e.g., `npx dredd contracts/api.yaml http://localhost:3000`]

### Test Cases
| Endpoint | Method | Scenario | Expected |
|----------|--------|----------|----------|
| [path] | POST | Valid request | 201, schema match |
| [path] | POST | Invalid body | 400, error shape |
| [path] | GET | Unauthenticated | 401 |

---

## Integration Tests

**Purpose**: Verify components work together with real DB, real services.

### Scope
- [ ] Service + Repository + DB
- [ ] API routes + Service
- [ ] External service integration (mocked or test double)

### Setup
- [ ] Testcontainers for [PostgreSQL | MySQL | etc.]
- [ ] Seed data: [describe]
- [ ] Cleanup: [per-test | per-suite]

### Test Cases
| ID | Scenario | Precondition | Action | Expected |
|----|----------|--------------|--------|----------|
| IT-1 | [Scenario] | [State] | [Action] | [Result] |
| IT-2 | [Scenario] | [State] | [Action] | [Result] |

### Mapping to Acceptance Criteria
- IT-1 → AC-1.1, AC-1.2
- IT-2 → AC-2.1

---

## E2E Tests

**Purpose**: Validate full user flows through real UI/API.

### Scope
- [ ] Happy path: [describe]
- [ ] Key error paths: [describe]
- [ ] Critical user journeys from spec

### Environment
- [ ] Staging / local with full stack
- [ ] Browser: [Playwright | Cypress]
- [ ] Headless: [yes | no for debug]

### Test Cases
| ID | Journey | Steps | Validates |
|----|---------|-------|-----------|
| E2E-1 | [Journey name] | 1. [step] 2. [step] | AC-X, AC-Y |
| E2E-2 | [Error journey] | 1. [step] 2. [step] | EC-Z |

---

## Property-Based Tests

**Purpose**: Verify invariants hold for arbitrary inputs.

### Properties to Test
| Property | Description | Generator |
|----------|-------------|-----------|
| P1 | [Invariant — e.g., "sort is idempotent"] | [fast-check \| Hypothesis] |
| P2 | [Invariant — e.g., "output always valid"] | [generator config] |

### Example (fast-check)
```javascript
// [Pseudo-code — adapt to your stack]
fc.assert(fc.property(fc.array(fc.integer()), (arr) => {
  const sorted = sort(arr);
  return isSorted(sorted) && sameElements(arr, sorted);
}));
```

---

## Performance Tests

**Purpose**: Verify NFRs (latency, throughput).

### Scenarios

| Scenario | Metric | Target | Tool |
|----------|--------|--------|------|
| [Load scenario] | p95 latency | < 200ms | k6 |
| [Load scenario] | Throughput | N req/s | Artillery |
| [Stress scenario] | Degradation | Graceful | k6 |

### Thresholds

- **Pass**: p95 < [N]ms, error rate < 0.1%
- **Fail**: Triggers [alert | block release]

---

## Test Data

### Fixtures

- [Path to fixtures]
- [Format: JSON, SQL, etc.]

### Sensitive Data

- No real PII in fixtures
- Use [faker | factory] for test data

---

## 5. Architecture Decision Record Template (adr.md)

**Location**: `docs/adr/ADR-XXX-[title].md` or `specs/global/adr/`

# ADR-XXX: [Decision Title]

## Status
[Proposed | Accepted | Deprecated | Superseded by ADR-YYY]

## Context
[What is the issue we're facing? What forces are at play? What constraints exist? Describe the situation that motivates this decision in 2-4 paragraphs.]

## Decision
[What have we decided? State the decision clearly and concisely. Use active voice: "We will use X" not "X will be used."]

## Consequences

### Positive
- [Benefit 1]
- [Benefit 2]

### Negative
- [Drawback 1]
- [Drawback 2]

### Neutral
- [Trade-off or side effect]

## Alternatives Considered

### Alternative 1: [Name]
[Description. Why we didn't choose it.]

### Alternative 2: [Name]
[Description. Why we didn't choose it.]

### Alternative 3: [Name]
[Description. Why we didn't choose it.]

## References
- [Link to spec, RFC, discussion]
- [Link to documentation]

---

## 6. Constitution Template (constitution.md)

**Location**: `specs/global/constitution.md` or `memory/constitution.md`

# [Project Name] Constitution

## Preamble

This constitution defines the immutable principles governing how specifications become code in [PROJECT_NAME]. All implementation plans, generated code, and development decisions must align with these articles. The constitution sits above individual constraints—it defines *how* we build.

---

## Article I: Library-First Principle

Every feature MUST begin as a standalone library. No feature shall be implemented directly in application code without first being abstracted into a reusable component.

### Sections
- **I.1**: New features create new library modules (or extend existing ones)
- **I.2**: Application code imports from libraries; libraries do not import from application
- **I.3**: Libraries have minimal dependencies; application wires them together

---

## Article II: CLI Interface Mandate

All libraries MUST expose functionality through a command-line interface.

### Sections
- **II.1**: Input: stdin, arguments, or files (text)
- **II.2**: Output: stdout (text)
- **II.3**: Structured data: JSON format supported
- **II.4**: Every library has a `main` or `cli` entry point

---

## Article III: Test-First Imperative

All implementation MUST follow strict Test-Driven Development.

### Sections
- **III.1**: No implementation code shall be written before unit tests are written
- **III.2**: Tests must be validated and approved by the user
- **III.3**: Tests must be confirmed to FAIL (Red phase) before implementation
- **III.4**: File creation order: tests before source in every task

---

## Article IV: [Custom Article — Optional]
[Add project-specific articles as needed]

---

## Article VII: Simplicity

### Sections
- **VII.1**: Maximum 3 projects for initial implementation
- **VII.2**: Additional projects require documented justification
- **VII.3**: No future-proofing or speculative features
- **VII.4**: YAGNI: You Aren't Gonna Need It

---

## Article VIII: Anti-Abstraction

### Sections
- **VIII.1**: Use framework features directly; do not wrap them
- **VIII.2**: Single model representation (no DTO + Entity + API model duplication without cause)
- **VIII.3**: No abstraction for "might switch X someday"
- **VIII.4**: Prefer concrete over abstract

---

## Article IX: Integration-First Testing

### Sections
- **IX.1**: Prefer real databases over mocks
- **IX.2**: Use actual service instances over stubs
- **IX.3**: Contract tests mandatory before implementation
- **IX.4**: E2E tests for critical paths
- **IX.5**: Mock only when necessary (external APIs, time-dependent logic)

---

## Phase Gates

The following gates must pass before implementation. Document exceptions in Complexity Tracking.

### Simplicity Gate (Article VII)
- [ ] Using ≤3 projects?
- [ ] No future-proofing?
- [ ] No speculative features?

### Anti-Abstraction Gate (Article VIII)
- [ ] Using framework directly?
- [ ] Single model representation?
- [ ] No unnecessary wrappers?

### Integration-First Gate (Article IX)
- [ ] Contracts defined?
- [ ] Contract tests written?
- [ ] Integration tests use real DB?

---

## Amendment Process

Modifications to this constitution require:

1. **Rationale**: Explicit documentation of why the change is needed
2. **Approval**: Review and approval by [project maintainers | tech lead | team]
3. **Compatibility**: Backwards compatibility assessment
4. **Record**: Dated amendment entry in Amendment History

---

## Amendment History

| Date | Article | Change | Rationale |
|------|---------|--------|-----------|
| [DATE] | [Article] | [Summary of change] | [Why] |

---

## 7. Telemetry/Observability Spec Template (telemetry.md)

**Location**: `specs/[branch-name]/telemetry.md` or `specs/global/observability.md`

# Telemetry Specification: [FEATURE_OR_SERVICE]

## Metadata
- **Scope**: [Feature | Service | Project-wide]
- **Created**: [DATE]
- **Review**: [DATE]

---

## Metrics to Collect

### Counters
| Metric Name | Description | Labels | Use Case |
|-------------|-------------|--------|----------|
| [feature]_requests_total | Total requests | method, path, status | Rate, error rate |
| [feature]_errors_total | Total errors | type, path | Error tracking |
| [feature]_[action]_total | [Action] count | [labels] | [Business metric] |

### Histograms
| Metric Name | Description | Buckets | Use Case |
|-------------|-------------|---------|----------|
| [feature]_request_duration_seconds | Request latency | 0.01, 0.05, 0.1, 0.5, 1, 5 | p50, p95, p99 |
| [feature]_[operation]_seconds | [Operation] duration | [buckets] | Performance |

### Gauges
| Metric Name | Description | Use Case |
|-------------|-------------|----------|
| [feature]_active_connections | Current connections | Capacity |
| [feature]_queue_size | Pending items | Backlog |

---

## Log Levels and Formats

### Log Level Policy
| Level | When to Use | Example |
|------|-------------|---------|
| ERROR | Unrecoverable failure | "Database connection lost" |
| WARN | Recoverable issue | "Rate limit approaching" |
| INFO | Normal operation milestones | "Request completed" |
| DEBUG | Detailed flow (dev/staging) | "Cache hit for key X" |
| TRACE | Verbose (troubleshooting only) | "Entering function Y" |

### Log Format (Structured JSON)
```json
{
  "timestamp": "ISO8601",
  "level": "INFO",
  "message": "Human-readable message",
  "service": "[SERVICE_NAME]",
  "trace_id": "[optional]",
  "span_id": "[optional]",
  "fields": {
    "request_id": "[uuid]",
    "user_id": "[hashed or redacted]",
    "duration_ms": 123
  }
}
```

### Fields to Include

- `timestamp` (ISO8601)
- `level`
- `message`
- `service`/`component`
- `request_id` (for correlation)
- `duration_ms` (for requests)
- [feature-specific fields]

### Fields to Exclude (Never Log)

- Passwords, tokens, API keys
- Full PII (email, name, address)
- Credit card numbers
- Session IDs (use hashed if needed)

---

## Alert Thresholds

| Alert Name | Condition | Severity | Action |
|------------|-----------|----------|--------|
| [Feature]HighErrorRate | Error rate > 5% for 5 min | Critical | Page on-call |
| [Feature]HighLatency | p95 > 500ms for 5 min | Warning | Notify team |
| [Feature]RateLimitSpike | Rate limit hits > 100/min | Warning | Investigate |
| [Feature]QueueBacklog | Queue size > 1000 | Warning | Scale or investigate |
| [Feature]ServiceDown | No successful requests in 2 min | Critical | Page on-call |

### Alert Routing

- **Critical**: [PagerDuty | Opsgenie | Slack #incidents]
- **Warning**: [Slack #alerts | Email]
- **Info**: [Slack #monitoring]

---

## Dashboard Specifications

### Dashboard: [Feature] Overview

**Purpose**: At-a-glance health of [feature]

**Panels**:

| Panel | Metric/Query | Visualization |
|-------|--------------|---------------|
| Request Rate | `rate([feature]_requests_total[5m])` | Graph |
| Error Rate | `rate([feature]_errors_total[5m]) / rate([feature]_requests_total[5m])` | Graph |
| p95 Latency | `histogram_quantile(0.95, rate([feature]_request_duration_seconds_bucket[5m]))` | Graph |
| Active Users | [custom metric] | Single stat |

### Dashboard: [Feature] Deep Dive

**Purpose**: Debugging and investigation

**Panels**:

| Panel | Metric/Query | Visualization |
|-------|--------------|---------------|
| [Add detailed panels] | | |

---

## Distributed Tracing

### Trace Configuration

- Sample rate: [1.0 for critical | 0.1 for high-volume]
- Propagation: [W3C Trace Context | B3]
- Backend: [Jaeger | Zipkin | vendor]

### Spans to Create

| Span Name | Parent | Attributes |
|-----------|--------|------------|
| [operation] | request | [key attributes] |
| [db.query] | [operation] | query_type, table |

---

## Completeness Checklist

- [ ] All critical paths have metrics
- [ ] Error paths logged with context
- [ ] No sensitive data in logs
- [ ] Alerts have clear thresholds and actions
- [ ] Dashboards cover operational needs
- [ ] Tracing configured for debugging

---

## Quick Reference: Template Locations

| Template | Typical Path | When to Use |
|----------|--------------|-------------|
| Feature Spec | `specs/[branch]/spec.md` | Start of every feature |
| Implementation Plan | `specs/[branch]/plan.md` | After spec, before tasks |
| Constraints | `specs/global/constraints.md` | Project setup, per-domain |
| Test Plan | `specs/[branch]/test-plan.md` | With plan, before implementation |
| ADR | `docs/adr/ADR-XXX-[title].md` | Significant technical decisions |
| Constitution | `specs/global/constitution.md` | Project setup |
| Telemetry | `specs/[branch]/telemetry.md` | Features with observability needs |