Appendix C: 2026 Updates for SDD

This appendix captures high-impact updates that keep an SDD implementation current in 2026.


1. Agent Interoperability: MCP and A2A

MCP (Model Context Protocol)

Use MCP to standardize how agents connect to tools and context sources:

  • Repositories and documentation
  • Issue trackers and project management tools
  • Internal APIs and knowledge bases

SDD impact:

  • Improves repeatability of spec-to-plan workflows across tools.
  • Reduces one-off tool integrations.
  • Makes agent behavior more auditable.

A2A (Agent2Agent)

Use A2A for multi-agent orchestration where specialized agents collaborate:

  • Spec agent (requirements refinement)
  • Plan agent (architecture and tradeoffs)
  • Task agent (decomposition and sequencing)
  • Implementation/review agents

SDD impact:

  • Cleaner handoffs between artifacts (spec.md -> plan.md -> tasks.md).
  • Better isolation of responsibilities and review boundaries.
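The handoff discipline above can be sketched in plain Python. This is a hypothetical orchestration sketch, not the A2A wire protocol: it only illustrates the review boundary between each artifact (`spec.md` -> `plan.md` -> `tasks.md`); the agent stages here are stand-ins that would be real agent calls.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str        # e.g. "spec.md"
    content: str
    approved: bool = False

def review_gate(artifact: Artifact) -> Artifact:
    # Stand-in for a human or review-agent check at each handoff boundary.
    if not artifact.content.strip():
        raise ValueError(f"{artifact.name} is empty; refusing handoff")
    artifact.approved = True
    return artifact

def run_pipeline(spec_text: str) -> list[str]:
    # Each stage consumes only the approved artifact from the previous stage.
    spec = review_gate(Artifact("spec.md", spec_text))
    plan = review_gate(Artifact("plan.md", f"Plan derived from: {spec.name}"))
    tasks = review_gate(Artifact("tasks.md", f"Tasks derived from: {plan.name}"))
    return [a.name for a in (spec, plan, tasks) if a.approved]
```

The point of the gate is that a downstream agent never sees an unreviewed artifact, which is what makes the review boundaries auditable.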

2. Modern API Specification Stack

OpenAPI 3.2

Use as the baseline for REST API contracts.

AsyncAPI 3.1

Use for event-driven and asynchronous interfaces.

Arazzo

Use to define and validate multi-step API workflows (end-to-end business flows).

Overlay

Use overlays to apply environment-specific or team-specific patches to a base API spec without forking the source contract.

SDD guidance:

  • Keep source contracts minimal and canonical.
  • Apply overlays in CI per environment.
  • Validate both base and composed contracts.
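The overlay mechanism can be sketched as a patch applied to a copy of the canonical contract. Real Overlay documents target nodes with JSONPath expressions; this dependency-free sketch substitutes a simple dotted path, so treat the `apply_overlay` shape as illustrative rather than the Overlay Specification itself.

```python
import copy

def apply_overlay(base: dict, actions: list[dict]) -> dict:
    composed = copy.deepcopy(base)       # never mutate the canonical source
    for action in actions:
        node = composed
        *parents, leaf = action["target"].split(".")
        for key in parents:
            node = node.setdefault(key, {})
        if action.get("remove"):
            node.pop(leaf, None)         # remove action: drop the node
        else:
            node[leaf] = action["update"]  # update action: replace the node
    return composed

base_spec = {
    "info": {"title": "Orders API"},
    "servers": [{"url": "https://api.example.com"}],
}
staging = apply_overlay(base_spec, [
    {"target": "servers", "update": [{"url": "https://staging.example.com"}]},
])
```

Because the base dict is deep-copied, the source contract stays canonical while CI produces one composed contract per environment, and both can be validated.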

3. Eval-Driven Delivery

Add an eval layer between generation and deployment:

  1. Spec quality evals: clarity, ambiguity, completeness checks.
  2. Implementation evals: contract conformance, regression, behavior checks.
  3. Risk evals: security and policy checks for agent/tool actions.

Minimum Eval Gate

  • Block merge if any critical eval fails.
  • Track pass/fail trend per feature branch.
  • Keep a stable eval set and a canary eval set.
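A minimum eval gate reduces to a small predicate in CI: collect eval results, block on any critical failure, let warnings through. Eval names and severities below are illustrative, not a fixed taxonomy.

```python
def gate(results: list[dict]) -> bool:
    """Return True if the merge may proceed."""
    critical_failures = [
        r["name"] for r in results
        if r["severity"] == "critical" and not r["passed"]
    ]
    for name in critical_failures:
        print(f"BLOCKED by critical eval: {name}")
    return not critical_failures

results = [
    {"name": "spec-ambiguity", "severity": "critical", "passed": True},
    {"name": "contract-conformance", "severity": "critical", "passed": False},
    {"name": "latency-regression", "severity": "warning", "passed": False},
]
```

Logging each blocking eval by name is what makes the pass/fail trend per feature branch trackable.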

4. LLM-Specific Security Controls

Traditional SAST/DAST coverage must be extended with controls specific to agentic systems:

  • Prompt injection tests on untrusted inputs.
  • Tool allowlists and argument validation.
  • Data loss prevention (redaction before model/tool calls).
  • Retrieval trust levels and provenance tracking.
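A tool allowlist with argument validation can be sketched as a dispatch layer between the agent and its tools: calls to unlisted tools are refused outright, and listed tools validate their arguments before execution. The tool names and validator rules here are illustrative.

```python
# Each allowlisted tool maps to a validator run before dispatch.
ALLOWED_TOOLS = {
    "read_file": lambda args: not args["path"].startswith("/etc"),
    "search_issues": lambda args: len(args["query"]) < 500,
}

def dispatch(tool: str, args: dict) -> str:
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not on allowlist: {tool}")
    if not ALLOWED_TOOLS[tool](args):
        raise ValueError(f"argument validation failed for {tool}")
    # A real dispatcher would invoke the tool here; the sketch just confirms.
    return f"ok: {tool}"
```

Keeping validation in one choke point means a prompt-injected instruction cannot reach a tool the policy never granted.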

Practical Policy

llm-security:
  prompt-injection-tests: required
  tool-allowlist: required
  pii-redaction-before-model: required
  retrieval-source-trust-levels: required
  fail-on-policy-violation: true

Reference frameworks: OWASP LLM Top 10, NIST AI RMF.


5. Supply Chain and Provenance

Treat generated code as a software supply chain concern:

  • Generate SBOMs (SPDX or CycloneDX).
  • Sign build artifacts (Sigstore/cosign or equivalent).
  • Track provenance and build attestations.
  • Align CI with SLSA guidance.

SDD impact:

  • Stronger auditability from specification to deployed artifact.
  • Reduced risk in dependency and artifact tampering scenarios.
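A release-pipeline check for the controls above can be as simple as refusing to publish an artifact that lacks its SBOM and signature. The sidecar file naming below is an assumption for illustration; the actual SBOM generation and signing would come from tools such as syft and cosign.

```python
from pathlib import Path

def release_ready(artifact: Path) -> list[str]:
    """Return the names of missing provenance files (empty means ready)."""
    required = [
        artifact.parent / (artifact.name + ".sbom.json"),  # SBOM (SPDX/CycloneDX)
        artifact.parent / (artifact.name + ".sig"),        # detached signature
    ]
    return [p.name for p in required if not p.exists()]
```

Gating on presence is only the first step; a full pipeline would also verify the signature and the build attestation against SLSA expectations.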

6. Cost and Latency Governance

An AI-native SDLC needs explicit economic controls:

  • Token budgets per pipeline stage.
  • Context-size limits by task type.
  • Retry and fallback policies by risk tier.
  • Caching strategy for deterministic transformations.

Example Policy

ai-budget:
  max_tokens_per_task: 30000
  max_model_cost_per_pr_usd: 25
  fallback_policy: "smaller-model-then-escalate"
  high_risk_paths_require_stronger_model: true
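Enforcing such a policy can be sketched as a pre-flight check per task. The numbers mirror the policy above; the model names, the specific high-risk path prefixes, and the escalation rule are illustrative assumptions.

```python
POLICY = {
    "max_tokens_per_task": 30_000,
    "max_model_cost_per_pr_usd": 25,
    "high_risk_paths": ("auth/", "payments/"),  # assumed risk-tier mapping
}

def choose_model(path: str, estimated_tokens: int, pr_cost_so_far: float) -> str:
    if estimated_tokens > POLICY["max_tokens_per_task"]:
        raise ValueError("task exceeds token budget; split it first")
    if pr_cost_so_far >= POLICY["max_model_cost_per_pr_usd"]:
        raise RuntimeError("PR cost budget exhausted")
    # "smaller-model-then-escalate": high-risk paths skip the smaller model.
    if path.startswith(POLICY["high_risk_paths"]):
        return "stronger-model"
    return "smaller-model"
```

Raising instead of silently escalating keeps oversized tasks visible, which is the point of making the budget explicit.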

7. Migration Checklist (Existing SDD Repos)

  • Replace legacy model/API examples with current provider APIs.
  • Add MCP-compatible tool integration strategy.
  • Add A2A-compatible multi-agent handoff rules.
  • Upgrade API contracts to OpenAPI 3.2 / AsyncAPI 3.1 where applicable.
  • Add eval gates to CI before deployment.
  • Add LLM-specific threat controls and policy tests.
  • Add SBOM + provenance checks to release pipeline.
  • Add cost/latency guardrails for AI workflows.
