
Chapter 19: Model Context Protocol (MCP)


Learning Objectives

By the end of this chapter, you will be able to:

  • Explain what the Model Context Protocol (MCP) is and why it's essential for AI-native engineering
  • Build and configure MCP servers to connect AI agents to external context
  • Integrate issue trackers (Jira, Linear) directly into your SDD workflow via MCP
  • Replace custom scripts with standardized MCP tools for context loading
  • Set up progressive context loading using MCP resources

What is the Model Context Protocol?

The Model Context Protocol (MCP) is an open standard that enables AI agents to connect to external data sources and tools. Introduced in 2024, it has become the standard mechanism for context integration in 2026.

Without MCP, agents operate in a silo. They can only see what you explicitly paste into the prompt or what exists in the local file system. If your specifications are in Notion, your tasks in Linear, and your API documentation in a private portal, the agent is flying blind.

MCP solves this by defining a standard client-server architecture:

  • MCP Client: The AI agent (Cursor, Roo Code, Claude Code).
  • MCP Server: A lightweight service that exposes local or remote resources and tools.
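Under the hood, client and server exchange JSON-RPC 2.0 messages (over stdio or HTTP transports). As a minimal sketch of the wire format only — the `get_issue` tool name and its arguments are illustrative, not part of the protocol — a tool invocation from the client looks like this:

```typescript
// Illustrative sketch (not the SDK): MCP messages are JSON-RPC 2.0.
// The client (agent) sends a tools/call request; the server replies
// with a matching result message.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): JsonRpcRequest {
  // tools/call is the MCP method name for invoking a server-side tool
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

const request = makeToolCall(1, "get_issue", { issueId: "PROJ-123" });
```

The transport (stdio vs. HTTP) changes how these messages travel, not their shape.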

Why MCP is Critical for SDD

In Spec-Driven Development, the specification is the source of truth, and context is the fuel that powers AI implementation. MCP automates the flow of context into the agent's working memory.

1. Direct Integration with Issue Trackers

Instead of manually copying a Jira ticket or Linear issue into a local spec.md file, an MCP server can expose those issues directly. An agent can query: "Fetch the acceptance criteria for ticket PROJ-123" and receive structured context.
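To make that concrete, here is a minimal sketch of the structured context such a get_issue handler might return. The in-memory tracker object stands in for a real Jira/Linear API call; the issue data and field names are hypothetical:

```typescript
// Hypothetical sketch: what a get_issue tool might hand back to the agent.
// A real server would call the Jira/Linear REST API here instead.
interface Issue {
  key: string;
  title: string;
  acceptanceCriteria: string[];
}

const tracker: Record<string, Issue> = {
  "PROJ-123": {
    key: "PROJ-123",
    title: "Add user profile endpoint",
    acceptanceCriteria: [
      "GET /users/:id returns 200 with profile JSON",
      "Unknown ids return 404",
    ],
  },
};

function getIssue(key: string): Issue {
  const issue = tracker[key];
  if (!issue) throw new Error(`Unknown issue: ${key}`);
  return issue;
}
```

Because the response is structured, the agent can pull out just the acceptance criteria rather than parsing a pasted wall of text.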

2. Live API Documentation

If your project depends on an external API, pasting the OpenAPI spec into your prompt consumes massive amounts of tokens. An MCP server can expose the API documentation as a queryable resource. The agent asks: "What is the endpoint for fetching a user profile?" and the server returns only the relevant snippet.
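A sketch of that idea, using a toy two-entry paths object in place of a full OpenAPI document; the endpoints and the searchDocs function are illustrative assumptions:

```typescript
// Sketch: return only the relevant slice of an API spec, not the whole document.
// apiPaths is a toy stand-in for a parsed OpenAPI file.
const apiPaths: Record<string, { method: string; summary: string }> = {
  "/users/{id}": { method: "GET", summary: "Fetch a user profile" },
  "/users": { method: "POST", summary: "Create a user" },
};

function searchDocs(query: string): string[] {
  return Object.entries(apiPaths)
    .filter(([, op]) => op.summary.toLowerCase().includes(query.toLowerCase()))
    .map(([path, op]) => `${op.method} ${path}: ${op.summary}`);
}
```

A query for "profile" returns one line instead of the entire spec, which is the whole token-saving point.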

3. Dynamic Context Loading

Chapter 24 covers progressive context loading in depth. MCP makes the strategy seamless: a server exposes a resource-listing capability that allows the agent to browse available context before deciding what to load, preventing context window bloat.
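The two-step pattern — list cheap descriptors first, then read only the body you need — can be sketched as follows. The docs:// URIs and the in-memory store are assumptions for illustration, not real MCP SDK types:

```typescript
// Sketch of progressive loading: the agent first lists lightweight
// resource descriptors, then fetches only the one body it needs.
interface ResourceDescriptor {
  uri: string;
  name: string;
}

const store = new Map<string, string>([
  ["docs://schema/users", "CREATE TABLE users (id INT PRIMARY KEY, email TEXT);"],
  ["docs://api/payments", "POST /payments creates a payment intent."],
]);

function listResources(): ResourceDescriptor[] {
  // Descriptors are tiny; the agent can scan them without loading bodies
  return [...store.keys()].map((uri) => ({ uri, name: uri.split("/").pop() ?? uri }));
}

function readResource(uri: string): string {
  return store.get(uri) ?? "";
}
```

Only the chosen resource body ever enters the context window; the rest stays on the server.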


Architecture of an MCP Server

An MCP server exposes three main capabilities:

  1. Resources: Read-only data sources (e.g., API docs, database schemas).
  2. Prompts: Pre-configured prompt templates.
  3. Tools: Executable functions (e.g., query a database, fetch a Jira ticket, trigger a CI build).
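One way to make the distinction between the three capability kinds concrete is a discriminated union; the field names below are illustrative, not the SDK's actual types:

```typescript
// Sketch: the three MCP capability kinds as a discriminated union.
type Capability =
  | { kind: "resource"; uri: string; description: string }          // read-only data
  | { kind: "prompt"; name: string; template: string }              // reusable template
  | { kind: "tool"; name: string; run: (args: object) => unknown }; // executable function

const caps: Capability[] = [
  { kind: "resource", uri: "docs://schema/users", description: "users table schema" },
  { kind: "prompt", name: "spec-review", template: "Review this spec: {{spec}}" },
  { kind: "tool", name: "get_issue", run: () => ({ key: "PROJ-123" }) },
];
```

The practical difference: resources are read, prompts are filled in, and only tools can have side effects.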

Example: Linear MCP Server

Imagine you are using Linear for task management. Your workflow looks like this:

  1. You configure the Linear MCP Server in your agent (e.g., Cursor).
  2. You prompt the agent: "Implement the feature described in LIN-456."
  3. The agent calls the get_issue tool provided by the Linear MCP server.
  4. The server returns the issue description, which contains the SDD-compliant specification.
  5. The agent implements the feature based on the spec.

This eliminates the "copy-paste" intermediate step and ensures the agent is always working from the latest version of the specification.


Building a Custom MCP Server

While many off-the-shelf MCP servers exist (for GitHub, Postgres, Jira, etc.), building a custom MCP server allows you to expose proprietary internal systems to your agents.

The SDK

MCP provides official SDKs, including TypeScript (for Node.js) and Python. A basic server involves defining a tool and its handler:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "internal-api-docs",
  version: "1.0.0"
});

server.tool(
  "search_docs",
  "Search internal API documentation",
  { query: z.string() },
  async ({ query }) => {
    // searchInternalWiki is your own lookup function
    const results = await searchInternalWiki(query);
    return {
      content: [{ type: "text", text: JSON.stringify(results) }]
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);
```

Configuring the Agent

Once the server is built, you configure your agent to use it. In Cursor or Roo Code, you provide the path to the executable or the command to run the server.

```json
{
  "mcpServers": {
    "internal-docs": {
      "command": "node",
      "args": ["/path/to/server/index.js"]
    }
  }
}
```

The P+Q+P Pattern with MCP

In Chapter 18, we introduced the P+Q+P pattern (Persona + Questions + Principles) for subagents. MCP supercharges this.

Instead of the subagent asking you questions to gather requirements, it can use MCP tools to gather them autonomously:

  1. Agent: "I need to know the database schema for the users table."
  2. Agent calls MCP Tool: query_schema({ table: "users" })
  3. MCP Server: Returns the schema.
  4. Agent: Proceeds to write the database migration.
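A sketch of the query_schema tool behind step 2, with a hardcoded schema table standing in for a live database connection; the table contents and all names are hypothetical:

```typescript
// Hypothetical query_schema tool: the agent asks for one table's
// schema instead of receiving the whole database dump up front.
const schemas: Record<string, string> = {
  users: "id SERIAL PRIMARY KEY, email TEXT UNIQUE NOT NULL, created_at TIMESTAMPTZ",
};

function querySchema(table: string): string {
  const schema = schemas[table];
  if (!schema) throw new Error(`No schema for table: ${table}`);
  return schema;
}
```

The agent now writes its migration against the exact columns that exist, without a human pasting DDL into the prompt.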

The agent shifts from being a passive recipient of context to an active researcher.


Try With AI

Prompt 1: Explore Available MCP Servers

"What are the most common open-source MCP servers available for connecting to issue trackers and databases? List 3 and explain how they would improve a Spec-Driven Development workflow."

Prompt 2: Design a Custom MCP Server

"I have an internal system that stores our API contracts in a proprietary format. I want to build an MCP server in TypeScript to expose these contracts to my agent. Generate the boilerplate code for an MCP server with a single tool called get_contract that takes a service name as an argument."


Practice Exercises

Exercise 1: Configure an Existing MCP Server

Install and configure an existing MCP server (like the official GitHub or SQLite server) in your agent. Use it to query an issue or table schema without leaving the chat interface.

Expected outcome: The agent successfully retrieves context using the MCP tool.

Exercise 2: Design an MCP-Enabled Workflow

Map out a workflow where an agent takes a feature from a Jira ticket to a PR, using MCP at every step. Document which tools the agent would need (e.g., get_jira_issue, create_branch, create_pr).

Expected outcome: A workflow diagram showing the interplay between the agent and MCP servers.


Key Takeaways

  1. MCP bridges the context gap. It allows agents to autonomously fetch external data rather than relying on humans to copy-paste.
  2. Tools over scripts. Instead of writing custom bash scripts for context loading, expose those scripts as standardized MCP tools.
  3. Active research. MCP enables agents to actively research their environment, drastically reducing the amount of context you need to manually load.
  4. Integrate with issue trackers. Use MCP to pull specifications directly from Jira, Linear, or Notion, keeping the source of truth in your issue tracker.

Chapter Quiz

  1. What does MCP stand for, and what problem does it solve?
  2. Explain the difference between an MCP Client and an MCP Server.
  3. How does MCP improve the progressive context loading strategy?
  4. What are the three main capabilities an MCP server can expose?
  5. How does MCP change the way the P+Q+P subagent pattern operates?

Back to: Part VI Overview | Next: Chapter 20 — /speckit.specify — Feature Specification