This guide provides implementation patterns and standards for building MCP servers. For WHAT to build, see the PRP (Product Requirement Prompt) documents.
IMPORTANT: You MUST follow these principles in all code changes:
- Simplicity should be a key goal in design
- Choose straightforward solutions over complex ones whenever possible
- Simple solutions are easier to understand, maintain, and debug
- Avoid building functionality on speculation
- Implement features only when they are needed, not when you anticipate they might be useful in the future
- High-level modules should not depend on low-level modules
- Both should depend on abstractions
- This principle enables flexibility and testability
- Software entities should be open for extension but closed for modification
- Design systems so that new functionality can be added with minimal changes to existing code
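The dependency-inversion and open/closed principles above can be sketched in a few lines of Python. The class names here are illustrative only, not part of this project:

```python
from typing import Protocol

class MessageStore(Protocol):
    """Abstraction that high-level code depends on (DIP)."""
    def save(self, message: str) -> None: ...

class InMemoryStore:
    """One concrete implementation; new stores (file, DB) can be added
    without modifying GreetingService (OCP)."""
    def __init__(self) -> None:
        self.messages: list[str] = []

    def save(self, message: str) -> None:
        self.messages.append(message)

class GreetingService:
    """High-level module: depends only on the MessageStore abstraction."""
    def __init__(self, store: MessageStore) -> None:
        self.store = store

    def greet(self, name: str) -> str:
        greeting = f"Hello, {name}!"
        self.store.save(greeting)
        return greeting

store = InMemoryStore()
service = GreetingService(store)
print(service.greet("World"))  # Hello, World!
```

Swapping in a different store requires no change to `GreetingService` — that is the flexibility and testability the principles buy.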
CRITICAL: This project uses UV for Python package management. NEVER use pip or other package managers.
# Create virtual environment
uv venv
# Install dependencies from pyproject.toml
uv sync
# Install a specific package
uv add requests
# Remove a package
uv remove requests
# Run a Python script or command
uv run python script.py
uv run pytest
# Install editable packages
uv pip install -e .
# Run the main application
uv run python src/main_server.py
IMPORTANT: We follow strict vertical slice architecture where each feature is self-contained with co-located tests. Tests MUST be placed directly next to their related feature components.
├── ai_docs/ # AI documentation and context
├── claude_desktop_config.json.example # Example Claude Desktop configuration
├── CLAUDE.md # This file - project context
├── docker-compose.yml # Docker Compose configuration
├── Dockerfile # Docker image definition
├── docs/ # Documentation
│ └── docker-deployment.md # Docker deployment guide
├── prps/ # Product Requirement Prompts
│ └── prp_base_template_v1.md # Base MCP PRP template
├── pyproject.toml # UV package configuration
├── README.md # Project documentation
├── scripts/ # Utility scripts
│ └── docker-run.sh # Docker helper script
├── src/ # Source code
│ ├── __init__.py
│ ├── main_server.py # Main server entry point (routes to features)
│ ├── logging_config.py # Standardized logging module
│ ├── data_types.py # Standard data types and response formats
│ ├── features/ # Feature modules (vertical slices)
│ │ ├── __init__.py
│ │ └── hello_world/ # Hello World example feature
│ │ ├── __init__.py
│ │ ├── config.py # Feature-specific configuration
│ │ ├── hello_world_server.py # Feature's MCP server implementation
│ │ ├── api/ # External API integrations for this feature
│ │ │ ├── __init__.py
│ │ │ └── tests/ # API tests (currently empty for hello_world)
│ │ │ └── __init__.py
│ │ ├── tools/ # MCP Tools (actions LLMs can call)
│ │ │ ├── __init__.py
│ │ │ ├── greeting.py # Greeting tool implementations
│ │ │ └── tests/ # Tool tests (co-located)
│ │ │ ├── __init__.py
│ │ │ └── test_greeting.py
│ │ ├── resources/ # MCP Resources (read-only data)
│ │ │ ├── __init__.py
│ │ │ ├── status.py # Status resource implementations
│ │ │ └── tests/ # Resource tests (co-located)
│ │ │ ├── __init__.py
│ │ │ └── test_status.py
│ │ └── prompts/ # MCP Prompts (reusable templates)
│ │ ├── __init__.py
│ │ ├── greeting_template.py # Greeting prompt templates
│ │ └── tests/ # Prompt tests (co-located)
│ │ ├── __init__.py
│ │ └── test_greeting_template.py
│ └── tests/ # Root-level integration tests
│ ├── __init__.py
│ └── test_logging_config.py # Logging system tests
└── uv.lock # Dependency lock file (auto-generated)
CRITICAL: Every component MUST have its test file in the same directory:
- Tools: src/features/feature_name/tools/tool.py → src/features/feature_name/tools/tests/test_tool.py
- Resources: src/features/feature_name/resources/resource.py → src/features/feature_name/resources/tests/test_resource.py
- Prompts: src/features/feature_name/prompts/prompt.py → src/features/feature_name/prompts/tests/test_prompt.py
- APIs: src/features/feature_name/api/feature_api.py → src/features/feature_name/api/tests/test_feature_api.py
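As a concrete illustration of the co-location rule, a test file such as `src/features/hello_world/tools/tests/test_greeting.py` might look like the sketch below. The inline `greet` helper is a stand-in; a real test would import the actual tool module instead:

```python
# Stand-in for the component under test; a real co-located test would do:
#   from features.hello_world.tools.greeting import greet
def greet(name: str) -> str:
    return f"Hello, {name}!"

def test_greet_returns_personalized_message():
    # Descriptive name explains exactly what is being verified
    assert greet("Ada") == "Hello, Ada!"

def test_greet_handles_empty_name():
    assert greet("") == "Hello, !"
```

pytest discovers these automatically because both the file name and the function names start with `test_`.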
IMPORTANT: Follow these naming patterns for clarity and consistency:
- Main Server Entry Point: src/main_server.py - Routes requests to feature servers
- Feature Servers: src/features/{feature_name}/{feature_name}_server.py - Individual feature implementations
- Feature Config: src/features/{feature_name}/config.py - Feature-specific configuration
- Component Files: Use descriptive names (e.g., greeting.py, status.py, greeting_template.py)
- Test Files: Always prefix with test_ and match the component name exactly
When adding a new MCP feature:
- Create feature directory: src/features/{feature_name}/
- Add to src/main_server.py routing
- Follow the hello_world example structure
- Include co-located tests for all components
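The routing step can be pictured with a minimal registry pattern. This is a hedged sketch of the idea only — the `FEATURES` dict and `register_feature` helper are illustrative, not this project's actual main_server.py API:

```python
from typing import Callable

# Illustrative routing table: maps feature names to the callables that
# set up each feature's tools/resources/prompts at startup.
FEATURES: dict[str, Callable[[], str]] = {}

def register_feature(name: str, setup: Callable[[], str]) -> None:
    """Add a feature's setup function to the routing table."""
    FEATURES[name] = setup

def setup_hello_world() -> str:
    # In the real project this would register the feature's MCP
    # tools/resources/prompts; here it just reports readiness.
    return "hello_world ready"

register_feature("hello_world", setup_hello_world)

# The main server would then iterate the registry at startup:
statuses = {name: setup() for name, setup in FEATURES.items()}
print(statuses)
```

The point is that adding a feature means adding one registry entry, leaving existing features untouched.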
# Start development (CRITICAL: Install in editable mode first)
uv sync && uv pip install -e . && uv run python src/main_server.py
# Run tests
uv run pytest
# Run specific test file
uv run pytest tests/test_specific.py
# Run with verbose output
uv run pytest -v
# Format code (if we add formatting tools)
uv run ruff format .
# Lint code (if we add linting tools)
uv run ruff check .
CRITICAL: Tests MUST be placed next to the code they test, not in a separate test directory.
# Run all tests
uv run pytest
# Run tests for a specific feature
uv run pytest src/features/github_integration/
# Run tests for a specific component type
uv run pytest src/features/*/tools/tests/
# Run a specific test file
uv run pytest src/features/github_integration/tools/tests/test_create_issue.py
# Run with verbose output to see test structure
uv run pytest -v src/features/
- Every .py file MUST have a corresponding test file in the same directory structure
- Test files MUST be in a tests/ subdirectory next to the code
- Test file names MUST start with test_ and match the module name
- Each tests/ directory MUST have __init__.py
IMPORTANT: This project builds MCP (Model Context Protocol) servers using our vertical slice architecture.
- Only use Python MCP SDK for low-level protocol customization (rare cases)
- Follow our feature-based organization for all MCP servers
Each MCP feature follows our vertical slice pattern:
src/features/weather_api/ # Example MCP feature
├── api/ # External API integration
│ ├── weather_client.py # API client
│ └── tests/test_weather_client.py
├── tools/ # MCP Tools (LLM can call)
│ ├── get_forecast.py # Tool implementation
│ └── tests/test_get_forecast.py # Co-located test
├── resources/ # MCP Resources (read-only data)
│ ├── current_weather.py # Resource implementation
│ └── tests/test_current_weather.py
└── prompts/ # MCP Prompts (templates)
├── weather_summary.py # Prompt implementation
└── tests/test_weather_summary.py
CRITICAL: Use these exact commands for MCP development in this project:
# Install MCP server in Claude Desktop for testing
mcp install src/main_server.py
Claude Desktop Configuration Pattern:
{
  "mcpServers": {
    "{feature_name}": {
      "command": "uv",
      "args": ["run", "python", "src/features/{feature_name}/{feature_name}_server.py"],
      "cwd": "/absolute/path/to/mcp_builder",
      "env": {
        "API_KEY": "your-api-key-if-needed"
      }
    }
  }
}
Testing Workflow:
- Develop MCP server using our vertical slice architecture
- Test with mcp install src/main_server.py
- Integration test with Claude Desktop configuration
- Tools: Functions LLMs can call (like POST endpoints) - implement actions
- Resources: Read-only data sources (like GET endpoints) - provide data
- Prompts: Reusable templates for LLM interactions - guide usage
- Transport: stdio (local testing), SSE (remote deployment)
- Context: MCP Context parameter provides client communication capabilities
CRITICAL: All MCP tools SHOULD include the Context parameter to enable rich client communication.
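The guide shows tool and resource code further down but no prompt example. Here is a minimal prompt-template sketch; in FastMCP it would typically be registered with an `@mcp.prompt()` decorator, omitted here so the snippet stays self-contained:

```python
def greeting_prompt(name: str, tone: str = "friendly") -> str:
    """Reusable template guiding the LLM to produce a greeting.

    In a feature module this would live in prompts/ (e.g.
    greeting_template.py) and carry the @mcp.prompt() decorator.
    """
    return (
        f"Write a {tone} greeting addressed to {name}. "
        "Keep it to one or two sentences."
    )

print(greeting_prompt("Ada"))
```

Prompts stay pure string-building functions, which keeps their co-located tests trivial.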
This project uses a dual logging approach for comprehensive observability:
- MCP Context Logging - For client-visible messages and progress
- Custom Structured Logging - For server-side debugging and metrics
The MCP Context provides powerful capabilities for client communication:
from mcp.server.fastmcp import Context
@mcp.tool()
@log_performance(logger)  # Server-side performance tracking
async def my_tool(param: str, *, ctx: Context) -> str:
    """Tool with full Context capabilities."""
    # Server-side logging (not visible to client)
    logger.tool_called("my_tool", param_length=len(param))

    # Client-visible logging via Context
    # Log levels for client visibility
    await ctx.debug("Detailed debug information")
    await ctx.info(f"Processing {param}")
    await ctx.warning("Non-critical issue detected")
    await ctx.error("Error occurred but recovered")

    # Progress reporting for long operations
    await ctx.report_progress(0.0, "Starting...")
    await ctx.report_progress(0.5, "Halfway complete...")
    await ctx.report_progress(1.0, "Finished!")

    # Advanced features (when applicable)
    # Read resources from the server
    data = await ctx.read_resource("resource://uri")

    # Request LLM generation (requires client support)
    response = await ctx.sample("Generate a summary of...")
- Always include the Context parameter in tool signatures as required (using *, ctx: Context)
- Use dual logging - Context for client visibility, custom logging for server debugging
- Report progress for operations taking more than 1-2 seconds
- Use appropriate log levels:
  - debug: Detailed technical information
  - info: General operational messages
  - warning: Important notices that don't stop execution
  - error: Error conditions (use before returning error response)
- Leverage advanced Context features where applicable:
  - ctx.read_resource() for accessing server resources
  - ctx.sample() for LLM-powered content generation
@mcp.tool()
@log_performance(logger)
async def process_data_tool(
    input_data: dict,
    options: dict | None = None,
    *,
    ctx: Context
) -> str:
    """Process data with full observability."""
    # Server-side logging
    logger.tool_called("process_data_tool", input_size=len(str(input_data)))
    try:
        # Client communication
        await ctx.info("Starting data processing...")
        await ctx.report_progress(0.1, "Validating input...")

        # Validate input
        validated = validate_tool_input(DataModel, input_data)

        # Long operation with progress
        await ctx.report_progress(0.3, "Processing phase 1...")
        result = await phase1_processing(validated)
        await ctx.report_progress(0.6, "Processing phase 2...")
        final_result = await phase2_processing(result, options)

        # Success logging
        await ctx.report_progress(1.0, "Complete")
        await ctx.info("Data processed successfully")

        return success_response(
            data=final_result,
            message="Processing complete"
        ).to_json_string()
    except Exception as e:
        # Dual error logging
        logger.tool_failed("process_data_tool", str(e), 0)
        await ctx.error(f"Processing failed: {str(e)}")
        return error_response(
            ErrorCode.INTERNAL_ERROR,
            str(e)
        ).to_json_string()
CRITICAL: All MCP servers MUST use the standardized logging module for consistency and observability.
Use the centralized logging configuration from src/logging_config.py:
from logging_config import setup_mcp_logging, MCPLogger, log_performance, log_context
# Setup logging in your feature's {feature_name}_server.py
logger = setup_mcp_logging(config)
# Use in tools, resources, and prompts
@mcp.tool()
@log_performance(logger)
async def my_tool(param: str) -> str:
    """Tool with automatic performance logging."""
    logger.tool_called("my_tool", param_length=len(param))
    try:
        result = process_data(param)
        return success_response(data=result).to_json_string()
    except Exception as e:
        logger.tool_failed("my_tool", str(e), 0)
        return error_response(
            ErrorCode.INTERNAL_ERROR,
            "Tool execution failed"
        ).to_json_string()
MANDATORY for all MCP implementations:
- Tool Execution Logging: Every tool MUST log start, completion, and errors
- Resource Access Logging: Track all resource accesses with timing
- External API Logging: Log all external API calls with duration and status
- Security Event Logging: Log authentication and authorization events
- Performance Metrics: Track execution times and resource usage
# Use context for related operations
with log_context(logger, user_id="user123", session_id="session456"):
    result = await some_operation()
    # All logs in this context will include user_id and session_id
- DEBUG: Detailed diagnostic information (development only)
- INFO: General information about normal operations
- WARNING: Potentially problematic situations that don't stop execution
- ERROR: Error events that still allow the application to continue
- CRITICAL: Serious error events that may cause the application to abort
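The level guidance above maps directly onto Python's stdlib logging levels, which the project's MCPLogger presumably wraps. A plain-stdlib sketch of picking the right level:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("mcp_server.example")

logger.debug("Raw payload: %s", {"a": 1})            # development-only detail
logger.info("Tool 'greet' called")                   # normal operation
logger.warning("Cache miss; falling back to API")    # degraded but continuing
logger.error("External API returned 500; retrying")  # recoverable error
logger.critical("Config file missing; aborting")     # likely fatal
```

Setting the root level to INFO in production silences the DEBUG line while keeping everything else.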
CRITICAL: All MCP servers MUST use standardized data types from src/data_types.py for consistency.
ALL tools MUST return standardized JSON responses:
from data_types import MCPToolResponse, success_response, error_response, ErrorCode
@mcp.tool()
async def standardized_tool(name: str) -> str:
    """Tool following standard response format."""
    try:
        # Input validation
        if not name.strip():
            return error_response(
                ErrorCode.VALIDATION_ERROR,
                "Name cannot be empty"
            ).to_json_string()

        # Process request
        result = process_name(name)

        # Return standardized success
        return success_response(
            data={"processed_name": result},
            message=f"Successfully processed {name}",
            metadata={"processing_time": "0.5s"}
        ).to_json_string()
    except Exception as e:
        return error_response(
            ErrorCode.INTERNAL_ERROR,
            f"Processing failed: {str(e)}"
        ).to_json_string()
ALL tool inputs MUST be validated using Pydantic models:
from data_types import TextInput, UserIdentifier, validate_tool_input
@mcp.tool()
async def validated_tool(name: str, email: str | None = None) -> str:
    """Tool with proper input validation."""
    try:
        # Validate input using standard models
        user_data = validate_tool_input(
            UserIdentifier,
            {"name": name, "email": email}
        )

        # Use validated data
        result = process_user(user_data)
        return success_response(data=result).to_json_string()
    except ValueError as e:
        return error_response(
            ErrorCode.VALIDATION_ERROR,
            str(e)
        ).to_json_string()
ALL resources MUST extend MCPResourceData:
from data_types import MCPResourceData
class WeatherData(MCPResourceData):
    """Weather resource data model."""
    temperature: float
    humidity: int
    conditions: str
    location: str

@mcp.resource("weather://current/{location}")
async def get_weather(location: str) -> str:
    """Resource with standardized data structure."""
    weather_data = WeatherData(
        temperature=72.5,
        humidity=65,
        conditions="sunny",
        location=location,
        cache_ttl=300  # 5 minutes
    )
    return weather_data.model_dump_json(indent=2)
ALL errors MUST use standardized error codes and responses:
from data_types import ErrorCode, error_response
# Standard error patterns:
try:
    api_result = await external_api_call()
except TimeoutError:
    return error_response(
        ErrorCode.TIMEOUT_ERROR,
        "External API request timed out",
        details={"timeout_seconds": 30}
    ).to_json_string()
except PermissionError:
    return error_response(
        ErrorCode.PERMISSION_DENIED,
        "Insufficient permissions for this operation"
    ).to_json_string()
MANDATORY rules for all MCP implementations:
- Consistent Response Format: All tools return MCPToolResponse JSON strings
- Input Validation: All inputs validated with Pydantic models
- Error Standardization: All errors use standard ErrorCode enumeration
- Resource Structure: All resources extend MCPResourceData
- Type Hints: All functions have complete type annotations
- JSON Serialization: Use serialize_for_llm() for complex data
- Use type hints for all function parameters and return types
- Use docstrings for all public functions and classes
- Prefer async/await for I/O operations
- MANDATORY: Use standardized data types from src/data_types.py
- MANDATORY: Use logging from src/logging_config.py
- Keep functions small and focused (single responsibility)
- Each feature should be self-contained in its own module
- Import statements should be organized: standard library, third-party, local imports
- Use absolute imports from src/ directory
- Import logging and data types in all feature modules
- Test files must start with test_
- Test functions must start with test_
- Use descriptive test names that explain what is being tested
- Mock external dependencies in tests
- Test both success and error response formats
- Test input validation thoroughly
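The mocking and dual-format rules above can be illustrated with a short, self-contained sketch. `fetch_summary` and its `client` argument are hypothetical stand-ins, and the response shapes here only approximate the project's MCPToolResponse format:

```python
import json
from unittest.mock import Mock

def fetch_summary(client) -> str:
    """Illustrative tool body: calls an external client, returns JSON."""
    try:
        data = client.get_data()
        return json.dumps({"status": "success", "data": data})
    except TimeoutError:
        return json.dumps({"status": "error", "error_code": "TIMEOUT_ERROR"})

def test_success_format():
    # Mock the external dependency instead of hitting a real API
    client = Mock()
    client.get_data.return_value = {"value": 42}
    result = json.loads(fetch_summary(client))
    assert result["status"] == "success"
    assert result["data"] == {"value": 42}

def test_error_format():
    # Force the failure path and check the error response shape
    client = Mock()
    client.get_data.side_effect = TimeoutError()
    result = json.loads(fetch_summary(client))
    assert result["status"] == "error"
    assert result["error_code"] == "TIMEOUT_ERROR"
```

Covering both branches this way exercises the success and error formats without any network access.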
See docs/docker-deployment.md for containerization details. Basic usage:
# Development
./scripts/docker-run.sh dev
# Production
docker-compose up -d mcp-server
# Testing
./scripts/docker-run.sh test
- NEVER commit without running tests first
- NEVER build complex solutions when simple ones will work
- NEVER add features without corresponding tests
- NEVER break the vertical slice architecture
- ALWAYS run uv run pytest before committing
- ALWAYS add tests for new features
- ALWAYS use type hints
- ALWAYS follow the core principles (KISS, YAGNI, etc.)
- ALWAYS use uv commands for package management
# Before committing, always run:
uv run pytest # Ensure tests pass
uv run ruff check . # Check linting (if configured)
# Commit with descriptive messages
git add .
git commit -m "feat: add new MCP tool for X functionality"
- Create feature directory: src/features/{feature_name}/
- Copy structure from hello_world example
- Implement tools, resources, prompts with tests
- Register in src/main_server.py
- Test: uv run pytest src/features/{feature_name}/
# Add a new dependency
uv add package-name
# Add development dependency
uv add --dev package-name
# Update dependencies
uv sync
# Install UV if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone and setup project
git clone <repo-url>
cd mcp_builder
uv sync
uv pip install -e . # CRITICAL: Install in editable mode for imports
uv run python src/main_server.py
- Configure your IDE to use the UV virtual environment
- Set Python interpreter to .venv/bin/python
- Enable type checking and linting if available
- ImportError: Run uv sync after pulling changes
- Module not found: Use the uv run prefix for commands
- Test failures: Check uv sync and run tests with -v for details
- MCP connection issues: Test with MCP Inspector first
This guide covers HOW to implement MCP servers using project patterns. For WHAT to build:
- Create a PRP document using the template in prps/prp_base_template_v1.md
- Focus the PRP on requirements and specifications
- Reference this guide for implementation patterns