# MCP Resources and Prompts Guide

This document explains what MCP Resources and Prompts are, how they work, and how users can benefit from them in the Model Context Protocol (MCP) ecosystem.

## Table of Contents

- [What are MCP Resources?](#what-are-mcp-resources)
- [What are MCP Prompts?](#what-are-mcp-prompts)
- [How Users Access Resources](#how-users-access-resources)
- [How Users Use Prompts](#how-users-use-prompts)
- [Benefits for Users](#benefits-for-users)
- [Example Server Implementation](#example-server-implementation)
- [Real-world Usage Examples](#real-world-usage-examples)
- [Best Practices](#best-practices)
- [Conclusion](#conclusion)

## What are MCP Resources?

**Resources** are data sources that MCP clients can read to load contextual information directly into the LLM's working memory.

### Key Characteristics

- Act like "GET endpoints" for structured data
- Provide real-time access to server-side information
- Load data directly into the LLM's context window
- Support both static and dynamic (parameterized) content

### Resource Types

1. **Static Resources**: Fixed data that doesn't change based on parameters
2. **Dynamic Resources**: Template-based resources with parameters (e.g., `data://user/{user_id}`)

### Resource URI Schemes

- `resource://` - General resource data
- `data://` - Structured data sources
- `config://` - Configuration information
- `file://` - File-based resources
- Custom schemes as needed

## What are MCP Prompts?

**Prompts** are reusable, parameterized templates that generate well-structured text for common tasks.

### Key Characteristics

- Pre-defined prompt templates with customizable parameters
- Generate consistent, optimized prompts for specific use cases
- Support optional parameters with default values
- Can be synchronous or asynchronous

### Prompt Benefits

- Consistency across interactions
- Optimized prompt engineering
- Reduced user effort
- Parameterization for flexibility

## How Users Access Resources

### In Claude Code (Desktop App)

1. **Resource Panel**: Click the resources icon in the sidebar
2. **Browse Resources**: View all available resources from connected servers
3. **Load Resource**: Click any resource to load it into the conversation context
4. **Context Integration**: The data becomes available to the LLM immediately

### In Other MCP Clients

Users can access resources through:

- **Direct URI requests**: `Load resource: data://users`
- **Resource browsers**: GUI interfaces for exploring available resources
- **Context menus**: Right-click options to load specific resources
- **Quick access panels**: Shortcuts to frequently used resources

### Resource Loading Process

```
1. User request: "Load resource: data://user/1"
2. Client → Server: resource fetch request
3. Server: executes the matching @mcp.resource function
4. Data loading: resource data is injected into the LLM context
5. Ready for use: the LLM can reference the loaded data
```
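Outside of a GUI client, the same flow can be driven programmatically. The following is a minimal sketch, assuming the `Client` class from the FastMCP library used elsewhere in this guide and a local server script named `server.py` (the script name is illustrative); other MCP clients expose equivalent list and read operations.

```python
import asyncio
from fastmcp import Client

async def main():
    # Connect to the MCP server (a local script here; URLs work too)
    async with Client("server.py") as client:
        # Discover which resources the server exposes
        for resource in await client.list_resources():
            print(resource.uri)

        # Read one resource; a host application would now inject
        # its contents into the LLM's context window
        contents = await client.read_resource("data://user/1")
        print(contents[0].text)  # dict results arrive as JSON text

asyncio.run(main())
```

The `data://user/1` URI here matches the dynamic resource template defined in the [Example Server Implementation](#example-server-implementation) section.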
## How Users Use Prompts

### Prompt Invocation

Users can invoke prompts through:

- **Prompt templates**: Select from the available prompts in the client UI
- **Parameter input**: Fill in required and optional parameters
- **Direct invocation**: Call prompts programmatically

### Prompt Execution Flow

```
1. User selects a prompt: "analyze_user_data"
2. User provides parameters: user_id=1, analysis_type="detailed"
3. Server renders the prompt: "Please analyze the data for user ID 1..."
4. The generated prompt is sent to the LLM for processing
```
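This flow can likewise be scripted. The sketch below is again only an illustration: it assumes the same FastMCP `Client` as above and the `analyze_user_data` prompt defined in the [Example Server Implementation](#example-server-implementation) section. Note that the MCP protocol transmits prompt arguments as strings; FastMCP servers coerce them back using the function's type hints (e.g., `user_id: int`).

```python
import asyncio
from fastmcp import Client

async def main():
    async with Client("server.py") as client:
        # Steps 1-2: select the prompt and supply its parameters
        result = await client.get_prompt(
            "analyze_user_data",
            {"user_id": "1", "analysis_type": "detailed"},
        )
        # Step 3: the server has rendered the template into chat
        # messages; a host application forwards these to the LLM (step 4)
        for message in result.messages:
            print(f"[{message.role}] {message.content.text}")

asyncio.run(main())
```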
## Benefits for Users

### Resource Benefits

- **🔄 Real-time Data**: Access to fresh, up-to-date information
- **📊 Context Loading**: Relevant data loaded directly into LLM memory
- **🎯 Structured Access**: Well-defined URIs for specific data types
- **⚡ Performance**: No need for multiple tool calls to gather data
- **🔗 Integration**: Seamless data flow between server and LLM

### Prompt Benefits

- **📝 Consistency**: The same prompt structure and quality every time
- **⚡ Efficiency**: No need to craft prompts from scratch
- **🎛️ Customization**: Parameters allow for specific use cases
- **🏆 Best Practices**: The server provides optimized prompt templates
- **🔄 Reusability**: The same prompts work across different contexts

### Combined Benefits

When used together, resources and prompts create powerful workflows:

1. **Load relevant data** via resources
2. **Generate optimized prompts** for that data
3. **Get high-quality responses** from the LLM

## Example Server Implementation

### Resources Implementation

```python
from fastmcp import FastMCP
import datetime

mcp = FastMCP("Demo MCP Server")

# Static Resource
@mcp.resource("resource://server-info")
def get_server_info() -> dict:
    """Provides information about this MCP server"""
    return {
        "name": "Demo MCP Server",
        "version": "1.0.0",
        "capabilities": ["tools", "resources", "prompts"],
        "uptime": str(datetime.datetime.now())
    }

# Dynamic Resource with Parameters
@mcp.resource("data://user/{user_id}")
def get_user_resource(user_id: str) -> dict:
    """Provides detailed information about a specific user"""
    # Demo lookup; a real server would query a database or API here
    users = {"1": {"id": 1, "name": "Alice", "role": "admin"}}
    if user_id not in users:
        raise ValueError(f"Unknown user ID: {user_id}")
    return users[user_id]
```

### Prompts Implementation

```python
# Basic Prompt
@mcp.prompt
def analyze_user_data(user_id: int, analysis_type: str = "summary") -> str:
    """Generate a prompt to analyze user data"""
    return f"Please analyze the data for user ID {user_id} and provide a {analysis_type} analysis."

# Advanced Prompt with Multiple Parameters
@mcp.prompt
def generate_user_report(user_ids: str, report_format: str = "detailed") -> str:
    """Generate a prompt to create a user report"""
    return f"Create a {report_format} report for users with IDs: {user_ids}."
```

## Real-world Usage Examples

### Example 1: User Analysis Workflow

```
Step 1: Load user data
User: "Load resource: data://user/1"
Result: Alice's profile is loaded into context

Step 2: Generate analysis prompt
User: Invoke prompt "analyze_user_data" with parameters:
- user_id: 1
- analysis_type: "security_audit"

Step 3: LLM response
The LLM has both Alice's data AND an optimized prompt for security analysis
```

### Example 2: System Health Check

```
Step 1: Load system configuration
User: "Load resource: config://settings"
Result: Server settings are loaded into context

Step 2: Generate health check prompt
User: Invoke prompt "system_health_check" with parameter:
- component: "database"

Step 3: Comprehensive analysis
The LLM analyzes system health using the current configuration data
```

### Example 3: Troubleshooting Session

```
Step 1: Load relevant system info
User: "Load resource: resource://server-info"
Result: Current server status and capabilities are loaded

Step 2: Generate troubleshooting prompt
User: Invoke prompt "troubleshoot_issue" with parameters:
- issue_description: "Database connection timeouts"
- severity: "high"

Step 3: Guided troubleshooting
The LLM provides step-by-step debugging with server context
```

## Best Practices

### For Server Developers

- **Design intuitive URIs**: Use clear, hierarchical resource naming
- **Provide comprehensive data**: Include all relevant fields in resources
- **Optimize prompts**: Craft prompts that generate high-quality responses
- **Handle errors gracefully**: Return helpful error messages for invalid requests
- **Document resources**: Write clear descriptions for each resource and prompt

### For Users

- **Load relevant resources first**: Get context before asking questions
- **Use specific prompts**: Choose prompts that match your use case
- **Combine resources and prompts**: Use them together for the best results
- **Explore available resources**: Discover what data is available
- **Parameterize prompts**: Customize prompts for your specific needs

## Conclusion

MCP Resources and Prompts work together to create a powerful, efficient workflow:

- **Resources** provide the data
- **Prompts** provide the questions
- **LLMs** provide the answers

This combination eliminates the need for users to manually gather data and craft prompts, leading to more consistent, efficient, and higher-quality interactions with AI systems.
