# Pure Agentic MCP Implementation Guide
## Overview
This document provides a complete guide to the **Pure Agentic MCP Implementation** - a clean, extensible architecture where all functionality is provided through MCP tools organized by intelligent agents.
## Key Benefits
### 1. True MCP Compliance
- ✅ **Pure MCP Protocol**: Server only speaks MCP (stdio/JSON-RPC)
- ✅ **Standard Tools**: All functionality exposed as standard MCP tools
- ✅ **Universal Compatibility**: Works with any MCP client (Claude Desktop, etc.)
### 2. Agentic Organization
- ✅ **Domain Separation**: AI, file, graph, vector operations in separate agents
- ✅ **Logical Grouping**: Related tools grouped under appropriate agents
- ✅ **Clear Responsibilities**: Each agent handles its specific domain
### 3. Extensible Architecture
- ✅ **Dynamic Registration**: Add/remove agents at runtime
- ✅ **Plugin System**: Easy to create new agents
- ✅ **Type Safety**: Clear schemas for all tools and parameters
## Architecture Components
### Core Components
```
Pure MCP Server (pure_mcp_server.py)
├── AgenticToolRegistry (registry.py)
│   ├── Agent Registration
│   ├── Tool Routing
│   └── Status Management
├── Base Agent (agents/base.py)
│   ├── Tool Definition Interface
│   ├── Tool Call Handler
│   └── Availability Check
└── Specific Agents
    ├── OpenAI Agent (agents/openai_agent.py)
    ├── Ollama Agent (agents/ollama_agent.py)
    ├── File Agent (agents/file_agent.py)
    └── [Future agents...]
```
### Agent-Tool Mapping
| Agent | Tools | Description |
|-------|-------|-------------|
| **OpenAI** | `openai_chat`, `openai_analysis`, `openai_completion`, `openai_summarize` | GPT-powered AI capabilities |
| **Ollama** | `ollama_chat`, `ollama_analysis`, `ollama_completion`, `ollama_summarize` | Local AI with Llama models |
| **File** | `file_read`, `file_write`, `file_list`, `file_search`, `file_info`, `file_create_directory` | File system operations |
| **Graph** | `graph_add_node`, `graph_add_edge`, `graph_query`, `graph_visualize` | Knowledge graph operations |
| **Vector** | `vector_embed`, `vector_search`, `vector_cluster`, `vector_similarity` | Semantic search & embeddings |
## Implementation Details
### 1. Base Agent Interface
```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class BaseAgent(ABC):
    @abstractmethod
    def get_tools(self) -> Dict[str, Any]:
        """Return MCP tool definitions this agent provides"""
        pass

    @abstractmethod
    async def handle_tool_call(self, tool_name: str, params: Dict[str, Any]) -> Any:
        """Handle a specific tool call"""
        pass

    @abstractmethod
    def is_available(self) -> bool:
        """Check if this agent is available for use"""
        pass
```
### 2. Tool Registry System
```python
class AgenticToolRegistry:
    def register_agent(self, name: str, agent: BaseAgent):
        """Register an agent and its tools"""

    async def call_tool(self, tool_name: str, params: Dict[str, Any]) -> Any:
        """Route tool call to appropriate agent"""

    def get_all_tools(self) -> Dict[str, Any]:
        """Get all available tools from all agents"""
```
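A minimal sketch of how such a registry can be implemented, assuming a simple tool-name-to-agent lookup table; the private attributes (`_agents`, `_tool_to_agent`) are illustrative names, not necessarily the project's actual internals.

```python
from typing import Any, Dict

from agents.base import BaseAgent


class AgenticToolRegistry:
    """Illustrative registry: maps each tool name to the agent that provides it."""

    def __init__(self):
        self._agents: Dict[str, BaseAgent] = {}
        self._tool_to_agent: Dict[str, str] = {}

    def register_agent(self, name: str, agent: BaseAgent):
        """Register an agent and index each of its tools for routing."""
        self._agents[name] = agent
        for tool_name in agent.get_tools():
            self._tool_to_agent[tool_name] = name

    def get_all_tools(self) -> Dict[str, Any]:
        """Merge tool definitions from every currently available agent."""
        tools: Dict[str, Any] = {}
        for agent in self._agents.values():
            if agent.is_available():
                tools.update(agent.get_tools())
        return tools

    async def call_tool(self, tool_name: str, params: Dict[str, Any]) -> Any:
        """Route a tool call to the agent that registered the tool."""
        agent_name = self._tool_to_agent.get(tool_name)
        if agent_name is None:
            raise ValueError(f"Unknown tool: {tool_name}")
        return await self._agents[agent_name].handle_tool_call(tool_name, params)
```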
### 3. Pure MCP Server
```python
class PureAgenticMCPServer:
    def __init__(self):
        self.registry = AgenticToolRegistry()
        self._register_agents()

    async def handle_request(self, request_data: Dict[str, Any]) -> Dict[str, Any]:
        """Handle incoming MCP requests"""
        # Routes to tools/list, tools/call, etc.
```
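A hedged sketch of the dispatch logic behind `handle_request`, written as a free function over the registry for brevity. The `tools/call` parameter names (`name`, `arguments`) and the text-content result envelope follow the MCP specification; the rest is illustrative rather than the project's actual code.

```python
from typing import Any, Dict


async def handle_request(registry: "AgenticToolRegistry", request_data: Dict[str, Any]) -> Dict[str, Any]:
    """Dispatch the two core MCP methods to the agentic registry (sketch)."""
    method = request_data.get("method")
    params = request_data.get("params", {})
    request_id = request_data.get("id")

    if method == "tools/list":
        # MCP expects a list of tool objects: {"name", "description", "inputSchema"}.
        tools = [{"name": name, **spec} for name, spec in registry.get_all_tools().items()]
        result = {"tools": tools}
    elif method == "tools/call":
        tool_result = await registry.call_tool(params["name"], params.get("arguments", {}))
        result = {"content": [{"type": "text", "text": str(tool_result)}]}
    else:
        return {
            "jsonrpc": "2.0",
            "id": request_id,
            "error": {"code": -32601, "message": f"Method not found: {method}"},
        }

    return {"jsonrpc": "2.0", "id": request_id, "result": result}
```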
## Usage Patterns
### From Claude Desktop (Direct MCP)
```
1. Claude sends MCP request: tools/list
2. Server returns: All available tools from all agents
3. Claude sends MCP request: tools/call with tool_name="openai_chat"
4. Registry routes to OpenAI agent
5. Agent handles tool call and returns result
6. Server wraps result in MCP response
7. Claude receives and processes response
```
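For concreteness, the `tools/call` leg of that exchange looks roughly like the following JSON-RPC 2.0 messages, shown here as Python dicts; the `arguments` field depends on each tool's declared schema, so the `prompt` parameter is an assumption for illustration.

```python
# Step 3: the MCP request Claude Desktop writes to the server's stdin.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "openai_chat",
        "arguments": {"prompt": "Summarize this repository in one sentence."},  # assumed field
    },
}

# Steps 5-6: the MCP response the server writes back, with the agent's result
# wrapped as MCP content (placeholder text shown here).
response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [{"type": "text", "text": "<model reply>"}]
    },
}
```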
### From Web UI via MCP Host
```
1. Streamlit sends HTTP POST: /tools/call
2. MCP Host translates to MCP request: tools/call
3. Pure MCP Server processes via registry → agent
4. MCP Host receives MCP response
5. MCP Host translates to HTTP response
6. Streamlit receives JSON response
```
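Any HTTP client can exercise that same path directly. The sketch below uses the `requests` library and assumes the host listens on `http://localhost:8000` and accepts a `tool_name`/`params` JSON body; both the port and the body field names are assumptions, so adjust them to the actual host configuration.

```python
import requests

# POST to the MCP Host's generic tool endpoint (port and body fields are assumed).
resp = requests.post(
    "http://localhost:8000/tools/call",
    json={"tool_name": "file_list", "params": {"path": "."}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```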
## File Structure
```
mcp_server_full/
├── pure_mcp_server.py # Pure MCP server with agentic architecture
├── registry.py # Agentic tool registry
├── mcp_host.py # HTTP proxy for web access
├── demo_agentic_architecture.py # Demonstration script
├── agents/
│   ├── base.py # Base agent interface
│   ├── openai_agent.py # OpenAI integration
│   ├── ollama_agent.py # Ollama integration
│   ├── file_agent.py # File operations
│   └── [other agents...]
├── config.py # Configuration management
├── protocol.py # MCP protocol definitions
└── requirements.txt # Dependencies
```
## Running the System
### 1. Pure MCP Server (for Claude Desktop)
```bash
# Configure environment
cp .env.example .env
# Edit .env with your API keys
# Run pure MCP server
python pure_mcp_server.py
```
**Claude Desktop Configuration:**
```json
{
  "mcpServers": {
    "agentic-server": {
      "command": "python",
      "args": ["path/to/pure_mcp_server.py"],
      "env": {}
    }
  }
}
```
### 2. MCP Host (for Web Applications)
```bash
# Install additional dependencies
pip install aiohttp aiohttp-cors
# Run MCP host
python mcp_host.py
```
**Available Endpoints:**
- `GET /tools` - List all available tools
- `POST /tools/call` - Call any tool
- `POST /openai/chat` - Direct OpenAI chat
- `POST /ollama/chat` - Direct Ollama chat
- `POST /file` - File operations
### 3. Demonstration
```bash
# Run the demonstration
python demo_agentic_architecture.py
```
## Creating New Agents
### Step 1: Create Agent Class
```python
from typing import Any, Dict

from agents.base import BaseAgent


class MyCustomAgent(BaseAgent):
    def is_available(self) -> bool:
        # Check if dependencies are available
        return True

    def get_tools(self) -> Dict[str, Any]:
        return {
            "my_tool": {
                "description": "My custom tool",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "input": {"type": "string"}
                    },
                    "required": ["input"]
                }
            }
        }

    async def handle_tool_call(self, tool_name: str, params: Dict[str, Any]) -> Any:
        if tool_name == "my_tool":
            return {"result": f"Processed: {params['input']}"}
        raise ValueError(f"Unknown tool: {tool_name}")
```
### Step 2: Register Agent
```python
# In pure_mcp_server.py _register_agents method:
my_agent = MyCustomAgent()
self.registry.register_agent("custom", my_agent)
```
## Testing
### Unit Tests for Agents
```python
import pytest

from agents.my_agent import MyCustomAgent


@pytest.mark.asyncio
async def test_my_agent():
    agent = MyCustomAgent()
    assert agent.is_available()

    tools = agent.get_tools()
    assert "my_tool" in tools

    result = await agent.handle_tool_call("my_tool", {"input": "test"})
    assert result["result"] == "Processed: test"
```
### Integration Tests
```python
@pytest.mark.asyncio
async def test_registry_integration():
    registry = AgenticToolRegistry()
    agent = MyCustomAgent()
    registry.register_agent("test", agent)

    result = await registry.call_tool("my_tool", {"input": "test"})
    assert result["result"] == "Processed: test"
```
## Troubleshooting
### Common Issues
1. **Agent Not Available**
   - Check dependencies (API keys, services running)
   - Verify the `is_available()` method logic
   - Check logs for initialization errors
2. **Tool Not Found**
   - Ensure the agent is registered: `registry.list_agents()`
   - Check the tool name spelling
   - Verify the tool appears in the agent's `get_tools()` output
3. **MCP Communication Issues**
   - Verify the JSON-RPC message format
   - Check for stdio buffering issues (see the sketch below)
   - Ensure proper error handling
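Stdio transports are line-oriented, and unflushed writes are the most common cause of a "hung" MCP session. A minimal read-eval-write loop that avoids this is sketched below, assuming newline-delimited JSON-RPC messages and a server object with the `handle_request` coroutine described earlier.

```python
import asyncio
import json
import sys


async def stdio_loop(server):
    """Read one JSON-RPC request per line, write one response per line, flush each time."""
    loop = asyncio.get_running_loop()
    while True:
        # Read a full line from stdin without blocking the event loop.
        line = await loop.run_in_executor(None, sys.stdin.readline)
        if not line:
            break  # stdin closed: the client disconnected
        if not line.strip():
            continue
        response = await server.handle_request(json.loads(line))
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()  # unflushed output is the usual buffering culprit
```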
### Debugging
```python
import json
import logging

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)

# Check registry status
status = registry.get_agent_status()
print(json.dumps(status, indent=2))

# List available tools
tools = registry.list_tools()
print(f"Available tools: {tools}")
```
## Best Practices
### 1. Agent Design
- ✅ Single responsibility per agent
- ✅ Clear tool naming conventions
- ✅ Comprehensive error handling
- ✅ Proper availability checking
### 2. Tool Schemas
- ✅ Detailed descriptions
- ✅ Complete parameter schemas
- ✅ Required vs optional parameters
- ✅ Input validation
### 3. Error Handling
- ✅ Graceful degradation
- ✅ Informative error messages
- ✅ Proper exception types
- ✅ Logging for debugging
### 4. Performance
- ✅ Async/await for I/O operations
- ✅ Efficient tool routing
- ✅ Resource cleanup
- ✅ Connection pooling where appropriate (see the sketch below)
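To illustrate the async and connection-pooling points above, an HTTP-backed agent can keep one shared `aiohttp.ClientSession` (a connection pool) for all tool calls instead of reconnecting each time. The class and endpoint layout below are a hedged sketch showing only the call path, not one of the project's actual agents.

```python
import aiohttp


class PooledHTTPAgent:
    """Sketch: reuse a single aiohttp session (connection pool) across tool calls."""

    def __init__(self, base_url: str):
        self.base_url = base_url
        self._session = None  # created lazily inside the running event loop

    async def _get_session(self) -> aiohttp.ClientSession:
        if self._session is None or self._session.closed:
            self._session = aiohttp.ClientSession()
        return self._session

    async def handle_tool_call(self, tool_name: str, params: dict) -> dict:
        # All calls share the pooled session, so TCP/TLS setup is paid once.
        session = await self._get_session()
        async with session.post(f"{self.base_url}/{tool_name}", json=params) as resp:
            resp.raise_for_status()
            return await resp.json()

    async def close(self):
        # Resource cleanup: release pooled connections on shutdown.
        if self._session is not None and not self._session.closed:
            await self._session.close()
```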
## Future Enhancements
### Planned Features
- 🔄 **Hot Agent Reloading**: Update agents without server restart
- 📊 **Tool Analytics**: Usage statistics and performance metrics
- 🔐 **Authentication**: Agent-level and tool-level access control
- 🌐 **Distributed Agents**: Agents running on different machines
- 📝 **Tool Composition**: Chaining tools for complex workflows
### Extension Points
- **Custom Transport**: Beyond stdio (WebSocket, gRPC)
- **Agent Discovery**: Automatic agent detection and loading
- **Tool Versioning**: Multiple versions of the same tool
- **Caching Layer**: Cache tool results for performance
## Conclusion
The Pure Agentic MCP Implementation provides a robust, extensible foundation for building AI-powered tools that integrate seamlessly with MCP clients like Claude Desktop while remaining accessible to web applications through the MCP Host proxy.
This architecture ensures:
- **Future-proof design** that can adapt to new AI capabilities
- **Clean separation of concerns** between protocol and functionality
- **Developer-friendly patterns** for extending and maintaining the system
- **Production-ready reliability** with proper error handling and logging
The agentic pattern transforms the traditional server-endpoint model into a more intelligent, organized, and maintainable system where each agent acts as a specialized expert in its domain, all working together through the unified MCP protocol.