# MCP Features Plan
## Research Summary
### MCP Protocol Features
The Model Context Protocol supports these server-side features:
| Feature | Description | SDK Support | Claude.ai Support |
|---------|-------------|-------------|-------------------|
| **Tools** | Functions the LLM can call | `server.tool()` | ✅ Yes |
| **Resources** | Read-only data exposure (files, schemas, etc.) | `server.resource()` | ❌ Not yet |
| **Prompts** | Templated prompt definitions | `server.prompt()` | ❌ Not yet |
| **Completions** | Autocomplete for prompt/resource arguments | `server.setRequestHandler()` | ❌ Not yet |
### Client-Side Features (server requests from client)
| Feature | Description | Use Case |
|---------|-------------|----------|
| **Sampling** | Server requests LLM completion from client | Agentic workflows where server needs AI |
| **Roots** | Client provides filesystem roots | File-based operations |
| **Elicitation** | Server asks user for input mid-tool | Human-in-the-loop |
### Advanced Tool Features (Anthropic API Beta)
| Feature | Status | Benefit |
|---------|--------|---------|
| **Tool Search** | Beta (Nov 2025) | 85% token reduction with `defer_loading: true` |
| **Programmatic Tool Calling** | Beta | 37% token reduction via code execution |
| **Tool Use Examples** | Beta | 72% → 90% accuracy improvement |
## Current Claude.ai Limitations
As of Dec 2025, Claude.ai remote MCP servers only support **tools**. Resources and prompts are not yet exposed in the Claude.ai interface.
**However:** The API supports connecting to MCP servers and accessing their full capabilities, including resources and prompts, via the `mcp_servers` parameter with beta headers.
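A minimal sketch of such a call from a Node-style API client, assuming the MCP connector beta header `mcp-client-2025-04-04` and placeholder model, URL, and server-name values (verify all of these against the current connector docs):
```typescript
// Sketch: attach a remote MCP server to a Messages API call via `mcp_servers`.
// The model id, server URL, and server name are placeholders; the beta header
// value is an assumption based on the MCP connector beta.
const response = await fetch('https://api.anthropic.com/v1/messages', {
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    'x-api-key': process.env.ANTHROPIC_API_KEY!,
    'anthropic-version': '2023-06-01',
    'anthropic-beta': 'mcp-client-2025-04-04',
  },
  body: JSON.stringify({
    model: 'claude-sonnet-4-5', // placeholder model id
    max_tokens: 1024,
    mcp_servers: [
      {
        type: 'url',
        url: 'https://my-mcp.example.com/sse', // hypothetical remote server URL
        name: 'my-mcp',
      },
    ],
    messages: [{ role: 'user', content: 'Use my-mcp to look something up.' }],
  }),
});
const data = await response.json();
```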
## Recommendation
### Phase 1: Add to Template Now (Low effort, future-ready)
**Resources** - Add example resource registration:
```typescript
// Expose server metadata as a resource
this.server.resource(
  'server_info',
  'mcp://my-mcp/info',
  {
    description: 'Server configuration and status',
    mimeType: 'application/json',
  },
  async (uri) => ({
    contents: [{
      uri: uri.href, // the callback receives a URL object; contents expect a string URI
      mimeType: 'application/json',
      text: JSON.stringify({
        name: 'My MCP Server',
        version: '1.0.0',
        user: this.props?.email,
        tools: ['tool1', 'tool2'],
      }),
    }],
  })
);
```
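Once registered, clients discover and read this through the standard `resources/list` and `resources/read` requests, so nothing else needs to change when Claude.ai starts surfacing resources.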
**Prompts** - Add example prompt registration:
```typescript
// Pre-defined prompt template
this.server.prompt(
  'summarize',
  'Summarize content with specific focus',
  {
    content: z.string().describe('The content to summarize'),
    focus: z.string().optional().describe('Specific aspect to focus on'),
  },
  async ({ content, focus }) => ({
    messages: [{
      role: 'user',
      content: {
        type: 'text',
        text: focus
          ? `Please summarize this content, focusing on ${focus}:\n\n${content}`
          : `Please summarize this content:\n\n${content}`,
      },
    }],
  })
);
```
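Clients enumerate these with `prompts/list` and fetch a filled-in template with `prompts/get`; UIs typically surface them as slash commands or quick actions.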
### Phase 2: Tool Search Optimization (When needed)
For servers with 10+ tools, plan for `defer_loading` so that connecting API clients can lazily load tool definitions:
```typescript
// Register the tool normally on the server side. `defer_loading` is an
// Anthropic API-side setting, not an MCP SDK option: the API client that
// connects to this server marks the tool with `defer_loading: true` in its
// request (see the note below), so the full definition is only loaded once
// the model discovers the tool via search.
this.server.tool(
  'rarely_used_tool',
  'This tool is rarely needed',
  { param: z.string() },
  async ({ param }) => ({
    content: [{ type: 'text', text: `Handled ${param}` }],
  })
);
```
**Note:** This requires the API client to use the `advanced-tool-use-2025-11-20` beta header.
### Phase 3: Sampling Support (Future)
When building agentic MCP servers that need to call LLMs themselves:
```typescript
// Request an LLM completion from the connected client (sampling/createMessage).
// The high-level McpServer exposes this through its underlying Server instance.
const result = await this.server.server.createMessage({
  messages: [{ role: 'user', content: { type: 'text', text: 'Analyze this...' } }],
  modelPreferences: { intelligencePriority: 0.8 },
  maxTokens: 500,
});
```
**Use cases:**
- MCP server that summarizes fetched data (sketched below)
- Server that classifies/categorizes content
- Agentic workflows with nested AI calls
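To make the first use case concrete, here is a hedged sketch of a tool that fetches a page and asks the connected client's model to summarize it. The tool name `summarize_url`, the prompt wording, and the 8,000-character truncation are illustrative choices, and it assumes the connected client supports sampling:
```typescript
// Sketch: a tool whose handler requests a completion from the client via
// sampling (sampling/createMessage). Names and limits are placeholders,
// and error handling is omitted for brevity.
this.server.tool(
  'summarize_url',
  'Fetch a URL and return an LLM-generated summary',
  { url: z.string().url().describe('Page to summarize') },
  async ({ url }) => {
    const page = await (await fetch(url)).text();

    // Ask the connected client to run the summarization for us
    const result = await this.server.server.createMessage({
      messages: [{
        role: 'user',
        content: { type: 'text', text: `Summarize this page:\n\n${page.slice(0, 8000)}` },
      }],
      maxTokens: 500,
    });

    return {
      content: [{
        type: 'text',
        text: result.content.type === 'text' ? result.content.text : '[non-text response]',
      }],
    };
  }
);
```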
## Implementation Priority
1. **Now:** Add resources and prompts examples to template (future-proof)
2. **When scaling:** Add defer_loading for tool search optimization
3. **When needed:** Add sampling for agentic server workflows
## Template Changes
### Files to Update
1. `src/index.ts` - Add example resources and prompts in `init()`
2. `README.md` - Document resources and prompts features
3. `src/oauth/google-handler.ts` - Update homepage to show resources/prompts counts
### Example Structure
```typescript
async init() {
  // ===== TOOLS =====
  this.server.tool('example_tool', ...);

  // ===== RESOURCES =====
  this.server.resource('server_info', ...);
  this.server.resource('user_profile', ...);

  // ===== PROMPTS =====
  this.server.prompt('summarize', ...);
  this.server.prompt('analyze', ...);
}
```
## Timeline Prediction
- **Q1 2026:** Claude.ai likely adds resources support
- **Q2 2026:** Prompts support in Claude.ai interface
- **2026:** MCP support broadens and matures across other AI providers (OpenAI, Google)
Being ready now means our servers work immediately when support lands.
## References
- [MCP Specification](https://modelcontextprotocol.io/specification/draft)
- [Anthropic Advanced Tool Use](https://www.anthropic.com/engineering/advanced-tool-use)
- [Tool Search Tool](https://platform.claude.com/docs/en/agents-and-tools/tool-use/tool-search-tool)
- [Cloudflare Agents SDK](https://developers.cloudflare.com/agents/model-context-protocol/)