# think-mcp v2.1.0 Release Notes
**Release Date:** January 1, 2026
**Previous Version:** 2.0.0
---
## Overview
This release delivers significant performance improvements, enhanced developer experience, and new MCP capabilities. Average response time across all 11 thinking tools is **97% faster**, with semantic quality fully maintained.
---
## Highlights
### Performance
- **97.4% faster average response time** (211ms → 5.6ms)
- **Zero regressions** across all tools
- Optimized handler architecture for sub-10ms responses
### New Capabilities
- **MCP Resources**: 33 discoverable resources for mental models, patterns, paradigms, and debugging approaches
- **MCP Prompts**: 4 guided workflow templates with A/B variant support
- **Standardized Response Format**: Consistent structure across all tool outputs
### Developer Experience
- Full Zod schema validation with descriptive error messages
- Request tracking with unique IDs and timestamps
- Tool icons for UI integration
---
## What's New
### MCP Resources (PR 2-4)
Expose thinking frameworks as discoverable MCP resources:
| Resource URI | Description |
|--------------|-------------|
| `think://models` | 6 mental models catalog |
| `think://patterns` | 7 design patterns catalog |
| `think://paradigms` | 10 programming paradigms catalog |
| `think://debug-approaches` | 6 debugging methods catalog |
**Plus 29 individual resource URIs** for direct access to specific items (e.g., `think://models/first_principles`).
```typescript
// Example: Reading mental models
const models = await mcp.readResource('think://models');
// Returns: { type: 'mental-models', version: '2.0.0', count: 6, items: [...] }
```
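Individual items can be read the same way, assuming `readResource` accepts the per-item URIs listed above:
```typescript
// Example: Reading a single mental model by its individual URI
const firstPrinciples = await mcp.readResource('think://models/first_principles');
```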
### MCP Prompts (PR 5-6)
Guided workflow templates for common reasoning tasks:
| Prompt | Description |
|--------|-------------|
| `analyze-problem` | Structured problem analysis with tool recommendations |
| `debug-issue` | Systematic debugging workflow |
| `design-decision` | Architecture decision framework |
| `review-architecture` | System design review checklist |
**A/B Variant Support**: Each prompt supports detailed (A) and minimal (B) variants, configurable via Vercel Edge Config.
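The variant lookup itself isn't shown in this release, but with the Edge Config SDK it could look like the sketch below (the key name `prompt-variants` and the `'A'` fallback are assumptions, not the actual implementation):
```typescript
import { get } from '@vercel/edge-config';

// Hypothetical sketch: resolve which variant (detailed 'A' or minimal 'B')
// to serve for a given prompt. Key name and fallback are illustrative only.
async function resolveVariant(promptName: string): Promise<'A' | 'B'> {
  const variants = await get<Record<string, 'A' | 'B'>>('prompt-variants');
  return variants?.[promptName] ?? 'A'; // default to the detailed variant
}
```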
```typescript
// Example: Getting a prompt
const prompt = await mcp.getPrompt('analyze-problem', {
problem: 'Database queries are slow',
context: 'PostgreSQL with 10M rows'
});
```
### Standardized Response Format (PR 12)
All tools now return a consistent response structure:
```json
{
"success": true,
"tool": "model",
"data": {
"modelName": "first_principles",
"problem": "Designing scalable microservices",
"status": "success"
},
"metadata": {
"processingTimeMs": 4,
"version": "2.0.0",
"timestamp": "2026-01-01T09:43:45.856Z",
"requestId": "req_mjv9ci4g_4wlou6u"
}
}
```
**Benefits:**
- Predictable response parsing
- Built-in performance metrics
- Request tracing for debugging
- Version identification
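For illustration, a minimal wrapper that produces this envelope might look like the sketch below; the names and request-ID scheme are assumptions, and the actual implementation lives in `web/lib/responses/`:
```typescript
// Hypothetical sketch of a response wrapper emitting the standardized
// envelope shown above. The requestId format is illustrative only.
function wrapResponse<T>(tool: string, handler: () => T) {
  const start = performance.now();
  const data = handler(); // stateless, pure computation
  return {
    success: true,
    tool,
    data,
    metadata: {
      processingTimeMs: Math.round(performance.now() - start),
      version: '2.0.0',
      timestamp: new Date().toISOString(),
      requestId: `req_${Math.random().toString(36).slice(2, 10)}`,
    },
  };
}
```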
### Enhanced Zod Validation (PR 7-10)
All 11 tools now feature comprehensive input validation:
- **Type safety**: Full TypeScript integration
- **Descriptive errors**: Clear messages for invalid inputs
- **Enum validation**: Strict enforcement for tool-specific types
- **Range validation**: Confidence scores (0-1), iterations (≥0)
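Taken together, a schema enforcing these rules might look like the following sketch (the enum is truncated and the optional fields are illustrative, not the actual schema from `web/lib/tools/`):
```typescript
import { z } from 'zod';

// Hypothetical sketch combining the validation rules listed above.
const modelInput = z.object({
  modelName: z.enum(['first_principles', 'opportunity_cost']), // truncated enum
  problem: z.string().min(1, 'problem must not be empty'),
  confidence: z.number().min(0).max(1).optional(), // range: 0-1
  iterations: z.number().int().min(0).optional(),  // range: >= 0
});

type ModelInput = z.infer<typeof modelInput>; // full TypeScript integration
```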
Example validation error:
```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid enum value. Expected 'first_principles' | 'opportunity_cost' | ..."
  }
}
```
### Tool Icons (PR 13)
SVG icons for all 11 tools, enabling rich UI integration:
| Tool | Icon Path |
|------|-----------|
| trace | `/icons/trace.svg` |
| model | `/icons/model.svg` |
| pattern | `/icons/pattern.svg` |
| paradigm | `/icons/paradigm.svg` |
| debug | `/icons/debug.svg` |
| council | `/icons/council.svg` |
| decide | `/icons/decide.svg` |
| reflect | `/icons/reflect.svg` |
| hypothesis | `/icons/hypothesis.svg` |
| debate | `/icons/debate.svg` |
| map | `/icons/map.svg` |
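Clients can resolve these paths against the server's origin; a trivial sketch (the `baseUrl` parameter is an assumption about deployment, not part of the MCP API):
```typescript
// Hypothetical helper: build an icon URL from the table above
function toolIconUrl(baseUrl: string, tool: string): string {
  return `${baseUrl}/icons/${tool}.svg`;
}

// toolIconUrl('https://example.com', 'debate') -> 'https://example.com/icons/debate.svg'
```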
---
## Performance Improvements
### Response Time Comparison
| Tool | v2.0.0 | v2.1.0 | Improvement |
|------|--------|--------|-------------|
| trace | 177ms | 16ms | -91% |
| model | 196ms | 6ms | -97% |
| pattern | 199ms | 4ms | -98% |
| paradigm | 189ms | 5ms | -97% |
| debug | 194ms | 4ms | -98% |
| council | 300ms | 5ms | -98% |
| decide | 229ms | 4ms | -98% |
| reflect | 204ms | 4ms | -98% |
| hypothesis | 249ms | 5ms | -98% |
| debate | 187ms | 4ms | -98% |
| map | 206ms | 4ms | -98% |
### Why Faster?
1. **Stateless handlers**: Lightweight request processing
2. **Optimized wrapper**: Minimal overhead response formatting
3. **No I/O blocking**: Pure computation paths
4. **Module caching**: Hot paths after first request (sketched below)
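The module-caching point is the simplest to illustrate: if a catalog lives at module scope, it is built once on first import and every later request hits warm memory (which also explains the ~40ms cold start noted under Known Issues). A hypothetical sketch with invented names:
```typescript
// Hypothetical illustration of module caching: the catalog is built once
// when the module is first imported; later requests reuse the warm object.
type Catalog = Record<string, { summary: string }>;

const mentalModels: Catalog = {
  first_principles: { summary: 'Reduce the problem to its fundamentals' },
  opportunity_cost: { summary: 'Weigh what each choice gives up' },
}; // evaluated once per process, not per request

export function lookupModel(name: string) {
  return mentalModels[name]; // pure in-memory lookup, no I/O
}
```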
---
## Breaking Changes
None. This release is fully backward compatible with v2.0.0.
---
## Migration Guide
### From v2.0.0
No migration required. Existing tool calls continue to work unchanged.
### Adopting New Features
**To use Resources:**
```typescript
// List all resources
const list = await mcp.listResources();
// Read a specific resource
const models = await mcp.readResource('think://models');
```
**To use Prompts:**
```typescript
// List all prompts
const prompts = await mcp.listPrompts();
// Get a prompt with arguments
const prompt = await mcp.getPrompt('analyze-problem', {
problem: 'Your problem description',
context: 'Additional context'
});
```
**To use Standardized Responses:**
```typescript
const response = await mcp.callTool('model', {
modelName: 'first_principles',
problem: 'test'
});
if (response.success) {
console.log(`Tool: ${response.tool}`);
console.log(`Data: ${JSON.stringify(response.data)}`);
console.log(`Processing time: ${response.metadata.processingTimeMs}ms`);
}
```
---
## Known Issues
1. **Cold Start Latency**: First request to each tool may take ~40ms due to module loading
2. **Lenient Validation**: `trace` tool accepts negative `thoughtNumber` values (legacy behavior)
---
## Testing
### Validation Results
| Test Category | Passed | Failed |
|---------------|--------|--------|
| Tool Calls | 37 | 0 |
| Response Format | 37 | 0 |
| Resources | 2 | 0 |
| Prompts | 2 | 0 |
### Semantic Quality (Maintained)
| Metric | v2.0.0 | v2.1.0 |
|--------|--------|--------|
| Coherence Score | 0.90 | 0.90 |
| Usefulness Score | 0.87 | 0.87 |
| Tool Selection Accuracy | 93.9% | 93.9% |
---
## Files Changed
### New Files
- `web/lib/resources/` - MCP Resources implementation
- `web/lib/prompts/` - MCP Prompts implementation
- `web/lib/responses/` - Standardized response wrapper
- `web/lib/experiments/` - A/B testing infrastructure
- `web/lib/progress/` - Progress notification system
- `web/public/icons/` - Tool icons (11 SVGs)
- `web/benchmark-mcp.mjs` - Performance benchmark script
### Modified Files
- `web/lib/mcp-tools.ts` - Tool registration with response wrapper
- `web/lib/tools/*.ts` - Enhanced Zod schemas
- `web/app/api/[transport]/route.ts` - Resources & Prompts registration
---
## Contributors
This release was developed as part of the Phase 2+ enhancement initiative.
---
## What's Next
- Additional prompt templates for specialized workflows
- Enhanced progress notifications during long-running operations
- Extended resource catalogs with community contributions
---
## Links
- [Performance Comparison Report](../test-results/performance-comparison-report.md)
- [Benchmark Results](../test-results/benchmark-2026-01-01.json)
- [Semantic Evaluation Summary](../test-results/comprehensive-evaluation-summary.md)