This MCP server provides conversational access to AI developer tool intelligence through Claude, acting as a bridge to REST API data and enabling natural language queries about adoption metrics, trends, and comparisons.
Capabilities:
Compare tools side-by-side - Compare adoption metrics between 2-3 AI developer tools (OpenAI, Anthropic, Cursor, Copilot, LangChain) over different time periods (7d, 30d, 90d)
Discover trending tools - Identify the fastest-growing AI developer tools ranked by growth rate, with filtering by category (LLM APIs, editors, assistants, frameworks) and customizable result limits (3-10 tools)
Analyze historical growth - View historical adoption data and growth trends for specific tools over 3-12 months, including downloads, GitHub stars, and community engagement metrics
Search and filter tools - Find AI developer tools by keyword, category, or minimum download thresholds, with sorting options by downloads, GitHub stars, or name
Get natural language insights - Receive formatted conversational responses with growth analysis and trend insights instead of raw JSON data, making interpretation easier for non-technical users
Access multiple data sources - Draw from NPM download statistics, GitHub repository metrics, community engagement data (Stack Overflow, Reddit), and tool metadata
Bridge existing REST APIs - Use as a template to wrap existing REST APIs with conversational interfaces while maintaining a single source of truth for business logic
Retrieves GitHub repository metrics including stars and activity data for AI development tools.
Queries real-time adoption metrics, trends, and comparisons for GitHub Copilot including download statistics, community engagement, and historical growth data.
Queries real-time adoption metrics, trends, and comparisons for LangChain including NPM downloads, GitHub stars, community engagement, and historical growth data.
Retrieves NPM download statistics (weekly/monthly) for AI development tools and frameworks.
Queries real-time adoption metrics, trends, and comparisons for the OpenAI SDK including NPM downloads, GitHub stars, community engagement, and historical growth data.
Retrieves PyPI package statistics as one of the multiple data sources integrated in the production implementation.
Tracks Reddit mentions and community engagement metrics for AI development tools.
Retrieves Stack Overflow question counts and community engagement metrics for AI development tools.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@AI Developer Tools MCP Server compare GitHub Copilot and Cursor adoption trends"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
AI Developer Tools MCP Server
Educational reference implementation demonstrating how to build MCP servers as bridges to existing REST APIs.
This MCP server shows how to expose AI development tool intelligence through the Model Context Protocol (MCP) by wrapping a REST API with a conversational interface. It's designed as a learning resource for developers who want to understand the MCP-as-API-bridge pattern.
Architecture: MCP as an API Bridge
This project demonstrates the recommended pattern for MCP servers: wrapping an existing REST API to provide conversational access, rather than building everything from scratch.
The Pattern
Why This Architecture?
Single Source of Truth
All business logic lives in the REST API
Authentication, rate limiting, caching happen once
API changes automatically flow through to MCP
Dual Access Patterns
Developers use API directly for programmatic access
Non-technical users get conversational access via Claude
Same data, different interfaces for different needs
Thin Wrapper
MCP server is ~200 lines of formatting code
Calls existing API endpoints
Transforms JSON responses → natural language
Easy to maintain and extend
What It Does
This MCP server makes AI development tool intelligence accessible through natural conversation with Claude. Instead of manually querying APIs or clicking through dashboards, you can ask:
Example Queries:
"Compare the adoption of OpenAI SDK vs Anthropic SDK"
"What are the fastest-growing AI coding tools this month?"
"Show me the growth history of Cursor over the last 6 months"
"Find all LLM API frameworks with over 5M downloads"
Claude uses the exposed tools to fetch data and present insights in natural language, complete with growth trends, community metrics, and comparative analysis.
What Data Is Exposed:
NPM download statistics (weekly/monthly)
GitHub repository metrics (stars, activity)
Community engagement (Stack Overflow questions, Reddit mentions)
Historical growth trends
Tool metadata (descriptions, categories, package names)
Code Structure
Key Layers
1. MCP Tools
Define what Claude can call
Receive structured parameters from Claude
Call the API client
Return formatted responses
2. API Client ⭐ THE BRIDGE
Simulates REST API calls in this demo
In production: makes real HTTP requests
Handles authentication, errors, timeouts
Returns JSON responses
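As a minimal sketch of this layer (class name, endpoint path, and mock fields are all assumptions, not the repo's actual code), the bridge exposes one interface whose demo version reads in-memory data and whose production version would make real HTTP requests:

```javascript
// Illustrative bridge sketch: same interface for demo (mock) and production (HTTP).
// All identifiers and endpoint paths are hypothetical.
const MOCK_DB = {
  cursor: { name: "Cursor", downloads: 8_100_000, stars: 24_000 },
};

class ApiClient {
  constructor({ useMock = true, baseUrl = "https://api.example.com" } = {}) {
    this.useMock = useMock;
    this.baseUrl = baseUrl;
  }

  async getTool(id) {
    if (this.useMock) {
      // Demo: simulate a REST response from in-memory data.
      const tool = MOCK_DB[id];
      if (!tool) throw new Error(`Unknown tool: ${id}`);
      return tool;
    }
    // Production: a real HTTP request (auth, timeouts, retries layered on later).
    const res = await fetch(`${this.baseUrl}/api/tools/${id}`);
    if (!res.ok) throw new Error(`API error: ${res.status}`);
    return res.json();
  }
}
```

Because callers only see `getTool()`, swapping mock data for `fetch()` changes nothing upstream.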
3. Formatters
Transform JSON → Natural language
Add insights and context
Make data conversational
This is where MCP adds value
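A formatter in this layer might look like the following sketch (function and field names are assumptions): it takes a JSON array and returns ranked, readable text rather than a data structure.

```javascript
// Hypothetical formatter: turns a JSON array of tools into a ranked, readable list.
function formatTrending(tools) {
  const lines = tools
    .slice()
    .sort((a, b) => b.growthPct - a.growthPct)
    .map((t, i) => {
      const sign = t.growthPct >= 0 ? "+" : "";
      return `${i + 1}. ${t.name}: ${sign}${t.growthPct.toFixed(1)}% growth`;
    });
  return ["Fastest-growing tools:", ...lines].join("\n");
}
```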
4. Mock Data
Simulates database responses
Representative sample data
In production: replaced by real database
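A mock record might be shaped like this sketch (field names and all numbers are illustrative assumptions, mirroring the data sources listed earlier: NPM downloads, GitHub stars, Stack Overflow, Reddit):

```javascript
// Hypothetical shape of a mock-data record; in production this comes from the database.
const mockTools = {
  langchain: {
    name: "LangChain",
    category: "frameworks",
    package: "langchain",
    weeklyDownloads: 3_400_000,   // illustrative number
    githubStars: 90_000,          // illustrative number
    stackOverflowQuestions: 4_200,
    redditMentions: 1_100,
    history: [
      // month + weekly downloads, illustrative numbers only
      { month: "2024-01", downloads: 2_100_000 },
      { month: "2024-02", downloads: 2_600_000 },
    ],
  },
};
```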
Production vs Demo
Demo (This Repo)
API Client:
What it demonstrates:
MCP tool → API client → Data source pattern
Request/response flow
Error handling
Response formatting
Production (vibe-data.com)
API Client:
What changes:
fetch() instead of mock data
Real authentication headers
Actual timeout handling
Production error handling
Rate limiting
Retry logic
Everything else stays the same:
MCP tool definitions ✓
Response formatters ✓
Tool parameter schemas ✓
MCP server setup ✓
Quick Start
Prerequisites
Node.js 18 or higher
Claude Desktop app (or any MCP-compatible client)
Installation
Running the Server
Option 1: Standalone Testing
Option 2: Connect to Claude Desktop
Add this configuration to your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
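A typical entry follows the standard Claude Desktop MCP config shape; the server name and the entry-point path below are assumptions about this repo's layout, so adjust them to match your clone:

```json
{
  "mcpServers": {
    "ai-dev-tools": {
      "command": "node",
      "args": ["/absolute/path/to/ai-dev-tools-mcp/index.js"]
    }
  }
}
```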
Restart Claude Desktop. You should see the server listed in the MCP section.
Testing It Works
Ask Claude:
"What are the most popular AI coding tools right now?"
Claude will use the get_trending_tools tool to fetch current data and present it to you.
How The Bridge Pattern Works
Request Flow
Code Example
User asks: "What's Cursor's adoption trend?"
1. Claude decides to call the get_tool_history tool.
2. The MCP tool receives the call with structured parameters.
3. The API client makes the request.
4. The formatter transforms the response into text.
5. Claude presents to user: "Based on the data, Cursor has shown strong growth over the last 6 months, with downloads increasing from 5.2M to 8.1M (55.8% growth)..."
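The flow above can be sketched end-to-end in one place (all identifiers are hypothetical; the mock numbers mirror the 5.2M → 8.1M example in step 5):

```javascript
// End-to-end sketch of the bridge: tool handler -> API client -> formatter.
// Identifiers are illustrative, not the repo's actual code.

// API client (demo: mock data instead of HTTP).
function getToolHistory(toolId, months) {
  const mock = {
    cursor: { name: "Cursor", start: 5_200_000, end: 8_100_000 },
  };
  return { ...mock[toolId], months };
}

// Formatter: JSON -> natural language.
function formatHistory(h) {
  const pct = (((h.end - h.start) / h.start) * 100).toFixed(1);
  return (
    `${h.name} grew from ${(h.start / 1e6).toFixed(1)}M to ` +
    `${(h.end / 1e6).toFixed(1)}M downloads over ${h.months} months (${pct}% growth).`
  );
}

// MCP tool handler: receives structured parameters from Claude
// and returns text that Claude presents to the user.
function handleGetToolHistory({ tool, months }) {
  return formatHistory(getToolHistory(tool, months));
}
```

Calling `handleGetToolHistory({ tool: "cursor", months: 6 })` yields the kind of sentence Claude then relays conversationally.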
Available Tools
1. compare_tools
Compare adoption metrics between 2-3 AI developer tools
Parameters:
Returns: Side-by-side comparison with growth indicators and key insights
2. get_trending_tools
Get the fastest-growing AI developer tools
Parameters:
Returns: Ranked list by growth percentage with current metrics
3. get_tool_history
Get historical adoption data for a specific tool
Parameters:
Returns: Monthly timeline with growth analysis
4. search_tools
Search and filter tools by criteria
Parameters:
Returns: Filtered list with full details and summary stats
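Taken together, the parameter shapes implied by the tool descriptions above might look like the following JSON-Schema-style sketch (field names, enum spellings, and bounds are inferred assumptions, not the actual schemas):

```javascript
// Hypothetical parameter schemas inferred from the four tool descriptions.
const toolSchemas = {
  compare_tools: {
    tools: { type: "array", items: { type: "string" }, minItems: 2, maxItems: 3 },
    period: { type: "string", enum: ["7d", "30d", "90d"] },
  },
  get_trending_tools: {
    category: { type: "string", enum: ["llm-apis", "editors", "assistants", "frameworks"] },
    limit: { type: "integer", minimum: 3, maximum: 10 },
  },
  get_tool_history: {
    tool: { type: "string" },
    months: { type: "integer", minimum: 3, maximum: 12 },
  },
  search_tools: {
    query: { type: "string" },
    category: { type: "string" },
    min_downloads: { type: "integer" },
    sort_by: { type: "string", enum: ["downloads", "stars", "name"] },
  },
};
```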
Migrating to Production
Step 1: Update API Client
Replace mock calls with real HTTP requests:
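A sketch of what the swap might look like, also wiring in the authentication header from Step 2 (the base URL, env-var names, header, and timeout value are all assumptions):

```javascript
// Production sketch: real HTTP request with auth header and timeout.
// URL, env-var names, and timeout are illustrative assumptions.
const BASE_URL = process.env.API_BASE_URL || "https://api.example.com";

function buildRequest(path, params = {}) {
  const url = new URL(path, BASE_URL);
  for (const [k, v] of Object.entries(params)) url.searchParams.set(k, v);
  return {
    url: url.toString(),
    headers: { Authorization: `Bearer ${process.env.API_KEY || ""}` },
  };
}

async function apiGet(path, params) {
  const { url, headers } = buildRequest(path, params);
  // Node 18+: global fetch plus AbortSignal.timeout for actual timeout handling.
  const res = await fetch(url, { headers, signal: AbortSignal.timeout(10_000) });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.json();
}
```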
Step 2: Add Authentication
Step 3: Add Rate Limiting
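One simple client-side approach is a sliding-window limiter; this sketch (names and defaults are assumptions) injects timestamps so the logic is easy to test:

```javascript
// Sliding-window rate limiter sketch: allow at most `max` calls per window.
function makeRateLimiter({ max = 10, windowMs = 60_000 } = {}) {
  const calls = [];
  return function allow(now = Date.now()) {
    // Drop timestamps that have aged out of the window.
    while (calls.length && now - calls[0] >= windowMs) calls.shift();
    if (calls.length >= max) return false; // caller should wait or reject
    calls.push(now);
    return true;
  };
}
```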
Step 4: Add Retry Logic
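Retry with exponential backoff can wrap any API call; in this sketch the attempt count and base delay are illustrative defaults:

```javascript
// Retry sketch: exponential backoff for transient failures.
async function withRetry(fn, { retries = 3, baseDelayMs = 250 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off: 250ms, 500ms, 1000ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError; // all attempts exhausted
}
```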
Step 5: Test
Everything else (tools, formatters, MCP server) stays the same.
Design Decisions
Why Wrap an API Instead of Direct Database Access?
Separation of Concerns:
API handles business logic, auth, rate limiting
MCP server focuses on conversation formatting
Don't duplicate logic in both places
Security:
API is your security boundary
MCP server doesn't need database credentials
Same security rules apply to all clients
Maintainability:
One codebase for business logic
API changes flow through automatically
MCP layer is thin and simple
Why Format Responses as Text?
Claude Excels at Language:
Language models work best with text, not JSON
No parsing needed - Claude can directly quote or summarize
More flexible - Claude can adapt presentation to context
Better User Experience:
Users see insights, not data structures
Natural conversation flow
Context and interpretation included
Example: the same result can be returned as a raw JSON object or as formatted, conversational text. The data is identical, but one form is for machines and one is for humans.
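For illustration (field names and numbers are hypothetical), the two forms might look like this:

```javascript
// The same data in both forms (numbers are illustrative).
const jsonResponse = {
  tool: "cursor",
  downloads_current: 8_100_000,
  downloads_previous: 5_200_000,
  growth_pct: 55.8,
};

const formattedText =
  "Cursor's downloads grew 55.8% over the period, " +
  "from 5.2M to 8.1M, indicating strong, sustained adoption.";
```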
Why One Tool Per Function?
Claude Performs Better:
Clear, focused tools are easier for Claude to choose
Simpler parameter schemas
More predictable behavior
Easier to Maintain:
Each tool has one responsibility
Independent testing
Clear documentation
Composable:
Claude can chain multiple tool calls
"Compare top 3 trending tools" =
get_trending+compare_tools
Real-World Use Cases
This pattern works for any product with queryable data:
B2B SaaS
API: Analytics platforms, customer dashboards
MCP: "How's our MRR trending?" "Which customers churned?"
E-commerce
API: Inventory systems, order management
MCP: "What products are low stock?" "Show me returns this week"
Internal Tools
API: Automated reports, integrations
MCP: "Find pending invoices" "Compare Q3 vs Q4 sales"
Contributing
Contributions welcome! This is an educational project, so quality over quantity.
Good Contributions:
Additional tools with clear use cases
Better mock data demonstrating edge cases
Documentation improvements
Examples of production implementations
Testing improvements
Please Open an Issue First to discuss:
Major architectural changes
New dependencies
Breaking changes to tool interfaces
License
MIT License - see LICENSE file for details.
Acknowledgments
Built with the Model Context Protocol by Anthropic
Inspired by the real production data platform at Vibe Data
Created as an educational resource for the AI developer community
Learn More
Questions? Issues? Ideas? Open an issue or reach out!