MCP API Gateway
A unified local API gateway with caching, rate limiting, and full MCP (Model Context Protocol) compatibility.
Features
🔗 **Unified API Aggregation** - Manage multiple API endpoints through a single gateway
💾 **Multi-Strategy Caching** - LRU, LFU, FIFO, and TTL cache eviction policies
⚡ **Rate Limiting** - Token bucket and sliding window algorithms
🔌 **MCP Protocol** - Full Model Context Protocol support for AI agent integration
📊 **Observability** - Built-in statistics and metrics
🔄 **Retry Logic** - Automatic retry with exponential backoff
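As a rough illustration of the rate-limiting feature, the token-bucket algorithm can be sketched in a few lines of Python. This is an illustrative sketch only, not the gateway's actual implementation:

```python
import time

class TokenBucket:
    """Illustrative token bucket: allow bursts up to `capacity` requests,
    refilling at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum bucket size
        self.tokens = capacity        # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 1 token/second with a burst of 60 roughly matches requests_per_minute: 60
bucket = TokenBucket(rate=1.0, capacity=60)
```

A request is served only when `allow()` returns `True`; otherwise the gateway can reject or queue it until the bucket refills.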
Installation
```bash
# Clone the repository
git clone https://github.com/bandageok/mcp-api-gateway.git
cd mcp-api-gateway

# Install dependencies
pip install -r requirements.txt

# Or install the dependencies directly
pip install aiohttp pyyaml
```
Quick Start
1. Create a Configuration File
```bash
python gateway.py --create-config
```
This creates a config.yaml with sample endpoints:
```yaml
host: localhost
port: 8080

cache:
  enabled: true
  max_size: 1000
  ttl: 300
  strategy: lru

rate_limit:
  enabled: true
  requests_per_minute: 60

apis:
  - name: github-api
    url: https://api.github.com
    method: GET
```
2. Run the Gateway
```bash
# With a config file
python gateway.py -c config.yaml

# Or with command-line arguments
python gateway.py --host 0.0.0.0 --port 8080
```
3. Use the Gateway
```bash
# Call an API endpoint
curl http://localhost:8080/api/github-api/users/bandageok

# Check health
curl http://localhost:8080/health

# Get statistics
curl http://localhost:8080/stats

# Clear cache
curl -X DELETE http://localhost:8080/cache/clear

# Get configuration
curl http://localhost:8080/config
```
MCP Protocol Integration
The gateway provides full MCP protocol support for AI agents:
MCP Tools
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
```
Response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "github-api",
        "description": "Call GET https://api.github.com",
        "inputSchema": {
          "type": "object",
          "properties": {
            "params": {"type": "object"},
            "data": {"type": "object"}
          }
        }
      }
    ]
  }
}
```
Call a Tool
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "github-api",
    "arguments": {
      "params": {"path": "/users/bandageok"}
    }
  }
}
```
MCP Resources
```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/list",
  "params": {}
}
```
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| `host` | string | `localhost` | Host to bind to |
| `port` | int | `8080` | Port to bind to |
| | bool | `false` | Enable debug mode |
| | string | `INFO` | Logging level |
| `cache.enabled` | bool | `true` | Enable caching |
| `cache.max_size` | int | `1000` | Maximum cache entries |
| `cache.ttl` | int | `300` | Cache TTL in seconds |
| `cache.strategy` | string | `lru` | Cache strategy (lru/lfu/fifo/ttl) |
| `rate_limit.enabled` | bool | `true` | Enable rate limiting |
| `rate_limit.requests_per_minute` | int | `60` | Rate limit threshold |
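To make the cache options concrete, here is a minimal sketch of what an LRU cache with a TTL does. This is illustrative only, not the gateway's actual implementation; it mirrors the `max_size`, `ttl`, and `strategy: lru` settings above:

```python
import time
from collections import OrderedDict

class LRUTTLCache:
    """Minimal LRU cache with a per-entry TTL."""

    def __init__(self, max_size: int = 1000, ttl: float = 300.0):
        self.max_size = max_size
        self.ttl = ttl
        self._data: OrderedDict = OrderedDict()  # key -> (value, expiry)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expiry = item
        if time.monotonic() > expiry:       # expired entry: drop it
            del self._data[key]
            return None
        self._data.move_to_end(key)         # mark as most recently used
        return value

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = (value, time.monotonic() + self.ttl)
        if len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict least recently used
```

For example, with `max_size=2`, inserting a third key evicts whichever of the first two was touched least recently.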
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| | GET | Detailed health status |
| `/stats` | GET | Gateway statistics |
| `/config` | GET | Current configuration |
| `/cache/clear` | DELETE | Clear the cache |
| `/api/{name}/{path}` | * | Proxy to configured API |
| `/mcp` | POST | MCP protocol endpoint |
Architecture
```
┌─────────────────────────────────────────────────────┐
│                   MCP API Gateway                   │
├─────────────────────────────────────────────────────┤
│  ┌───────────┐  ┌──────────────┐  ┌───────────────┐ │
│  │   Cache   │  │ Rate Limiter │  │  MCP Handler  │ │
│  │ (LRU/LFU) │  │   (Token)    │  │               │ │
│  └───────────┘  └──────────────┘  └───────────────┘ │
├─────────────────────────────────────────────────────┤
│                   API Client Pool                   │
├─────────────────────────────────────────────────────┤
│  ┌────────┐  ┌─────────┐  ┌────────┐  ┌────────┐    │
│  │ GitHub │  │ Weather │  │ Stocks │  │ Custom │    │
│  │  API   │  │   API   │  │  API   │  │  API   │    │
│  └────────┘  └─────────┘  └────────┘  └────────┘    │
└─────────────────────────────────────────────────────┘
```
Use Cases
1. AI Agent Integration
Connect AI agents to external APIs through MCP:
```python
import requests

# Initialize MCP
response = requests.post("http://localhost:8080/mcp", json={
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {}
})

# List available tools
response = requests.post("http://localhost:8080/mcp", json={
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/list",
    "params": {}
})
```
2. API Rate Limiting
Protect external APIs from being overwhelmed:
```yaml
rate_limit:
  enabled: true
  requests_per_minute: 60  # Max 60 requests per minute
```
3. Response Caching
Cache expensive API responses:
```yaml
cache:
  enabled: true
  max_size: 1000
  ttl: 300       # Cache for 5 minutes
  strategy: lru  # Evict least recently used
```
Examples
Python Client
```python
import aiohttp
import asyncio

async def call_gateway():
    async with aiohttp.ClientSession() as session:
        # Call an API
        async with session.get("http://localhost:8080/api/github-api/users/bandageok") as resp:
            data = await resp.json()
            print(data)

        # Check stats
        async with session.get("http://localhost:8080/stats") as resp:
            stats = await resp.json()
            print(f"Cache hit rate: {stats['cache_hit_rate']}")

asyncio.run(call_gateway())
```
Add Custom API Endpoint
```yaml
apis:
  - name: my-api
    url: https://api.example.com
    method: GET
    headers:
      Authorization: Bearer YOUR_TOKEN
    timeout: 30
    retry_count: 3
```
Performance
Throughput: ~1000 requests/second (with caching)
Latency: <10ms overhead (cache hit), <100ms overhead (cache miss)
Memory: ~50MB base + cache size
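Numbers like these depend on hardware and workload, so it is worth measuring your own deployment. A hedged sketch: time repeated calls to the same endpoint and compare the first request (cache miss) against later ones (cache hits). The URL below assumes the local setup from the Quick Start:

```python
import time
import urllib.request

def measure_ms(fn, n: int = 10) -> list[float]:
    """Return per-call latencies in milliseconds for n invocations of fn."""
    latencies = []
    for _ in range(n):
        t0 = time.perf_counter()
        fn()
        latencies.append((time.perf_counter() - t0) * 1000)
    return latencies

def hit_gateway():
    # Same endpoint each time, so calls after the first should hit the cache
    with urllib.request.urlopen("http://localhost:8080/api/github-api/users/bandageok") as r:
        r.read()

# latencies = measure_ms(hit_gateway)
# print(f"first (miss): {latencies[0]:.1f} ms, "
#       f"median (hit): {sorted(latencies)[len(latencies) // 2]:.1f} ms")
```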
License
MIT License - See LICENSE for details.
Author
BandageOK - GitHub
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
β Star us on GitHub if you find this useful!