MCP Server Template for Cloudflare Workers
A production-ready template for building MCP (Model Context Protocol) servers on Cloudflare Workers with better-auth social login.
Features
better-auth Social Login - Google, Microsoft, and GitHub OAuth with automatic session management
MCP OAuth Provider - Dynamic client registration for Claude.ai and Claude Code
Cloudflare Workers - Serverless deployment with global edge distribution
D1 Database - SQLite-compatible database with Drizzle ORM
Durable Objects - Persistent MCP session storage
MCP Tools - Example tools with error handling patterns
MCP Resources - Read-only data exposure (future-ready for Claude.ai)
MCP Prompts - Templated prompt definitions (future-ready for Claude.ai)
Marketing Homepage - Professional landing page in Jezweb style
Admin Dashboard - Manage tokens, view tools/resources/prompts
AI Chat Testing - Built-in AI chat to test MCP tools (Workers AI + external providers)
Multi-Provider AI - Workers AI (free), OpenAI, Anthropic, Google AI Studio
Conversation Memory - D1-backed persistent chat history with configurable TTL
Internal Agent - Optional ask_agent tool with Workers AI gatekeeper for voice agents
Quick Start
1. Clone and Install
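A minimal sketch of this step, assuming an npm-based setup; substitute your copy of the repository for the placeholder URL:

```bash
# Clone the template (placeholder URL) and install dependencies
git clone <your-repo-url> my-mcp-server
cd my-mcp-server
npm install
```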
2. Configure Cloudflare
Update wrangler.jsonc:
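The exact bindings depend on your account and the template's source; the sketch below only illustrates the kinds of entries to update, and the binding names, IDs, and dates are placeholders rather than the template's actual values:

```jsonc
{
  "name": "your-mcp-server",              // your Worker name (placeholder)
  "main": "src/index.ts",
  "compatibility_date": "2025-01-01",     // placeholder date
  "durable_objects": {
    "bindings": [{ "name": "MCP_OBJECT", "class_name": "MyMCP" }]  // binding name assumed
  },
  "migrations": [{ "tag": "v1", "new_sqlite_classes": ["MyMCP"] }],
  "d1_databases": [
    { "binding": "DB", "database_name": "your-db", "database_id": "<your-d1-database-id>" }  // placeholders
  ],
  "ai": { "binding": "AI" }               // Workers AI binding (assumed name)
}
```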
3. Set Up OAuth Provider (Google)
Go to Google Cloud Console
Create a new project or select existing
Enable the APIs you need (e.g., Google Tasks API, Calendar API)
Go to "APIs & Services" → "Credentials"
Create OAuth 2.0 Client ID (Web application)
Add authorized redirect URI:
https://your-worker.workers.dev/api/auth/callback/google
Note: better-auth uses the /api/auth/callback/{provider} pattern for OAuth callbacks.
Alternative Providers (Microsoft Entra, GitHub):
See docs/BETTER_AUTH_ARCHITECTURE.md for setup instructions
Each provider requires its own OAuth app and secrets
4. Set Secrets
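A sketch of the secrets you will likely need for Google OAuth; the secret names below are assumptions, so match them to whatever src/oauth/google-handler.ts and the better-auth setup actually read:

```bash
# Secret names are assumed; verify against the template's source
npx wrangler secret put GOOGLE_CLIENT_ID
npx wrangler secret put GOOGLE_CLIENT_SECRET
# better-auth conventionally uses a signing secret as well (name assumed)
npx wrangler secret put BETTER_AUTH_SECRET
```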
5. Deploy
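Deployment is a single wrangler command, the same one shown later in the BYOK steps:

```bash
npx wrangler deploy
```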
Customization
Update Server Identity
src/index.ts: Update class name, server name, and tools
src/oauth/google-handler.ts: Update GOOGLE_SCOPES and homepage content
wrangler.jsonc: Update worker name and class references
Google OAuth Scopes
Edit GOOGLE_SCOPES in src/oauth/google-handler.ts:
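A sketch of what the scope list might look like; the exact shape of GOOGLE_SCOPES depends on the template's source, and the scope URLs below are standard Google scopes shown as examples rather than the template's defaults:

```typescript
// src/oauth/google-handler.ts (sketch; adjust to the APIs you actually enabled)
export const GOOGLE_SCOPES = [
  "openid",
  "email",
  "profile",
  "https://www.googleapis.com/auth/tasks",              // Google Tasks
  "https://www.googleapis.com/auth/calendar.readonly",  // Google Calendar (read-only)
].join(" ");
```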
Adding Tools
Add tools in the init() method of your MCP class:
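A minimal sketch using the MCP SDK's tool registration, assuming the template exposes the server as this.server inside init(); the tool name and logic are illustrative:

```typescript
import { z } from "zod";

// Inside your MCP class's init() method (sketch)
this.server.tool(
  "word_count",                                  // illustrative tool name
  "Count the words in a piece of text",
  { text: z.string().max(1_000_000) },           // zod schema for tool parameters
  async ({ text }) => {
    const words = text.trim().split(/\s+/).filter(Boolean).length;
    return { content: [{ type: "text", text: `Word count: ${words}` }] };
  }
);
```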
Adding Resources
Resources expose read-only data that LLMs can access. Add resources in the init() method:
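A sketch of a static resource registration via the MCP SDK, again assuming this.server is available in init(); the resource name, URI, and payload are illustrative:

```typescript
// Inside init() (sketch): expose read-only server info as a resource
this.server.resource(
  "server-info",                         // illustrative resource name
  "info://server",                       // illustrative URI
  async (uri) => ({
    contents: [
      {
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify({ name: "my-mcp-server", version: "1.0.0" }),
      },
    ],
  })
);
```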
Note: Claude.ai doesn't support resources yet (as of Dec 2025), but the API does. Adding resources now future-proofs your server.
Adding Prompts
Prompts are templated prompt definitions (like slash commands). Add prompts in the init() method:
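A sketch of a templated prompt registered through the MCP SDK, assuming the same this.server pattern; the prompt name and wording are illustrative:

```typescript
import { z } from "zod";

// Inside init() (sketch): a templated prompt, similar to a slash command
this.server.prompt(
  "summarize",                                    // illustrative prompt name
  "Summarize the provided content",
  { content: z.string() },
  ({ content }) => ({
    messages: [
      {
        role: "user",
        content: { type: "text", text: `Summarize the following:\n\n${content}` },
      },
    ],
  })
);
```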
Note: Claude.ai doesn't support prompts yet (as of Dec 2025), but the API does. Adding prompts now future-proofs your server.
Admin Dashboard
Access the admin dashboard at /admin after logging in with Google OAuth.
Features:
View server info, tools, resources, and prompts
Create and manage Bearer auth tokens
AI Chat for testing MCP tools
Admin Setup:
Set admin emails (comma-separated):
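A sketch of how this could be set; the variable name below is an assumption, so check how the admin routes read it before using it:

```bash
# Variable name is assumed; match the template's admin route code
npx wrangler secret put ADMIN_EMAILS
# When prompted, paste a comma-separated list, e.g. admin@example.com,ops@example.com
```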
AI Chat Testing
The admin dashboard includes an AI-powered tool tester:
Click the chat bubble icon in the bottom-right corner
Select an AI provider (Workers AI is free)
Ask the AI to test tools, e.g., "test the hello tool with name John"
Supported Providers:
Workers AI (Free) - Llama 3.3 70B and other models, no API key needed
OpenAI - GPT-4o, GPT-4o-mini, o1
Anthropic - Claude 3.5 Sonnet, Claude 3.5 Haiku
Google AI Studio - Gemini 2.5 Pro, Gemini 2.5 Flash
Groq - Fast inference with Llama 3.3 70B
All external providers use AI Gateway's Compat endpoint - a single OpenAI-compatible API that works for all providers. The gateway handles format conversion automatically.
Setting up External Providers (BYOK - Recommended):
The easiest way is to configure API keys in AI Gateway (no code changes needed):
Go to Cloudflare Dashboard → AI → AI Gateway
Create a gateway named default (or use an existing one)
Enable Authenticated Gateway (required for BYOK)
Go to Provider Keys → Add API Key
Select provider (OpenAI, Anthropic, etc.) and enter your API key
Create a Gateway Token: User API Tokens → Create → select AI Gateway permissions
Set the token as a Worker secret:
echo "token" | npx wrangler secret put CF_AIG_TOKENRedeploy:
npx wrangler deploy
Keys are securely stored and automatically injected into requests.
Alternative: Environment Secrets
You can also set API keys as Worker secrets (overrides BYOK):
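A sketch, assuming conventional secret names for each provider; verify the exact names the chat handler reads before setting them:

```bash
# Secret names below are assumptions; match whatever the template's AI chat code expects
npx wrangler secret put OPENAI_API_KEY
npx wrangler secret put ANTHROPIC_API_KEY
npx wrangler secret put GOOGLE_AI_STUDIO_API_KEY
npx wrangler secret put GROQ_API_KEY
```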
Note: Workers AI is free and works out of the box. External providers go through Cloudflare AI Gateway for logging, caching, and centralized key management.
AI Gateway Features
The template uses AI Gateway's Compat endpoint - a single OpenAI-compatible API for all providers. Additional features available:
Per-Request Headers (add to your AI calls):
Cache responses for a given number of seconds
Bypass the cache
Trigger a fallback if a request is slow (ms)
Tag requests for analytics
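As a sketch of how such headers are attached, here is a call to the Compat endpoint using AI Gateway's documented cf-aig-cache-ttl and cf-aig-metadata headers; the account ID, gateway name, model, and auth wiring are placeholders or assumptions to adapt to your setup:

```typescript
// Sketch: call AI Gateway's Compat endpoint with per-request headers.
// <account-id> and the gateway name are placeholders; CF_AIG_TOKEN is the secret set earlier.
async function chatWithGateway(env: { CF_AIG_TOKEN: string }, prompt: string) {
  const res = await fetch(
    "https://gateway.ai.cloudflare.com/v1/<account-id>/default/compat/chat/completions",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Depending on your gateway settings this may need to be the cf-aig-authorization header instead
        Authorization: `Bearer ${env.CF_AIG_TOKEN}`,
        "cf-aig-cache-ttl": "3600",                                  // cache this response for an hour
        "cf-aig-metadata": JSON.stringify({ source: "admin-chat" }), // tag for analytics
      },
      body: JSON.stringify({
        model: "openai/gpt-4o-mini", // Compat endpoint uses provider/model identifiers
        messages: [{ role: "user", content: prompt }],
      }),
    }
  );
  return res.json();
}
```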
Response Headers (check after AI calls):
Which fallback was used (0 = primary)
Dashboard-Only Features (configure in Cloudflare dashboard):
Guardrails - Content filtering, prompt injection detection
DLP - Detect PII, secrets, source code in prompts/responses
Rate Limiting - Gateway-level request limits
Dynamic Routing - A/B testing, geographic routing, user-based routing
Analytics - Usage metrics, costs, error rates
See AI Gateway docs for details.
Architecture
Key Components
MyMCP: Extends McpAgent with your tools, resources, and prompts
ensureValidToken(): Automatically refreshes expired tokens
authorizedFetch(): Wrapper for API calls with auth
BetterAuthHandler: Hono app handling OAuth routes via better-auth
createAuth(): better-auth instance factory with D1 + Drizzle
Included Examples
Utility Tools (no auth required):
Simple greeting (the hello tool)
Current date/time in various timezones
Generate UUID v4 (1-10)
Encode/decode Base64 strings
Count words, characters, lines
Generate random numbers in range
Format/validate JSON
Generate SHA-256 hash
User Tools (require OAuth):
Returns authenticated user's Google info
List your conversation history
Example Tools:
Demonstrates error handling patterns
Resources (read-only data):
Server metadata and capabilities
Authenticated user's profile
Prompts (templates):
Content summarization template
Content analysis template (sentiment/technical/business)
Conversation Memory
The template includes optional D1-backed conversation memory for persistent chat history.
Setup:
Configuration:
Enable D1 conversation storage
Auto-delete conversations after 7 days
Max messages to load for context
When disabled, falls back to KV storage (backwards compatible).
Internal Agent
The template includes an optional internal agent pattern for voice agents (e.g., ElevenLabs) and prompt injection protection.
When enabled, the server exposes an ask_agent tool that wraps all other tools behind a Workers AI gatekeeper.
Enable:
How it works:
External caller uses only the ask_agent tool with a natural language query
Workers AI gatekeeper validates the request
Internal agent selects and calls appropriate tools
Clean response returned (no raw tool exposure)
Benefits:
Security layer against prompt injection from audio
Minimal context passed to inner agent (fast)
All tools available through single interface
Configuration:
Enable the internal agent
Workers AI gatekeeper model
Token Refresh
The template includes automatic token refresh:
authorizedFetch() calls ensureValidToken() before each request
If a token expires within 5 minutes, it is refreshed proactively
New tokens are persisted to Durable Object storage
401 responses invalidate the stored token
This prevents sessions from disconnecting after 1 hour (Google token expiry).
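A sketch of how a tool might use this wrapper, assuming authorizedFetch() takes a URL like fetch() does; the tool name is illustrative, and the exact signature should be checked in src/index.ts:

```typescript
// Sketch: a tool that calls a Google API through the template's auth wrapper
this.server.tool(
  "list_task_lists",                                   // illustrative tool name
  "List the authenticated user's Google Tasks lists",
  {},
  async () => {
    // authorizedFetch() runs ensureValidToken() first, so the access token is fresh
    const res = await this.authorizedFetch(
      "https://tasks.googleapis.com/tasks/v1/users/@me/lists"
    );
    if (!res.ok) {
      return { content: [{ type: "text", text: `Request failed: ${res.status}` }] };
    }
    return { content: [{ type: "text", text: JSON.stringify(await res.json(), null, 2) }] };
  }
);
```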
Testing
Local Development
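A typical invocation, assuming the default wrangler setup; wrangler dev emulates D1 and Durable Objects locally:

```bash
npx wrangler dev
```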
Test with curl
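A hedged sketch of smoke tests against the deployed Worker; the homepage route is part of the template, while the well-known OAuth metadata path is the standard location served by the OAuth provider and is an assumption here:

```bash
# Confirm the Worker is up (marketing homepage)
curl -i https://your-worker.workers.dev/

# OAuth metadata used for MCP dynamic client registration (standard well-known path, assumed)
curl https://your-worker.workers.dev/.well-known/oauth-authorization-server
```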
Dependencies
@cloudflare/workers-oauth-provider - OAuth 2.0 provider for Workers
@modelcontextprotocol/sdk - MCP protocol implementation
agents - McpAgent base class with Durable Object integration
hono - Lightweight web framework for OAuth routes
zod - Schema validation for tool parameters
Security
The template includes multiple security layers:
XSS Protection:
HTML escaping for all dynamic content in admin dashboard
Data attributes with event delegation (no inline onclick handlers)
Content Security Policy headers on admin routes
Session Security:
SameSite=Strict cookies prevent CSRF attacks
Timing-safe token comparison prevents timing attacks
Secure, HttpOnly cookies for admin sessions
Input Validation:
All tool inputs have max length limits (1MB default)
Chat message size limit (100KB)
Safe JSON parsing with fallbacks
Rate Limiting:
Admin chat: 30 requests/minute per user
Token creation: 10/hour per user
Access Control:
User-owned conversations (OAuth email verification)
Admin-only dashboard routes
Bearer token authentication for programmatic access
Future MCP Features
See docs/MCP_FEATURES_PLAN.md for planning around:
Tool Search - defer_loading for 85% token reduction (10+ tools)
Sampling - Server requests LLM completion from client (agentic workflows)
Completions - Autocomplete for prompt/resource arguments
License
MIT License - See LICENSE for details.
Built by Jezweb — AI agents, MCP servers, and business automation.