The MCP AI Bridge server provides secure access to OpenAI and Google Gemini AI models through Claude Code.
Query OpenAI Models: Access GPT-4o, GPT-4o Mini, GPT-4 Turbo, GPT-3.5 Turbo, and other models using the ask_openai tool
Query Google Gemini Models: Interact with Gemini Pro, Gemini 1.5 Pro, and Gemini 1.5 Flash via the ask_gemini tool
Customization Options: Control parameters like temperature for response generation
Server Information: Retrieve status, configuration, available models, and security settings with the server_info tool
Security Features: Input validation, rate limiting (100 requests/minute), API key validation, and secure error handling
Flexible Configuration: Support for environment variables, .env files, and Claude Code configuration
Integrates with the dotenv library to securely load API keys and configuration from environment files, supporting both global and local configurations.
Enables interaction with Google Gemini models including Gemini Pro, Gemini 1.5 Pro, and Gemini 1.5 Flash through the ask_gemini tool with customizable parameters.
Uses Node.js as the runtime environment for the MCP server, providing the foundation for API integrations and server functionality.
Provides access to OpenAI's language models including GPT-4, GPT-4 Turbo, and GPT-3.5 Turbo through the ask_openai tool with customizable parameters like temperature.
MCP AI Bridge
A secure Model Context Protocol (MCP) server that bridges Claude Code with OpenAI and Google Gemini APIs.
Features
OpenAI Integration: Access GPT-4o, GPT-4o Mini, GPT-4 Turbo, GPT-4, and reasoning models (o1, o1-mini, o1-pro, o3-mini)
Gemini Integration: Access Gemini 1.5 Pro, Gemini 1.5 Flash, and vision models with latest capabilities
Security Features:
Enhanced Input Validation: Multi-layer validation with sanitization
Content Filtering: Blocks explicit, harmful, and illegal content
Prompt Injection Detection: Identifies and blocks manipulation attempts
Rate Limiting: Prevents API abuse with configurable limits
Secure Error Handling: No sensitive information exposure
API Key Validation: Format validation for API keys
Configurable Security Levels: Basic, Moderate, and Strict modes
Robust Error Handling: Specific error types with detailed messages
Structured Logging: Winston-based logging with configurable levels
Flexible Configuration: Control temperature and model selection for each request
Installation
Clone or copy the mcp-ai-bridge directory to your preferred location
Install dependencies:
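Assuming a standard Node.js project layout, installation is the usual:

```shell
cd mcp-ai-bridge
npm install
```
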
Configure your API keys using ONE of these methods:
Option A: Use global .env file in your home directory (Recommended)
Create or edit the ~/.env file
Add your API keys:

OPENAI_API_KEY=your_openai_api_key_here
GOOGLE_AI_API_KEY=your_google_ai_api_key_here
Option B: Use local .env file
Create a .env file in the mcp-ai-bridge directory:
cp .env.example .env
Add your API keys to this local .env file
Option C: Use environment variables in Claude Code config
Configure directly in the Claude Code settings (see Configuration section)
The server will check for environment variables in this order:
~/.env (your home directory)
./.env (local to the mcp-ai-bridge directory)
System environment variables
Optional Configuration Variables:
# Logging level (error, warn, info, debug)
LOG_LEVEL=info

# Server identification
MCP_SERVER_NAME=AI Bridge
MCP_SERVER_VERSION=1.0.0

# Security Configuration
SECURITY_LEVEL=moderate              # disabled, basic, moderate, strict

# Content Filtering (granular controls)
BLOCK_EXPLICIT_CONTENT=true          # Master content filter toggle
BLOCK_VIOLENCE=true                  # Block violent content
BLOCK_ILLEGAL_ACTIVITIES=true        # Block illegal activity requests
BLOCK_ADULT_CONTENT=true             # Block adult/sexual content

# Injection Detection (granular controls)
DETECT_PROMPT_INJECTION=true         # Master injection detection toggle
DETECT_SYSTEM_PROMPTS=true           # Detect system role injections
DETECT_INSTRUCTION_OVERRIDE=true     # Detect "ignore instructions" attempts

# Input Sanitization (granular controls)
SANITIZE_INPUT=true                  # Master sanitization toggle
REMOVE_SCRIPTS=true                  # Remove script tags and JS
LIMIT_REPEATED_CHARS=true            # Limit DoS via repeated characters

# Performance & Flexibility
ENABLE_PATTERN_CACHING=true          # Cache compiled patterns for speed
MAX_PROMPT_LENGTH_FOR_DEEP_SCAN=1000 # Skip deep scanning for long prompts
ALLOW_EDUCATIONAL_CONTENT=false      # Whitelist educational content
WHITELIST_PATTERNS=                  # Comma-separated regex patterns to allow
Configuration in Claude Code
Method 1: Using Claude Code CLI (Recommended)
Use the interactive MCP setup wizard:
Or add the server configuration directly:
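For example (the server name and path below are placeholders, not fixed values):

```shell
# Launch the interactive wizard
claude mcp add

# Or register the server in one line
claude mcp add ai-bridge -- node /path/to/mcp-ai-bridge/src/index.js
```
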
Method 2: Manual Configuration
Add the following to your Claude Code MCP settings. The configuration file location depends on your environment:
Claude Code CLI: Uses settings.json in the configuration directory (typically ~/.claude/ or $CLAUDE_CONFIG_DIR)
Claude Desktop: Uses ~/.claude/claude_desktop_config.json
For Claude Desktop compatibility:
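A typical entry might look like this (the server name and path are illustrative; adjust them to your install location):

```json
{
  "mcpServers": {
    "ai-bridge": {
      "command": "node",
      "args": ["/path/to/mcp-ai-bridge/src/index.js"],
      "env": {
        "OPENAI_API_KEY": "your_openai_api_key_here",
        "GOOGLE_AI_API_KEY": "your_google_ai_api_key_here"
      }
    }
  }
}
```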
Alternatively, if you have the .env file configured, you can omit the env section:
Method 3: Import from Claude Desktop
If you already have this configured in Claude Desktop, you can import the configuration:
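If your Claude Code version supports it, the import command looks like:

```shell
claude mcp add-from-claude-desktop
```
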
Available Tools
1. ask_openai
Query OpenAI models with full validation and security features.
Parameters:
prompt (required): The question or prompt to send (max 10,000 characters)
model (optional): Choose from 'gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', 'gpt-4', 'o1', 'o1-mini', 'o1-pro', 'o3-mini', 'chatgpt-4o-latest', and other available models (default: 'gpt-4o-mini')
temperature (optional): Control randomness (0-2, default: 0.7)
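An ask_openai call with these parameters might look like this (values are illustrative):

```json
{
  "prompt": "Summarize the trade-offs between REST and GraphQL in three bullet points",
  "model": "gpt-4o-mini",
  "temperature": 0.3
}
```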
Security Features:
Input validation for prompt length and type
Temperature range validation
Model validation
Rate limiting (100 requests per minute by default)
2. ask_gemini
Query Google Gemini models with full validation and security features.
Parameters:
prompt (required): The question or prompt to send (max 10,000 characters)
model (optional): Choose from 'gemini-1.5-pro-latest', 'gemini-1.5-pro-002', 'gemini-1.5-pro', 'gemini-1.5-flash-latest', 'gemini-1.5-flash', 'gemini-1.5-flash-002', 'gemini-1.5-flash-8b', 'gemini-1.0-pro-vision-latest', 'gemini-pro-vision' (default: 'gemini-1.5-flash-latest')
temperature (optional): Control randomness (0-1, default: 0.7)
Security Features:
Input validation for prompt length and type
Temperature range validation
Model validation
Rate limiting (100 requests per minute by default)
3. server_info
Get comprehensive server status and configuration information.
Returns:
Server name and version
Available models for each service
Security settings (rate limits, validation status)
Configuration status for each API
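A response covering those fields might look roughly like this (the field names here are invented for illustration; the actual shape depends on the server version):

```json
{
  "name": "AI Bridge",
  "version": "1.0.0",
  "openai": { "configured": true, "models": ["gpt-4o", "gpt-4o-mini"] },
  "gemini": { "configured": true, "models": ["gemini-1.5-flash-latest"] },
  "security": { "level": "moderate", "rateLimit": "100 requests/minute" }
}
```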
Usage Examples
In Claude Code, you can use these tools like:
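For example (illustrative requests):

"Use ask_openai with gpt-4o to review this function for edge cases."
"Use ask_gemini with temperature 0.2 to summarize this stack trace."
"Use server_info to check which APIs are configured."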
Debugging MCP Server
If you encounter issues with the MCP server, you can use Claude Code's debugging features:
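For verbose MCP diagnostics, start Claude Code with the debug flag:

```shell
claude --mcp-debug
```

Inside a running session, the /mcp slash command shows the status of configured servers.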
Testing
The project includes comprehensive unit tests and security tests. To run tests:
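Assuming the standard npm test script is defined in package.json:

```shell
npm test
```
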
Test Coverage
Unit tests for all server functionality
Security tests for input validation and rate limiting
Integration tests for API interactions
Error handling tests
Mock-based testing to avoid real API calls
Troubleshooting
Common Issues
"API key not configured" error: Make sure you've added the correct API keys to your .env file or Claude Code config
"Invalid OpenAI API key format" error: OpenAI keys must start with 'sk-'
"Rate limit exceeded" error: Wait for the rate limit window to reset (default: 1 minute)
"Prompt too long" error: Keep prompts under 10,000 characters
Module not found errors: Run npm install in the mcp-ai-bridge directory
Permission errors: Ensure the index.js file has execute permissions
Logging issues: Set the LOG_LEVEL environment variable (error, warn, info, debug)
Claude Code Specific Troubleshooting
MCP server not loading:
Use claude --mcp-debug to see detailed error messages
Check the server configuration with the /mcp slash command
Verify the server path is correct and accessible
Ensure Node.js is installed and in your PATH
Configuration issues:
Use claude mcp add for interactive setup
Check the CLAUDE_CONFIG_DIR environment variable if using a custom config location
For timeouts, configure the MCP_TIMEOUT and MCP_TOOL_TIMEOUT environment variables
Server startup failures:
Check if the server process can start independently:
node /path/to/mcp-ai-bridge/src/index.js
Verify all dependencies are installed
Check file permissions on the server directory
Security Features
Enhanced Security Protection
Multi-Layer Input Validation: Type, length, and content validation
Content Filtering: Blocks explicit, violent, illegal, and harmful content
Prompt Injection Detection: Identifies and prevents manipulation attempts including:
Instruction override attempts ("ignore previous instructions")
System role injection ("system: act as...")
Template injection ({{system}}, <|system|>, [INST])
Suspicious pattern detection
Input Sanitization: Removes control characters, scripts, and malicious patterns
Rate Limiting: 100 requests per minute by default to prevent API abuse
API Key Validation: Format validation for API keys before use
Secure Error Handling: No stack traces or sensitive information in error messages
Structured Logging: All operations are logged with appropriate levels
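As a sketch of how this kind of injection detection can work (example patterns only, not the server's actual rule set):

```javascript
// Illustrative prompt-injection screening. Each pattern targets one of the
// attack classes listed above; a real implementation would use a larger,
// tuned rule set.
const INJECTION_PATTERNS = [
  /ignore\s+(all\s+)?(previous|prior)\s+instructions/i, // instruction override
  /(^|\n)\s*system\s*:/i,                               // system role injection
  /\{\{\s*system\s*\}\}/i,                              // template injection: {{system}}
  /<\|system\|>/i,                                      // template injection: <|system|>
  /\[INST\]/i,                                          // template injection: [INST]
];

function looksLikeInjection(prompt) {
  // Flag the prompt if any known pattern matches
  return INJECTION_PATTERNS.some((pattern) => pattern.test(prompt));
}
```
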
Security Levels
Basic: Minimal filtering, allows most content
Moderate (Default): Balanced protection with reasonable restrictions
Strict: Maximum protection, blocks borderline content
Granular Security Configuration
Security Levels:
disabled - No security checks (maximum performance)
basic - Essential protection only (good performance)
moderate - Balanced protection (the default; a good balance)
strict - Maximum protection (may impact performance)
Individual Feature Controls:
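These map to the environment variables shown in the Optional Configuration Variables section; for example, to keep injection detection while relaxing content filtering:

```
SECURITY_LEVEL=basic
DETECT_PROMPT_INJECTION=true
BLOCK_EXPLICIT_CONTENT=false
SANITIZE_INPUT=true
```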
Performance Considerations:
Pattern caching reduces regex compilation overhead
Long prompts (>1000 chars) get lighter scanning in basic mode
Early termination stops checking after finding issues
Granular controls let you disable unneeded checks
Best Practices
Never commit your .env file to version control
Keep your API keys secure and rotate them regularly
Consider setting usage limits on your API accounts
Monitor logs for unusual activity
Use the rate limiting feature to control costs
Validate the server configuration using the server_info tool
Rate Limiting
The server implements sliding window rate limiting:
Default: 100 requests per minute
Configurable via environment variables
Per-session tracking
Graceful error messages with reset time information
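The sliding-window behavior can be sketched in a few lines of JavaScript (an illustrative model, not the server's actual code):

```javascript
// Sliding-window rate limiter: allows `limit` requests per `windowMs`,
// and reports time-until-reset when a request is rejected.
class SlidingWindowLimiter {
  constructor(limit = 100, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = []; // request times within the current window
  }

  tryRequest(now = Date.now()) {
    // Drop timestamps that have slid out of the window
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) {
      // Window is full: report when the oldest request will expire
      const resetInMs = this.windowMs - (now - this.timestamps[0]);
      return { allowed: false, resetInMs };
    }
    this.timestamps.push(now);
    return { allowed: true };
  }
}
```
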