# Claude Code Python Edition
A powerful Python recreation of Claude Code with enhanced real-time visualization, cost management, and Model Context Protocol (MCP) server capabilities. This tool provides a natural language interface for software development tasks with support for multiple LLM providers.
## Key Features
- Multi-Provider Support: Works with OpenAI, Anthropic, and other LLM providers
- Model Context Protocol Integration:
  - Run as an MCP server for use with Claude Desktop and other clients
  - Connect to any MCP server with the built-in MCP client
  - Multi-agent synchronization for complex problem solving
- Real-Time Tool Visualization: See tool execution progress and results in real-time
- Cost Management: Track token usage and expenses with budget controls
- Comprehensive Tool Suite: File operations, search, command execution, and more
- Enhanced UI: Rich terminal interface with progress indicators and syntax highlighting
- Context Optimization: Smart conversation compaction and memory management
- Agent Coordination: Specialized agents with different roles can collaborate on tasks
## Installation

- Clone this repository
- Install dependencies
- Create a `.env` file with your API keys (see the sketch below)
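A minimal sketch of these steps. The repository URL, dependency file, and key names are assumptions, not values from this README; substitute the actual ones:

```bash
# Clone the repository (URL is a placeholder)
git clone https://github.com/your-org/claude-code-python.git
cd claude-code-python

# Install Python dependencies (assuming a requirements.txt is provided)
pip install -r requirements.txt

# Create a .env file with your API keys (key names are illustrative)
cat > .env <<'EOF'
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
EOF
```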
## Usage

### CLI Mode

Run the CLI with the default provider (determined from available API keys), specify a provider and model explicitly, or set a budget limit to manage costs, as sketched below:
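A hedged sketch of the three invocations. The `claude.py` entry point and the `--provider`, `--model`, and `--budget` flags are assumptions, since this README does not name them:

```bash
# Run with the default provider (entry point is a placeholder)
python claude.py

# Specify a provider and model (flag names are assumptions)
python claude.py --provider openai --model gpt-4o

# Set a budget limit in USD (flag name is an assumption)
python claude.py --budget 5.00
```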
### MCP Server Mode

Run as a Model Context Protocol server, optionally in development mode with the MCP Inspector, with a custom host and port, with additional dependencies, or with environment variables loaded from a file, as sketched below:
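A sketch of these server options. The `serve` subcommand and flag names mirror the options documented for the companion OpenAI Code Assistant later in this document, but they are assumptions for this tool:

```bash
# Start the MCP server
python claude.py serve

# Development mode with the MCP Inspector
python claude.py serve --dev

# Custom host and port
python claude.py serve --host 0.0.0.0 --port 8000

# Additional dependencies and an env file
python claude.py serve --dependencies pandas --env-file .env
```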
### MCP Client Mode

Connect to an MCP server using Claude as the reasoning engine, optionally specifying a Claude model, or try the included example server, as sketched below:
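A sketch, assuming a `client` subcommand and an example server path; both names are placeholders, and the model ID is only illustrative:

```bash
# Connect to a running MCP server
python claude.py client --host 127.0.0.1 --port 8000

# Specify a Claude model (model ID is illustrative)
python claude.py client --model claude-3-5-sonnet-20241022

# Try the included example server (path is a placeholder)
python claude.py client --server examples/echo_server.py
```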
### Multi-Agent MCP Mode

Launch a multi-agent client with synchronized agents, optionally using a custom agent configuration file; an example with the echo server is sketched below:
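A sketch, assuming a `multi-agent` subcommand and a `--config` flag; both are assumptions, as is the echo server path:

```bash
# Launch with the default agent configuration
python claude.py multi-agent --host 127.0.0.1 --port 8000

# Use a custom agent configuration file
python claude.py multi-agent --config agents.json

# Example with the echo server (path is a placeholder)
python claude.py multi-agent --server examples/echo_server.py --config agents.json
```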
## Available Tools
- View: Read files with optional line limits
- Edit: Modify files with precise text replacement
- Replace: Create or overwrite files
- GlobTool: Find files by pattern matching
- GrepTool: Search file contents using regex
- LS: List directory contents
- Bash: Execute shell commands
## Chat Commands
- /help: Show available commands
- /compact: Compress conversation history to save tokens
- /version: Show version information
- /providers: List available LLM providers
- /cost: Show cost and usage information
- /budget [amount]: Set a budget limit
- /quit, /exit: Exit the application
## Architecture

Claude Code Python Edition is built with a modular architecture.
## Using with Model Context Protocol

### Using Claude Code as an MCP Server

Once the MCP server is running, you can connect to it from Claude Desktop or other MCP-compatible clients:

- Install and run the MCP server
- Open the configuration page in your browser (see the sketch below)
- Follow the instructions to configure Claude Desktop, which include:
  - Copying the JSON configuration
  - Downloading the auto-configured JSON file
  - Step-by-step setup instructions
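A sketch of the first two steps; the entry point, `serve` subcommand, and configuration-page URL are all assumptions:

```bash
# Start the MCP server (entry point is a placeholder)
python claude.py serve --host 127.0.0.1 --port 8000

# Open the configuration page in your browser (URL is an assumption)
open http://127.0.0.1:8000/config
```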
### Using Claude Code as an MCP Client

To connect to any MCP server using Claude Code:

- Ensure your Anthropic API key is set in the environment or in the `.env` file
- Start the MCP server you want to connect to
- Connect using the MCP client (see the sketch below)
- Type queries in the interactive chat interface
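A sketch of the first and third steps, reusing the hypothetical `client` subcommand from the Usage section above:

```bash
# Set the key if it is not already in your .env file
export ANTHROPIC_API_KEY=your-anthropic-key

# Connect to the server you started in the previous step
python claude.py client --host 127.0.0.1 --port 8000
```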
### Using Multi-Agent Mode

For complex tasks, multi-agent mode allows multiple specialized agents to collaborate:

- Create an agent configuration file or use the provided example (see the sketch below)
- Start your MCP server
- Launch the multi-agent client
- Use the command interface to interact with the agents:
  - Type a message to broadcast it to all agents
  - Use `/talk Agent_Name message` for direct communication
  - Use `/agents` to see all available agents
  - Use `/history` to view the conversation history
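A sketch of a custom agent configuration plus the launch command; the JSON schema, field names, and `multi-agent` subcommand are assumptions, not documented here:

```bash
# Write a hypothetical agent configuration (schema is an assumption)
cat > agents.json <<'EOF'
{
  "agents": [
    {"name": "Architect", "role": "Plan the overall design"},
    {"name": "Coder", "role": "Implement the plan"},
    {"name": "Reviewer", "role": "Review and test the changes"}
  ]
}
EOF

# Launch the multi-agent client against a running MCP server
python claude.py multi-agent --config agents.json --host 127.0.0.1 --port 8000
```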
## Contributing
- Fork the repository
- Create a feature branch
- Implement your changes with tests
- Submit a pull request
## License
MIT
## Acknowledgments
This project is inspired by Anthropic's Claude Code CLI tool, reimplemented in Python with additional features for enhanced visibility, cost management, and MCP server capabilities.

# OpenAI Code Assistant
A powerful command-line and API-based coding assistant that uses OpenAI APIs with function calling and streaming.
## Features
- Interactive CLI for coding assistance
- Web API for integration with other applications
- Model Context Protocol (MCP) server implementation
- Replication support for high availability
- Tool-based architecture for extensibility
- Reinforcement learning for tool optimization
- Web client for browser-based interaction
## Installation

- Clone the repository
- Install dependencies
- Set your OpenAI API key (see the sketch below)
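A minimal sketch; the repository URL and dependency file are placeholders:

```bash
# Clone the repository (URL is a placeholder)
git clone https://github.com/your-org/openai-code-assistant.git
cd openai-code-assistant

# Install dependencies (assuming a requirements.txt is provided)
pip install -r requirements.txt

# Set your OpenAI API key
export OPENAI_API_KEY=your-openai-key
```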
## Usage

### CLI Mode

Run the assistant in interactive CLI mode (see the sketch after the options).

Options:

- `--model`, `-m`: Specify the model to use (default: gpt-4o)
- `--temperature`, `-t`: Set the temperature for response generation (default: 0)
- `--verbose`, `-v`: Enable verbose output with additional information
- `--enable-rl`/`--disable-rl`: Enable/disable reinforcement learning for tool optimization
- `--rl-update`: Manually trigger an update of the RL model
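A sketch using the documented flags; the `cli.py` entry point is an assumption:

```bash
# Interactive CLI with the default model (entry point is a placeholder)
python cli.py

# Pick a model, raise the temperature, and enable verbose output
python cli.py --model gpt-4o --temperature 0.2 --verbose

# Enable reinforcement learning for tool optimization
python cli.py --enable-rl
```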
### API Server Mode

Run the assistant as an API server (see the sketch after the options).

Options:

- `--host`: Host address to bind to (default: 127.0.0.1)
- `--port`, `-p`: Port to listen on (default: 8000)
- `--workers`, `-w`: Number of worker processes (default: 1)
- `--enable-replication`: Enable replication across instances
- `--primary`/`--secondary`: Whether this is a primary or secondary instance
- `--peer`: Peer instances to replicate with (host:port); can be specified multiple times
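A sketch with the documented flags; the `serve` subcommand is an assumption:

```bash
# Serve the API on all interfaces with four workers
python cli.py serve --host 0.0.0.0 --port 8000 --workers 4
```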
### MCP Server Mode

Run the assistant as a Model Context Protocol (MCP) server (see the sketch after the options).

Options:

- `--host`: Host address to bind to (default: 127.0.0.1)
- `--port`, `-p`: Port to listen on (default: 8000)
- `--dev`: Enable development mode with additional logging
- `--dependencies`: Additional Python dependencies to install
- `--env-file`: Path to a .env file with environment variables
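A sketch with the documented flags; the `mcp-serve` subcommand is an assumption:

```bash
# Run the MCP server in development mode with extra logging
python cli.py mcp-serve --host 127.0.0.1 --port 8000 --dev

# Install extra dependencies and load environment variables from a file
python cli.py mcp-serve --dependencies pandas --env-file .env
```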
### MCP Client Mode

Connect to an MCP server using the assistant as the reasoning engine (see the sketch after the options).

Options:

- `--model`, `-m`: Model to use for reasoning (default: gpt-4o)
- `--host`: Host address for the MCP server (default: 127.0.0.1)
- `--port`, `-p`: Port for the MCP server (default: 8000)
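A sketch with the documented flags; the `mcp-client` subcommand is an assumption:

```bash
# Connect to a local MCP server using gpt-4o for reasoning
python cli.py mcp-client --model gpt-4o --host 127.0.0.1 --port 8000
```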
### Deployment Script

For easier deployment, use the provided script; replication can be enabled with a flag (see the sketch below):
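A sketch only; the README does not name the script or its flags, so both are assumptions:

```bash
# Basic deployment (script name is a placeholder)
./deploy.sh

# Deployment with replication enabled (flag is an assumption)
./deploy.sh --enable-replication
```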
### Web Client

To use the web client, open `web-client.html` in your browser. Make sure the API server is running.
## API Endpoints

### Standard API Endpoints

- `POST /conversation`: Create a new conversation
- `POST /conversation/{conversation_id}/message`: Send a message to a conversation
- `POST /conversation/{conversation_id}/message/stream`: Stream a message response
- `GET /conversation/{conversation_id}`: Get conversation details
- `DELETE /conversation/{conversation_id}`: Delete a conversation
- `GET /health`: Health check endpoint
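A sketch of calling the documented endpoints with `curl`; the request body schema and the conversation ID are assumptions, since the README does not specify them:

```bash
# Create a conversation (response shape is an assumption)
curl -X POST http://127.0.0.1:8000/conversation

# Send a message (body schema is an assumption)
curl -X POST http://127.0.0.1:8000/conversation/CONVERSATION_ID/message \
  -H "Content-Type: application/json" \
  -d '{"message": "List the files in the current directory"}'

# Check server health
curl http://127.0.0.1:8000/health
```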
### MCP Protocol Endpoints

- `GET /`: Health check (MCP protocol)
- `POST /context`: Get context for a prompt template
- `GET /prompts`: List available prompt templates
- `GET /prompts/{prompt_id}`: Get a specific prompt template
- `POST /prompts`: Create a new prompt template
- `PUT /prompts/{prompt_id}`: Update an existing prompt template
- `DELETE /prompts/{prompt_id}`: Delete a prompt template
## Replication
The replication system allows running multiple instances of the assistant with synchronized state. This provides:
- High availability
- Load balancing
- Fault tolerance
To set up replication:
- Start a primary instance with `--enable-replication`
- Start secondary instances with `--enable-replication --secondary --peer [primary-host:port]`
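A sketch of a two-node setup using the documented flags; the entry point and `serve` subcommand remain assumptions:

```bash
# Primary instance
python cli.py serve --host 0.0.0.0 --port 8000 --enable-replication --primary

# Secondary instance on another port, pointing at the primary
python cli.py serve --host 0.0.0.0 --port 8001 --enable-replication --secondary \
  --peer 127.0.0.1:8000
```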
## Tools
The assistant includes various tools:
- Weather: Get current weather for a location
- View: Read files from the filesystem
- Edit: Edit files
- Replace: Write files
- Bash: Execute bash commands
- GlobTool: File pattern matching
- GrepTool: Content search
- LS: List directory contents
- JinaSearch: Web search using Jina.ai
- JinaFactCheck: Fact checking using Jina.ai
- JinaReadURL: Read and summarize webpages
## CLI Commands

- `/help`: Show help message
- `/compact`: Compact the conversation to reduce token usage
- `/status`: Show token usage and session information
- `/config`: Show current configuration settings
- `/rl-status`: Show RL tool optimizer status (if enabled)
- `/rl-update`: Update the RL model manually (if enabled)
- `/rl-stats`: Show tool usage statistics (if enabled)