# ChatGPT Integration Example
This example demonstrates how to integrate your MCP server with ChatGPT and other OpenAI-based models.
**Note:** ChatGPT doesn't have native MCP support like Claude Desktop, so we'll create a custom integration using OpenAI's API and function calling features.
## Setup
### 1. Install Dependencies
```powershell
npm install openai express dotenv
# or for Python:
pip install openai fastapi uvicorn httpx python-dotenv
```
### 2. Get OpenAI API Key
1. Sign up at [OpenAI](https://openai.com)
2. Get your API key from the dashboard
3. Add it to your `.env` file:
```env
OPENAI_API_KEY=your_api_key_here
```
### 3. ChatGPT Proxy Server
We'll create a proxy server that bridges ChatGPT and your MCP server:
#### Node.js Implementation
```javascript
// chatgpt-proxy.js
const express = require('express');
const { OpenAI } = require('openai');
require('dotenv').config();

const app = express();
app.use(express.json());

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// MCP tools converted to OpenAI function format
const tools = [
  {
    type: 'function',
    function: {
      name: 'read_file',
      description: 'Read the contents of a file',
      parameters: {
        type: 'object',
        properties: {
          path: {
            type: 'string',
            description: 'The path to the file to read',
          },
        },
        required: ['path'],
      },
    },
  },
  {
    type: 'function',
    function: {
      name: 'write_file',
      description: 'Write content to a file',
      parameters: {
        type: 'object',
        properties: {
          path: {
            type: 'string',
            description: 'The path to the file to write',
          },
          content: {
            type: 'string',
            description: 'The content to write to the file',
          },
        },
        required: ['path', 'content'],
      },
    },
  },
  // Add more tools as needed
];

// Chat endpoint
app.post('/chat', async (req, res) => {
  try {
    const { messages } = req.body;

    const completion = await openai.chat.completions.create({
      model: 'gpt-4',
      messages,
      tools,
      tool_choice: 'auto',
    });

    const responseMessage = completion.choices[0].message;

    // Handle tool calls
    if (responseMessage.tool_calls) {
      const toolResults = [];

      for (const toolCall of responseMessage.tool_calls) {
        const toolName = toolCall.function.name;
        const toolArgs = JSON.parse(toolCall.function.arguments);

        // Call your MCP server
        const result = await callMCPTool(toolName, toolArgs);

        toolResults.push({
          tool_call_id: toolCall.id,
          role: 'tool',
          content: JSON.stringify(result),
        });
      }

      // Get final response with tool results
      const finalCompletion = await openai.chat.completions.create({
        model: 'gpt-4',
        messages: [...messages, responseMessage, ...toolResults],
      });

      res.json(finalCompletion.choices[0].message);
    } else {
      res.json(responseMessage);
    }
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

// Call the MCP server over an HTTP interface (global fetch requires Node 18+);
// a stdio-based variant follows this block
async function callMCPTool(toolName, args) {
  const response = await fetch(`http://localhost:3000/mcp/tools/${toolName}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ arguments: args }),
  });
  return await response.json();
}

const PORT = process.env.CHATGPT_PROXY_PORT || 3001;
app.listen(PORT, () => {
  console.log(`ChatGPT Proxy Server running on port ${PORT}`);
});
```
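The HTTP call above assumes your MCP server exposes a REST-style wrapper at `/mcp/tools/:name`. Many MCP servers speak stdio instead; a sketch of `callMCPTool` using the official `@modelcontextprotocol/sdk` client (assumed installed; ESM syntax, so save as `.mjs` or set `"type": "module"`) might look like this:

```javascript
// mcp-stdio-bridge.mjs -- sketch only; adjust command/args to your server
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Spawn the MCP server as a child process and talk to it over stdio
const transport = new StdioClientTransport({
  command: 'node',
  args: ['server.js'],
});

const client = new Client({ name: 'chatgpt-proxy', version: '1.0.0' });
await client.connect(transport);

// Drop-in replacement for the HTTP-based callMCPTool above
export async function callMCPTool(toolName, args) {
  return client.callTool({ name: toolName, arguments: args });
}
```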
#### Python Implementation
```python
# chatgpt_proxy.py
import os
import json
from typing import Any, Dict, List

import httpx
import openai
from dotenv import load_dotenv
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

load_dotenv()

app = FastAPI(title="ChatGPT MCP Proxy")
client = openai.AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))


class Message(BaseModel):
    role: str
    content: str


class ChatRequest(BaseModel):
    messages: List[Message]


# MCP tools in OpenAI format
tools = [
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the contents of a file",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "The path to the file to read",
                    }
                },
                "required": ["path"],
            },
        },
    },
    # Add more tools...
]


async def call_mcp_tool(tool_name: str, args: Dict[str, Any]) -> Dict[str, Any]:
    """Call an MCP server tool over HTTP."""
    async with httpx.AsyncClient() as http_client:
        response = await http_client.post(
            f"http://localhost:3000/mcp/tools/{tool_name}",
            json={"arguments": args},
        )
        return response.json()


@app.post("/chat")
async def chat(request: ChatRequest):
    try:
        completion = await client.chat.completions.create(
            model="gpt-4",
            messages=[msg.model_dump() for msg in request.messages],
            tools=tools,
            tool_choice="auto",
        )
        response_message = completion.choices[0].message

        # Handle tool calls
        if response_message.tool_calls:
            tool_results = []
            for tool_call in response_message.tool_calls:
                tool_name = tool_call.function.name
                tool_args = json.loads(tool_call.function.arguments)

                # Call MCP server
                result = await call_mcp_tool(tool_name, tool_args)
                tool_results.append({
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "content": json.dumps(result),
                })

            # Get final response; exclude_none drops null fields the API
            # does not expect on replayed assistant messages
            final_completion = await client.chat.completions.create(
                model="gpt-4",
                messages=[
                    *[msg.model_dump() for msg in request.messages],
                    response_message.model_dump(exclude_none=True),
                    *tool_results,
                ],
            )
            return final_completion.choices[0].message
        else:
            return response_message
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=3001)
```
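Before wiring up a UI, you can sanity-check either proxy with a single request (PowerShell shown, matching the commands used elsewhere in this example):

```powershell
# Quick smoke test of the /chat endpoint on the proxy's default port
$body = '{"messages":[{"role":"user","content":"Hello"}]}'
Invoke-RestMethod -Uri 'http://localhost:3001/chat' -Method Post -ContentType 'application/json' -Body $body
```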
## Web Interface
Create a simple web interface to chat with your integrated system:
```html
<!-- index.html -->
<!DOCTYPE html>
<html>
<head>
  <title>ChatGPT with MCP Tools</title>
  <style>
    body { font-family: Arial, sans-serif; margin: 20px; }
    #chat { height: 400px; border: 1px solid #ccc; overflow-y: auto; padding: 10px; margin-bottom: 10px; }
    #input { width: 70%; padding: 10px; }
    #send { padding: 10px 20px; }
    .message { margin: 10px 0; }
    .user { color: blue; }
    .assistant { color: green; }
    .tool { color: orange; font-style: italic; }
  </style>
</head>
<body>
  <h1>ChatGPT with MCP Tools</h1>
  <div id="chat"></div>
  <input type="text" id="input" placeholder="Type your message...">
  <button id="send">Send</button>

  <script>
    const chat = document.getElementById('chat');
    const input = document.getElementById('input');
    const send = document.getElementById('send');
    const messages = [];

    function addMessage(role, content) {
      const div = document.createElement('div');
      div.className = `message ${role}`;
      // Use textContent rather than innerHTML so model output cannot inject markup
      const label = document.createElement('strong');
      label.textContent = `${role}: `;
      div.appendChild(label);
      div.appendChild(document.createTextNode(content));
      chat.appendChild(div);
      chat.scrollTop = chat.scrollHeight;
    }

    async function sendMessage() {
      const message = input.value.trim();
      if (!message) return;

      addMessage('user', message);
      messages.push({ role: 'user', content: message });
      input.value = '';

      try {
        const response = await fetch('/chat', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ messages }),
        });
        const result = await response.json();

        addMessage('assistant', result.content);
        messages.push({ role: 'assistant', content: result.content });
      } catch (error) {
        addMessage('error', 'Error: ' + error.message);
      }
    }

    send.addEventListener('click', sendMessage);
    input.addEventListener('keypress', (e) => {
      if (e.key === 'Enter') sendMessage();
    });
  </script>
</body>
</html>
```
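The page posts to the relative `/chat` path, so the simplest setup is to let the proxy serve it. One way with Express (place `index.html` next to `chatgpt-proxy.js`):

```javascript
// In chatgpt-proxy.js, before app.listen():
// serve static files (including index.html) from this folder
app.use(express.static(__dirname));
```

For the Python proxy, FastAPI's `StaticFiles` (from `fastapi.staticfiles`) fills the same role.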
## Usage Examples
### 1. Start the MCP Server
```powershell
# Terminal 1: Start your MCP server
node server.js
```
### 2. Start the ChatGPT Proxy
```powershell
# Terminal 2: Start the proxy server
node chatgpt-proxy.js
# or for Python:
python chatgpt_proxy.py
```
### 3. Use the Web Interface
Open `http://localhost:3001` in your browser and start chatting. (The proxy must serve `index.html`; see the static-file note in the Web Interface section above.)
### Example Conversations
**File Operations:**
```
User: Can you read my package.json file and tell me about my project?
ChatGPT: I'll read your package.json file to understand your project better.
[Calls read_file tool]
Based on your package.json, this appears to be an MCP server implementation...
```
**System Information:**
```
User: What's my system information?
ChatGPT: Let me check your system information for you.
[Calls get_system_info tool]
Your system is running Windows 10 with 16GB of RAM...
```
## Advanced Features
### Custom GPT Integration
You can create a Custom GPT in ChatGPT Plus/Pro that connects to your MCP server:
1. Go to ChatGPT and click "Create a GPT"
2. In the "Actions" section, define your MCP tools as an OpenAPI schema (a minimal sketch follows this list)
3. Set the authentication to connect to your proxy server
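A minimal, hypothetical OpenAPI description for the proxy's `read_file` route is sketched below; the server URL is a placeholder, since Actions need a publicly reachable HTTPS endpoint:

```json
{
  "openapi": "3.1.0",
  "info": { "title": "MCP Proxy", "version": "1.0.0" },
  "servers": [{ "url": "https://your-proxy.example.com" }],
  "paths": {
    "/mcp/tools/read_file": {
      "post": {
        "operationId": "read_file",
        "summary": "Read the contents of a file",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "arguments": {
                    "type": "object",
                    "properties": { "path": { "type": "string" } },
                    "required": ["path"]
                  }
                }
              }
            }
          }
        },
        "responses": { "200": { "description": "Tool result" } }
      }
    }
  }
}
```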
### OpenAI Assistants API
For more advanced use cases, integrate with the Assistants API:
```javascript
// assistants-integration.js
const assistant = await openai.beta.assistants.create({
  name: "MCP Assistant",
  instructions: "You are a helpful assistant with access to file system and web tools.",
  tools: [
    { type: "code_interpreter" },
    ...mcpTools, // Your MCP tools converted to OpenAI format
  ],
  model: "gpt-4-1106-preview",
});
```
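Creating the assistant is only half the loop: a run pauses with status `requires_action` whenever the model wants a tool, and you resume it by submitting the outputs. A sketch of that loop, assuming the `assistant` above and the `callMCPTool` helper from the proxy (`createAndPoll` and `submitToolOutputsAndPoll` are polling helpers in recent versions of the openai Node SDK):

```javascript
// Start a thread, ask a question, and run the assistant to completion
const thread = await openai.beta.threads.create();
await openai.beta.threads.messages.create(thread.id, {
  role: 'user',
  content: 'Read package.json and summarize the project.',
});

let run = await openai.beta.threads.runs.createAndPoll(thread.id, {
  assistant_id: assistant.id,
});

// Each time the run pauses for tool output, execute the MCP tools and resume
while (run.status === 'requires_action') {
  const calls = run.required_action.submit_tool_outputs.tool_calls;
  const tool_outputs = [];
  for (const call of calls) {
    const result = await callMCPTool(
      call.function.name,
      JSON.parse(call.function.arguments)
    );
    tool_outputs.push({ tool_call_id: call.id, output: JSON.stringify(result) });
  }
  run = await openai.beta.threads.runs.submitToolOutputsAndPoll(
    thread.id,
    run.id,
    { tool_outputs }
  );
}

// The assistant's reply is the latest message on the thread
const threadMessages = await openai.beta.threads.messages.list(thread.id);
console.log(threadMessages.data[0]);
```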
### Streaming Responses
Implement streaming for better user experience:
```javascript
app.post('/chat-stream', async (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  });

  const stream = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: req.body.messages,
    stream: true,
    tools, // the MCP tool definitions from above
  });

  // Note: tool calls arrive as incremental deltas on
  // chunk.choices[0].delta.tool_calls; accumulating and executing
  // them is omitted here for brevity
  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content || '';
    res.write(`data: ${JSON.stringify({ content })}\n\n`);
  }

  res.end();
});
```
## Security Considerations
### API Key Management
- Store OpenAI API keys securely
- Use environment variables
- Consider key rotation
### Rate Limiting
- Implement rate limiting for API calls (see the sketch after this list)
- Monitor usage and costs
- Set up alerts for unusual activity
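For example, with the `express-rate-limit` middleware (`npm install express-rate-limit`; the limits here are placeholders to tune to your traffic):

```javascript
const rateLimit = require('express-rate-limit');

// Cap requests to the chat endpoint per client IP
app.use('/chat', rateLimit({
  windowMs: 60 * 1000, // 1-minute window
  max: 20,             // at most 20 requests per window
}));
```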
### Input Validation
- Validate all inputs to MCP tools
- Sanitize file paths (see the example after this list)
- Implement proper error handling
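A minimal path-sanitization sketch for the file tools, assuming a single allowed root directory (`MCP_ROOT` is a hypothetical environment variable):

```javascript
const path = require('path');

const SAFE_ROOT = path.resolve(process.env.MCP_ROOT || process.cwd());

// Resolve the requested path and reject anything that escapes SAFE_ROOT
function sanitizePath(requestedPath) {
  const resolved = path.resolve(SAFE_ROOT, requestedPath);
  if (resolved !== SAFE_ROOT && !resolved.startsWith(SAFE_ROOT + path.sep)) {
    throw new Error(`Path outside allowed root: ${requestedPath}`);
  }
  return resolved;
}
```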
## Troubleshooting
### Common Issues
1. **API Key Errors:**
   - Verify your OpenAI API key
   - Check your account has sufficient credits
   - Ensure proper environment variable setup
2. **Tool Call Failures:**
   - Verify the MCP server is running
   - Check tool definitions match between systems
   - Review error logs for debugging
3. **Network Issues:**
   - Ensure all servers are accessible
   - Check firewall settings
   - Verify port configurations
### Debugging
Enable debug logging:
```javascript
// Add to your proxy server
const DEBUG = process.env.DEBUG === 'true';

function debug(message) {
  if (DEBUG) {
    console.log('[DEBUG]', new Date().toISOString(), message);
  }
}
```
## Cost Optimization
### Token Management
- Monitor token usage
- Implement conversation length limits (sketch below)
- Use appropriate models (GPT-3.5 for simpler tasks)
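One simple way to cap prompt size before each completion call (`MAX_HISTORY` is an arbitrary example value):

```javascript
// Keep the system prompt (if any) plus the most recent N messages --
// a crude but effective cap on prompt tokens
const MAX_HISTORY = 20;

function trimMessages(messages) {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  return [...system, ...rest.slice(-MAX_HISTORY)];
}
```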
### Caching
- Cache tool results when appropriate (sketch below)
- Implement response caching for repeated queries
- Use Redis or similar for persistent caching
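A minimal in-memory sketch wrapping the proxy's `callMCPTool` (the TTL is arbitrary; swap in Redis for caching that survives restarts):

```javascript
// Minimal in-memory TTL cache for tool results
const cache = new Map();
const TTL_MS = 60 * 1000;

async function cachedMCPTool(toolName, args) {
  const key = `${toolName}:${JSON.stringify(args)}`;
  const hit = cache.get(key);
  if (hit && Date.now() - hit.time < TTL_MS) return hit.value;

  const value = await callMCPTool(toolName, args);
  cache.set(key, { value, time: Date.now() });
  return value;
}
```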
### Batch Operations
- Group related tool calls when possible (see the concurrent example below)
- Use batch API endpoints where available
- Optimize tool descriptions for efficiency
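The proxy above awaits each tool call in sequence; when the calls are independent, running them concurrently is an easy win. A sketch replacing the for-loop in the `/chat` handler:

```javascript
// Execute all requested tool calls in parallel
const toolResults = await Promise.all(
  responseMessage.tool_calls.map(async (toolCall) => ({
    tool_call_id: toolCall.id,
    role: 'tool',
    content: JSON.stringify(
      await callMCPTool(toolCall.function.name, JSON.parse(toolCall.function.arguments))
    ),
  }))
);
```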
## Next Steps
1. Customize tool definitions for your use case
2. Implement proper error handling and logging
3. Add authentication and authorization
4. Deploy to production environment
5. Monitor usage and optimize costs
For more integration examples and advanced features, see the other examples in this repository.