SwiftOpenAI MCP Server

A universal MCP (Model Context Protocol) server that provides access to OpenAI's APIs through a standardized interface. Works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, Windsurf, VS Code, and more.

https://github.com/user-attachments/assets/d93f3700-f33d-42eb-8c23-4ad64fd359e1

What is this?

This MCP server enables any AI assistant or development tool that supports the Model Context Protocol to interact with OpenAI's APIs. Once configured, your AI assistant can:

  • Have conversations with GPT models
  • Generate images with DALL-E
  • Create embeddings for semantic search
  • List available models
  • Work with OpenAI-compatible providers (Groq, OpenRouter, etc.)

Built with Swift for high performance and reliability.

🚀 Features

  • Multi-Provider Support - Works with OpenAI and other providers that expose OpenAI-compatible endpoints, including Azure OpenAI, Ollama, Groq, OpenRouter, DeepSeek, and more
  • Chat Completions - Interact with gpt-4o, o3-mini, o3, Claude, Gemini, and other chat models
  • Image Generation - Create images using DALL-E 2 and DALL-E 3
  • Embeddings - Generate text embeddings for semantic search and analysis
  • Model Listing - Retrieve available models from any provider

🌐 Supported Providers

This server works with any OpenAI-compatible API endpoint:

Fully Compatible

  • OpenAI (default) - GPT-4o, o3-mini, o3, DALL-E, embeddings
  • Azure OpenAI - Enterprise OpenAI services with compatible endpoints
  • Ollama - Local LLMs with OpenAI-compatible API (/v1 endpoints)
  • Groq - Fast inference using their OpenAI-compatible endpoint
  • OpenRouter - Unified access to 100+ models via OpenAI format
  • DeepSeek - Coding models with OpenAI-compatible API

Requires Compatible Endpoints

These providers have their own APIs but may offer OpenAI-compatible endpoints:

  • Anthropic - Check if they provide an OpenAI-compatible endpoint
  • Google Gemini - May require specific configuration
  • xAI - Check for OpenAI-compatible access

Note: Image generation (DALL-E) only works with OpenAI. Other providers may support different image models.

📦 Installation

npm install -g swiftopenai-mcp

Prerequisites

  • Node.js 16 or higher

🔧 Configuration

Add this configuration to your MCP client:

OpenAI (default)

{ "mcpServers": { "swiftopenai": { "command": "npx", "args": ["-y", "swiftopenai-mcp"], "env": { "API_KEY": "sk-..." } } } }

Other Providers

Groq (fast open-source models):

"env": { "API_KEY": "gsk_...", "API_BASE_URL": "https://api.groq.com/openai/v1" }

Ollama (local models):

"env": { "API_KEY": "ollama", "API_BASE_URL": "http://localhost:11434/v1" }

OpenRouter (multiple providers):

"env": { "API_KEY": "sk-or-v1-...", "API_BASE_URL": "https://openrouter.ai/api/v1" }

Where to add this configuration:

  • Claude Desktop:
    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Claude Code: .claude/mcp.json in your project root
  • Cursor: Settings → Features → MCP Servers
  • Windsurf: MCP panel in settings
  • VS Code Continue: Add to your .continuerc.json under the models array with an mcpServers property

🛠️ Available Tools

chat_completion

Send messages to OpenAI GPT models and get responses.

Parameters:

  • messages (required) - Array of conversation messages, each with:
    • role: "system", "user", or "assistant"
    • content: The message text
  • model - Which model to use (default: "gpt-4o"). Examples: gpt-4o, o3-mini, o3
  • temperature - Creativity level from 0-2 (default: 0.7). Lower = more focused, higher = more creative
  • max_tokens - Maximum length of the response

Example usage: "Ask o3-mini to explain quantum computing in simple terms"
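When your client invokes this tool, it passes JSON arguments matching the parameters above. For the example request, the arguments might look like this (model and values illustrative):

{
  "messages": [
    { "role": "system", "content": "You are a concise technical explainer." },
    { "role": "user", "content": "Explain quantum computing in simple terms." }
  ],
  "model": "o3-mini",
  "temperature": 0.7,
  "max_tokens": 500
}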

image_generation

Generate images using AI models.

Parameters:

  • prompt (required) - Text description of the image you want
  • model - Model to use (default: "dall-e-3"). Examples:
    • OpenAI: "dall-e-2", "dall-e-3"
    • Other providers: Use their specific model names
  • size - Image dimensions (default: "1024x1024")
  • quality - "standard" or "hd" (default: "standard")
  • n - Number of images to generate (default: 1)

Example usage: "Generate an HD image of a futuristic city at sunset"

Note: Image generation parameters like size and quality may vary by provider. Currently optimized for OpenAI's DALL-E models.
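For the HD city example, the tool arguments might look like this (size and quality values shown are OpenAI/DALL-E 3 options):

{
  "prompt": "A futuristic city at sunset",
  "model": "dall-e-3",
  "size": "1792x1024",
  "quality": "hd",
  "n": 1
}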

list_models

List available models from your provider.

Parameters:

  • filter - Optional text to filter model names (e.g., "gpt" to see only GPT models)

Example usage: "List all available models" or "Show me all GPT models"
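In tool-call terms, the second request maps to arguments like:

{
  "filter": "gpt"
}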

create_embedding

Create embeddings for text.

Parameters:

  • input (required) - The text to create embeddings for
  • model - Embedding model to use (default: "text-embedding-ada-002")

Example usage: "Create embeddings for the text 'The quick brown fox jumps over the lazy dog'"
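The corresponding tool arguments for that example would be (model can be omitted to use the default above):

{
  "input": "The quick brown fox jumps over the lazy dog",
  "model": "text-embedding-ada-002"
}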

💡 Usage Examples

Note: The exact way to invoke these tools depends on your MCP client.

Chat Conversations

Powerful use cases:

Get a second opinion from another AI:

  • "Send this entire conversation to o3-mini and ask what it thinks"
  • "Have gpt-4o analyze what we've discussed and suggest improvements"

Deep analysis:

  • "Ask o3 to find any logical flaws in our reasoning so far"
  • "Have o3-mini summarize the key decisions we've made"

Cross-model collaboration:

  • "Get o3's perspective on this problem we're solving"
  • "Ask gpt-4o to critique the code we just wrote"
  • "Have o3-mini explain this differently for a beginner"

Context-aware help:

  • "Based on our conversation, have o3 create a step-by-step tutorial"
  • "Ask gpt-4o to generate test cases for the solution we discussed"

Role-playing scenarios:

  • "Have o3-mini act as a senior developer and review our approach"
  • "Ask gpt-4o to play devil's advocate on our architecture"
  • "Get o3 to explain this as if teaching a computer science class"

Image Generation

Quick generations:

  • "Generate an image of a sunset over mountains"
  • "Create a DALL-E 3 HD image of a futuristic city"

Specific requests:

  • "Make a 1792x1024 image of a cozy coffee shop interior"
  • "Generate a standard quality image of abstract art"

Model Discovery

  • "List all available models"
  • "Show me only the GPT models"
  • "What embedding models are available?"

Embeddings

  • "Create embeddings for: 'Revolutionary new smartphone with AI features'"
  • "Generate embeddings for this product description: [your text]"

🔒 Security Best Practices

  1. Never share your API key in public repositories or chat messages
  2. Use environment variables when possible instead of hardcoding keys
  3. Rotate keys regularly through the OpenAI dashboard
  4. Set usage limits in your OpenAI account to prevent unexpected charges

🐛 Troubleshooting

Server not starting

  1. Check API key: Ensure your API key is correctly set in the configuration
  2. Restart your client: Most MCP clients require a restart after configuration changes
  3. Verify installation: Check if the package is installed: npm list -g swiftopenai-mcp
  4. Check permissions: Ensure the npm global directory has proper permissions

No response from tools

  1. API key permissions: Verify your API key has the necessary permissions
  2. API credits: Check if you have available API credits in your OpenAI account
  3. Alternative providers: For non-OpenAI providers, ensure the base URL is correct
  4. Network issues: Check if you can reach the API endpoint from your network

Debugging

Check MCP server output

Most MCP clients provide ways to view server logs. For example:

Claude Desktop logs:

  • macOS: ~/Library/Logs/Claude/mcp-*.log
  • Windows: %APPDATA%\Claude\logs\mcp-*.log

Other clients: Check your client's documentation for log locations.

Test the server directly

You can test if the server starts correctly:

npx swiftopenai-mcp

This should output the MCP initialization message.
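To exercise the stdio interface by hand, you can also send a standard MCP initialize request on the server's stdin and check for a reply. The message below follows the MCP JSON-RPC handshake; the protocolVersion and clientInfo values are illustrative:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "manual-test", "version": "1.0.0" }
  }
}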

Common Issues

  • "Missing API key" error: Set the API_KEY environment variable in your configuration
  • "Invalid API key" error: Double-check your API key is correct and active
  • Timeout errors: Some operations (like image generation) can take time; be patient
  • Rate limit errors: You may be hitting your provider's rate limits; wait a bit and try again

🏗️ Building from Source

If you want to build the server yourself:

git clone https://github.com/jamesrochabrun/SwiftOpenAIMCP.git
cd SwiftOpenAIMCP
swift build -c release

The binary will be at .build/release/swiftopenai-mcp
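If you want an MCP client to use this local build instead of the npm package, point the command field at the binary. A minimal sketch, assuming the repository lives at /path/to/SwiftOpenAIMCP (adjust to your checkout):

{
  "mcpServers": {
    "swiftopenai": {
      "command": "/path/to/SwiftOpenAIMCP/.build/release/swiftopenai-mcp",
      "env": {
        "API_KEY": "sk-..."
      }
    }
  }
}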

🤝 Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

📄 License

MIT License - see LICENSE file for details
