AutoGen MCP Server

smithery.yaml (1.67 kB)
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the enhanced AutoGen MCP server.
    type: object
    properties:
      openaiApiKey:
        type: string
        default: ""
        description: OpenAI API key for LLM usage
      transport:
        type: string
        enum: ["stdio", "http"]
        default: "stdio"
        description: Transport mode for the MCP server
      port:
        type: number
        default: 3001
        description: Port for HTTP transport mode
      pythonPath:
        type: string
        default: "python"
        description: Path to Python executable for AutoGen agents
      enableStreaming:
        type: boolean
        default: true
        description: Enable streaming responses
      maxAgents:
        type: number
        default: 10
        description: Maximum number of concurrent agents
  commandFunction:
    # A JS function that produces the CLI command based on the given config to start the enhanced MCP on stdio.
    |-
    (config) => ({
      command: 'node',
      args: [
        'build/enhanced_index.js',
        '--transport=' + (config.transport || 'stdio'),
        '--port=' + (config.port || '3001')
      ],
      env: {
        OPENAI_API_KEY: config.openaiApiKey || '',
        PYTHON_PATH: config.pythonPath || 'python',
        ENABLE_STREAMING: config.enableStreaming !== false ? 'true' : 'false',
        MAX_AGENTS: config.maxAgents || '10'
      }
    })
  exampleConfig:
    openaiApiKey: your-openai-api-key
    transport: stdio
    port: 3001
    enableStreaming: true
    maxAgents: 10
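To see what the commandFunction produces, here is a standalone sketch that reproduces the function from smithery.yaml and applies it to the exampleConfig. Smithery evaluates this function internally; running it yourself is only for illustration.

```javascript
// Reproduction of the commandFunction from smithery.yaml above.
// Given a parsed config object, it returns the command, args, and env
// used to launch the MCP server process.
const commandFunction = (config) => ({
  command: 'node',
  args: [
    'build/enhanced_index.js',
    '--transport=' + (config.transport || 'stdio'),
    '--port=' + (config.port || '3001')
  ],
  env: {
    OPENAI_API_KEY: config.openaiApiKey || '',
    PYTHON_PATH: config.pythonPath || 'python',
    ENABLE_STREAMING: config.enableStreaming !== false ? 'true' : 'false',
    MAX_AGENTS: config.maxAgents || '10'
  }
});

// Apply it to the exampleConfig values from the file:
const result = commandFunction({
  openaiApiKey: 'your-openai-api-key',
  transport: 'stdio',
  port: 3001,
  enableStreaming: true,
  maxAgents: 10
});

console.log(result.command); // 'node'
console.log(result.args);    // ['build/enhanced_index.js', '--transport=stdio', '--port=3001']
```

Note that because the fallbacks use `||`, omitting a key (or passing a falsy value such as `0` or `""`) silently substitutes the default; only `enableStreaming` is checked explicitly against `false`.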

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/DynamicEndpoints/Autogen_MCP'
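The same request can be made from Node (18+) with the built-in fetch API. The URL path segments come from the curl example above; the response schema is whatever the directory API returns, so this sketch simply prints the JSON verbatim.

```javascript
// Base endpoint from the curl example; owner/repo identify the server entry.
const BASE = 'https://glama.ai/api/mcp/v1/servers';

function serverUrl(owner, repo) {
  return `${BASE}/${owner}/${repo}`;
}

// Fetch and parse the directory entry for a given server.
async function fetchServerInfo(owner, repo) {
  const res = await fetch(serverUrl(owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Usage (requires network access):
// fetchServerInfo('DynamicEndpoints', 'Autogen_MCP')
//   .then((data) => console.log(JSON.stringify(data, null, 2)))
//   .catch((err) => console.error('Request failed:', err));
```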

If you have feedback or need assistance with the MCP directory API, please join our Discord server.