# ask-openai
Ask direct questions to OpenAI assistant models through MCP integration and get concise or detailed responses in Claude Desktop workflows.
## Instructions
Ask my assistant models a direct question
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Ask assistant | |
| model | No | Model to use; one of `o3-mini`, `gpt-4o-mini` | o3-mini |
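
For illustration, a call to this tool supplies arguments matching the schema above. A minimal sketch of an arguments payload; the question text is made up:

```python
# Hypothetical arguments payload for the ask-openai tool.
# "query" is required; "model" is optional and defaults to "o3-mini".
arguments = {
    "query": "What are the trade-offs of optimistic locking?",  # illustrative
    "model": "gpt-4o-mini",  # must be "o3-mini" or "gpt-4o-mini" per the schema enum
}
```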
## Implementation Reference
- src/mcp_server_openai/llm.py:10-33 (handler): Executes the core tool logic by calling the OpenAI chat completions API with predefined system prompts and the user query; a direct-usage sketch follows this list.

  ```python
  async def ask_openai(self, query: str, model: str = "o3-mini") -> str:
      try:
          messages = [
              {
                  "role": "developer",
                  "content": "You are a helpful assistant that provides clear and accurate technical responses."
              },
              {
                  "role": "system",
                  "content": "Ensure responses are well-structured and technically precise."
              },
              {
                  "role": "user",
                  "content": query
              }
          ]
          response = await self.client.chat.completions.create(
              messages=messages,
              model=model
          )
          return response.choices[0].message.content
      except Exception as e:
          logger.error(f"Failed to query OpenAI: {str(e)}")
          raise
  ```
- src/mcp_server_openai/server.py (schema): Input schema definition for the 'ask-openai' tool, specifying required 'query' and optional 'model'.

  ```python
  inputSchema={
      "type": "object",
      "properties": {
          "query": {"type": "string", "description": "Ask assistant"},
          "model": {"type": "string", "default": "o3-mini", "enum": ["o3-mini", "gpt-4o-mini"]}
      },
      "required": ["query"]
  }
  ```
- src/mcp_server_openai/server.py:21-36 (registration): Registers the 'ask-openai' tool with the MCP server, including its name, description, and schema.

  ```python
  @server.list_tools()
  async def handle_list_tools() -> list[types.Tool]:
      return [
          types.Tool(
              name="ask-openai",
              description="Ask my assistant models a direct question",
              inputSchema={
                  "type": "object",
                  "properties": {
                      "query": {"type": "string", "description": "Ask assistant"},
                      "model": {"type": "string", "default": "o3-mini", "enum": ["o3-mini", "gpt-4o-mini"]}
                  },
                  "required": ["query"]
              }
          )
      ]
  ```
- src/mcp_server_openai/server.py:38-55 (handler): MCP server tool-call handler that dispatches 'ask-openai' calls to the LLMConnector and returns the response as TextContent; a client-side invocation sketch follows this list.

  ```python
  @server.call_tool()
  async def handle_tool_call(name: str, arguments: dict | None) -> list[types.TextContent]:
      try:
          if not arguments:
              raise ValueError("No arguments provided")

          if name == "ask-openai":
              response = await connector.ask_openai(
                  query=arguments["query"],
                  model=arguments.get("model", "o3-mini")
              )
              return [types.TextContent(type="text", text=response)]

          raise ValueError(f"Unknown tool: {name}")
      except Exception as e:
          logger.error(f"Tool call failed: {str(e)}")
          return [types.TextContent(type="text", text=f"Error: {str(e)}")]
  ```
- src/mcp_server_openai/llm.py:6-9 (helper): Helper class that initializes the AsyncOpenAI client used by the tool handler; a stdio bootstrap sketch follows this list.

  ```python
  class LLMConnector:
      def __init__(self, openai_api_key: str):
          self.client = AsyncOpenAI(api_key=openai_api_key)
  ```
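
For the handler above, a minimal direct-usage sketch, assuming the API key is supplied via an `OPENAI_API_KEY` environment variable (the variable name and the question are illustrative):

```python
import asyncio
import os

from mcp_server_openai.llm import LLMConnector

async def main() -> None:
    # Assumption: the key is read from the environment rather than hard-coded.
    connector = LLMConnector(openai_api_key=os.environ["OPENAI_API_KEY"])
    # Uses the default model ("o3-mini"); pass model="gpt-4o-mini" to override.
    answer = await connector.ask_openai("Explain idempotency in one paragraph.")
    print(answer)

asyncio.run(main())
```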
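From the client side, the registered tool can be exercised with the MCP Python SDK over stdio. A sketch under the assumption that the server is launched with `python -m mcp_server_openai`; the launch command is a guess, not taken from the repository:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Assumption: the server can be started as a module; adjust to the real entry point.
    params = StdioServerParameters(command="python", args=["-m", "mcp_server_openai"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "ask-openai",
                {"query": "Summarize the CAP theorem.", "model": "o3-mini"},
            )
            # handle_tool_call returns a single TextContent item.
            print(result.content[0].text)

asyncio.run(main())
```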
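Finally, to show how the helper and the handlers plug together, a stdio bootstrap sketch; the wiring below is an assumption about the server's entry point, not code from the repository:

```python
import asyncio
import os

from mcp.server import Server
from mcp.server.stdio import stdio_server

from mcp_server_openai.llm import LLMConnector

server = Server("mcp-server-openai")  # server name is illustrative
connector = LLMConnector(os.environ["OPENAI_API_KEY"])

# handle_list_tools and handle_tool_call from server.py would be
# registered against this `server` instance here.

async def main() -> None:
    async with stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())

asyncio.run(main())
```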