ask_chatgpt
Get answers from ChatGPT through the MCP server. Submit prompts to receive AI-generated responses using various models with configurable parameters like temperature and reasoning effort.
Instructions
Ask ChatGPT a question and get a response. Supports both regular models (with temperature) and reasoning models (with effort/verbosity).
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The prompt to send to ChatGPT | |
| model | No | The model to use (default: from OPENAI_DEFAULT_MODEL env var or gpt-5). Unless the user specifies a model, do not set this parameter. Supported models: gpt-5, gpt-5-mini, gpt-5-nano, o3, o3-pro, o4-mini, gpt-4.1, gpt-4.1-mini | gpt-5 |
| system | No | System prompt to set context and behavior for the AI | |
| temperature | No | Temperature for response generation (0-2). Not available for reasoning models (gpt-5, o1, o3, etc.) | |
| effort | No | Reasoning effort level: minimal, low, medium, high (default: from REASONING_EFFORT env var). For reasoning models only. | |
| verbosity | No | Output verbosity level: low, medium, high (default: from VERBOSITY env var). For reasoning models only. | |
| searchContextSize | No | Search context size: low, medium, high (default: from SEARCH_CONTEXT_SIZE env var). For reasoning models only. | |
| maxTokens | No | Maximum number of output tokens | |
| maxRetries | No | Maximum number of API retry attempts (default: from OPENAI_MAX_RETRIES env var or 3) | |
| timeoutMs | No | Request timeout in milliseconds. Auto-adjusts based on effort level: high=300s, medium=120s, low/minimal=60s. Can be overridden with OPENAI_API_TIMEOUT env var. | |
| useStreaming | No | Force streaming mode to prevent timeouts during long reasoning tasks. Defaults to auto (true for medium/high effort reasoning models). | |
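The table above splits parameters between regular models (temperature) and reasoning models (effort, verbosity, searchContextSize). A minimal client-side sketch of how one might assemble and sanity-check an arguments payload before calling the tool is shown below; the `build_arguments` helper and `REASONING_MODELS` set are illustrative assumptions, not part of the MCP server's API.

```python
# Illustrative sketch: assemble ask_chatgpt arguments and enforce the
# model/parameter compatibility rules from the schema table.
# Model groupings below mirror the table; they are not an official list.
REASONING_MODELS = {"gpt-5", "gpt-5-mini", "gpt-5-nano", "o3", "o3-pro", "o4-mini"}

def build_arguments(prompt, model=None, temperature=None,
                    effort=None, verbosity=None, max_tokens=None):
    """Return an arguments dict, rejecting parameter/model mismatches."""
    args = {"prompt": prompt}
    if model is not None:
        args["model"] = model
    # The default model (gpt-5) is a reasoning model.
    is_reasoning = model in REASONING_MODELS if model else True
    if temperature is not None:
        if is_reasoning:
            raise ValueError("temperature is not available for reasoning models")
        if not 0 <= temperature <= 2:
            raise ValueError("temperature must be in [0, 2]")
        args["temperature"] = temperature
    if effort is not None:
        if not is_reasoning:
            raise ValueError("effort applies to reasoning models only")
        args["effort"] = effort
    if verbosity is not None:
        if not is_reasoning:
            raise ValueError("verbosity applies to reasoning models only")
        args["verbosity"] = verbosity
    if max_tokens is not None:
        args["maxTokens"] = max_tokens
    return args

# A reasoning-model call: effort/verbosity are allowed, temperature is not.
call = build_arguments("Summarise the input schema in three bullets",
                       model="gpt-5", effort="low", verbosity="low")
```

Because the default model is a reasoning model, a call that sets `temperature` without also naming a non-reasoning model such as gpt-4.1 would be rejected by a check like this.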