openai_chat

Send a prompt to OpenAI through Codex for non-interactive queries with a configurable timeout (180 seconds by default). Returns clear error messages when quota limits are exceeded.

Instructions

Send a prompt to OpenAI via Codex exec. Non-interactive, fast startup (no MCP servers loaded), 180s default timeout. Returns clear error on quota limits. For code review, use openai_review instead.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | The prompt to send | |
| model | No | Model override. Note: some models may not be available on ChatGPT Plus | |
| timeout | No | Timeout in seconds | 180 |
| cwd | No | Working directory for codex | |
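
Only prompt is required; the remaining fields are optional overrides. A minimal call might pass arguments like this (values are illustrative, not taken from the server):

```json
{
  "prompt": "Summarize the design tradeoffs of server-sent events vs WebSockets.",
  "timeout": 120,
  "cwd": "/path/to/project"
}
```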

Implementation Reference

  • Registration of the 'openai_chat' tool via mcpServer.registerTool(), including input schema (prompt, model, timeout, cwd) and the handler callback.
    mcpServer.registerTool(
      "openai_chat",
      {
        description:
          "Send a prompt to OpenAI via Codex exec. Non-interactive, fast startup (no MCP servers loaded), 180s default timeout. Returns clear error on quota limits. For code review, use openai_review instead.",
        inputSchema: {
          prompt: z.string().describe("The prompt to send"),
          model: z
            .string()
            .optional()
            .describe("Model override (optional). Note: some models may not be available on ChatGPT Plus"),
          timeout: z
            .number()
            .default(180)
            .describe("Timeout in seconds (default 180)"),
          cwd: z
            .string()
            .optional()
            .describe("Working directory for codex"),
        },
      },
      async ({ prompt, model, timeout = 180, cwd }) => {
        const timeoutMs = timeout * 1000;
        const outputFile = tempFile("codex-chat");
    
        try {
          log(`Chat: ${prompt.length} chars, timeout ${timeout}s`);
          const startTime = Date.now();
    
          const args = [
            "exec",
            "--sandbox", "read-only",
            "--ephemeral",
            "-o", outputFile,
          ];
    
          if (model) {
            args.push("-m", model);
          }
    
          args.push("-");
    
          const { stdout, stderr, exitCode } = await runCodex(args, {
            timeoutMs,
            stdin: prompt,
            cwd,
          });
    
          const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
          const combined = stdout + stderr;
    
          const error = detectError(combined);
          if (error) {
            log(`${error.errorType}: ${error.message}`);
            try { await unlink(outputFile); } catch {}
            return {
              content: [{ type: "text", text: error.message }],
              isError: true,
            };
          }
    
          let outputFileContent = "";
          try {
            outputFileContent = await readFile(outputFile, "utf-8");
          } catch {}
    
          try { await unlink(outputFile); } catch {}
    
          const response = extractResponse(stdout, outputFileContent);
    
          if (!response) {
            log(`No response (exit: ${exitCode}, stdout: ${stdout.length}, stderr: ${stderr.length})`);
            return {
              content: [{ type: "text", text: `No response from Codex. Exit code: ${exitCode}. Output: ${combined.slice(-300)}` }],
              isError: true,
            };
          }
    
          log(`OK in ${elapsed}s (${response.length} chars)`);
    
          return {
            content: [{ type: "text", text: response }],
          };
        } catch (error) {
          try { await unlink(outputFile); } catch {}
    
          const knownError = detectError(error.message);
          if (knownError) {
            log(`${knownError.errorType}: ${knownError.message}`);
            return {
              content: [{ type: "text", text: knownError.message }],
              isError: true,
            };
          }
    
          log(`Error: ${error.message}`);
          return {
            content: [{ type: "text", text: `Codex error: ${error.message}` }],
            isError: true,
          };
        }
      }
    );
  • detectError() - checks output for usage limit, model not supported, and auth expired errors, returning structured error info.
    function detectError(output) {
      const combined = output.toLowerCase();
    
      if (combined.includes("usage limit") || combined.includes("hit your usage limit")) {
        const match = output.match(/try again at (.+?)[\.\n]/);
        const resetDate = match ? match[1] : "unknown";
        return {
          isError: true,
          errorType: "QUOTA_EXCEEDED",
          message: `Codex usage limit reached. Credits reset at: ${resetDate}. Use a fallback provider.`,
        };
      }
    
      if (combined.includes("not supported when using codex with a chatgpt account")) {
        return {
          isError: true,
          errorType: "MODEL_NOT_SUPPORTED",
          message: "This model is not available with ChatGPT Plus. Use the default model.",
        };
      }
    
      if (combined.includes("auth") && (combined.includes("expired") || combined.includes("login"))) {
        return {
          isError: true,
          errorType: "AUTH_EXPIRED",
          message: "Codex auth token expired. Run 'codex login' to re-authenticate.",
        };
      }
    
      return null;
    }
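  The detector can be exercised directly on representative strings. The function is repeated verbatim below so the snippet runs standalone; the sample outputs are illustrative, not verbatim Codex messages.

```javascript
// detectError, repeated verbatim so this snippet runs standalone.
function detectError(output) {
  const combined = output.toLowerCase();

  if (combined.includes("usage limit") || combined.includes("hit your usage limit")) {
    const match = output.match(/try again at (.+?)[\.\n]/);
    const resetDate = match ? match[1] : "unknown";
    return {
      isError: true,
      errorType: "QUOTA_EXCEEDED",
      message: `Codex usage limit reached. Credits reset at: ${resetDate}. Use a fallback provider.`,
    };
  }

  if (combined.includes("not supported when using codex with a chatgpt account")) {
    return {
      isError: true,
      errorType: "MODEL_NOT_SUPPORTED",
      message: "This model is not available with ChatGPT Plus. Use the default model.",
    };
  }

  if (combined.includes("auth") && (combined.includes("expired") || combined.includes("login"))) {
    return {
      isError: true,
      errorType: "AUTH_EXPIRED",
      message: "Codex auth token expired. Run 'codex login' to re-authenticate.",
    };
  }

  return null;
}

// Illustrative sample outputs:
detectError("hit your usage limit. try again at 2025-06-01 14:00 UTC.\n").errorType;
// → "QUOTA_EXCEEDED"
detectError("gpt-4.5 is not supported when using codex with a chatgpt account").errorType;
// → "MODEL_NOT_SUPPORTED"
detectError("Request completed successfully.");
// → null
```

  Note that only a null return lets the handler proceed; any non-null result short-circuits into an isError response.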
  • extractResponse() - extracts the AI response from codex exec output, preferring output file content over stdout parsing.
    function extractResponse(stdout, outputFileContent) {
      if (outputFileContent && outputFileContent.trim()) {
        return outputFileContent.trim();
      }
    
      const lines = stdout.split("\n");
      let inResponse = false;
      let response = [];
    
      for (const line of lines) {
        if (line.trim() === "codex") {
          inResponse = true;
          continue;
        }
        if (inResponse && line.startsWith("tokens used")) {
          break;
        }
        if (inResponse) {
          response.push(line);
        }
      }
    
      if (response.length > 0) {
        return response.join("\n").trim();
      }
    
      return stdout.trim();
    }
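  The extraction order can be seen on a small example. The function is repeated verbatim so the snippet runs standalone; the sample stdout is illustrative of the CLI's transcript shape, not actual Codex output.

```javascript
// extractResponse, repeated verbatim so this snippet runs standalone.
function extractResponse(stdout, outputFileContent) {
  if (outputFileContent && outputFileContent.trim()) {
    return outputFileContent.trim();
  }

  const lines = stdout.split("\n");
  let inResponse = false;
  let response = [];

  for (const line of lines) {
    if (line.trim() === "codex") {
      inResponse = true;
      continue;
    }
    if (inResponse && line.startsWith("tokens used")) {
      break;
    }
    if (inResponse) {
      response.push(line);
    }
  }

  if (response.length > 0) {
    return response.join("\n").trim();
  }

  return stdout.trim();
}

// The output file, when non-empty, wins outright:
extractResponse("ignored stdout", "file content\n");
// → "file content"

// Otherwise, the lines between the "codex" marker and the
// "tokens used" footer are taken as the response:
const sampleStdout = [
  "[exec] starting session",
  "codex",
  "Hello from the model.",
  "tokens used: 123",
].join("\n");
extractResponse(sampleStdout, "");
// → "Hello from the model."
```

  If neither the file nor the marker yields anything, the whole trimmed stdout is returned as a last resort, which is why the handler still checks for an empty response afterward.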
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses non-interactive nature, fast startup, 180s timeout, and clear error handling on quota limits. Lacks minor details like return format, but overall strong given no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each providing essential information without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers key aspects: purpose, behavior, timeout, error handling, and alternative. Lacks output format details, but sufficient for a simple prompt tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions, and the description adds no additional semantics beyond restating the default timeout.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool sends a prompt to OpenAI via Codex exec, and distinguishes from sibling openai_review by specifying not to use for code review.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly mentions when to use (non-interactive, fast startup) and specifies an alternative for code review (openai_review), providing clear context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/spyrae/claude-concilium'
