
ask-codex

Execute code analysis and editing tasks using file references with @ syntax, model selection, and safety controls. Supports automated refactoring with structured change tracking.

Instructions

Execute the Codex CLI with file analysis (@ syntax), model selection, and safety controls. Supports changeMode for structured refactoring output.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | Task or question. Use @ to include files (e.g., '@largefile.ts explain'). | |
| model | No | Model: gpt-5-codex, gpt-5, o3, o4-mini, codex-1, codex-mini-latest, gpt-4.1 | gpt-5-codex |
| sandbox | No | Quick automation mode: enables workspace-write + on-failure approval. Alias for fullAuto. | false |
| fullAuto | No | Full automation mode | |
| approvalPolicy | No | Approval: never, on-request, on-failure, untrusted | |
| approval | No | Approval policy: untrusted, on-failure, on-request, never | |
| sandboxMode | No | Access: read-only, workspace-write, danger-full-access | |
| yolo | No | ⚠️ Bypass all safety (dangerous) | |
| cd | No | Working directory | |
| workingDir | No | Working directory for execution | |
| changeMode | No | Return structured OLD/NEW edits for refactoring | false |
| chunkIndex | No | Chunk index (1-based) | |
| chunkCacheKey | No | Cache key for continuation | |
| image | No | Optional image file path(s) to include with the prompt | |
| config | No | Configuration overrides as 'key=value' string or object | |
| profile | No | Configuration profile to use from ~/.codex/config.toml | |
| timeout | No | Maximum execution time in milliseconds (optional) | |
| includeThinking | No | Include reasoning/thinking section in response | true |
| includeMetadata | No | Include configuration metadata in response | true |
| search | No | Enable web search by activating the web_search_request feature flag. Requires network access; automatically sets sandbox to workspace-write if not specified. | |
| oss | No | Use local Ollama server (convenience for -c model_provider=oss). Requires Ollama running locally. Automatically sets sandbox to workspace-write if not specified. | |
| enableFeatures | No | Enable feature flags (repeatable). Equivalent to -c features.<name>=true | |
| disableFeatures | No | Disable feature flags (repeatable). Equivalent to -c features.<name>=false | |
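
These parameters map directly onto the JSON arguments a client passes when calling the tool. As a rough illustration (the file paths and task text are made up, and how the call is issued depends on your MCP client), a read-only analysis needs little more than a prompt, while an automated refactor opts into write access and structured output:

```typescript
// Illustrative ask-codex argument objects; only `prompt` is required.

// Read-only analysis of a file pulled in with @ syntax.
const analyzeArgs = {
  prompt: '@src/server.ts explain the request lifecycle', // hypothetical file
  model: 'gpt-5-codex',
  sandboxMode: 'read-only',
};

// Automated refactor: write access, on-failure approval, structured OLD/NEW edits back.
const refactorArgs = {
  prompt: '@src/utils.ts rename parseConfig to loadConfig and update call sites', // hypothetical
  sandbox: true,     // alias for fullAuto: workspace-write + on-failure approval
  changeMode: true,  // return structured OLD/NEW edits
  timeout: 300_000,  // 5 minutes, in milliseconds
};
```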

Input Schema (JSON Schema)

{ "properties": { "approval": { "description": "Approval policy: untrusted, on-failure, on-request, never", "type": "string" }, "approvalPolicy": { "description": "Approval: never, on-request, on-failure, untrusted", "enum": [ "never", "on-request", "on-failure", "untrusted" ], "type": "string" }, "cd": { "description": "Working directory", "type": "string" }, "changeMode": { "default": false, "description": "Return structured OLD/NEW edits for refactoring", "type": "boolean" }, "chunkCacheKey": { "description": "Cache key for continuation", "type": "string" }, "chunkIndex": { "description": "Chunk index (1-based)", "minimum": 1, "type": "number" }, "config": { "anyOf": [ { "type": "string" }, { "additionalProperties": {}, "type": "object" } ], "description": "Configuration overrides as 'key=value' string or object" }, "disableFeatures": { "description": "Disable feature flags (repeatable). Equivalent to -c features.<name>=false", "items": { "type": "string" }, "type": "array" }, "enableFeatures": { "description": "Enable feature flags (repeatable). Equivalent to -c features.<name>=true", "items": { "type": "string" }, "type": "array" }, "fullAuto": { "description": "Full automation mode", "type": "boolean" }, "image": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" } ], "description": "Optional image file path(s) to include with the prompt" }, "includeMetadata": { "default": true, "description": "Include configuration metadata in response", "type": "boolean" }, "includeThinking": { "default": true, "description": "Include reasoning/thinking section in response", "type": "boolean" }, "model": { "description": "Model: gpt-5-codex, gpt-5, o3, o4-mini, codex-1, codex-mini-latest, gpt-4.1. Default: gpt-5-codex", "type": "string" }, "oss": { "description": "Use local Ollama server (convenience for -c model_provider=oss). Requires Ollama running locally. Automatically sets sandbox to workspace-write if not specified.", "type": "boolean" }, "profile": { "description": "Configuration profile to use from ~/.codex/config.toml", "type": "string" }, "prompt": { "description": "Task or question. Use @ to include files (e.g., '@largefile.ts explain').", "minLength": 1, "type": "string" }, "sandbox": { "default": false, "description": "Quick automation mode: enables workspace-write + on-failure approval. Alias for fullAuto.", "type": "boolean" }, "sandboxMode": { "description": "Access: read-only, workspace-write, danger-full-access", "enum": [ "read-only", "workspace-write", "danger-full-access" ], "type": "string" }, "search": { "description": "Enable web search by activating web_search_request feature flag. Requires network access - automatically sets sandbox to workspace-write if not specified.", "type": "boolean" }, "timeout": { "description": "Maximum execution time in milliseconds (optional)", "type": "number" }, "workingDir": { "description": "Working directory for execution", "type": "string" }, "yolo": { "description": "⚠️ Bypass all safety (dangerous)", "type": "boolean" } }, "required": [ "prompt" ], "type": "object" }

Implementation Reference

  • The main execution handler for the 'ask-codex' tool. Processes arguments, calls executeCodex (or handles changeMode chunks), formats output, and provides detailed error messages for common issues like CLI not found, auth, quotas, timeouts, and sandbox violations.
```typescript
execute: async (args, onProgress) => {
  const {
    prompt, model, sandbox, fullAuto, approvalPolicy, approval, sandboxMode, yolo,
    cd, workingDir, changeMode, chunkIndex, chunkCacheKey, image, config, profile,
    timeout, includeThinking, includeMetadata, search, oss, enableFeatures, disableFeatures,
  } = args;

  if (!prompt?.trim()) {
    throw new Error(ERROR_MESSAGES.NO_PROMPT_PROVIDED);
  }

  if (changeMode && chunkIndex && chunkCacheKey) {
    return processChangeModeOutput('', {
      chunkIndex: chunkIndex as number,
      cacheKey: chunkCacheKey as string,
      prompt: prompt as string,
    });
  }

  try {
    // Use enhanced executeCodex for better feature support
    const result = await executeCodex(
      prompt as string,
      {
        model: model as string,
        fullAuto: Boolean(fullAuto ?? sandbox),
        approvalPolicy: approvalPolicy as any,
        approval: approval as string,
        sandboxMode: sandboxMode as any,
        yolo: Boolean(yolo),
        cd: cd as string,
        workingDir: workingDir as string,
        image,
        config,
        profile: profile as string,
        timeout: timeout as number,
        search: search as boolean,
        oss: oss as boolean,
        enableFeatures: enableFeatures as string[],
        disableFeatures: disableFeatures as string[],
      },
      onProgress
    );

    if (changeMode) {
      return processChangeModeOutput(result, {
        chunkIndex: args.chunkIndex as number | undefined,
        prompt: prompt as string,
      });
    }

    // Format response with enhanced output parsing
    return formatCodexResponseForMCP(
      result,
      includeThinking as boolean,
      includeMetadata as boolean
    );
  } catch (error) {
    const errorMessage = error instanceof Error ? error.message : String(error);

    // Enhanced error handling with helpful context
    if (errorMessage.includes('not found') || errorMessage.includes('command not found')) {
      return `❌ **Codex CLI Not Found**: ${ERROR_MESSAGES.CODEX_NOT_FOUND}

**Quick Fix:**
\`\`\`bash
npm install -g @openai/codex
\`\`\`

**Verification:** Run \`codex --version\` to confirm installation.`;
    }

    if (errorMessage.includes('authentication') || errorMessage.includes('unauthorized')) {
      return `❌ **Authentication Failed**: ${ERROR_MESSAGES.AUTHENTICATION_FAILED}

**Setup Options:**
1. **API Key:** \`export OPENAI_API_KEY=your-key\`
2. **Login:** \`codex login\` (requires ChatGPT subscription)

**Troubleshooting:** Verify key has Codex access in OpenAI dashboard.`;
    }

    if (errorMessage.includes('quota') || errorMessage.includes('rate limit')) {
      return `❌ **Usage Limit Reached**: ${ERROR_MESSAGES.QUOTA_EXCEEDED}

**Solutions:**
1. Wait and retry - rate limits reset periodically
2. Check quota in OpenAI dashboard`;
    }

    if (errorMessage.includes('timeout')) {
      return `❌ **Request Timeout**: Operation took longer than expected

**Solutions:**
1. Increase timeout: Add \`timeout: 300000\` (5 minutes)
2. Simplify request: Break complex queries into smaller parts`;
    }

    if (errorMessage.includes('sandbox') || errorMessage.includes('permission')) {
      // Enhanced debugging information
      const debugInfo = [
        `**Current Configuration:**`,
        `- yolo: ${yolo}`,
        `- fullAuto: ${fullAuto}`,
        `- sandbox: ${sandbox}`,
        `- sandboxMode: ${sandboxMode}`,
        `- approvalPolicy: ${approvalPolicy}`,
        `- search: ${search}`,
        `- oss: ${oss}`
      ].join('\n');

      return `❌ **Permission Error**: ${ERROR_MESSAGES.SANDBOX_VIOLATION}

${debugInfo}

**Root Cause:** This error typically occurs when:
1. \`approvalPolicy\` is set without \`sandboxMode\` (now auto-fixed in v1.2+)
2. Explicit \`sandboxMode: "read-only"\` blocks file modifications
3. Codex CLI defaults to restrictive permissions
4. **YOLO mode not working**: If yolo is true but still blocked, there may be a configuration conflict

**Solutions:**

**Option A - Explicit Control (Recommended):**
\`\`\`json
{
  "approvalPolicy": "on-failure",
  "sandboxMode": "workspace-write",
  "model": "gpt-5-codex",
  "prompt": "your task..."
}
\`\`\`

**Option B - Automated Mode:**
\`\`\`json
{
  "sandbox": true, // Enables fullAuto (workspace-write + on-failure)
  "model": "gpt-5-codex",
  "prompt": "your task..."
}
\`\`\`

**Option C - Full Bypass (⚠️ Use with caution):**
\`\`\`json
{
  "yolo": true,
  "model": "gpt-5-codex",
  "prompt": "your task..."
}
\`\`\`

**Debug Steps:**
1. Check if yolo mode is being overridden by other settings
2. Verify Codex CLI version supports yolo flag
3. Try using only yolo without other conflicting parameters

**Sandbox Modes:**
- \`read-only\`: Analysis only, no modifications
- \`workspace-write\`: Can edit files in workspace (safe for most tasks)
- \`danger-full-access\`: Full system access (use with caution)`;
    }

    // Generic error with context
    return `❌ **Codex Execution Error**: ${errorMessage}

**Debug Steps:**
1. Verify Codex CLI: \`codex --version\`
2. Check authentication: \`codex login\`
3. Try simpler query first`;
  }
},
```
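
The early return at the top of the handler is what makes chunked changeMode output resumable: when `chunkIndex` and `chunkCacheKey` are both supplied, the handler skips Codex execution entirely and serves the requested chunk from cache via processChangeModeOutput. A follow-up call might look like this (illustrative values; the cache key is whatever the first, truncated response reported, shown here as a placeholder):

```typescript
// First call: run the task and ask for structured OLD/NEW edits.
const firstCall = {
  prompt: '@src/ rename the Logger class to AppLogger across the project', // hypothetical task
  changeMode: true,
  sandbox: true,
};

// If the edit set is too large for one response, the result includes a cache key.
// Continuation call: no re-execution, just the next chunk from the cache.
const nextChunk = {
  prompt: '@src/ rename the Logger class to AppLogger across the project', // same prompt
  changeMode: true,
  chunkIndex: 2,                                      // 1-based; chunk 1 came with the first call
  chunkCacheKey: '<key from the previous response>',  // placeholder
};
```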
  • Comprehensive Zod schema defining all input parameters for the ask-codex tool, including prompt, model selection, safety controls (sandbox, approvalPolicy, yolo), changeMode options, images, config overrides, and feature flags.
```typescript
const askCodexArgsSchema = z.object({
  prompt: z
    .string()
    .min(1)
    .describe("Task or question. Use @ to include files (e.g., '@largefile.ts explain')."),
  model: z
    .string()
    .optional()
    .describe(`Model: ${Object.values(MODELS).join(', ')}. Default: gpt-5-codex`),
  sandbox: z
    .boolean()
    .default(false)
    .describe(
      'Quick automation mode: enables workspace-write + on-failure approval. Alias for fullAuto.'
    ),
  fullAuto: z.boolean().optional().describe('Full automation mode'),
  approvalPolicy: z
    .enum(['never', 'on-request', 'on-failure', 'untrusted'])
    .optional()
    .describe('Approval: never, on-request, on-failure, untrusted'),
  approval: z
    .string()
    .optional()
    .describe(`Approval policy: ${Object.values(APPROVAL_POLICIES).join(', ')}`),
  sandboxMode: z
    .enum(['read-only', 'workspace-write', 'danger-full-access'])
    .optional()
    .describe('Access: read-only, workspace-write, danger-full-access'),
  yolo: z.boolean().optional().describe('⚠️ Bypass all safety (dangerous)'),
  cd: z.string().optional().describe('Working directory'),
  workingDir: z.string().optional().describe('Working directory for execution'),
  changeMode: z
    .boolean()
    .default(false)
    .describe('Return structured OLD/NEW edits for refactoring'),
  chunkIndex: z
    .preprocess(val => {
      if (typeof val === 'number') return val;
      if (typeof val === 'string') {
        const parsed = parseInt(val, 10);
        return isNaN(parsed) ? undefined : parsed;
      }
      return undefined;
    }, z.number().min(1).optional())
    .describe('Chunk index (1-based)'),
  chunkCacheKey: z.string().optional().describe('Cache key for continuation'),
  image: z
    .union([z.string(), z.array(z.string())])
    .optional()
    .describe('Optional image file path(s) to include with the prompt'),
  config: z
    .union([z.string(), z.record(z.any())])
    .optional()
    .describe("Configuration overrides as 'key=value' string or object"),
  profile: z.string().optional().describe('Configuration profile to use from ~/.codex/config.toml'),
  timeout: z.number().optional().describe('Maximum execution time in milliseconds (optional)'),
  includeThinking: z
    .boolean()
    .default(true)
    .describe('Include reasoning/thinking section in response'),
  includeMetadata: z.boolean().default(true).describe('Include configuration metadata in response'),
  search: z
    .boolean()
    .optional()
    .describe(
      'Enable web search by activating web_search_request feature flag. Requires network access - automatically sets sandbox to workspace-write if not specified.'
    ),
  oss: z
    .boolean()
    .optional()
    .describe(
      'Use local Ollama server (convenience for -c model_provider=oss). Requires Ollama running locally. Automatically sets sandbox to workspace-write if not specified.'
    ),
  enableFeatures: z
    .array(z.string())
    .optional()
    .describe('Enable feature flags (repeatable). Equivalent to -c features.<name>=true'),
  disableFeatures: z
    .array(z.string())
    .optional()
    .describe('Disable feature flags (repeatable). Equivalent to -c features.<name>=false'),
});
```
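
Two details of this schema are easy to miss: `chunkIndex` accepts numeric strings (the preprocessor coerces them), and the boolean defaults are filled in at parse time. A quick standalone check, assuming the schema is exported from the module above:

```typescript
// Assumes askCodexArgsSchema is exported from the module that defines it above.
const ok = askCodexArgsSchema.safeParse({
  prompt: '@README.md summarize the setup steps',
  chunkIndex: '2', // numeric string → coerced to the number 2 by the preprocessor
});
if (ok.success) {
  console.log(ok.data.chunkIndex);      // 2
  console.log(ok.data.sandbox);         // false (schema default)
  console.log(ok.data.includeThinking); // true (schema default)
}

const bad = askCodexArgsSchema.safeParse({ prompt: '' });
console.log(bad.success); // false – prompt requires at least one character
```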
  • Registers the askCodexTool by importing it and pushing it to the central toolRegistry array.
```typescript
import { askCodexTool } from './ask-codex.tool.js';
import { batchCodexTool } from './batch-codex.tool.js';
// import { reviewCodexTool } from './review-codex.tool.js';
import { pingTool, helpTool, versionTool } from './simple-tools.js';
import { brainstormTool } from './brainstorm.tool.js';
import { fetchChunkTool } from './fetch-chunk.tool.js';
import { timeoutTestTool } from './timeout-test.tool.js';

toolRegistry.push(
  askCodexTool,
  batchCodexTool,
  // reviewCodexTool,
  pingTool,
  helpTool,
  versionTool,
  brainstormTool,
  fetchChunkTool,
  timeoutTestTool
);
```
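
Since the registry is a plain array, dispatching a tools/call request is just a lookup by name. The `Tool` shape below is a hypothetical minimal version inferred from the `execute(args, onProgress)` signature shown earlier, not the project's actual type definitions:

```typescript
// Hypothetical minimal tool shape; the real project defines its own types.
interface Tool {
  name: string;
  description: string;
  execute: (args: Record<string, unknown>, onProgress?: (msg: string) => void) => Promise<unknown>;
}

const toolRegistry: Tool[] = [];

// Dispatch a tools/call request by tool name.
async function callTool(name: string, args: Record<string, unknown>) {
  const tool = toolRegistry.find(t => t.name === name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  // Progress messages go to stderr so they don't interfere with the protocol stream.
  return tool.execute(args, msg => console.error(`[${name}] ${msg}`));
}
```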
