codex_reply

Continue code conversations using thread IDs for follow-up questions, iterative refinement, or multi-step workflows that build on prior context.

Instructions

Continue a previous codex conversation by thread ID. Use this for follow-up questions, iterative refinement, or multi-step workflows that build on prior context.

Input Schema

Name       Required  Description                             Default
thread_id  Yes       Thread ID from a prior codex response
prompt     Yes       Follow-up instruction or question
cwd        No        Working directory override
model      No        Model override
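
For reference, a reply invocation's arguments might be shaped like this (the thread ID and model name below are placeholders, not values from this page):

```typescript
// Hypothetical arguments for a codex_reply tool call. thread_id must be the
// threadId returned by a prior codex response; cwd and model are optional.
const replyArgs = {
  thread_id: "thread_abc123",                      // placeholder ID
  prompt: "Now add unit tests for that function.",
  cwd: "packages/server",                          // optional working-directory override
  model: "gpt-5-codex",                            // optional model override (example name)
};
```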

Implementation Reference

  • The tool "[toolName]_reply" (e.g., codex_reply) is dynamically registered in `registerQueryTools`. It uses `runQuery` to continue a thread.
    // replyToolName is `${toolName}_reply` (e.g. "codex_reply"); `z` is the zod schema builder.
    if (baseConfig.persistSession !== false) {
      server.registerTool(replyToolName, {
        description: [
          `Continue a previous ${toolName} conversation by thread ID.`,
          "Use this for follow-up questions, iterative refinement,",
          "or multi-step workflows that build on prior context.",
        ].join(" "),
        inputSchema: z.object({
          thread_id: z.string().describe(`Thread ID from a prior ${toolName} response`),
          prompt: z.string().describe("Follow-up instruction or question"),
          cwd: z.string().optional().describe("Working directory override"),
          model: z.string().optional().describe("Model override"),
        }),
      }, async ({ thread_id, prompt, cwd, model }) => {
        try {
          const result = await runQuery(prompt, {
            cwd, model, resumeThreadId: thread_id,
          }, baseConfig, apiKey);
          return formatResult(result);
        } catch (error) {
          return formatError(error);
        }
      });
    }
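    The `formatResult` and `formatError` helpers are referenced above but not shown on this page. A minimal sketch, assuming the standard MCP tool-result shape (a `content` array of text parts) and the `RunResult` type returned by `runQuery`, might look like:

    ```typescript
    // Hypothetical sketches of formatResult/formatError; the actual
    // implementations are not shown on this page.
    interface RunResult {
      threadId: string;
      response: string;
      usage: { input_tokens: number; cached_input_tokens: number; output_tokens: number };
      isError: boolean;
    }

    function formatResult(result: RunResult) {
      // Include the thread ID in the text so the caller can continue the
      // conversation with a later codex_reply call.
      return {
        content: [{ type: "text" as const, text: `[thread: ${result.threadId}]\n${result.response}` }],
        isError: result.isError,
      };
    }

    function formatError(error: unknown) {
      const message = error instanceof Error ? error.message : String(error);
      return {
        content: [{ type: "text" as const, text: `Error: ${message}` }],
        isError: true,
      };
    }
    ```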
  • The `runQuery` function handles the execution for both initial queries and reply/follow-up calls by checking `overrides.resumeThreadId`.
    // Assumes `resolve` from "node:path" and the Codex SDK client class are in scope.
    async function runQuery(
      prompt: string,
      overrides: InvocationOverrides,
      baseConfig: ThreadConfig,
      apiKey?: string
    ): Promise<RunResult> {
      const codex = new Codex({
        ...(apiKey ? { apiKey } : {}),
      });
    
      const threadOptions: Record<string, unknown> = {};
    
      // Model
      const model = overrides.model || baseConfig.model;
      if (model) threadOptions.model = model;
    
      // Working directory — accept any path, preserve agent's base access
      let cwd = baseConfig.cwd || process.cwd();
      if (overrides.cwd) {
        const resolvedCwd = resolve(cwd, overrides.cwd);
        if (resolvedCwd !== cwd) {
          const dirs = new Set(baseConfig.additionalDirectories || []);
          dirs.add(cwd);
          threadOptions.additionalDirectories = [...dirs];
          cwd = resolvedCwd;
        }
      }
      threadOptions.workingDirectory = cwd;
    
      // Per-invocation additionalDirs — unions with server-level + auto-added dirs
      if (overrides.additionalDirs?.length) {
        const existing = (threadOptions.additionalDirectories as string[]) || baseConfig.additionalDirectories || [];
        const dirs = new Set(existing);
        for (const dir of overrides.additionalDirs) {
          dirs.add(dir);
        }
        threadOptions.additionalDirectories = [...dirs];
      } else if (baseConfig.additionalDirectories && !threadOptions.additionalDirectories) {
        threadOptions.additionalDirectories = baseConfig.additionalDirectories;
      }
    
      // Sandbox mode — can only tighten
      const baseSandbox = baseConfig.sandboxMode || "read-only";
      if (overrides.sandboxMode) {
        threadOptions.sandboxMode = narrowSandboxMode(baseSandbox, overrides.sandboxMode);
      } else {
        threadOptions.sandboxMode = baseSandbox;
      }
    
      // Approval policy — can only tighten
      const baseApproval = baseConfig.approvalPolicy || "on-failure";
      if (overrides.approvalPolicy) {
        threadOptions.approvalPolicy = narrowApprovalPolicy(baseApproval, overrides.approvalPolicy);
      } else {
        threadOptions.approvalPolicy = baseApproval;
      }
    
      // Effort
      const effort = overrides.effort || baseConfig.effort;
      if (effort) threadOptions.modelReasoningEffort = effort;
    
      // Network access
      if (overrides.networkAccess !== undefined) {
        threadOptions.networkAccessEnabled = overrides.networkAccess;
      } else if (baseConfig.networkAccess !== undefined) {
        threadOptions.networkAccessEnabled = baseConfig.networkAccess;
      }
    
      // Web search
      const webSearch = overrides.webSearchMode || baseConfig.webSearchMode;
      if (webSearch) threadOptions.webSearchMode = webSearch;
    
      // Start or resume thread
      const thread = overrides.resumeThreadId
        ? codex.resumeThread(overrides.resumeThreadId, threadOptions)
        : codex.startThread(threadOptions);
    
      // Build prompt with instructions
      let fullPrompt = prompt;
      if (!overrides.resumeThreadId) {
        const instructions = overrides.instructions || baseConfig.instructions;
        const appendInstructions = baseConfig.appendInstructions;
        if (instructions) {
          fullPrompt = `${instructions}\n\n${prompt}`;
        } else if (appendInstructions) {
          // Appended after the prompt, as the option name implies.
          fullPrompt = `${prompt}\n\n${appendInstructions}`;
        }
      }
    
      const turn = await thread.run(fullPrompt);
    
      return {
        threadId: thread.id || "",
        response: turn.finalResponse || "",
        usage: turn.usage || { input_tokens: 0, cached_input_tokens: 0, output_tokens: 0 },
        isError: false,
      };
    }
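  • The `narrowSandboxMode` and `narrowApprovalPolicy` helpers are likewise referenced but not shown. A sketch of the "can only tighten" rule, assuming each setting's values are ordered from most to least restrictive (only "read-only" and "on-failure" appear in the excerpt above; the other value names are assumptions):

    ```typescript
    // Hypothetical tighten-only helpers: a per-call override may move toward
    // the restrictive end of the ordering but never past the base setting.
    const SANDBOX_ORDER = ["read-only", "workspace-write", "danger-full-access"]; // restrictive to permissive
    const APPROVAL_ORDER = ["untrusted", "on-failure", "on-request", "never"];    // restrictive to permissive

    function narrowTo(order: string[], base: string, requested: string): string {
      const baseIdx = order.indexOf(base);
      const reqIdx = order.indexOf(requested);
      if (baseIdx === -1 || reqIdx === -1) return base; // unknown value: keep the base
      return reqIdx <= baseIdx ? requested : base;      // only allow tightening
    }

    const narrowSandboxMode = (base: string, requested: string) =>
      narrowTo(SANDBOX_ORDER, base, requested);
    const narrowApprovalPolicy = (base: string, requested: string) =>
      narrowTo(APPROVAL_ORDER, base, requested);
    ```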