xiaolai

claude-octopus

claude_code_report

Generate HTML reports of workflow runs. For a specific run ID, get agent sequence, cost breakdown, and collapsible transcripts. Omit run ID to list all runs.

Instructions

Generate a self-contained HTML report of a workflow run. No args: list all runs. run_id: detailed report for that run with agent sequence, cost breakdown, and collapsible transcripts. Save the returned HTML to a file and open in a browser.

Input Schema

Name                | Required | Default | Description
run_id              | No       | —       | Generate detailed report for this run. Omit to list all runs.
include_transcripts | No       | true    | Include full session transcripts in run reports (requires session persistence).

Implementation Reference

  • The async handler function for the 'claude_code_report' tool. It calls generateReport() with the timeline directory, optional run_id, and include_transcripts flag, returning the HTML result or an error message.
    }, async ({ run_id, include_transcripts }) => {
      try {
        const html = await generateReport({
          timelineDir: timelineConfig.dir,
          runId: run_id,
          includeTranscripts: persistSession && (include_transcripts !== false),
        });
        return {
          content: [{
            type: "text" as const,
            text: html,
          }],
        };
      } catch (error) {
        return {
          content: [{
            type: "text" as const,
            text: `Error generating report: ${error instanceof Error ? error.message : String(error)}`,
          }],
          isError: true,
        };
      }
    });
  • Input schema for the report tool: optional run_id (string) and include_transcripts (boolean, default true if session persistence is enabled).
    inputSchema: z.object({
      run_id: z.string().optional().describe("Generate detailed report for this run. Omit to list all runs."),
      include_transcripts: z.boolean().optional().describe("Include full session transcripts in run reports (default: true, requires session persistence)"),
    }),
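  Note that the handler does not use the raw include_transcripts value directly: the effective value is `persistSession && (include_transcripts !== false)`, so an omitted parameter behaves like true, but transcripts are only ever included when session persistence is enabled. A small sketch of that gating, isolated as a pure function:

    ```typescript
    // Mirrors the gating in the handler: transcripts default to true,
    // but only take effect when session persistence is enabled.
    function effectiveIncludeTranscripts(
      persistSession: boolean,
      includeTranscripts?: boolean,
    ): boolean {
      return persistSession && includeTranscripts !== false;
    }

    // undefined behaves like true (the documented default) ...
    console.log(effectiveIncludeTranscripts(true, undefined)); // true
    // ... an explicit false always wins ...
    console.log(effectiveIncludeTranscripts(true, false));     // false
    // ... and without persistence, transcripts are never included.
    console.log(effectiveIncludeTranscripts(false, true));     // false
    ```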
  • Registration function registerReportTool that registers the tool with name `${toolName}_report` (e.g. 'claude_code_report') on the MCP server.
    export function registerReportTool(
      server: McpServer,
      toolName: string,
      timelineConfig: TimelineConfig,
      persistSession: boolean,
    ) {
      server.registerTool(`${toolName}_report`, {
        description: [
          "Generate a self-contained HTML report of a workflow run.",
          "No args: list all runs. run_id: detailed report for that run",
          "with agent sequence, cost breakdown, and collapsible transcripts.",
          "Save the returned HTML to a file and open in a browser.",
        ].join(" "),
        inputSchema: z.object({
          run_id: z.string().optional().describe("Generate detailed report for this run. Omit to list all runs."),
          include_transcripts: z.boolean().optional().describe("Include full session transcripts in run reports (default: true, requires session persistence)"),
        }),
      }, async ({ run_id, include_transcripts }) => {
        try {
          const html = await generateReport({
            timelineDir: timelineConfig.dir,
            runId: run_id,
            includeTranscripts: persistSession && (include_transcripts !== false),
          });
          return {
            content: [{
              type: "text" as const,
              text: html,
            }],
          };
        } catch (error) {
          return {
            content: [{
              type: "text" as const,
              text: `Error generating report: ${error instanceof Error ? error.message : String(error)}`,
            }],
            isError: true,
          };
        }
      });
    }
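  The derived tool name is purely a template-literal suffix on the base name. A sketch of that naming behavior in isolation, with a stub standing in for the real McpServer (the stub and its registered array are illustrative, not part of the actual server API):

    ```typescript
    // Minimal stub standing in for McpServer, just enough to observe
    // the name that the registration pattern derives.
    const registered: string[] = [];
    const server = {
      registerTool(name: string, _meta: unknown, _handler: unknown): void {
        registered.push(name);
      },
    };

    // The naming pattern from the reference: the report tool's name is
    // the base tool name plus a "_report" suffix.
    function registerReportToolName(srv: typeof server, toolName: string): void {
      srv.registerTool(`${toolName}_report`, {}, async () => ({}));
    }

    registerReportToolName(server, "claude_code");
    console.log(registered[0]); // "claude_code_report"
    ```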
  • The generateReport() function that dispatches to generateRunReport() or generateIndexReport() depending on whether a runId is provided.
    export async function generateReport(opts: ReportOptions): Promise<string> {
      if (opts.runId) {
        return generateRunReport({ ...opts, runId: opts.runId });
      }
      return generateIndexReport(opts);
    }
  • generateRunReport() loads timeline entries for a specific run and renders the full HTML report with agent sequence, cost breakdown, and collapsible transcripts.
    export async function generateRunReport(opts: ReportOptions & { runId: string }): Promise<string> {
      const run = await loadRunReport(opts.timelineDir, opts.runId, opts.includeTranscripts ?? true);
      if (!run) {
        return wrapHtml(
          "Run Not Found",
          `<h1>Run Not Found</h1><p>No timeline entries found for run <code>${esc(opts.runId)}</code>.</p>`,
        );
      }
    
      const body = [
        `<h1>Claude Octopus — Run Report</h1>`,
        `<p class="subtitle">Generated ${new Date().toISOString()}</p>`,
        renderRun(run),
      ].join("\n");
    
      return wrapHtml(`Run: ${opts.runId}`, body);
    }
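  This snippet relies on two helpers whose implementations are not shown in the reference: `esc()` and `wrapHtml()`. The following is a plausible minimal sketch, with names and behavior inferred purely from their usage above:

    ```typescript
    // Assumed HTML-escaping helper, inferred from the esc(opts.runId) call.
    function esc(s: string): string {
      return s
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;");
    }

    // Assumed page wrapper, inferred from wrapHtml(title, body): produces a
    // self-contained HTML document with the given title and body markup.
    function wrapHtml(title: string, body: string): string {
      return [
        "<!doctype html>",
        `<html><head><meta charset="utf-8"><title>${esc(title)}</title></head>`,
        `<body>${body}</body></html>`,
      ].join("\n");
    }

    console.log(wrapHtml("Run: abc-123", "<h1>Run Not Found</h1>"));
    ```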
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses what the tool produces: an HTML report, and, for a detailed report, the agent sequence, cost breakdown, and transcripts. Notes the session-persistence dependency for include_transcripts. No structured annotations exist, so the description carries the full burden; it does so well and without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: the first states the purpose; the second covers both parameter modes and how to handle the output. No redundancy, and the key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no output schema, but the description explains the return value (an HTML string) and what to do with it. It covers both parameter use cases, which is sufficient for a simple two-parameter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Every schema field has a description (100% coverage, which sets the baseline score of 3). The tool description adds value by elaborating on the report contents for run_id and explaining the default for include_transcripts.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it generates self-contained HTML reports of workflow runs. Distinguishes two modes: without run_id lists all runs; with run_id provides detailed report including agent sequence, cost breakdown, and collapsible transcripts. Distinct from sibling tools like claude_code_reply or claude_code_transcript.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly describes when to use each parameter: omit run_id to list runs; provide it for a detailed report. Mentions the prerequisite for include_transcripts (session persistence). There is no explicit when-not-to-use guidance, but the two modes are clearly differentiated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
