Glama

read_agent_output

Extract structured content from agent terminals using delimiter markers like OUTPUT_START/OUTPUT_END to retrieve specific output sections for processing.

Instructions

Extract structured output from an agent's terminal between delimiter markers (e.g., REVIEW_OUTPUT_START / REVIEW_OUTPUT_END). Returns the content between the markers, or null if not found.
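The tool assumes the agent has already printed its section wrapped in matching markers. As an illustrative sketch (the marker names follow the `{TAG}_START`/`{TAG}_END` convention described below; the content is invented), an agent would emit something like:

```typescript
// Hypothetical: the text an agent writes to its terminal so that
// read_agent_output can later locate and extract the section.
const body = "Summary: all tests pass.";
const lines = ["REVIEW_OUTPUT_START", body, "REVIEW_OUTPUT_END"];
const emitted = lines.join("\n");
console.log(emitted);
```

Anything printed outside the marker pair is ignored by the extraction.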

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| surface | Yes | Target surface ref (e.g., 'surface:78') | — |
| tag | No | Delimiter tag name. Looks for {TAG}_START and {TAG}_END markers (e.g., REVIEW_OUTPUT, SYNTHESIS_OUTPUT, PUSHBACK_OUTPUT). | OUTPUT |
| lines | No | Number of screen lines to scan | 200 |
| workspace | No | Target workspace ref | — |
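As a rough sketch of a call and its result (field names come from the schema above; the surrounding MCP client transport is omitted and all values are illustrative):

```typescript
// Hypothetical example: argument and result shapes for read_agent_output.
const exampleArgs = {
  surface: "surface:78",  // required target surface ref
  tag: "REVIEW_OUTPUT",   // scans for REVIEW_OUTPUT_START / REVIEW_OUTPUT_END
  lines: 400,             // scan more scrollback than the 200-line default
};

// Shape of a successful result, per the handler below:
const exampleResult = {
  found: true,
  tag: "REVIEW_OUTPUT",
  surface: "surface:78",
  content: "LGTM with two nits: rename foo; add a test.",
};

console.log(exampleResult.found, exampleResult.content);
```

When the markers are absent, `found` is `false` and `content` is `null`.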

Implementation Reference

  • The handler function that reads the terminal screen and extracts content between the specified markers.
    async (args) => {
      try {
        const opts: Record<string, unknown> = {
          lines: args.lines,
          scrollback: true,
        };
        if (args.workspace) opts.workspace = args.workspace;
    
        const raw = await client.readScreen(args.surface, opts);
        // readScreen may return a plain string or an object with a content field
        const text =
          typeof raw === "string"
            ? raw
            : ((raw as { content?: string }).content ?? "");
    
        const startMarker = `${args.tag}_START`;
        const endMarker = `${args.tag}_END`;
    
        const startIdx = text.indexOf(startMarker);
        const endIdx = text.indexOf(endMarker);
    
        // Markers missing or out of order: report not-found rather than erroring
        if (startIdx === -1 || endIdx === -1 || endIdx <= startIdx) {
          return ok({
            found: false,
            tag: args.tag,
            surface: args.surface,
            content: null,
          });
        }
    
        const content = text
          .slice(startIdx + startMarker.length, endIdx)
          .trim();
    
        return ok({
          found: true,
          tag: args.tag,
          surface: args.surface,
          content,
        });
      } catch (e) {
        return err(e);
      }
    },
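The extraction logic above can be exercised in isolation. A minimal standalone sketch (the `client`, `ok`, and `err` helpers from the handler are omitted; the screen text is invented):

```typescript
// Standalone sketch of the marker extraction performed by the handler above.
function extractBetween(text: string, tag: string): string | null {
  const startMarker = `${tag}_START`;
  const endMarker = `${tag}_END`;
  const startIdx = text.indexOf(startMarker);
  const endIdx = text.indexOf(endMarker);
  // Missing or out-of-order markers yield null, mirroring found: false
  if (startIdx === -1 || endIdx === -1 || endIdx <= startIdx) return null;
  return text.slice(startIdx + startMarker.length, endIdx).trim();
}

const screen =
  "shell noise\nREVIEW_OUTPUT_START\nLGTM, two nits.\nREVIEW_OUTPUT_END\n$";
console.log(extractBetween(screen, "REVIEW_OUTPUT")); // → "LGTM, two nits."
console.log(extractBetween(screen, "SYNTHESIS_OUTPUT")); // → null
```

Note that only the first occurrence of each marker is used, so a terminal containing several delimited sections returns the earliest one in the scanned window.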
  • Input validation schema for the read_agent_output tool.
    {
      surface: z.string().describe("Target surface ref (e.g., 'surface:78')"),
      tag: z
        .string()
        .optional()
        .default("OUTPUT")
        .describe(
          "Delimiter tag name. Looks for {TAG}_START and {TAG}_END markers. Default: OUTPUT (matches OUTPUT_START/OUTPUT_END). Examples: REVIEW_OUTPUT, SYNTHESIS_OUTPUT, PUSHBACK_OUTPUT",
        ),
      lines: z
        .number()
        .optional()
        .default(200)
        .describe("Number of screen lines to scan (default: 200)"),
      workspace: z.string().optional().describe("Target workspace ref"),
    },
  • src/server.ts:887-905 (registration)
    Tool registration for read_agent_output.
    server.tool(
      "read_agent_output",
      "Extract structured output from an agent's terminal between delimiter markers (e.g., REVIEW_OUTPUT_START / REVIEW_OUTPUT_END). Returns the content between the markers, or null if not found.",
      {
        surface: z.string().describe("Target surface ref (e.g., 'surface:78')"),
        tag: z
          .string()
          .optional()
          .default("OUTPUT")
          .describe(
            "Delimiter tag name. Looks for {TAG}_START and {TAG}_END markers. Default: OUTPUT (matches OUTPUT_START/OUTPUT_END). Examples: REVIEW_OUTPUT, SYNTHESIS_OUTPUT, PUSHBACK_OUTPUT",
          ),
        lines: z
          .number()
          .optional()
          .default(200)
          .describe("Number of screen lines to scan (default: 200)"),
        workspace: z.string().optional().describe("Target workspace ref"),
      },
      // ...the handler function (shown above) is the final argument...
    );
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the extraction behavior and return value (content between markers or null if not found), but doesn't mention error conditions, performance characteristics, or what happens if markers are malformed. It provides basic operational context but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise: a single sentence that states the purpose, mechanism, and return behavior with zero wasted words. It's front-loaded with the core functionality and efficiently communicates essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters, 100% schema coverage, but no annotations or output schema, the description provides adequate basic context about what the tool does and its return behavior. However, it lacks information about error handling, performance considerations, or detailed behavioral traits that would be helpful for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all parameters. The description mentions delimiter markers and the null return case, but doesn't add significant semantic context beyond what the schema provides about parameters like 'surface', 'tag', 'lines', or 'workspace'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('extract structured output'), target resource ('from an agent's terminal'), and mechanism ('between delimiter markers'). It distinguishes from siblings like 'read_screen' (general screen reading) and 'get_agent_state' (state monitoring) by focusing on delimited content extraction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (when needing to extract content between specific delimiter markers in an agent's terminal). However, it doesn't explicitly mention when NOT to use it or name specific alternative tools for different extraction scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
