
pine_read_range

Read a contiguous range of bytes from emulated memory and return them as an array of integers. Uses aligned loads for efficiency, up to 4096 bytes per call.

Instructions

Read a contiguous range of bytes from emulated memory and return them as an array of integers. Implemented client-side as a pipelined sequence of PINE read64/32/16/8 calls (PINE has no native bulk-read), choosing the largest aligned load at each step. Maximum 4096 bytes per call. Slower than mGBA's native readRange but fast enough for cheat-table refresh and small struct dumps over loopback.
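The "largest aligned load at each step" rule can be sketched in isolation. The function below is an illustrative reconstruction of the scheduling logic shown later in `readRange()`; the name `planReads` is made up for this sketch:

```typescript
// Sketch of the alignment schedule: pick the widest load (8/4/2/1 bytes)
// that both starts on its own alignment boundary and fits in what remains.
type Width = 1 | 2 | 4 | 8;

function planReads(addr: number, length: number): Width[] {
  const plan: Width[] = [];
  let cursor = addr;
  let remaining = length;
  while (remaining > 0) {
    let w: Width;
    if (cursor % 8 === 0 && remaining >= 8) w = 8;
    else if (cursor % 4 === 0 && remaining >= 4) w = 4;
    else if (cursor % 2 === 0 && remaining >= 2) w = 2;
    else w = 1;
    plan.push(w);
    cursor += w;
    remaining -= w;
  }
  return plan;
}

// A misaligned 13-byte read at 0x1003 becomes one read8, one read32, one read64:
console.log(planReads(0x1003, 13)); // [1, 4, 8]
```

A fully aligned 4096-byte read collapses to 512 read64 calls, which is why alignment matters for throughput.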

Input Schema

| Name    | Required | Description             | Default |
|---------|----------|-------------------------|---------|
| address | Yes      | Start address           | —       |
| length  | Yes      | Number of bytes to read | —       |
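A hypothetical arguments object for this tool, together with the bounds check implied by the schema (the address value is illustrative, not a documented memory location):

```typescript
// Illustrative arguments for pine_read_range.
const args = { address: 0x2000a000, length: 16 };

// The schema's constraint on length (integer, 1-4096) as a runtime check:
const lengthOk =
  Number.isInteger(args.length) && args.length >= 1 && args.length <= 4096;
console.log(lengthOk); // true
```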

Implementation Reference

  • The handler case for 'pine_read_range' in the CallToolRequestSchema switch statement. Calls pine.readRange() with address and length, formats the returned bytes as hex, and returns them as text.
    case "pine_read_range": {
      const bytes = await pine.readRange(p.address as number, p.length as number);
      const hex = Array.from(bytes)
        .map((b) => b.toString(16).padStart(2, "0").toUpperCase())
        .join(" ");
      return ok(`${addrHex(p.address as number)} [${bytes.length} bytes]:\n${hex}`);
    }
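The hex formatting in the handler can be exercised on its own: given sample bytes, the same expression produces uppercase, space-separated, zero-padded pairs:

```typescript
// Same formatting expression as the handler, applied to sample bytes.
// Note padStart ensures single-digit values like 0x01 render as "01".
const bytes = Uint8Array.from([0xde, 0xad, 0x01, 0xef]);
const hex = Array.from(bytes)
  .map((b) => b.toString(16).padStart(2, "0").toUpperCase())
  .join(" ");
console.log(hex); // "DE AD 01 EF"
```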
  • The Tool schema/definition for 'pine_read_range', including its name, description, and inputSchema requiring 'address' (integer) and 'length' (integer, 1-4096).
    {
      name: "pine_read_range",
      description: "Read a contiguous range of bytes from emulated memory and return them as an array of integers. Implemented client-side as a pipelined sequence of PINE read64/32/16/8 calls (PINE has no native bulk-read), choosing the largest aligned load at each step. Maximum 4096 bytes per call. Slower than mGBA's native readRange but fast enough for cheat-table refresh and small struct dumps over loopback.",
      inputSchema: {
        type: "object",
        required: ["address", "length"],
        properties: {
          address: { type: "integer", description: "Start address" },
          length:  { type: "integer", minimum: 1, maximum: 4096, description: "Number of bytes to read" },
        },
      },
    },
  • src/tools.ts:171-172 (registration)
    Registration of all tools via ListToolsRequestSchema handler, which returns the TOOLS array containing the 'pine_read_range' schema.
    export function registerTools(server: Server, pine: PineClient): void {
      server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools: TOOLS }));
  • The readRange() method on PineClient that implements bulk memory reads by issuing pipelined read64/32/16/8 calls, choosing the largest aligned load at each step. Includes batching logic (PINE_PIPELINE_BATCH env var) to avoid PCSX2's fragile request queue.
    /**
     * Bulk read — PINE has no native range read, so we issue the largest aligned
     * load at each step (read64 on 8-byte boundaries, falling back to 32/16/8).
     *
 * IMPORTANT: PCSX2's PINE server silently drops requests when too many are
 * pipelined — empirically, ~7-9 in-flight requests is the limit before drops
 * start. Dropped replies leave the client mis-aligned (the next reply gets
 * decoded as the wrong type), so the default is fully serial reads; raising
 * PINE_PIPELINE_BATCH pipelines in small batches, awaiting each batch fully
 * before sending the next. Slower than full pipelining but reliable.
     */
    async readRange(addr: number, length: number): Promise<Uint8Array> {
      if (length <= 0)        throw new Error("length must be positive");
      if (length > 4096)      throw new Error("length exceeds 4096 byte limit");
    
      const out = new Uint8Array(length);
    
      // Build the schedule of reads covering [addr, addr+length)
      type Step = { op: 1 | 2 | 4 | 8; addr: number; outOffset: number };
      const steps: Step[] = [];
      let cursor = addr;
      let outOffset = 0;
      let remaining = length;
      while (remaining > 0) {
        let n: 1 | 2 | 4 | 8;
        if      (cursor % 8 === 0 && remaining >= 8) n = 8;
        else if (cursor % 4 === 0 && remaining >= 4) n = 4;
        else if (cursor % 2 === 0 && remaining >= 2) n = 2;
        else                                          n = 1;
        steps.push({ op: n, addr: cursor, outOffset });
        cursor    += n;
        outOffset += n;
        remaining -= n;
      }
    
      // PCSX2's PINE server has a fragile request queue: dropping ANY request
      // (which it does silently when in-flight load is too high) leaves the
      // server's reply pipeline desynced and ALL subsequent requests time out
      // until the emulator is restarted. We've seen drops at as few as ~7 mixed
      // in-flight requests. So the safe default is fully serial. Loopback TCP
      // turns out to be fast enough that this isn't actually a problem —
      // measured ~52 ms for a full 4096-byte read against PCSX2 v2.6.3, less
      // than two emulated frames. Override via env var if you trust your
      // specific emulator's PINE implementation to be more robust than PCSX2's.
      const PIPELINE_BATCH = Number.parseInt(process.env.PINE_PIPELINE_BATCH ?? "1", 10) || 1;
      const splatInto = (op: 1|2|4|8, off: number, v: number | bigint) => {
        if (op === 1)      out[off] = v as number;
        else if (op === 2) { out[off] = (v as number) & 0xFF; out[off+1] = ((v as number) >> 8) & 0xFF; }
        else if (op === 4) {
          const n = v as number;
          out[off]   =  n        & 0xFF;
          out[off+1] = (n >>  8) & 0xFF;
          out[off+2] = (n >> 16) & 0xFF;
          out[off+3] = (n >> 24) & 0xFF;
        } else {
          const n = v as bigint;
          for (let j = 0; j < 8; j++) out[off + j] = Number((n >> BigInt(8 * j)) & 0xFFn);
        }
      };
    
      for (let i = 0; i < steps.length; i += PIPELINE_BATCH) {
        const batch = steps.slice(i, i + PIPELINE_BATCH);
        const promises = batch.map((s) =>
          s.op === 8 ? this.read64(s.addr) :
          s.op === 4 ? this.read32(s.addr) :
          s.op === 2 ? this.read16(s.addr) :
                       this.read8 (s.addr)
        );
        const results = await Promise.all(promises);
        for (let j = 0; j < batch.length; j++) {
          splatInto(batch[j].op, batch[j].outOffset, results[j]);
        }
      }
    
      return out;
    }
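The byte-splatting step above assumes little-endian replies. A self-contained sketch of the 8-byte case (mirroring the bigint branch of `splatInto`; the helper name is made up for this example):

```typescript
// Decompose a 64-bit little-endian value, as returned by read64, into
// individual bytes — the same shift-and-mask loop as the op === 8 branch.
function splat64(v: bigint): Uint8Array {
  const out = new Uint8Array(8);
  for (let j = 0; j < 8; j++) out[j] = Number((v >> BigInt(8 * j)) & 0xffn);
  return out;
}

// 0x0807060504030201 stored little-endian: lowest-order byte comes first.
console.log(Array.from(splat64(0x0807060504030201n))); // [1, 2, 3, 4, 5, 6, 7, 8]
```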
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully explains behavior: implemented as pipelined read8/16/32/64 calls with aligned loads, max 4096 bytes per call, and performance implications. This provides rich context for the agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is fairly concise, with the core purpose stated first and implementation details after. Every sentence adds value, though it could be slightly shorter.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description adequately explains return type (array of integers) and constraints (max 4096 bytes). It lacks error handling details but is sufficient for a read tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with clear descriptions for address and length. The description adds no additional parameter-level detail, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool reads a contiguous range of bytes from emulated memory and returns an array of integers. It distinguishes from sibling tools (which read single values) by specifying bulk-read functionality and maximum size.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context on when to use the tool (cheat-table refresh, small struct dumps) and notes the performance trade-off versus mGBA's native readRange. However, it does not explicitly state when not to use it, or compare alternatives beyond noting that its sibling tools are single-value reads.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
