
bizhawk_ping

Confirm that the BizHawk Lua bridge is active and responsive via TCP socket. Start with this check to ensure subsequent tool calls succeed.

Instructions

PURPOSE: Verify that the BizHawk Lua bridge is connected and responding to RPC over the TCP socket. USAGE: Call this once at start-of-session before issuing other tool calls; if it succeeds, every other tool will work. BEHAVIOR: No side effects — pure liveness probe. Times out after ~10 seconds with a clear error if BizHawk isn't running, isn't pointed at the right host:port, or hasn't loaded lua/bridge.lua via Tools → Lua Console. RETURNS: The literal string 'pong' on success.
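The start-of-session check described above can be sketched as a small guard. This is an illustrative sketch, not code from the server: the `RpcCall` type and `ensureBridgeAlive` name are hypothetical stand-ins for whatever RPC helper a client exposes (e.g. the server's `bh.call`).

```typescript
// Hypothetical start-of-session guard. `call` stands in for the client's
// RPC helper; the name and signature here are illustrative, not the
// actual API of this server.
type RpcCall = (method: string) => Promise<unknown>;

async function ensureBridgeAlive(call: RpcCall): Promise<void> {
  const reply = await call("ping");
  // Per the tool description, success is the literal string 'pong'.
  if (reply !== "pong") {
    throw new Error(
      `BizHawk bridge not responding: got ${JSON.stringify(reply)}`,
    );
  }
}
```

On a healthy connection the guard resolves silently; any other reply (or a timeout surfacing from the underlying call) fails fast before other tools are used.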

Input Schema


No arguments

Implementation Reference

  • src/tools.ts:48-59 (registration)
    The tool 'bizhawk_ping' is registered in the TOOLS array with its name, description, and inputSchema (no parameters).
    const TOOLS: Tool[] = [
      // ── Connectivity & introspection ────────────────────────────────────────
    
      {
        name: "bizhawk_ping",
        description:
          "PURPOSE: Verify that the BizHawk Lua bridge is connected and responding to RPC over the TCP socket. " +
          "USAGE: Call this once at start-of-session before issuing other tool calls; if it succeeds, every other tool will work. " +
          "BEHAVIOR: No side effects — pure liveness probe. Times out after ~10 seconds with a clear error if BizHawk isn't running, isn't pointed at the right host:port, or hasn't loaded lua/bridge.lua via Tools → Lua Console. " +
          "RETURNS: The literal string 'pong' on success.",
        inputSchema: { type: "object", properties: {} },
      },
  • The handler for 'bizhawk_ping' — calls bh.call('ping') and returns the result (expects literal string 'pong' on success).
    case "bizhawk_ping": {
      const r = await bh.call<string>("ping");
      return ok(r);
    }
  • Input schema for bizhawk_ping: empty object — no parameters required.
    inputSchema: { type: "object", properties: {} },
  • The call() method on BizhawkServer that enqueues a command (including 'ping') and sends it over TCP to the Lua bridge.
    async call<T = unknown>(method: string, params: Record<string, unknown> = {}): Promise<T> {
      return new Promise<T>((resolve, reject) => {
        const id = this.nextId++;
        const pending: PendingCmd = {
          id,
          method,
          params,
          resolve: (r) => resolve(r as T),
          reject,
        };
    
        const timer = setTimeout(() => {
          // Drop from queue if still waiting; from inflight if already sent.
          this.queue   = this.queue.filter((p) => p.id !== id);
          this.inflight.delete(id);
          if (this.inflight.size === 0) this.awaitingResult = false;
          reject(new Error(
            `BizHawk call "${method}" timed out (${this.timeoutMs}ms) — ` +
            `is the bridge.lua script still polling?`,
          ));
        }, this.timeoutMs);
    
        // Wrap so the timer always clears
        const origResolve = pending.resolve, origReject = pending.reject;
        pending.resolve = (r) => { clearTimeout(timer); origResolve(r); };
        pending.reject  = (e) => { clearTimeout(timer); origReject(e); };
    
        this.queue.push(pending);
      });
    }
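The `call()` method above only enqueues the command and arms its timeout; the receive side (not shown on this page) must match each incoming response back to its pending command by `id`. A minimal, self-contained sketch of that dispatch step, under the assumption that the bridge replies with newline-delimited JSON objects of the shape `{ id, result }` or `{ id, error }` — the field names and `inflight` map mirror the `PendingCmd` shape above but are illustrative:

```typescript
// Sketch of the response-dispatch side, assuming newline-delimited JSON
// replies of the shape { id, result } or { id, error }. Returns the
// unconsumed tail so a partial trailing line carries over to the next
// TCP chunk.
interface Pending {
  resolve: (r: unknown) => void;
  reject: (e: Error) => void;
}

function dispatchResponses(
  buffer: string,
  inflight: Map<number, Pending>,
): string {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // keep any partial trailing line
  for (const line of lines) {
    if (!line.trim()) continue;
    const msg = JSON.parse(line) as {
      id: number;
      result?: unknown;
      error?: string;
    };
    const pending = inflight.get(msg.id);
    if (!pending) continue; // already timed out and dropped from inflight
    inflight.delete(msg.id);
    if (msg.error !== undefined) pending.reject(new Error(msg.error));
    else pending.resolve(msg.result);
  }
  return rest;
}
```

The partial-line carry-over matters because TCP gives no message boundaries: a `{ "id": 1, "result": "pong" }` reply may arrive split across two `data` events.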
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Even without structured annotations, the description fully covers behavior: no side effects, a pure liveness probe, a ~10-second timeout, and clear error conditions. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Structured with clear sections (PURPOSE, USAGE, BEHAVIOR, RETURNS), front-loaded, each sentence adds value, no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no parameters, the description is complete: explains purpose, usage, behavior, return value, and error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With no parameters, the baseline score is 4. The description adds no parameter information, but it supplies comprehensive tool context beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the purpose: verifying BizHawk Lua bridge connectivity. It uses the specific verb "Verify" and distinguishes itself from sibling tools as a pure liveness probe.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly recommends calling the tool at start-of-session before other tools, and specifies the success condition. Provides clear context for when to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
