Glama

poll_voice_bridge

Poll an open Voice Bridge call to retrieve new transcript events, including partial and final transcripts and system events, since the last cursor.

Instructions

Fetch new transcript events from an open Voice Bridge call since the last cursor. Returns partial + final transcripts + system events. Agent should poll in a loop (~500ms-1s). No additional payment.

Input Schema

Name       Required  Description                                      Default
sessionId  Yes       Session ID from open_voice_bridge                (none)
cursor     No        Last seq number seen (0 = start from beginning)  0
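The cursor semantics above suggest a simple drain loop: poll, extend the cursor past the highest `seq` seen, and repeat at the recommended 500 ms-1 s interval. A minimal sketch, with `drain_transcripts` and the stubbed `fake_poll` being hypothetical stand-ins for the agent's actual call to poll_voice_bridge (the real tool's event shape may differ; `seq`, `type`, and `text` fields are assumptions):

```python
import time

def drain_transcripts(poll_tool, session_id, cursor=0, interval=0.5, idle_limit=3):
    """Repeatedly call a poll function, advancing the cursor past each
    batch of events. Stops after `idle_limit` consecutive empty polls."""
    events, idle = [], 0
    while idle < idle_limit:
        batch = poll_tool(sessionId=session_id, cursor=cursor)
        if batch:
            events.extend(batch)
            cursor = max(e["seq"] for e in batch)  # resume after last seen seq
            idle = 0
        else:
            idle += 1
        time.sleep(interval)  # suggested 500 ms - 1 s between polls
    return events, cursor

# Stub standing in for the real tool call: three events, served two at a time.
_log = [{"seq": 1, "type": "partial", "text": "hel"},
        {"seq": 2, "type": "final", "text": "hello"},
        {"seq": 3, "type": "system", "text": "call_ended"}]

def fake_poll(sessionId, cursor):
    return [e for e in _log if e["seq"] > cursor][:2]

events, cursor = drain_transcripts(fake_poll, "sess-123", interval=0.01)
print(len(events), cursor)  # → 3 3
```

Because the cursor defaults to 0, restarting the loop without a saved cursor simply replays the transcript from the beginning, which is safe but redundant.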
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite having no annotations, the description discloses that the tool returns partial and final transcripts along with system events, and clarifies that it fetches new events since the last cursor. It also notes the cost implication ('No additional payment'), though it lacks details on error handling or the required call state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences with no wasted words. First sentence provides core purpose, second lists return types, third gives usage guidance and cost info. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately covers return types and polling behavior. It assumes the agent knows about open Voice Bridge calls (from sibling tools), which is reasonable. Minor gap: no mention of errors or prerequisites.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes both parameters well (sessionId, and cursor with its default). The description adds little beyond restating the cursor's purpose, so the baseline score of 3 is appropriate given full schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states that the tool fetches new transcript events from an open Voice Bridge call, with a specific verb ('Fetch') and resource. It is distinguished from siblings such as open_voice_bridge and end_voice_bridge by its focus on incremental event retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly recommends polling in a loop at a suggested interval (500ms-1s) and notes that no additional payment is required, which helps the agent decide how to use the tool. However, it does not state when not to use it or list alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/cnghockey/sats4ai-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.