AILANG Parse
Server Details
Deterministic DOCX/PPTX/XLSX/PDF parser: track changes, comments, headers, footers, merged cells.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: sunholo-data/ailang-parse
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.2/5 across 29 of 29 tools scored. Lowest: 1.1/5.
Multiple tools have overlapping or unclear purposes, causing significant ambiguity. For example, 'deviceAuthRequest' and 'mcpAuth' both handle device authorization requests with similar descriptions, while 'parseFileSecure' and 'mcpParse' both parse documents with overlapping functionality. Tools like 'capabilities', 'estimate', and 'pricing' have vague descriptions that don't clearly differentiate their roles, making it difficult for an agent to choose correctly between them.
The naming conventions are highly inconsistent, mixing different styles and patterns. Some tools use camelCase (e.g., 'agentCard', 'getKeyUsage'), others carry 'mcp' or 'api' prefixes without a clear rule, and related identifiers hint at an underlying snake_case scheme (e.g., 'deviceAuthApprove' reads as a camelCased 'device_auth_approve'). Verbs vary widely (e.g., 'list', 'get', 'request', 'poll'), and there is no uniform verb_noun structure, leaving the naming scheme chaotic and unpredictable.
With 29 tools, the count is excessive for the server's apparent purpose of document parsing and API management. Many tools seem redundant or overly specialized, such as having multiple device authorization tools ('deviceAuthRequest', 'deviceAuthApprove', 'deviceAuthPoll', 'deviceAuthInspect', 'mcpAuth', 'mcpAuthPoll') and overlapping parsing tools. This bloated set likely overwhelms agents and indicates poor scoping, as a more streamlined approach could consolidate functionality without losing coverage.
The tool set provides comprehensive coverage for the domain of document parsing and API management, including CRUD operations for API keys, device authorization workflows, parsing with various formats, and usage tracking. However, minor gaps exist, such as no explicit tool for updating user account details or handling billing beyond viewing, but agents can likely work around these with the available tools like 'mcpAccount' and 'myEntitlements'. The surface is largely complete for core workflows.
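The consolidation point above can be made concrete. The following sketch is purely illustrative: the verb_noun target names are hypothetical suggestions, not part of the actual AILANG Parse API, and the overlap pairs ('mcpAuth' with 'deviceAuthRequest', 'mcpAuthPoll' with 'deviceAuthPoll') are the ones the review itself flags.

```python
# Hypothetical rename map illustrating one consistent verb_noun convention
# for the device-authorization tools named in this review. The right-hand
# names are suggestions, not real tools on this server.
RENAME_MAP = {
    "deviceAuthRequest": "request_device_auth",
    "deviceAuthPoll":    "poll_device_auth",
    "deviceAuthApprove": "approve_device_auth",
    "deviceAuthInspect": "inspect_device_auth",
    "mcpAuth":           "request_device_auth",  # overlaps deviceAuthRequest
    "mcpAuthPoll":       "poll_device_auth",     # overlaps deviceAuthPoll
}

def consolidated_names(rename_map: dict) -> set:
    """Distinct tool names remaining after consolidating overlaps."""
    return set(rename_map.values())
```

Under this mapping, six device-auth tools collapse to four distinct operations without losing coverage, which is the kind of streamlining the review suggests.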
Available Tools
29 tools

agentCard (grade B)
A2A Agent Card for agent-to-agent discovery. Returns a hand-crafted Agent Card with AILANG Parse-specific skills, supported formats, and capability descriptions. @raw: return raw JSON (no {result:..., elapsed_ms:N} envelope) for A2A compliance. Switch to @nowrap once available (lighter than @raw). If using --a2a flag, the built-in route takes precedence (collision guard skips this).
| Name | Required | Description | Default |
|---|---|---|---|
| _ | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context: it returns raw JSON for A2A compliance, mentions a future switch to @nowrap for lighter output, and notes a collision guard with built-in routes. This covers output format, performance considerations, and potential conflicts, though it doesn't detail error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise with four sentences, but it includes technical details like '@raw' and '@nowrap' that may not be essential for basic understanding. It's front-loaded with the core purpose, but the later sentences about flags and collisions add complexity without clear justification, reducing overall efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and low parameter coverage, the description is incomplete. It covers the tool's purpose and some behavioral aspects like output format and collisions, but it lacks details on return values, error cases, and parameter usage. For a tool with one undocumented parameter, this leaves significant gaps in understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, and the description provides no information about the parameter's purpose, semantics, or usage. It mentions '@raw' and '--a2a flag' but doesn't explain how these relate to the '_' parameter. Since schema coverage is low, the description fails to compensate, leaving the parameter undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
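The 0% coverage finding above is easy to picture by contrast with a fully documented schema. This sketch uses a hypothetical 'user_code' field (borrowed from the device-auth tools elsewhere in this review) rather than the real '_' parameter, whose meaning is unknown; the coverage metric mirrors how the review appears to score schemas.

```python
# Sketch of an input schema with 100% parameter description coverage,
# in JSON Schema form. The field name and example value are illustrative,
# not taken from the actual AILANG Parse server.
DOCUMENTED_SCHEMA = {
    "type": "object",
    "properties": {
        "user_code": {
            "type": "string",
            "description": (
                "Short code shown to the user during device auth, "
                "e.g. 'ABCD-1234'. Obtained from the request step."
            ),
        },
    },
    "required": ["user_code"],
}

def description_coverage(schema: dict) -> float:
    """Fraction of schema properties carrying a non-empty description."""
    props = schema.get("properties", {})
    if not props:
        return 0.0
    documented = sum(1 for p in props.values() if p.get("description"))
    return documented / len(props)
```

An undocumented schema like `{"type": "object", "properties": {"_": {"type": "string"}}}` scores 0.0 under this metric, which matches the coverage figures reported throughout this page.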
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it returns a hand-crafted Agent Card for agent-to-agent discovery with specific AILANG Parse skills, formats, and capabilities. It uses the verb 'returns' with the resource 'Agent Card', making the action clear. However, it doesn't explicitly differentiate this from sibling tools like 'capabilities' or 'formats', which appear related.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through mentions of A2A compliance and the --a2a flag, suggesting it's for agent-to-agent interactions. It notes a collision guard with built-in routes, which provides some guidance on when it might not be used. However, it lacks explicit when-to-use vs. alternatives, such as how it differs from sibling tools like 'capabilities' or 'formats'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
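The `{result: ..., elapsed_ms: N}` envelope and '@raw' bypass that the agentCard description mentions can be sketched as follows. This is a reconstruction from the description alone, not the server's actual implementation, and the callable-injection style is an assumption.

```python
import time

def call_with_envelope(produce, raw: bool = False):
    """Run a tool body and wrap its payload in the
    {result: ..., elapsed_ms: N} envelope this server's descriptions
    refer to. raw=True skips the envelope, as '@raw' reportedly does
    for A2A compliance."""
    start = time.monotonic()
    payload = produce()  # the tool's actual work, injected as a callable
    if raw:
        return payload   # bare JSON-serializable payload, no envelope
    elapsed_ms = int((time.monotonic() - start) * 1000)
    return {"result": payload, "elapsed_ms": elapsed_ms}
```

For example, an A2A client requesting the Agent Card with `raw=True` receives the card object directly, while default callers get it nested under `result` alongside the timing field.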
apiSamples (grade C)
Sample files inventory — delegates to the upstream package. Package @route annotations don't auto-register with serve-api, so we provide a local wrapper here.
| Name | Required | Description | Default |
|---|---|---|---|
| _ | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions delegation and wrapper aspects, which hint at behavioral traits like being a proxy, but doesn't disclose critical details such as whether it's read-only, destructive, requires authentication, or has rate limits. The description is too technical and lacks operational clarity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (two sentences) but not front-loaded with clear purpose; it starts with technical jargon. While concise, the structure is inefficient as the first sentence is vague, and the second adds implementation noise without aiding tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (1 parameter with 0% schema coverage, no annotations, no output schema), the description is incomplete. It fails to explain what the tool does operationally, how to use the parameter, or what to expect in return, making it inadequate for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description provides no information about the single required parameter '_' (type: string). It doesn't explain what this parameter represents (e.g., a file path, query, or identifier), leaving its semantics completely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Sample files inventory' which gives a vague purpose, but it's unclear what specific action the tool performs (e.g., list, retrieve, create). The phrase 'delegates to the upstream package' adds confusion rather than clarity. It doesn't distinguish from siblings like 'apiTools' or 'formats'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Package @route annotations don't auto-register with serve-api, so we provide a local wrapper here,' which implies a technical implementation detail but doesn't provide practical guidance on when to use this tool vs. alternatives. No explicit when/when-not or sibling comparisons are included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apiTools (grade C)
Tool definitions for Claude, OpenAI, MCP, and A2A agent frameworks. Delegates to the upstream package.
| Name | Required | Description | Default |
|---|---|---|---|
| _ | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions delegating to an upstream package, which hints at a read-only or proxy operation, but does not detail permissions, rate limits, side effects, or response format. This is inadequate for a tool with unknown behavior and no structured safety hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief with two sentences, front-loaded with the main purpose. It avoids unnecessary verbosity, but the under-specification reduces its effectiveness. Overall, it is appropriately sized, though it could be more informative without sacrificing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (one parameter with no schema coverage, no annotations, no output schema), the description is incomplete. It does not explain what the tool returns, how it interacts with frameworks, or the role of the parameter. For a tool with minimal structured data, more context is needed to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one required parameter '_' with 0% description coverage, and the tool description does not explain what this parameter represents or how it affects the operation. With low schema coverage, the description fails to compensate by adding meaning, leaving the parameter undocumented and unclear in semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool provides 'Tool definitions for Claude, OpenAI, MCP, and A2A agent frameworks' and 'Delegates to the upstream package,' which gives some indication of purpose but is vague about the actual action (e.g., does it list, fetch, or manage these definitions?). It does not clearly distinguish the tool from siblings like 'apiSamples' or 'capabilities', and it largely restates the name 'apiTools' without specifying the action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives. The description mentions delegating to an upstream package but does not specify contexts, prerequisites, or exclusions. With siblings like 'apiSamples' and 'capabilities,' it lacks explicit when/when-not instructions, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
capabilities (grade D)
capabilities(∀α2812. () -> α2812 ! {Env}) [pure]
| Name | Required | Description | Default |
|---|---|---|---|
| _ | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. The description contains only technical notation that doesn't explain whether this is a read/write operation, what permissions are needed, what side effects occur, or what the tool returns. The '[pure]' notation might hint at no side effects but is unclear without explanation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While technically concise, the description is under-specified rather than appropriately sized. The single line of technical notation doesn't convey useful information to help an AI agent understand or invoke the tool, failing to earn its place as a helpful description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 1 parameter, 0% schema coverage, no annotations, and no output schema, the description is completely inadequate. It provides no meaningful information about purpose, usage, behavior, or parameters, leaving the agent unable to properly select or invoke this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. The description provides no information about the single required parameter '_' or its purpose. The technical notation doesn't explain parameter usage, format, or meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'capabilities(∀α2812. () -> α2812 ! {Env}) [pure]' is a tautology that restates the tool name in a technical notation without explaining what the tool actually does. It provides no meaningful verb+resource statement or differentiation from sibling tools like 'myEntitlements' or 'health' that might relate to system capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description offers no context, prerequisites, exclusions, or references to sibling tools, leaving the agent with no information about appropriate usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deviceAuthApprove (grade A)
Approve a device authorization request. Called by the browser approval page after user signs in and confirms. Verifies Firebase ID token from Authorization header to ensure the caller is a real authenticated user. Extracts uid from the verified token. Dev mode: ALLOW_SELF_APPROVE=true skips Firebase verification (for testing).
| Name | Required | Description | Default |
|---|---|---|---|
| req | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses important behavioral traits: authentication verification ('Verifies Firebase ID token'), user identification ('Extracts uid'), and dev mode behavior ('ALLOW_SELF_APPROVE=true skips Firebase verification'). However, it doesn't mention error conditions, response format, or what happens after approval (e.g., does it trigger notifications?).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized at 4 sentences. It's front-loaded with the core purpose, followed by implementation context, authentication details, and dev mode behavior. Each sentence adds value, though the dev mode sentence could potentially be omitted for production use cases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 0% schema description coverage, the description provides good behavioral context but leaves critical gaps. The authentication flow is explained, but parameter requirements and return values are undocumented. For a security-sensitive approval tool, this incomplete specification could lead to invocation errors.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage for its single parameter 'req' (type: object). The description doesn't explain what 'req' should contain - it mentions Firebase ID tokens and Authorization headers but doesn't specify how these map to the parameter structure. For a single undocumented parameter, this represents a significant gap in understanding how to invoke the tool correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Approve a device authorization request') and identifies the resource ('device authorization request'). It distinguishes from siblings like deviceAuthRequest (initiates request), deviceAuthPoll (checks status), and deviceAuthInspect (examines details) by focusing on the approval step after user confirmation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool: 'Called by the browser approval page after user signs in and confirms.' It doesn't explicitly state when NOT to use it or name alternatives, but the context strongly implies this is for post-authentication approval flows rather than initial requests or status checks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deviceAuthInspect (grade A)
Look up a pending device code by user_code and return its provenance. No authentication required — possession of the user_code is the proof. The dashboard /approve.html page calls this BEFORE showing the approve UI so the user can see where the request came from (IP, User-Agent, Referer, age) and decide whether to trust it.
Returns 404 if the user_code does not match any pending device_codes doc. Already-approved or expired codes return as INPUT_NOT_FOUND too — there is nothing to inspect after the fact.
| Name | Required | Description | Default |
|---|---|---|---|
| userCode | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and effectively discloses key behavioral traits: it specifies 'No authentication required — possession of the user_code is the proof', describes error handling ('Returns 404 if the user_code does not match any pending device_codes doc'), and explains the return logic for approved/expired codes. It lacks details on rate limits or response format, but covers essential operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage context and error handling. Each sentence adds value without redundancy, making it efficient and well-structured for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is largely complete: it covers purpose, usage, authentication, and error cases. It lacks details on the return value structure (e.g., what 'provenance' includes like IP, User-Agent), but this is partially mitigated by the context provided in the description itself.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It explains the purpose of the single parameter ('user_code') as the lookup key for pending device codes, adding meaningful context beyond the bare schema. However, it does not specify the parameter's format or constraints, leaving some gaps in full semantic understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Look up a pending device code by user_code and return its provenance'), identifies the resource ('pending device code'), and distinguishes it from siblings like deviceAuthApprove, deviceAuthPoll, and deviceAuthRequest by focusing on inspection rather than approval, polling, or request initiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use this tool ('The dashboard /approve.html page calls this BEFORE showing the approve UI') and provides context on when not to use it ('Already-approved or expired codes return as INPUT_NOT_FOUND too — there is nothing to inspect after the fact'), clearly differentiating it from alternatives in the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
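The deviceAuthInspect contract described in this section (pending codes return provenance; unknown, already-approved, or expired codes come back as INPUT_NOT_FOUND) implies a simple client-side gate for the approval page. The response shape below is an assumption, since the tool publishes no output schema.

```python
def should_show_approve_ui(inspect_response: dict) -> bool:
    """Decide whether the /approve.html UI should render, per the
    deviceAuthInspect contract: only a pending code with provenance
    gives the user something to review. The 'error' and 'provenance'
    keys are assumed field names, not documented ones."""
    if inspect_response.get("error") == "INPUT_NOT_FOUND":
        return False  # unknown, already-approved, or expired code
    provenance = inspect_response.get("provenance", {})
    # Show the UI only when there is provenance (IP, User-Agent,
    # Referer, age) for the user to inspect before trusting the request.
    return bool(provenance)
```

This matches the workflow the description lays out: the dashboard inspects first, and an INPUT_NOT_FOUND result means there is nothing left to approve.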
deviceAuthPoll (grade B)
Poll for device authorization approval. Returns AUTHORIZATION_PENDING if user hasn't approved yet. Returns DEVICE_CODE_EXPIRED if the code has timed out. Returns approved status with API key on success.
| Name | Required | Description | Default |
|---|---|---|---|
| deviceCode | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's behavior by listing possible return statuses (AUTHORIZATION_PENDING, DEVICE_CODE_EXPIRED, approved status with API key), which adds value beyond the input schema. However, it lacks details on polling frequency, timeouts, error handling, or authentication requirements, leaving gaps in understanding its operational traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the first sentence stating the core purpose. The subsequent sentences efficiently list return statuses without unnecessary elaboration. However, it could be slightly more structured by explicitly separating behavior from outcomes, but overall, it avoids waste and is concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (polling for authorization with potential status changes), no annotations, no output schema, and low schema coverage, the description is incomplete. It covers basic behavior and outcomes but misses key contextual elements like integration with sibling tools (e.g., 'deviceAuthRequest'), error scenarios beyond timeouts, and response format details. This makes it adequate but with clear gaps for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, so the description must compensate. It doesn't explicitly mention the 'deviceCode' parameter or explain its semantics, such as where to obtain it or its format. The description implies usage with a device code but doesn't add meaningful details beyond what's inferred from the tool name and context, resulting in a baseline score due to the schema's lack of coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Poll for device authorization approval.' This specifies the verb ('poll') and resource ('device authorization approval'), making it understandable. However, it doesn't explicitly differentiate from sibling tools like 'deviceAuthPoll' vs. 'deviceAuthRequest' or 'deviceAuthApprove', which could provide more context on its specific role in the authorization flow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions possible return statuses (e.g., AUTHORIZATION_PENDING, DEVICE_CODE_EXPIRED), but doesn't explain prerequisites, such as needing a device code from a prior step like 'deviceAuthRequest', or when to poll versus using other device auth tools. This lack of context makes it unclear how it fits into the workflow with siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
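The workflow gap noted above (deviceAuthPoll presupposes a device code from deviceAuthRequest) can be sketched as a client-side flow. The transport is stubbed out via an injected `call_tool` because the wire format is undocumented; the status strings come from the tool descriptions, while field names like `api_key` are assumptions.

```python
# Sketch of the client-side device-auth flow implied by the
# deviceAuthRequest and deviceAuthPoll descriptions on this page.
def device_auth_flow(call_tool, display, max_polls: int = 30):
    """Request a code, show the verification URL to the user, then
    poll until approval, expiry, or the poll budget runs out."""
    grant = call_tool("deviceAuthRequest", {})
    display(grant["verification_url"])  # agent shows this to the user
    for _ in range(max_polls):
        status = call_tool("deviceAuthPoll", {"deviceCode": grant["device_code"]})
        if status.get("status") == "AUTHORIZATION_PENDING":
            continue  # a real client would sleep between polls
        if status.get("status") == "DEVICE_CODE_EXPIRED":
            raise TimeoutError("device code expired before approval")
        return status["api_key"]  # approved: description says the key is returned
    raise TimeoutError("poll budget exhausted")
```

This is the ordering context the poll tool's own description omits: request first, display second, poll last.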
deviceAuthRequest (grade B)
Request a device authorization code. Returns device_code, user_code, and verification URL. The agent should display the verification_url to the user.
| Name | Required | Description | Default |
|---|---|---|---|
| req | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the return values (device_code, user_code, verification URL) and the agent's expected action (display URL), which is useful behavioral context. However, it doesn't mention authentication requirements, rate limits, error conditions, or what happens after the request—significant gaps for an auth tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise: three sentences that each earn their place by stating the action, listing returns, and providing agent guidance. No wasted words, and the most critical information (what it does) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 0% schema coverage, the description is moderately complete for its complexity. It covers the core purpose and returns but lacks details on parameters, error handling, and integration with sibling tools like deviceAuthPoll. For an auth flow tool, more context on the overall workflow would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% with one required parameter 'req' of type object. The description adds no parameter information beyond what the schema provides—it doesn't explain what 'req' should contain, its structure, or example values. For a single parameter with zero schema coverage, this is inadequate compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Request a device authorization code' specifies the verb and resource. It distinguishes from siblings like deviceAuthPoll and deviceAuthApprove by focusing on the initial request step rather than polling or approval. However, it doesn't explicitly differentiate from deviceAuthInspect or other auth-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance: 'The agent should display the verification_url to the user' implies this is part of an OAuth-like flow, but it doesn't specify when to use this tool versus alternatives like mcpAuth or deviceAuthPoll. No explicit when/when-not guidance or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
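The review above notes that the description's main strength is telling the agent what to do with the returned fields. A minimal sketch of that consumer side, assuming only the field names the description discloses (device_code, user_code, verification URL); the function itself is illustrative:

```python
def render_auth_prompt(response: dict) -> str:
    """Build the message an agent would show the user for device auth."""
    # Field names taken from the tool description; nothing else is documented.
    missing = {"device_code", "user_code", "verification_url"} - response.keys()
    if missing:
        raise ValueError(f"auth response missing fields: {sorted(missing)}")
    return (
        f"Visit {response['verification_url']} and enter code "
        f"{response['user_code']} to approve this device."
    )
```

Because the tool ships no output schema, even this small consumer has to validate field presence itself, which is exactly the gap the review calls out.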
estimate
`estimate(∀α25. (string, string) -> α25 ! {Clock, FS}) [pure]`
| Name | Required | Description | Default |
|---|---|---|---|
| filepath | Yes | | |
| outputFormat | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. The description includes '[pure]' which might hint at functional purity (no side effects), but this is ambiguous and not clearly explained. It doesn't disclose what 'estimate' does behaviorally, such as whether it performs calculations, accesses files, or has rate limits. The technical notation adds little practical value for an AI agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise—a single line—but this conciseness comes at the cost of clarity. It's front-loaded with technical jargon that doesn't aid understanding. While not verbose, it fails to communicate essential information efficiently, making it under-specified rather than appropriately concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity implied by the technical notation, 2 required parameters with 0% schema coverage, no annotations, and no output schema, the description is completely inadequate. It doesn't explain what the tool estimates, how it uses the filepath and outputFormat, or what it returns. For a tool that likely performs some estimation based on a file, this lacks basic contextual information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning the input schema provides no descriptions for the parameters 'filepath' and 'outputFormat'. The description does not add any meaning beyond the schema; it doesn't explain what these parameters represent, their expected formats, or valid values. For a tool with 2 required parameters and no schema documentation, this is a critical gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'estimate(∀α25. (string, string) -> α25 ! {Clock, FS}) [pure]' is a technical type signature that does not state what the tool does in plain language. It's essentially a tautology that restates the name 'estimate' in a formal notation without explaining the actual purpose or resource being estimated. No specific verb or resource is mentioned, and it doesn't distinguish from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There is no mention of context, prerequisites, or exclusions. Given the sibling tools include 'mcpEstimate' which might be related, the lack of differentiation is particularly problematic.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
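The 0% schema coverage criticized above is concrete to fix. One way the estimate tool's input schema could document its two parameters is sketched below; only the field names (filepath, outputFormat) come from the actual schema, and the descriptions and enum values (borrowed from the formats tool's output-format list) are illustrative assumptions:

```python
# Illustrative JSON Schema fragment, NOT the tool's real schema.
ESTIMATE_INPUT_SCHEMA = {
    "type": "object",
    "required": ["filepath", "outputFormat"],
    "properties": {
        "filepath": {
            "type": "string",
            # Assumed semantics: path to the document being estimated.
            "description": "Path to the document to estimate, e.g. 'report.docx'.",
        },
        "outputFormat": {
            "type": "string",
            # Enum borrowed from the formats tool's documented output formats.
            "description": "Output format the estimate should be computed for.",
            "enum": ["blocks", "markdown", "html", "a2ui"],
        },
    },
}
```

Even descriptions this short would lift the schema coverage from 0% to 100% and let an agent call the tool without guessing.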
formats
List all supported document formats for parsing and generation. Returns: parse formats (13), generate formats (9), output formats (blocks/markdown/html/a2ui), and which formats require AI (PDF, images).
| Name | Required | Description | Default |
|---|---|---|---|
| _ | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior by detailing the return values (parse formats, generate formats, output formats, and AI-required formats), which helps the agent understand what to expect. It does not mention permissions, rate limits, or side effects, but for a read-only query tool, this level of detail is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose and then listing return details in a structured manner. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (a simple query), no annotations, no output schema, and one undocumented parameter, the description is mostly complete. It clearly explains what the tool returns, which compensates for the lack of output schema. However, the undocumented parameter reduces completeness slightly, as its purpose remains unclear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one required parameter '_' with 0% description coverage and no enums. The description does not explain this parameter's purpose, semantics, or usage, leaving it undocumented. Since schema coverage is low (<50%), the description fails to compensate for this gap, resulting in unclear parameter meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('List') and resource ('all supported document formats for parsing and generation'), and it distinguishes itself from sibling tools by focusing on format capabilities rather than operations like parsing, converting, or authentication.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying what information is returned (parse formats, generate formats, etc.), which suggests it should be used to query format support. However, it does not explicitly state when to use this tool versus alternatives like 'mcpConvert' or 'mcpParse', nor does it provide exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
getKeyUsage
Get usage stats for a user's API key. Accepts Firebase JWT or apiKey. Verifies that the requested keyId belongs to the authenticated user.
| Name | Required | Description | Default |
|---|---|---|---|
| req | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions authentication and ownership verification, which are useful, but lacks details on rate limits, error handling, response format, or whether it's read-only (implied by 'Get' but not explicit). For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise with two sentences that are front-loaded and waste no words. Each sentence adds critical information (purpose and authentication/verification details), making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (authentication, ownership verification), lack of annotations, 0% schema coverage, no output schema, and nested parameters, the description is incomplete. It covers the basic purpose and some behavioral aspects but misses details on parameter usage, response format, and operational constraints, making it inadequate for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage and one parameter ('req') that is a nested object with no documented properties. The description adds minimal value by hinting at authentication inputs ('Firebase JWT or apiKey') and 'keyId', but it doesn't fully explain the parameter structure or semantics, failing to compensate for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get usage stats') and resource ('for a user's API key'), making it immediately understandable. However, it doesn't differentiate from sibling tools like 'listApiKeys' or 'revokeApiKey' beyond stating it retrieves usage statistics specifically.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying authentication methods ('Firebase JWT or apiKey') and ownership verification, but it doesn't explicitly state when to use this tool versus alternatives like 'listApiKeys' (which might list keys without usage stats) or provide clear exclusions. The guidance is contextual but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
getUploadUrl
Request a pre-authenticated GCS upload URL for direct file upload. Business tier only. The returned URL allows the client to PUT file content directly to GCS, bypassing the 32MB Cloud Run request limit. After upload, pass the gcs_ref to POST /api/v1/parse.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | | |
| filename | Yes | | |
| mimeType | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tier restriction ('Business tier only'), the upload mechanism ('PUT file content directly to GCS'), the size limitation workaround ('bypassing the 32MB Cloud Run request limit'), and the required follow-up action ('pass the gcs_ref to POST /api/v1/parse'). It doesn't cover rate limits or error conditions, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with four concise sentences that each earn their place: the core purpose, tier restriction, technical benefit, and required follow-up action. It's front-loaded with the main functionality and wastes no words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a file upload workflow tool with no annotations and no output schema, the description provides good operational context but leaves gaps. It explains the purpose, restrictions, and next steps well, but doesn't describe the return value format or error conditions, which would be important for a tool with 3 required parameters and no schema documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 3 undocumented parameters, the description fails to add meaningful parameter semantics. It mentions 'filename' and 'mimeType' implicitly through context but doesn't explain what values are expected for these parameters or the 'apiKey' parameter. The description doesn't compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Request a pre-authenticated GCS upload URL') and resource ('for direct file upload'), distinguishing it from sibling tools like parseFileSecure or partitionGeneral. It precisely explains what the tool does rather than just restating the name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Business tier only', 'bypassing the 32MB Cloud Run request limit') and what to do after using it ('pass the gcs_ref to POST /api/v1/parse'). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
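The description spells out a three-step workflow: request an upload URL, PUT the bytes to GCS, then hand the gcs_ref to POST /api/v1/parse. A sketch of that sequencing, with the transport injected so the control flow is visible without a live server; apart from the parameter names and the parse endpoint, the client method names and response fields (upload_url, gcs_ref) are assumptions:

```python
def upload_and_parse(client, api_key: str, filename: str, mime_type: str,
                     data: bytes) -> dict:
    """Large-file flow for Business tier: grant -> PUT to GCS -> parse."""
    # Step 1: request a pre-authenticated upload URL (parameter names
    # match the tool's schema; response field names are assumed).
    grant = client.get_upload_url(apiKey=api_key, filename=filename,
                                  mimeType=mime_type)
    # Step 2: PUT the file content directly to GCS, bypassing the
    # 32MB Cloud Run request limit.
    client.put(grant["upload_url"], data, headers={"Content-Type": mime_type})
    # Step 3: pass the gcs_ref to the parse endpoint, per the description.
    return client.post("/api/v1/parse", json={"gcs_ref": grant["gcs_ref"]})
```

Injecting the client also makes the ordering testable, which matters here because the description never says what the grant response actually contains.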
health
Health check for the AILANG Parse API. Returns service status, version, AILANG commit hash, supported format counts, and billing catalog status. billing_catalog_loaded is FALSE when the BILLING_PLAN_CATALOG env var is unset or parses to an empty list — in that mode every authenticated request silently falls back to the safety-net "fallback" plan (limit=1) and is rejected as over-quota. release.sh asserts billing_catalog_loaded == true after every promotion to catch this regressing.
| Name | Required | Description | Default |
|---|---|---|---|
| _ | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it returns service status, version, commit hash, format counts, and billing catalog status. It also explains the implications of 'billing_catalog_loaded' being FALSE, including fallback behavior and quota rejection, which is valuable operational context not inferable from the input schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. The second sentence details the return values, and the third explains a critical behavioral nuance ('billing_catalog_loaded'). While slightly verbose in explaining edge cases, every sentence adds necessary value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 likely placeholder parameter) and no annotations or output schema, the description is reasonably complete. It explains what the tool does, what it returns, and a key behavioral detail about billing fallbacks. However, it lacks usage guidelines and parameter documentation, which slightly reduces completeness for a health check tool in a context with many sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, but the tool description provides no information about the parameter's purpose or usage. Since the parameter is required and undocumented, the description fails to compensate for the schema gap. However, the baseline is 3 because the tool likely has zero meaningful parameters (the '_' parameter appears to be a placeholder), and the description focuses on the tool's output behavior instead.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Health check for the AILANG Parse API.' It specifies the exact resource (AILANG Parse API) and action (health check), distinguishing it from sibling tools like 'capabilities' or 'pricing' that serve different functions. The description goes beyond a simple status check by detailing what information is returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, context for invocation, or comparisons with sibling tools like 'capabilities' or 'mcpParse'. The focus is solely on what the tool returns, not when it should be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
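The one behavioral detail the description does nail down is the release gate: promotion must fail if billing_catalog_loaded comes back false, because that mode silently rejects every authenticated request as over-quota. A sketch of that check, assuming only the field name from the description (the function and error text are illustrative, mirroring what release.sh asserts):

```python
def assert_billing_catalog_loaded(health: dict) -> None:
    """Fail fast after a promotion if the billing catalog did not load."""
    # Field name from the health description; everything else is assumed.
    if not health.get("billing_catalog_loaded"):
        raise RuntimeError(
            "billing_catalog_loaded is false: BILLING_PLAN_CATALOG is unset "
            "or empty, so every request falls back to the limit=1 plan "
            "and is rejected as over-quota"
        )
```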
listApiKeys
List API keys for a user. Accepts either:
- `Authorization: Bearer <firebase_id_token>` (dashboard path)
- `{apiKey: "..."}` or `{args: ["dp_..."]}` in body (CLI/SDK path)

The resolved userId filters the Firestore query server-side.
| Name | Required | Description | Default |
|---|---|---|---|
| req | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses authentication requirements (Bearer token or API key) and server-side filtering behavior, which is useful. However, it lacks details on rate limits, pagination, error handling, or what the output looks like, leaving gaps for a tool that lists data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences that cover purpose, authentication, and filtering. It's front-loaded with the main action, though the authentication details could be slightly more streamlined. Overall, it avoids unnecessary fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (authentication paths, server-side filtering), no annotations, no output schema, and poor parameter coverage, the description is incomplete. It misses key details like output format, error cases, and full parameter documentation, making it inadequate for safe and effective use by an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter ('req') with 0% description coverage, and the description doesn't add meaningful semantics beyond mentioning authentication methods. It fails to explain what 'req' should contain, its structure, or how it relates to the authentication paths described, leaving the parameter largely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'API keys for a user', making the purpose explicit. However, it doesn't distinguish this tool from potential siblings like 'getKeyUsage' or 'revokeApiKey', which might also involve API keys but serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying authentication methods (Bearer token or API key in body) and mentions server-side filtering by userId, but it doesn't explicitly state when to use this tool versus alternatives like 'getKeyUsage' or provide clear exclusions. The context is somewhat implied rather than fully guided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
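The two auth paths the description lists can be made concrete. A sketch of how a caller (or the server) might resolve them, assuming the shapes quoted in the description; the precedence order and error behavior are assumptions, not documented:

```python
def resolve_credential(headers: dict, body: dict):
    """Pick a credential from the two documented paths.

    Dashboard path: Authorization: Bearer <firebase_id_token>.
    CLI/SDK path:   {apiKey: "..."} or {args: ["dp_..."]} in the body.
    """
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return ("firebase_token", auth[len("Bearer "):])
    if "apiKey" in body:
        return ("api_key", body["apiKey"])
    # "dp_" prefix taken from the description's example args entry.
    for arg in body.get("args", []):
        if isinstance(arg, str) and arg.startswith("dp_"):
            return ("api_key", arg)
    raise PermissionError("no credential found in headers or body")
```

Writing this out also shows why the undocumented 'req' object is a problem: nothing tells the agent which of these shapes 'req' is supposed to carry.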
mcpAccount
View account info, pricing, entitlements, or list keys. Actions:
- "status" (default) → tier, quota, usage from /me/entitlements
- "pricing" → public pricing tiers (no auth required)
- "keys" → list user's API keys with per-key usage
- "usage" → alias for "keys" (per-key usage is shown there)
| Name | Required | Description | Default |
|---|---|---|---|
| action | Yes | | |
| apiKey | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behaviors: the default action ('status'), authentication requirements (notes 'pricing' requires no auth), and what data each action returns. However, it doesn't mention rate limits, error conditions, or response formats, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a summary sentence followed by a bullet-like list of actions, each with clear explanations. Every sentence earns its place by providing essential information without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters with 0% schema coverage and no output schema, the description is largely complete for a read-only tool, covering actions, auth nuances, and data returned. However, it lacks details on response structure or error handling, which would be helpful given the complexity of multiple actions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 2 parameters, the description fully compensates by explaining both parameters: 'action' with its four options and their meanings, and 'apiKey' (implied as required for most actions). It adds crucial meaning beyond the bare schema, making parameters understandable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('View account info, pricing, entitlements, or list keys') and distinguishes it from siblings by listing four distinct actions. It explicitly differentiates from tools like 'pricing' (which only shows public pricing) and 'listApiKeys' (which likely doesn't show per-key usage).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use each action: 'status' for tier/quota/usage, 'pricing' for public pricing without auth, 'keys' for API keys with usage, and clarifies 'usage' is an alias for 'keys'. It also implicitly distinguishes from siblings by noting 'pricing' action requires no auth, unlike other actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
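The action routing the description spells out is simple enough to sketch directly: "status" is the default, "usage" aliases "keys", and only "pricing" skips auth. Everything below follows the description; the function names themselves are illustrative:

```python
_ACTION_ALIASES = {"usage": "keys"}   # "usage" is an alias for "keys"
_ACTIONS = {"status", "pricing", "keys"}

def resolve_action(action=None):
    """Map a requested action to the canonical one, per the description."""
    action = action or "status"       # "status" is the documented default
    action = _ACTION_ALIASES.get(action, action)
    if action not in _ACTIONS:
        raise ValueError(f"unknown action: {action!r}")
    return action

def requires_auth(action=None):
    # Only the "pricing" action is documented as needing no auth.
    return resolve_action(action) != "pricing"
```

Note the tension the sketch exposes: the schema marks both 'action' and 'apiKey' as required, yet the description says "pricing" needs no auth and "status" is a default, which a documented schema would express with optional fields.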
mcpAuth
Start device authorization to get an API key. Returns device_code, user_code, and verification URL. The agent should display the verification URL to the user, who signs in and approves the code. Then call mcpAuthPoll with the device_code. MCP wrappers don't have HTTP request headers, so provenance is empty here.
| Name | Required | Description | Default |
|---|---|---|---|
| label | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by explaining the multi-step authentication workflow, what gets returned, and how the agent should handle the response (display URL to user). It mentions 'provenance is empty here' which adds useful context about HTTP headers. It doesn't cover potential errors or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with four sentences that each earn their place: stating the purpose, listing returns, explaining the workflow, and adding technical context. It's front-loaded with the core purpose and contains zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an authentication initiation tool with no annotations and no output schema, the description does well by explaining the workflow, returns, and agent responsibilities. It could be more complete by mentioning potential errors, timeout periods, or what the 'label' parameter controls, but covers the essential context for proper usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 1 parameter with 0% description coverage, and the description doesn't mention the 'label' parameter at all, providing no semantic information beyond what the bare schema offers. With only one parameter the baseline is 4, but the complete lack of parameter explanation reduces this to 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Start device authorization', 'get an API key') and resources ('device_code, user_code, and verification URL'). It distinguishes from sibling tools like 'mcpAuthPoll' by explaining the workflow relationship.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Start device authorization') and what to do next ('call mcpAuthPoll with the device_code'). It provides clear workflow guidance and distinguishes from alternatives by naming the specific follow-up tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mcpAuthPoll
Poll for device authorization completion. Returns "pending" (keep polling every 5s), "approved" (with api_key and tier), or "expired" (start over with mcpAuth).
| Name | Required | Description | Default |
|---|---|---|---|
| deviceCode | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the polling mechanism ('keep polling every 5s'), possible return states ('pending', 'approved', 'expired'), and actions for each state (e.g., 'start over with mcpAuth' for 'expired'). It also implies the tool is non-destructive and read-only, though not explicitly stated. No contradictions exist with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and well-structured: it uses three sentences that efficiently cover purpose, behavior, and guidelines without redundancy. Each sentence adds value, such as specifying polling intervals and state-dependent actions, making it front-loaded and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (polling with state transitions), no annotations, no output schema, and low schema coverage, the description is reasonably complete. It explains the polling process, return states, and next steps, but lacks details on parameter semantics and exact output structure (e.g., what 'api_key and tier' entail). It compensates well for the missing structured data but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, so the description must compensate. It mentions 'device authorization' contextually but does not explain the 'deviceCode' parameter's purpose, format, or source. The description adds minimal semantics beyond the schema, failing to fully address the coverage gap, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Poll for device authorization completion.' It specifies the verb ('poll') and resource ('device authorization') and relates the tool to its sibling 'mcpAuth', which initiates the flow. However, it does not differentiate this tool from the similarly named 'deviceAuthPoll' beyond the name prefix, leaving some ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: it states when to use the tool ('Poll for device authorization completion'), how to retry ('keep polling every 5s'), and what to do when the flow ends ('expired' means starting over with mcpAuth, a named alternative tool). This gives clear context for invocation and error handling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mcpConvert (Grade C)
Convert is not supported on the hosted server (no persistent local filesystem to write the output file to). Use the local stdio SDK (@ailang/parse) for local conversions, where the user has filesystem access.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | | |
| apiKey | Yes | | |
| outputPath | Yes | | |
| outputFormat | Yes | | |
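The hosted-versus-local distinction the description draws can be illustrated with a minimal guard. This is a hypothetical sketch, not the actual server code: the function signature and return shape are assumptions; the refusal reason and the pointer to the local stdio SDK come from the description.

```python
def convert(input_path, output_path, output_format, hosted=True):
    # Hypothetical guard: the hosted server has no persistent local
    # filesystem to write the output file to, so it must refuse conversion.
    if hosted:
        raise NotImplementedError(
            "convert is not supported on the hosted server; "
            "use the local stdio SDK (@ailang/parse) instead"
        )
    # A local implementation would write the converted file to output_path here.
    return {"input": input_path, "output": output_path, "format": output_format}

local_result = convert("report.docx", "report.md", "markdown", hosted=False)
```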
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals that this tool is unsupported in hosted environments due to filesystem constraints, which is valuable context. However, it doesn't describe what happens when invoked anyway, what permissions are needed, rate limits, or error behavior for a tool with 4 required parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two clear sentences. The first sentence establishes the limitation, and the second provides the alternative. There's no wasted text, though it could be more front-loaded about the tool's actual purpose before discussing limitations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 4 required parameters, 0% schema coverage, no annotations, and no output schema, the description is incomplete. While it provides important usage guidance, it doesn't explain what the tool actually does when usable, what the parameters mean, or what to expect from the operation. The description focuses on limitations rather than enabling successful tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 4 required parameters are documented in the schema. The description provides no information about what 'input', 'outputFormat', 'outputPath', or 'apiKey' parameters mean or how they should be used. The description fails to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states what the tool does NOT do (convert on hosted server) rather than its actual purpose. It mentions 'local conversions' but doesn't specify what resource or operation this tool performs. The name 'mcpConvert' suggests a conversion function, but the description focuses on limitations rather than functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance about when NOT to use this tool ('not supported on the hosted server') and offers a clear alternative ('Use the local stdio SDK (@ailang/parse) for local conversions'). This directly addresses when/when-not scenarios with named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mcpEstimate (Grade B)
Estimate cost and latency for parsing a document. Accepts a file path or sample_id. No auth required.
| Name | Required | Description | Default |
|---|---|---|---|
| filepath | Yes | | |
| outputFormat | Yes | | |
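A request to this tool can be sketched from the parameter table and description alone. The payload shape below is an assumption (the table only names the fields); the "no apiKey" property and the path-or-sample_id dual input come from the description, and the "blocks" value is purely illustrative.

```python
def build_estimate_request(filepath, output_format):
    # filepath may be a real path or a sample_id; per the description,
    # no apiKey is required for estimation.
    if not filepath:
        raise ValueError("filepath (or sample_id) is required")
    return {
        "tool": "mcpEstimate",
        "arguments": {"filepath": filepath, "outputFormat": output_format},
    }

req = build_estimate_request("sample_docx_formatting", "blocks")
```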
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that 'No auth required,' which is a useful behavioral trait. However, it lacks details on rate limits, error handling, or what the estimation output includes (e.g., cost units, latency metrics). The description adds some value but leaves significant gaps for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and well-structured: two sentences that directly state the purpose and key usage details. Every word earns its place, with no wasted information, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (estimation tool with 2 required parameters), no annotations, and no output schema, the description is incomplete. It doesn't explain what the estimation returns (e.g., cost in dollars, latency in seconds), how to interpret results, or handle errors. For a tool with no structured support, this leaves the agent under-informed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It mentions 'Accepts a file path or sample_id,' which partially explains the 'filepath' parameter but doesn't clarify the 'outputFormat' parameter at all. The description adds minimal meaning beyond the bare schema, failing to fully address the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Estimate cost and latency for parsing a document.' It specifies the verb ('estimate'), resources ('cost and latency'), and target action ('parsing a document'). However, it doesn't explicitly differentiate itself from the sibling tool 'estimate', which might have overlapping functionality, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: 'Accepts a file path or sample_id' and 'No auth required.' This implies when to use it (for estimation without authentication) but doesn't explicitly state when not to use it or mention alternatives like 'estimate' or 'mcpParse' for actual parsing. The guidance is helpful but incomplete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mcpFormats (Grade A)
List supported formats, samples, and service capabilities. Pure JSON, no auth required. Delegates to package implementation. Single source of truth lives in pkg/sunholo/ailang_parse/services/mcp/tools.
| Name | Required | Description | Default |
|---|---|---|---|
| _ | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context: 'Pure JSON, no auth required' clarifies the response format and authentication needs, and 'Delegates to package implementation' indicates it's a wrapper. It does not cover rate limits or error handling, but for a read-only tool, this is reasonably transparent given the lack of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, with three sentences that each add value: the first states the purpose, the second covers format and auth, and the third explains delegation and source. There is no wasted text, and it efficiently communicates key information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple read operation), lack of annotations, no output schema, and low parameter coverage, the description is moderately complete. It covers purpose, format, auth, and implementation details but misses parameter explanations and return value specifics. For a tool with no structured data support, it does an adequate but not thorough job.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, and the description provides no information about parameters. It does not explain the purpose or usage of the required '_' parameter, leaving it undocumented. Since schema coverage is low (<50%), the description fails to compensate, resulting in inadequate parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List supported formats, samples, and service capabilities.' It specifies the verb ('List') and resources ('formats, samples, and service capabilities'), making the action explicit. However, it does not distinguish this tool from sibling 'formats' or 'apiSamples', which appear related, so it misses full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some implied usage context: 'Pure JSON, no auth required' suggests this tool is simple and accessible, and 'Delegates to package implementation' hints at its technical role. However, it lacks explicit guidance on when to use this tool versus alternatives like 'formats' or 'apiSamples', and does not mention prerequisites or exclusions, leaving usage somewhat vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mcpParse (Grade B)
Parse a document. Accepts a file path or sample_id (e.g. "sample_docx_formatting"). The hosted server requires a valid dp_ API key — get one via mcpAuth. Output formats: blocks (default), markdown, html, a2ui. requestId is reserved for future replay support.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | | |
| filepath | Yes | | |
| requestId | Yes | | |
| outputFormat | Yes | | |
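The constraints in the description can be captured in a small request builder. This is a hypothetical client-side sketch under stated assumptions: the dict layout is invented, while the dp_ key prefix, the mcpAuth pointer, the four output formats, and the reserved requestId all come from the description.

```python
VALID_FORMATS = {"blocks", "markdown", "html", "a2ui"}  # listed in the description

def build_parse_request(api_key, filepath, output_format="blocks", request_id=""):
    # The hosted server requires a dp_-prefixed key, obtained via mcpAuth.
    if not api_key.startswith("dp_"):
        raise ValueError("hosted server requires a dp_ API key (see mcpAuth)")
    if output_format not in VALID_FORMATS:
        raise ValueError(f"outputFormat must be one of {sorted(VALID_FORMATS)}")
    # requestId is reserved for future replay support; pass it through unchanged.
    return {"apiKey": api_key, "filepath": filepath,
            "requestId": request_id, "outputFormat": output_format}

req = build_parse_request("dp_example", "sample_docx_formatting", "markdown")
```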
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses authentication requirements (dp_ API key via mcpAuth) and output format options, which are valuable behavioral traits. However, it doesn't mention rate limits, error handling, or what 'blocks' format entails, leaving gaps for a mutation-like operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. Each sentence adds value: input types, authentication, output formats, and requestId purpose. No wasted words, though it could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage, no annotations, and no output schema, the description is incomplete. It covers authentication and output formats but lacks details on parameter formats, error cases, or return values, making it inadequate for a tool with multiple required inputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains that 'filepath' accepts a file path or sample_id (e.g., 'sample_docx_formatting'), lists outputFormat options, and notes requestId is for future replay. However, it doesn't clarify apiKey format or provide examples for outputFormat values, leaving 4 parameters partially documented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Parse a document' with specific input types (file path or sample_id). It distinguishes from some siblings like 'mcpConvert' or 'partitionGeneral' by focusing on parsing rather than conversion or partitioning, though it doesn't explicitly differentiate from 'parseFileSecure'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning the need for a dp_ API key via mcpAuth and listing output formats, but doesn't explicitly state when to use this tool versus alternatives like 'parseFileSecure' or 'mcpConvert'. It provides some prerequisites but no clear 'when-not' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
myEntitlements (Grade A)
Get the authenticated user's billing entitlements, usage, and plan details. Returns: plan name, monthly request limit, requests used, remaining requests, upgrade/manage URLs. Requires a valid AILANG Parse API key (dp_ prefix).
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a read operation ('Get'), requires authentication ('Requires a valid AILANG Parse API key'), specifies the key format ('dp_ prefix'), and outlines what information is returned. It doesn't mention rate limits or error handling, but covers the essential operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by return details and authentication requirements. Every sentence earns its place: the first defines the tool, the second lists return values, and the third specifies prerequisites. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (personal billing info retrieval), no annotations, no output schema, and 0% schema description coverage, the description is largely complete. It covers purpose, return values, and authentication needs. However, it doesn't specify error conditions or response formats, leaving some gaps for an agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage (no parameter descriptions), but the description compensates by explaining the 'apiKey' parameter's purpose and format ('valid AILANG Parse API key (dp_ prefix)'). This adds significant meaning beyond the bare schema. With only one parameter, the description provides adequate semantic context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and specifies the resource as 'the authenticated user's billing entitlements, usage, and plan details.' It distinguishes itself from siblings like 'getKeyUsage' or 'pricing' by focusing on personal billing/plan information rather than general usage stats or pricing tables.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to retrieve personal billing and plan details for the authenticated user. However, it doesn't explicitly state when NOT to use it or mention alternatives among the sibling tools (e.g., 'getKeyUsage' for usage stats without billing details).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
parseFileSecure (Grade A)
Parse a document. Requires a valid API key. Validates the key, checks entitlement quotas, logs for replay. filepath: file path OR sample_id (e.g. "sample_docx_formatting" → resolved via /api/v1/samples). outputFormat: "blocks", "markdown", "html", or "a2ui". gcsRef: optional gs:// URI for Business tier large file uploads (>32MB). When provided, the file is downloaded from GCS via our service account. Business tier only — Free/Pro users get TIER_UPGRADE_REQUIRED error. sourceUrl: optional https:// URL (e.g., a signed GCS URL or any public file). When provided, the file is fetched over HTTPS by docparse and parsed. Available on all tiers; tier dictates the max fetched-file size. Cannot be combined with gcsRef or filepath — sourceUrl wins. @nowrap: raw JSON (no envelope), _headers extracted as HTTP response headers.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | | |
| gcsRef | Yes | | |
| filepath | Yes | | |
| sourceUrl | Yes | | |
| outputFormat | Yes | | |
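The input-precedence and tier rules in this description are rich enough to sketch directly. The function below is a hypothetical restatement of those rules, not the server's code: the precedence ('sourceUrl wins'), the Business-only gcsRef restriction, and the TIER_UPGRADE_REQUIRED error are all taken from the description; the tuple return shape is an assumption.

```python
def resolve_input(filepath=None, gcs_ref=None, source_url=None, tier="free"):
    """Sketch of parseFileSecure's stated input-precedence rules."""
    # sourceUrl wins over gcsRef and filepath, and is available on all tiers
    # (tier dictates the max fetched-file size).
    if source_url:
        return ("https", source_url)
    # gcsRef (a gs:// URI for large uploads >32MB) is Business tier only.
    if gcs_ref:
        if tier != "business":
            raise PermissionError("TIER_UPGRADE_REQUIRED")
        return ("gcs", gcs_ref)
    if filepath:
        # A sample_id like "sample_docx_formatting" resolves via /api/v1/samples.
        return ("file", filepath)
    raise ValueError("one of sourceUrl, gcsRef, or filepath is required")
```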
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and excels. It reveals security behaviors (API key validation, entitlement quota checks, logging for replay), tier-based access controls, file handling mechanisms (GCS download via service account, HTTPS fetching), error conditions, and output formatting details (@nowrap for raw JSON). This covers critical operational aspects beyond basic parsing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose and key requirement. Each subsequent sentence adds value: parameter explanations are grouped logically, and tier/conflict rules are clearly stated. While dense, it avoids redundancy, though the formatting (bullet-like indentation) could be slightly more structured for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 5 parameters, 0% schema coverage, no annotations, and no output schema, the description is highly complete. It covers purpose, usage rules, behavioral traits, and parameter semantics thoroughly. The only minor gap is lack of explicit output details (e.g., structure of parsed content), but given the outputFormat options and @nowrap hint, it's largely adequate for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage and 5 parameters, the description compensates fully by explaining each parameter's semantics. It clarifies filepath accepts both paths and sample IDs with resolution logic, lists outputFormat options, details gcsRef's tier restrictions and GCS handling, explains sourceUrl's tier-based size limits and conflict rules, and implies apiKey's purpose. This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Parse a document.' It specifies the verb (parse) and resource (document), and distinguishes it from siblings like 'mcpParse' by emphasizing security aspects ('Secure' in name, API key requirement, validation, logging). However, it doesn't explicitly differentiate from 'partitionGeneral' or other parsing-related tools beyond the security focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives. It details tier-specific constraints (Business vs. Free/Pro for gcsRef), sourceUrl availability across tiers, and conflict resolution rules ('sourceUrl wins' over gcsRef/filepath). It also mentions prerequisites ('Requires a valid API key') and error conditions (TIER_UPGRADE_REQUIRED), offering comprehensive usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
partitionGeneral (Grade B)
Unstructured API-compatible endpoint (drop-in replacement for Unstructured.io). Returns elements in Unstructured JSON format (Title, NarrativeText, Table, ListItem, etc.). Accepts file upload (multipart/form-data) or JSON body with filepath/sample_id. API key: via unstructured-api-key header (Unstructured convention) or apiKey form field. strategy parameter: "auto" (default), "hi_res", "fast", "ocr_only". Uses _headers for header access while keeping @route multipart support.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | | |
| _headers | Yes | | |
| filepath | Yes | | |
| strategy | Yes | | |
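The dual authentication path the description mentions can be sketched as a lookup. This is a hypothetical sketch: the strategy values and the two key sources come from the description, but the header-over-form precedence is an assumption (the description does not state an order), as are the function names.

```python
STRATEGIES = {"auto", "hi_res", "fast", "ocr_only"}  # "auto" is the default

def resolve_api_key(headers, form):
    # Assumed precedence: the unstructured-api-key header (Unstructured
    # convention) is tried first, then the apiKey form field as a fallback.
    return headers.get("unstructured-api-key") or form.get("apiKey")

key = resolve_api_key({"unstructured-api-key": "dp_hdr"}, {"apiKey": "dp_form"})
fallback = resolve_api_key({}, {"apiKey": "dp_form"})
```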
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about authentication (API key via header or form field), input formats (multipart/form-data or JSON), and strategy parameter options. However, it doesn't cover critical behaviors like rate limits, error handling, response structure, or whether it's read-only or destructive, which are significant gaps for a tool with 4 required parameters and no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. Most sentences add value, though the last sentence about '_headers' could be more clearly integrated. There's minimal redundancy, and the structure efficiently conveys key information in a compact format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 required parameters, nested objects, no output schema, and no annotations), the description is incomplete. It lacks details on response format, error conditions, rate limits, and full parameter semantics. While it covers some aspects like authentication and strategy options, the gaps are substantial for a tool that processes files and returns structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains the 'strategy' parameter with options ('auto', 'hi_res', 'fast', 'ocr_only') and mentions that '_headers' is for 'header access while keeping @route multipart support'. However, it doesn't clarify the semantics of 'filepath' or 'apiKey', leaving two of the four required parameters undocumented. This partial coverage is insufficient given the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it's an 'Unstructured API-compatible endpoint' that 'returns elements in Unstructured JSON format' and 'accepts file upload or JSON body'. This specifies the verb (returns elements), resource (Unstructured JSON format), and input methods. However, it doesn't explicitly differentiate from sibling tools like 'mcpParse' or 'parseFileSecure', which might have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning it's a 'drop-in replacement for Unstructured.io' and specifying input methods (file upload or JSON body). However, it lacks explicit guidance on when to use this tool versus alternatives like 'mcpParse' or 'parseFileSecure', and doesn't mention prerequisites or exclusions, leaving usage somewhat ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pricing (Grade D)
pricing(∀α367. () -> α367) [pure]
| Name | Required | Description | Default |
|---|---|---|---|
| _ | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. The '[pure]' notation might suggest a pure function without side effects, but this is ambiguous technical notation rather than clear behavioral description. The description doesn't disclose whether this is a read/write operation, what resources it accesses, or any performance/rate limit considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While technically concise, this is under-specification rather than effective conciseness. The description is a single line of type notation that fails to communicate functional purpose. Every sentence should earn its place, but this 'sentence' provides no operational value to an AI agent trying to understand when and how to use the tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a pricing-related tool with no annotations, 0% schema coverage, no output schema, and numerous potentially related sibling tools, this description is completely inadequate. It provides no functional context, no parameter guidance, and no behavioral information that would help an agent understand this tool's role in the system.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description provides no information about the single required parameter '_'. With 0% schema description coverage and no parameter documentation in the description, the agent has no semantic understanding of what this parameter represents or how to use it. The type signature notation doesn't clarify parameter meaning or usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'pricing(∀α367. () -> α367) [pure]' is completely opaque and provides no meaningful information about what the tool does. It appears to be type signature notation rather than a functional description, failing to state any purpose or action. This is essentially missing/misleading information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With 27 sibling tools including 'estimate' and 'mcpEstimate' that might be related to pricing calculations, there's no indication of how this tool differs or when it should be selected. No usage context or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
requestHistory — grade B
List recent parse requests for a user. Returns up to 50 entries. Accepts Firebase ID token (dashboard) OR dp_ API key (programmatic). Dashboard sends Authorization: Bearer <firebase_token> with {args: [uid]}. API clients send {args: [apiKey]}. Uses Firestore structured query to filter by user_id server-side and order by timestamp descending. Only reads matching docs (not full scan).
| Name | Required | Description | Default |
|---|---|---|---|
| req | Yes | — | — |
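The two call shapes the description specifies can be sketched directly from its quoted text. The wrapping of the payload under a `req` parameter is inferred from the schema table; everything else follows the description:

```python
# Sketch of the two documented auth modes for requestHistory:
# dashboard sends Authorization: Bearer <firebase_token> with {args: [uid]},
# API clients send {args: [apiKey]}. The 'req' wrapper is an assumption.

def history_call(uid=None, api_key=None, firebase_token=None):
    """Build (headers, body) per the documented auth modes."""
    if firebase_token is not None:
        # Dashboard mode: Firebase ID token in the Authorization header.
        headers = {"Authorization": f"Bearer {firebase_token}"}
        body = {"req": {"args": [uid]}}
    else:
        # Programmatic mode: dp_ API key passed in args.
        headers = {}
        body = {"req": {"args": [api_key]}}
    return headers, body
```

That the description makes this reconstruction possible is exactly why it scores well on behavioral disclosure below.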
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adds valuable behavioral context: it discloses the return limit (up to 50 entries), authentication requirements (two methods with details), server-side filtering (by user_id, ordered by timestamp descending), and performance note (not a full scan). This goes beyond basic purpose, though it could mention error handling or permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose and key details. Each sentence adds value (e.g., return limit, auth methods, server behavior). It could be slightly more structured by separating auth instructions, but overall it avoids waste and is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (1 parameter with nested object, no output schema, no annotations), the description provides good context on behavior and usage but has gaps: it does not fully explain the parameter structure or potential return values. It compensates well for missing annotations but falls short of being complete for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter ('req') with 0% description coverage and no details in the schema. The description adds some semantics by mentioning 'args: [uid]' or 'args: [apiKey]' for different auth methods, but it does not fully explain the structure of 'req' (e.g., as an object with specific fields like 'uid' or 'apiKey'), leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List recent parse requests for a user.' It specifies the resource (parse requests) and scope (for a user, recent, up to 50 entries). However, it does not explicitly differentiate from sibling tools like 'requestReplay' or 'mcpParse', which may have overlapping domains, so it misses the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying authentication methods (Firebase ID token for dashboard, dp_ API key for programmatic) and the context of listing user-specific requests. However, it does not explicitly state when to use this tool versus alternatives like 'requestReplay' or provide clear exclusions, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
requestReplay — grade A
Retrieve a stored request/response pair for replay. Requires authentication: Firebase JWT or dp_ API key. The request must belong to the authenticated user (user_id match). Accepts optional outputFormat (blocks/markdown/html/a2ui) to re-render the stored blocks server-side using the ailang_parse pipeline.
| Name | Required | Description | Default |
|---|---|---|---|
| req | Yes | — | — |
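A minimal payload sketch, assuming a `requestId` field inside `req` (the field name is a guess, since the schema is undocumented); the allowed `outputFormat` values come verbatim from the tool description:

```python
# Hypothetical sketch of a replay request. 'requestId' is a guessed field
# name; the outputFormat values (blocks/markdown/html/a2ui) are from the
# tool's own description.

ALLOWED_FORMATS = {"blocks", "markdown", "html", "a2ui"}

def replay_request(request_id, output_format=None):
    """Build the 'req' payload, validating the optional outputFormat."""
    req = {"requestId": request_id}
    if output_format is not None:
        if output_format not in ALLOWED_FORMATS:
            raise ValueError(
                f"outputFormat must be one of {sorted(ALLOWED_FORMATS)}"
            )
        req["outputFormat"] = output_format
    return {"req": req}
```
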
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond the schema by detailing authentication needs (Firebase JWT or dp_ API key), user ownership constraints (user_id match), and server-side rendering capabilities (outputFormat options). This covers key behavioral traits like security and processing, though it could mention rate limits or error handling for a higher score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose and efficiently listing key details in four sentences. Each sentence adds value without redundancy, covering retrieval, authentication, ownership, and output options, making it concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (involves authentication and data retrieval), no annotations, no output schema, and low schema coverage, the description is moderately complete. It addresses authentication and usage constraints but lacks details on return values, error cases, or how the replay functions. For a tool with these gaps, it provides a basic foundation but could be more comprehensive to fully guide an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage and one required parameter ('req') of type object, which is undocumented. The description adds some semantic meaning by implying 'req' is used to identify the stored request, but it doesn't specify the structure or content of 'req' (e.g., what fields it contains). Since schema coverage is low, the description partially compensates but not fully, aligning with the baseline for incomplete parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieve a stored request/response pair for replay.' It specifies the verb ('retrieve') and resource ('stored request/response pair'), making the function unambiguous. However, it does not explicitly differentiate from sibling tools like 'requestHistory', which might be related, so it doesn't reach a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by mentioning authentication requirements and user ownership, which implies when to use it (i.e., for authenticated users retrieving their own data). However, it lacks explicit guidance on when to use this tool versus alternatives like 'requestHistory' or other siblings, and does not specify exclusions or prerequisites beyond authentication, so it's not fully comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revokeApiKey — grade C
Revoke an API key by keyId. Authenticates via either Firebase JWT or apiKey.
| Name | Required | Description | Default |
|---|---|---|---|
| req | Yes | — | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions authentication methods but fails to disclose critical behavioral traits: whether revocation is permanent, if it affects existing sessions, what permissions are required, rate limits, or what happens after revocation. For a destructive operation with zero annotation coverage, this is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two clear sentences. The first sentence states the core purpose, and the second adds authentication context. No wasted words, though it could benefit from more complete information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with no annotations, 0% schema coverage, no output schema, and complex nested parameters, the description is insufficient. It doesn't explain what 'req' should contain, what happens after revocation, error conditions, or return values. The authentication mention is helpful but doesn't compensate for major gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% with 1 required parameter ('req') that's a nested object. The description mentions 'keyId' but doesn't explain how to structure the 'req' object or what other parameters might be needed. It adds minimal value beyond the bare schema, failing to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Revoke') and target resource ('API key by keyId'), providing a specific verb+resource combination. However, it doesn't distinguish this tool from sibling 'rotateApiKey' or explain the relationship between these key management operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions authentication methods ('Authenticates via either Firebase JWT or apiKey') but provides no guidance on when to use this tool versus alternatives like 'rotateApiKey' or 'listApiKeys'. There's no indication of prerequisites, consequences, or appropriate contexts for revocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rotateApiKey — grade C
Rotate an API key: generate new key, revoke old one, preserve tier + usage.
| Name | Required | Description | Default |
|---|---|---|---|
| req | Yes | — | — |
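The rotation semantics the one-line description does state (generate new key, revoke old one, preserve tier + usage) can be sketched as server-side logic. The record fields and key format below are illustrative assumptions, not taken from the actual server:

```python
import secrets

# Hypothetical sketch of the rotation semantics the description states:
# a new key is issued, the old one revoked, and tier + usage carry over.
# The store layout and 'dp_' prefix are assumptions.

def rotate_key(store, old_key_id):
    """Replace old_key_id with a fresh key, preserving tier and usage."""
    old = store[old_key_id]
    old["revoked"] = True                    # old key stops working
    new_key_id = "dp_" + secrets.token_hex(16)
    store[new_key_id] = {
        "tier": old["tier"],                 # preserved
        "usage": old["usage"],               # preserved
        "revoked": False,
    }
    return new_key_id
```

Note what the sketch cannot answer from the description alone: whether the old key is invalidated immediately, and what the response returns — the gaps the review flags below.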
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but only states the basic operation. It doesn't disclose critical behavioral traits like authentication requirements, rate limits, whether the old key is immediately invalidated, or what the response contains. The mention of 'preserve tier + usage' adds some context but is insufficient for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It front-loads the core action ('Rotate an API key') and succinctly explains the key steps and preservation aspects.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations, 0% schema coverage, no output schema, and a nested object parameter, the description is incomplete. It covers the what but not the how, missing details on inputs, outputs, errors, and behavioral implications essential for safe use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 required parameter ('req') with 0% description coverage and no details in the schema. The description provides no information about what 'req' should contain, such as key identifiers or configuration options, failing to compensate for the schema's lack of documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('rotate', 'generate', 'revoke', 'preserve') and identifies the resource ('API key'). It distinguishes from the sibling 'revokeApiKey' by mentioning both generation and preservation aspects, though it doesn't explicitly contrast them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'revokeApiKey' or 'listApiKeys'. The description implies usage for key rotation but offers no context about prerequisites, timing, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.