bubblyphone-agents
Server Quality Checklist
- Disambiguation 4/5
Most tools have distinct purposes, but `get_call` and `get_call_transcript` overlap significantly—both retrieve call transcripts, creating ambiguity about which to use. `get_country_rates` vs `lookup_rate` are sufficiently differentiated by scope (country-wide vs specific number).
- Naming Consistency 5/5
Excellent consistency with strict snake_case and a verb_noun pattern throughout (e.g., `buy_phone_number`, `list_calls`, `inject_context`). All 20 tools follow the same grammatical structure without mixing conventions.
- Tool Count 5/5
Twenty tools is well-scoped for a telephony platform with AI agents, covering the number lifecycle (search/buy/manage), call operations (make/transfer/hangup), account management, and AI controls without bloat.
- Completeness 4/5
Strong coverage with minor gaps. The absence of a `delete_phone_number` or `release_phone_number` tool leaves the number lifecycle incomplete (no way to stop renting a number). Active call controls cover the essential operations (hangup, transfer, inject) but lack hold/resume functionality.
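The `get_call` / `get_call_transcript` overlap flagged under Disambiguation is the kind of ambiguity that explicit cross-references in descriptions can resolve. A sketch of suggested rewrites (these are illustrative wordings, not the server's actual descriptions):

```python
# Suggested (not actual) descriptions that cross-reference each other
# so an agent can choose between the two overlapping tools.
suggested = {
    "get_call": (
        "Get a call's status, duration, and cost. "
        "Use get_call_transcript instead when you only need the "
        "conversation text of a completed call."
    ),
    "get_call_transcript": (
        "Get the full conversation text and summary of a completed call. "
        "Use get_call instead for status, duration, or cost."
    ),
}

# Each description names its sibling, removing the selection ambiguity.
assert "get_call_transcript" in suggested["get_call"]
assert "get_call" in suggested["get_call_transcript"]
```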
Average 3.9/5 across all 20 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days.
- This server provides 20 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Tool Scores
- get_country_rates
Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as read-only, non-destructive, idempotent, and closed-world. The description adds the modifier 'all' implying comprehensive return of rates, but does not disclose additional behavioral traits like pagination, caching, rate limits, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
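The annotation fields referenced throughout this report (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) follow the MCP tool-annotation scheme. A sketch of how a description can layer behavioral detail on top of them; the added wording about pagination and caching is illustrative, not the server's documented behavior:

```python
# Hypothetical tool definition: annotations carry the machine-readable
# safety profile, while the description discloses behavior the
# annotations cannot express (pagination, caching - illustrative only).
tool = {
    "name": "get_country_rates",
    "description": (
        "Get all calling rate prefixes and prices for a specific country. "
        "Returns the full list in one response (no pagination); rates may "
        "be cached server-side."
    ),
    "annotations": {
        "readOnlyHint": True,       # no writes
        "destructiveHint": False,   # nothing is deleted or altered
        "idempotentHint": True,     # safe to retry
        "openWorldHint": False,     # closed-world: operates on known data
    },
}

assert tool["annotations"]["readOnlyHint"]
```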
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient 9-word sentence with no filler. It is appropriately front-loaded with the action and resource, delivering maximum information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a simple read-only tool with one parameter and no output schema, the description combined with annotations provides adequate context for invocation. A minor gap exists in not describing the return structure, though no output schema exists to require this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('Two-letter country code'), the schema fully documents the parameter. The description mentions 'specific country' which aligns with the parameter but adds no syntax details, format constraints, or usage examples beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get'), resource ('calling rate prefixes and prices'), and scope ('for a specific country'). However, it does not explicitly differentiate from the sibling tool 'lookup_rate', leaving ambiguity about when to use which rate-related tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'lookup_rate', nor does it mention prerequisites or constraints beyond the country_code parameter. It simply states what the tool does, not when to invoke it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
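A "use X instead of Y when Z" sentence of the kind this rubric rewards could be appended to the description. One possible wording (hypothetical, not the server's actual text):

```python
# Hypothetical usage-guidance sentence appended to the description.
description = (
    "Get all calling rate prefixes and prices for a specific country. "
    "Use lookup_rate instead when you need the rate for one specific "
    "phone number rather than country-wide prefixes."
)

assert "lookup_rate" in description  # names the alternative explicitly
```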
- get_phone_number
Behavior 3/5
Annotations already declare readOnlyHint=true and idempotentHint=true, establishing the safety profile. The description adds valuable scope context by specifying 'AI agent configuration' as part of the retrieved data, which explains the specific nature of the details returned. However, it omits additional behavioral traits like response format or caching behavior.
Conciseness 4/5
The description is a single, efficient sentence with the action front-loaded. While 'details' is slightly generic, the addition of 'AI agent configuration' provides necessary specificity. No redundant or wasted words.
Completeness 3/5
Given the low complexity (a single string parameter) and strong annotation coverage, the description adequately explains the retrieval scope. However, with no output schema provided, the description could better specify the structure or content of the returned details beyond mentioning 'AI agent configuration'.
Parameters 3/5
With 100% schema description coverage, the schema fully documents the phone_number_id parameter. The description adds no explicit parameter guidance, meeting the baseline expectation for high-coverage schemas. No additional syntax or format details are provided.
Purpose 5/5
The description uses the specific verb 'Get' with a clear resource scope ('details and AI agent configuration for a phone number'). It effectively distinguishes from siblings: vs list_phone_numbers (single retrieval vs list), vs buy_phone_number (retrieval vs acquisition), vs get_call (phone config vs call instance), and vs update_phone_number (read vs write).
Usage Guidelines 2/5
The description states what the tool retrieves but provides no explicit guidance on when to use it versus alternatives like list_phone_numbers or search_phone_numbers. There are no 'when-not-to-use' exclusions or prerequisites mentioned.
- list_calls
Behavior 3/5
Annotations already establish read-only, non-destructive, idempotent safety. The description adds scope ('your calls') and filtering context but omits pagination behavior, default sorting, result limits, and whether deleted calls are included.
Conciseness 5/5
A single front-loaded sentence (12 words) with zero redundancy. The core action ('List your calls') precedes the modifiers ('with optional filtering...'), making it immediately scannable.
Completeness 4/5
Adequate for a list operation given the rich annotations and complete parameter schema. However, lacking an output schema, it could benefit from noting whether results are paginated or if there is a maximum return limit.
Parameters 3/5
With 100% schema coverage, the baseline is met. The description categorizes the filters (direction, status, phone number, date range), providing a useful overview, but adds no semantic depth beyond the schema's individual parameter descriptions (e.g., no format examples or dependency notes).
Purpose 4/5
States the specific verb 'List' and resource 'calls' clearly, and notes the filtering capabilities. However, it does not explicitly distinguish this list operation from the sibling 'get_call' (singular retrieval), which could cause selection ambiguity.
Usage Guidelines 2/5
Mentions 'optional filtering', implying parameters are not required, but provides no explicit guidance on when to use this tool versus 'get_call' for single-call retrieval, nor any prerequisites or performance considerations.
- list_models
Behavior 3/5
Annotations already declare read-only, non-destructive, and idempotent properties. The description adds valuable context that the listing includes 'pricing per minute', indicating what data fields to expect. However, it omits details like rate limits, caching behavior, and the full structure of the returned model objects.
Conciseness 5/5
The single-sentence description is efficiently front-loaded with the verb 'List' and packs the essential information (resource type and the key data field 'pricing') into 11 words without redundancy.
Completeness 3/5
Given the absence of an output schema, the description partially compensates by mentioning 'pricing per minute', but remains incomplete regarding the other returned fields (model IDs, capabilities, supported languages). With zero input parameters and good annotations, the description meets minimum viability but could elaborate on return values.
Parameters 4/5
The input schema contains zero parameters, which per the rubric establishes a baseline score of 4. The description correctly implies no filtering is needed by stating 'List available' without qualifying parameters.
Purpose 4/5
The description clearly states the action ('List') and the specific resource ('available AI models for phone agents'), distinguishing it from sibling 'list' tools like list_calls or list_phone_numbers by specifying the AI domain. However, it does not explicitly contrast with siblings or clarify selection criteria.
Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites (e.g., whether certain account types can access all models) or workflow context (e.g., 'use this before make_call to select a model').
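The workflow hinted at above ('use this before make_call to select a model') could be sketched as follows; the field names and selection rule are assumptions, since no output schema is published:

```python
# Hypothetical model-selection step before placing a call; the
# 'id' and 'price_per_minute' fields are assumed, not documented.
def choose_cheapest_model(models: list) -> str:
    return min(models, key=lambda m: m["price_per_minute"])["id"]

models = [
    {"id": "model-a", "price_per_minute": 0.12},
    {"id": "model-b", "price_per_minute": 0.05},
]
assert choose_cheapest_model(models) == "model-b"
```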
- get_balance
Behavior 3/5
Annotations already indicate the operation is read-only, idempotent, and safe. The description adds valuable domain-specific context by specifying 'USD' as the currency unit and 'current' as implying a real-time balance, but omits details about return format, precision (cents vs dollars), and caching behavior.
Conciseness 5/5
A single sentence with no extraneous words. The information is front-loaded, with the specific resource (BubblyPhone credit balance) and unit (USD) immediately clear.
Completeness 4/5
For a simple, zero-parameter read operation with comprehensive safety annotations, the description is sufficient. It identifies the domain (BubblyPhone), resource type (credit balance), and unit (USD). Without an output schema, additional detail on the return type would be helpful but is not strictly necessary for correct invocation.
Parameters 4/5
The input schema contains zero parameters; according to the scoring guidelines, zero-parameter tools receive a baseline score of 4. The description appropriately does not fabricate parameter requirements.
Purpose 4/5
Clearly states that the tool retrieves the current credit balance for BubblyPhone and specifies the currency (USD), which distinguishes it from siblings like get_account or get_usage. However, it uses the generic verb 'Get' and lacks explicit negative constraints (e.g., 'does not return transaction history').
Usage Guidelines 2/5
Provides no guidance on when to use this tool versus alternatives like list_transactions (for history) or get_usage (for consumption metrics). Does not mention prerequisites or typical use cases (e.g., checking funds before make_call).
- transfer_call
Behavior 2/5
While the annotations declare destructiveHint=true and idempotentHint=false, the description adds no behavioral context about what the destruction entails (termination of the original call), the transfer type (blind vs attended), or the open-world implications of dialing external numbers.
Conciseness 5/5
A single 7-word sentence with zero redundancy; the action verb leads immediately and every word serves the definition.
Completeness 3/5
Adequate for a 2-parameter tool with full schema documentation, but insufficient for a destructive operation lacking an output schema; it omits success/failure semantics, the disposition of the original call, and the openWorldHint implications.
Parameters 3/5
With 100% schema coverage, the schema already documents 'call_id' and the E.164 format for 'to'. The description adds no parameter syntax, examples, or validation rules beyond the schema, meeting the baseline for high-coverage schemas.
Purpose 5/5
The description uses a specific verb ('Transfer') with a clear resource ('active call') and target ('another phone number'), unambiguously distinguishing it from siblings like make_call (new calls) and hangup_call (terminating calls).
Usage Guidelines 3/5
The phrase 'active call' implicitly signals the prerequisite state (the call must be in progress), but the description lacks explicit when-not-to-use guidance, alternatives for inactive calls, and warnings about transfer failure scenarios.
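The E.164 format required for the 'to' parameter can be validated before invoking the tool. A minimal sketch using the commonly cited E.164 shape (a leading '+', a nonzero first digit, at most 15 digits total):

```python
import re

# Common E.164 regex: leading '+', first digit 1-9, 15 digits max.
E164 = re.compile(r"^\+[1-9]\d{1,14}$")

assert E164.match("+14155550123")        # valid US-style number
assert not E164.match("14155550123")     # missing leading '+'
assert not E164.match("+0123456")        # leading zero not allowed
```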
- get_account
Behavior 3/5
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true. The description adds value by specifying what data is returned (name, email, company, balance), since no output schema exists, but does not disclose additional behavioral traits like rate limits, authentication scope, or caching behavior.
Conciseness 5/5
The description is a single, efficient sentence with no wasted words. It front-loads the action and resource, then appends the specific field list, earning its place with high information density.
Completeness 5/5
Given that no output schema exists, the description compensates appropriately by enumerating the returned fields (name, email, company, balance). Combined with comprehensive annotations covering the safety profile and a trivial parameter schema, the description provides complete context for invoking this simple read-only tool.
Parameters 4/5
The input schema has zero parameters. According to the scoring guidelines, 0 parameters establishes a baseline score of 4, as there are no parameter semantics to clarify beyond what the empty schema already communicates.
Purpose 4/5
The description clearly states the verb ('Get') and resource (BubblyPhone account information) and lists the specific return fields (name, email, company, balance). However, it does not explicitly distinguish itself from the sibling tool 'get_balance', which also returns balance information, potentially causing confusion about which to use.
Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus alternatives like 'get_balance'. It lacks prerequisites, conditions, and explicit comparisons to sibling tools that would help an agent select the correct tool for balance-checking scenarios.
- get_usage
Behavior 4/5
Annotations cover the safety profile (read-only, non-destructive, idempotent). The description adds valuable behavioral context by detailing exactly which statistics are returned (call counts, minutes, costs, number costs), compensating for the missing output schema.
Conciseness 5/5
A single sentence with zero waste, front-loaded with the action verb and enumerating the return values efficiently. Every word earns its place.
Completeness 4/5
Given the tool's simplicity (a single optional parameter) and rich annotations, the description is nearly complete. It effectively compensates for the lack of an output schema by listing the returned fields. It could mention the default behavior (current month), but the schema covers this.
Parameters 3/5
Schema coverage is 100% (the 'period' parameter is fully documented as 'Month in YYYY-MM format'). The description mentions 'for a time period', which aligns with the parameter but adds no additional syntax or semantic details beyond the schema. The baseline of 3 is appropriate for high schema coverage.
Purpose 4/5
A clear verb ('Get') plus resource ('usage statistics'), with the specific data points listed (call counts, minutes, costs). It implicitly distinguishes itself from siblings like get_balance (current funds) and list_calls (individual records) by specifying aggregated statistics, though it doesn't explicitly name alternatives.
Usage Guidelines 3/5
No explicit when-to-use or when-not-to-use guidance. An agent might confuse this with get_balance for cost inquiries or list_calls for call history. However, the specificity of 'inbound/outbound call counts, minutes, costs' implies this is for aggregated reporting over time periods.
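The 'period' parameter's 'Month in YYYY-MM format' and the schema's default-to-current-month behavior can be handled client-side. A sketch (the helper name is made up; the fallback mirrors the documented default):

```python
from datetime import datetime
from typing import Optional

def resolve_period(period: Optional[str] = None) -> str:
    """Validate a YYYY-MM period, defaulting to the current month."""
    if period is None:
        return datetime.now().strftime("%Y-%m")
    datetime.strptime(period, "%Y-%m")  # raises ValueError if malformed
    return period

assert resolve_period("2024-07") == "2024-07"
assert len(resolve_period()) == 7  # e.g. "2025-01"
```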
- hangup_call
Behavior 3/5
The word 'immediately' adds valuable timing context beyond what the annotations provide, indicating the action occurs without delay. However, given that the annotations already cover the destructive and idempotent nature of the operation, the description could have said more about side effects (e.g., whether the call record is preserved or marked as 'completed').
Conciseness 5/5
The single-sentence description is exceptionally concise with zero redundant words. It front-loads the critical action verb and efficiently communicates both the target resource and the execution speed.
Completeness 4/5
For a single-parameter destructive operation with complete schema coverage and comprehensive annotations, the description is appropriately complete. It could marginally improve by clarifying the terminal state of the call after hangup, but this is not required given the tool's simplicity.
Parameters 3/5
With 100% schema description coverage, the input schema already fully documents the 'call_id' parameter. The description does not add further semantic meaning about the parameter's format or constraints, meeting the baseline expectation for high-coverage schemas.
Purpose 5/5
The description uses a specific verb ('Terminate') and resource ('active call') that clearly distinguish this tool from siblings like 'transfer_call' (which moves calls) and 'make_call' (which creates them). It precisely conveys the core action of ending a call session.
Usage Guidelines 3/5
The phrase 'active call' implies this should only be used when a call is in progress, but there is no explicit guidance on when to choose this over 'transfer_call' or other alternatives. It lacks prerequisites and 'when-not-to-use' exclusions that would help an agent avoid hanging up calls that should be transferred instead.
- get_call
Behavior 4/5
Annotations cover the safety profile (read-only, idempotent), so the description adds value by disclosing the return payload contents (status, duration, cost, transcript). This is important given that the sibling get_call_transcript tool exists, clarifying that this tool returns the transcript among other data.
Conciseness 5/5
A single efficient sentence front-loaded with the verb 'Get'. The field enumeration (status, duration, cost, transcript) earns its place by setting specific behavioral expectations. Zero wasted words.
Completeness 4/5
For a simple single-parameter retrieval tool with good annotations, the description is nearly complete. It compensates for the lack of an output schema by listing the specific fields returned, though it could mention the error behavior when call_id is invalid.
Parameters 3/5
Schema coverage is 100%, with 'call_id' fully described as 'The call ID'. The description reinforces that this is for a 'specific call' but adds minimal semantic detail beyond the schema's clear parameter documentation. A baseline of 3 is appropriate when the schema carries the load.
Purpose 5/5
The description uses the specific verb 'Get' with the resource 'call' and enumerates the exact fields returned (status, duration, cost, transcript). It clearly distinguishes itself from the siblings list_calls (which lists multiple calls) and get_call_transcript (which is narrower in scope).
Usage Guidelines 3/5
The phrase 'specific call' implies usage when you have a call ID versus browsing with list_calls, but there is no explicit when-to-use guidance and no named alternatives. Sibling differentiation is implicit rather than explicit.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnly/idempotent status, the description adds valuable behavioral context: it specifies the return value ('full conversation text and summary') and the completion status requirement ('completed call') which annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose and constraint (completed call), second discloses return values. Perfectly front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter retrieval tool, the description adequately compensates for the missing output schema by describing return values ('text and summary'). Annotations provide safety context. Minor gap: doesn't indicate where to obtain the call_id.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters3/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'call_id' parameter, the schema carries the semantic burden. The description doesn't add parameter details (e.g., format, where to obtain it), but none are needed given the comprehensive schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') with a specific resource ('transcript') and distinguishes from sibling tools like 'get_call' (metadata) by specifying 'transcript' and 'full conversation text'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines3/5Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'for a completed call' implies a usage constraint (don't use on active calls), but there are no explicit when-not guidelines or named alternatives like 'use get_call for metadata instead'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnlyHint, destructiveHint) and idempotency. The description adds valuable behavioral context by specifying the tool 'Returns the per-minute cost,' explaining what the rate represents without contradicting the read-only annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first states the action and target, the second clarifies the return value. Perfectly front-loaded with no filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only lookup with one parameter and no output schema, the description is nearly complete. It covers the lookup purpose and return value semantics (per-minute cost), though specifying the currency format would make it fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters3/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (E.164 format with example), the schema fully documents the parameter. The description references 'phone number' but does not add semantic details beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Look up') with a specific resource ('calling rate for a specific phone number'), clearly distinguishing it from the sibling tool 'get_country_rates' which operates at the country level rather than on specific phone numbers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines3/5Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage by specifying 'specific phone number,' it lacks explicit guidance on when to use this tool versus the sibling 'get_country_rates' (e.g., 'Use this for individual number pricing; use get_country_rates for country-wide tariffs').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, non-destructive, idempotent), but the description adds valuable return value context: 'Returns numbers with pricing information.' This discloses what data to expect since no output schema is provided. It also clarifies these are inventory/available numbers, not owned ones.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: purpose (sentence 1), filtering capabilities (sentence 2), and return values (sentence 3). Zero redundancy—every sentence adds distinct information not found in the structured fields. Appropriately front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and present annotations, the description is appropriately scoped. It compensates for the missing output schema by mentioning 'pricing information' in the description. Could be improved by noting that all parameters are optional (0 required) or mentioning the limit default/max behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters3/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description mentions filtering by country, area code, and locality, conceptually grouping three parameters, but adds no semantic details for number_type, contains, or limit that aren't already in the schema. It meets but does not exceed the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool searches for 'available phone numbers to purchase,' using a specific verb and resource. It effectively distinguishes from siblings like buy_phone_number (purchase action), get_phone_number (retrieve specific owned number), and list_phone_numbers (list existing inventory) by emphasizing 'available' and 'to purchase.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines3/5Does the description explain when to use this tool, when not to, or what alternatives exist?
While the phrase 'to purchase' implies this tool precedes buy_phone_number, there is no explicit guidance on when to use this versus list_phone_numbers (for owned numbers) or get_phone_number (for specific number details). The workflow relationship with the purchase tool is left implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare destructiveHint and openWorldHint; the description adds critical behavioral context by specifying 'Costs money from your balance' (financial impact) and explaining the AI agent handling in streaming mode. Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose front-loaded, cost warning second (critical for destructive tool), streaming guidance third. No redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 11 parameters and two distinct operational modes (webhook vs streaming), the description covers cost implications and streaming setup but omits webhook mode behavior expectations, transfer functionality details, and success/failure outcomes (though no output schema exists to describe).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters3/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description adds minimal semantic value beyond schema, noting the relationship between mode/model/system_prompt for streaming, but does not elaborate on webhook_url behavior, voice selection criteria, or other parameter interactions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Initiate' with clear resource 'outbound call from a BubblyPhone number'. The terms 'outbound' and 'make' clearly distinguish this from siblings like get_call (retrieval), hangup_call (termination), and transfer_call (redirecting existing calls).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines4/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context for streaming mode usage ('provide model and system_prompt to have an AI agent handle the call') and warns about costs ('Costs money from your balance'). However, lacks explicit differentiation from transfer_call (use this for new calls vs. transfer for existing calls) or when to use webhook vs streaming.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations establish the read-only, non-destructive, idempotent nature of the call, the description adds valuable behavioral context about data characteristics: 'detailed event log' implies structured temporal data, and 'millisecond-precision timestamps' specifies granularity not indicated elsewhere.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first sentence front-loads the core capability and key technical detail (millisecond precision), second sentence provides the use case (debugging). Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read operation with strong annotations, the description is sufficiently complete. It explains what the tool returns (detailed event log) despite lacking an output schema. Minor gap: could briefly indicate expected return structure (array of events).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters3/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('The call ID'), the schema fully documents the single parameter. The description implicitly references the target resource ('for a call') but does not add syntax details, format constraints, or examples beyond the schema baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get'), resource ('detailed event log for a call'), and distinguishing characteristic ('millisecond-precision timestamps'). It effectively differentiates from siblings like get_call (likely basic metadata) and get_call_transcript (audio/text content) by specifying 'event log' format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines4/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context ('Useful for debugging call flow issues') that establishes when to invoke this tool. However, it lacks explicit guidance on when NOT to use it or named alternatives (e.g., distinguishing from get_call for basic retrieval).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover readOnly/destructive/idempotent hints, so the bar is lower. The description adds valuable domain context ('active streaming call', 'AI agent', 'mid-call') that clarifies scope. However, it omits behavioral details like whether the AI immediately processes the message or if injection triggers an immediate response.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first defines the action, second provides usage context and example. Information is front-loaded and every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with complete schema coverage and present annotations, the description is nearly complete. Minor gap: it doesn't describe the return value or immediate effect on the call (e.g., whether the AI verbally acknowledges the injection).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds semantic value through the example message ('The customer's order #1234 has shipped'), which clarifies that the 'message' parameter is intended for business context/updates rather than arbitrary text.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Send') and resource ('message or context into an active streaming call's AI agent') that clearly distinguishes it from siblings like hangup_call, transfer_call, or get_call. It precisely identifies the target domain (AI agent within active calls).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines4/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('give the AI agent new information mid-call') and provides a concrete example ('The customer's order #1234 has shipped'). However, it lacks explicit 'when not to use' guidance or mention of alternatives for non-streaming calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare read-only, safe, idempotent characteristics. The description adds valuable behavioral context by disclosing the specific transaction types returned (top-ups, charges, rentals, refunds), which defines the scope of data without contradicting the safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence with high information density. The em-dash construction front-loads the action and efficiently enumerates example transaction types with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a simple read-only list tool with no input parameters. The description compensates for the missing output schema by enumerating the transaction types returned. Could mention pagination or time range behavior, but the content is sufficient for tool selection given the rich annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the baseline score applies. The description appropriately focuses on the return value semantics rather than parameters, which is correct for a parameterless list operation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'List' with clear resource 'billing transactions'. The enumerated examples (top-ups, call charges, number rentals, refunds) effectively distinguish this from siblings like list_calls, get_usage, or get_balance by specifying the exact nature of the records returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines3/5Does the description explain when to use this tool, when not to, or what alternatives exist?
The transaction type examples (top-ups, refunds, etc.) provide implied usage context for when to retrieve billing history. However, it lacks explicit guidance contrasting this with get_balance (current balance) or get_usage (aggregates), and does not state when NOT to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare idempotentHint=true, destructiveHint=false, and readOnlyHint=false. The description adds valuable context that this configures an 'AI agent' (explaining the conceptual model), but does not disclose additional behavioral traits like rate limits, authentication requirements, or partial update semantics beyond what the schema and annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with no redundancy. The first states the core action, the second elaborates scope and use case. Every word earns its place; well front-loaded with the most critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich schema (100% coverage, 18 params) and comprehensive annotations, the description provides sufficient context by framing this as an AI agent configuration tool. No output schema exists, so return values need not be explained. Could be improved by noting that only phone_number_id is required, enabling partial updates.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds semantic value by grouping parameters into conceptual categories ('model, system prompt, voice, tools, webhook, and other settings' as AI agent configuration), helping the agent understand the purpose of the 18 parameters beyond the schema's flat list.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Update[s] phone number configuration' and clarifies this is specifically to 'set up or modify the AI agent.' It clearly distinguishes from sibling tools like buy_phone_number (acquisition) and get_phone_number (retrieval) by focusing on configuration/modification of existing resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines4/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear positive guidance ('Use this to set up or modify the AI agent') indicating when to invoke the tool. However, it lacks explicit negative guidance or named alternatives (e.g., 'do not use for purchasing numbers, use buy_phone_number instead').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds crucial cost structure details ('setup fee + monthly') beyond the annotations' destructiveHint=true, helping the agent understand the financial commitment. Does not contradict annotations (destructiveHint aligns with 'costs money'). Minor gap: doesn't clarify failure modes or whether purchase is reversible.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: action declaration, cost warning, and prerequisite instruction. Information is front-loaded and each sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial transaction tool with no output schema, the description adequately covers the workflow (search first), cost structure, and side effects. Slight gap: doesn't describe what the tool returns on success or specific error conditions (e.g., number already purchased).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters3/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description doesn't add parameter-specific guidance (e.g., webhook behavior, number_type implications) but the schema is self-sufficient. No additional semantic context provided beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Purchase' and clear resource 'phone number', immediately distinguishing this from sibling tools like search_phone_numbers (finding) or get_phone_number (retrieving). It establishes the commercial nature of the operation up front.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines5/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the prerequisite workflow ('Use search_phone_numbers first to find available numbers') and warns about financial implications ('costs money from your balance'), giving the agent clear signals on when to invoke this versus the search tool and when to avoid it (insufficient balance).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context by specifying that the listing includes 'status and configuration' data, informing the agent what attributes are returned without contradicting the safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 9 words with zero redundancy. Information is front-loaded with the action verb, and every word ('owned', 'status', 'configuration') earns its place by adding semantic value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description carries the burden of explaining return values. It partially satisfies this by mentioning 'status and configuration' fields, though it could explicitly confirm the return structure is a list/collection. Adequate for a zero-parameter read operation with good annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters, establishing a baseline of 4. The description appropriately requires no parameter clarification since no arguments are needed to invoke this tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'List' + resource 'owned phone numbers' + scope 'status and configuration'. The term 'owned' effectively distinguishes from sibling 'search_phone_numbers' (which finds available numbers) and 'get_phone_number' (which retrieves a specific number).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines4/5Does the description explain when to use this tool, when not to, or what alternatives exist?
The word 'owned' provides clear contextual guidance that this tool is for viewing existing inventory rather than acquiring new numbers (buy_phone_number) or searching available inventory (search_phone_numbers). However, it does not explicitly name alternatives or state when-not-to-use conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Score Badge
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository:
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) of 1–5, a weighted average of six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
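The formula above can be sketched in a few lines. This is a minimal illustration using the weights and tier cutoffs stated on this page; the per-dimension sample scores and the coherence value are made up for the example, not taken from any real server.

```python
# Dimension weights for the Tool Definition Quality Score (TDQS), per this page.
DIM_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(dims):
    """Weighted mean of the six 1-5 dimension scores for one tool."""
    return sum(DIM_WEIGHTS[k] * v for k, v in dims.items())

def server_quality(tool_scores, coherence):
    """Overall score: 70% Tool Definition Quality, 30% Server Coherence.
    Definition quality blends 60% mean TDQS with 40% minimum TDQS,
    so one poorly described tool drags the whole server down."""
    definition_quality = 0.6 * (sum(tool_scores) / len(tool_scores)) \
                       + 0.4 * min(tool_scores)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """Map a 1-5 overall score to the letter tiers listed above."""
    for cutoff, label in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return label
    return "F"

# Hypothetical two-tool server: one well-described tool, one weaker one.
tools = [
    tdqs({"purpose": 5, "usage": 4, "behavior": 4,
          "parameters": 3, "conciseness": 5, "completeness": 4}),
    tdqs({"purpose": 3, "usage": 2, "behavior": 3,
          "parameters": 3, "conciseness": 4, "completeness": 3}),
]
overall = server_quality(tools, coherence=4.5)
print(round(overall, 2), tier(overall))  # → 3.65 A
```

Note how the 40% weight on the minimum TDQS works: even with a coherence of 4.5, the single 2.9-scoring tool pulls the definition-quality component down to about 3.29.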
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/JobXDubai/mcp-server'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.