Listing Bureau - Amazon Organic Ranking
Server Quality Checklist
- Disambiguation 5/5
Tools are cleanly separated by resource domain (account, orders, projects, schedule, wallet) with distinct actions per resource. No overlapping functionality—schedule_quick_set vs schedule_set serve different use cases, and project getters target different data types (metadata vs stats).
- Naming Consistency 5/5
Strictly follows the lb_{resource}_{action} snake_case convention throughout. Action verbs are consistent (get/list/create/update/archive) with clear modifiers (quick_set, report_issue, topup) that describe behavior without breaking the pattern.
- Tool Count 4/5
21 tools is slightly above the ideal 3-15 range but justified by the domain complexity spanning account management, project CRUD, scheduling (three distinct operations), orders, and wallet/billing. Each tool earns its place with no redundancy.
- Completeness 4/5
Covers full lifecycles: account management, project CRUD with archival, flexible scheduling (uniform and per-day), order tracking with issue reporting, and wallet operations. Minor gap: project updates are limited to pause/resume (ASIN/keyword cannot be modified after creation), and orders lack cancellation, but these may reflect business constraints.
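The naming convention praised above can be sketched as a simple check. This is an illustrative sketch: the pattern and the tool list (drawn only from names this review cites, not the server's full set of 21) are assumptions.

```python
import re

# Hedged sketch: verify the lb_{resource}_{action} snake_case convention
# against tool names mentioned in this review. The regex is an assumed
# formalization of the convention, not the server's own validator.
NAME_PATTERN = re.compile(r"^lb_[a-z]+(_[a-z]+)*$")

tool_names = [
    "lb_orders_get", "lb_orders_list", "lb_orders_report_issue",
    "lb_projects_get", "lb_projects_get_stats", "lb_projects_list",
    "lb_schedule_set", "lb_schedule_quick_set",
    "lb_wallet_get_balance", "lb_wallet_get_transactions", "lb_wallet_topup",
]

# Every name starts with the lb_ prefix and uses only lowercase snake_case.
assert all(NAME_PATTERN.match(name) for name in tool_names)
```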
Average 4/5 across 21 of 21 tools scored. Lowest: 3.3/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.13
No tool usage detected in the last 30 days.
This repository includes a glama.json configuration file.
- This server provides 21 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Tool Scores
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotation declares idempotentHint=false, indicating that multiple calls create multiple entries, but the description doesn't explain this behavior or what happens to the submitted feedback (e.g., support ticket creation, email notification). The character limit it mentions duplicates the schema constraints rather than adding behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence of seven words with a parenthetical constraint. It is appropriately front-loaded with the action verb and contains no redundant or wasteful language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool without an output schema, the description covers the basic action and constraints. However, it lacks context about the submission outcome (e.g., confirmation message, processing time) or side effects, which would help an agent understand the complete operation cycle.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description repeats the character-limit constraint (10-5000) found in both the schema description and the minLength/maxLength fields, adding no semantic meaning about content format, examples, or validation beyond the structured data.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Submit') and clearly identifies the resource types (feedback, feature requests, suggestions). While it doesn't explicitly reference siblings, the purpose is distinct enough from the operational siblings (account/orders/projects/wallet tools) to avoid confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it state prerequisites or conditions for use. While there are no direct sibling alternatives for feedback submission, the lack of any contextual guidance (e.g., 'Use this to report bugs') leaves usage criteria undefined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
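The Behavior gap flagged above (an idempotentHint annotation with no prose explaining the consequence) could be closed by pairing the structured hint with an explicit disclosure. A minimal sketch; the tool name and description text here are illustrative, not the server's actual definition.

```python
# Hedged sketch: pair the idempotentHint=false annotation with a
# description that discloses the side effect, so an agent knows that
# retrying the call duplicates the submission.
feedback_tool = {
    "name": "lb_feedback_submit",  # hypothetical name for illustration
    "description": (
        "Submit feedback, a feature request, or a suggestion (10-5000 chars). "
        "Each call creates a new support ticket; repeated calls create duplicates."
    ),
    "annotations": {"idempotentHint": False},
}

# The prose now explains what the annotation only hints at.
assert feedback_tool["annotations"]["idempotentHint"] is False
assert "duplicates" in feedback_tool["description"]
```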
- Behavior 3/5
Annotations already declare readOnlyHint=true, so the safety profile is covered. The description adds domain context ('Listing Bureau') and a depth hint ('detailed info'), but omits error behavior, authentication requirements, and rate limits.
- Conciseness 5/5
Single sentence with no waste. Information is front-loaded, with the verb and resource immediately clear.
- Completeness 4/5
Adequate for a simple single-parameter read operation with readOnly annotations. No output schema exists, so explaining return values isn't expected, though error-case handling could be mentioned.
- Parameters 3/5
Schema description coverage is 100% (order_id is documented as 'Order identifier'). The description doesn't add parameter syntax or validation rules, meeting the baseline for well-documented schemas.
- Purpose 4/5
States a specific verb ('Get') and resource ('Listing Bureau order'). The word 'specific' distinguishes it from the sibling lb_orders_list, though it doesn't explicitly name that sibling or clarify what 'detailed info' encompasses.
- Usage Guidelines 2/5
Provides no explicit guidance on when to use this versus lb_orders_list or lb_orders_report_issue. While 'specific' implies use when an order_id is known, it doesn't state prerequisites or when to prefer listing over retrieval.
- Behavior 2/5
Annotations declare destructiveHint=true, indicating state mutation, but the description only states 'Generate... URL', which understates the write operation (creating a checkout session). No explanation is given for why this is destructive or what side effects occur.
- Conciseness 5/5
Two sentences with zero waste. Front-loaded with the core action (generate a Stripe checkout URL) followed by the return value description. Every word earns its place.
- Completeness 3/5
Adequate for a single-parameter tool, but gaps remain regarding the destructive annotation and the post-payment flow (e.g., whether the balance updates automatically after checkout completes). Without an output schema, the description adequately covers the return value.
- Parameters 3/5
With 100% schema description coverage ('Top-up amount in USD...'), the schema fully documents the parameter. The description adds no additional semantics about the amount parameter beyond what the schema already provides, meeting the baseline.
- Purpose 5/5
The specific verb 'Generate' combines with the explicit resources 'Stripe checkout URL' and 'wallet balance'. The scope (topping up vs. viewing) clearly distinguishes it from siblings lb_wallet_get_balance and lb_wallet_get_transactions.
- Usage Guidelines 3/5
The phrase 'to top up wallet balance' provides implied usage context (use when adding funds), but there is no explicit when/when-not guidance or comparison to sibling tools like lb_wallet_get_balance.
- Behavior 4/5
Annotations declare readOnlyHint=true, confirming safe read behavior. The description adds valuable behavioral context by enumerating the specific data fields returned (email, name, status, balance), compensating for the missing output schema. It does not mention rate limits or caching, but the core return structure is disclosed.
- Conciseness 5/5
Single sentence with no wasted words. The parenthetical field list efficiently conveys the return payload. Information is front-loaded and dense.
- Completeness 4/5
Given that no output schema exists, the description adequately compensates by listing the four key fields returned. For a simple zero-parameter read operation with a readOnlyHint annotation, this covers the essential behavioral contract without needing exhaustive detail.
- Parameters 4/5
Zero parameters exist in the input schema, establishing a baseline of 4. No parameter documentation is required or provided.
- Purpose 4/5
States the specific verb 'Get' and resource 'Listing Bureau account info', and lists the exact fields returned (email, name, account status, wallet balance). Distinguishes from siblings like lb_account_get_subscription and lb_account_update_profile by scope, though it doesn't clarify the overlap with lb_wallet_get_balance.
- Usage Guidelines 2/5
Provides no guidance on when to use this tool versus specialized siblings like lb_wallet_get_balance (which also returns wallet data) or lb_account_get_subscription. No prerequisites or conditional usage advice is included.
- Behavior 3/5
Annotations declare readOnlyHint=true, so the safety profile is covered. The description adds value by specifying what 'detailed info' entails (schedule, services, SERP data). However, it omits error handling (e.g., invalid ui_id), rate limits, and cache behavior.
- Conciseness 5/5
Single, dense sentence with zero redundancy. Front-loaded action ('Get detailed info'), followed by scope ('for a specific...') and a precise content enumeration ('including...'). Every clause earns its place.
- Completeness 4/5
Adequate for a simple read-only getter with one parameter. The description compensates for the missing output schema by listing return content types. Could improve by noting error cases (e.g., project not found) or authentication requirements.
- Parameters 3/5
The input schema has 100% description coverage ('Project unique identifier'), establishing the baseline of 3. The description implicitly references the parameter via 'specific... project' but adds no syntax, format details, or examples beyond the schema.
- Purpose 5/5
Excellent specificity: uses the concrete verb 'Get', identifies the resource as 'Listing Bureau project', and distinguishes from siblings via 'specific' (contrasting with lb_projects_list) and the detailed content list 'schedule, services, and SERP data' (contrasting with lb_projects_get_stats).
- Usage Guidelines 3/5
Provides implied usage through the word 'specific', suggesting single-project retrieval vs. bulk operations, but lacks explicit guidance on when to use this versus lb_projects_list or lb_projects_get_stats. No when-not or prerequisite guidance.
- Behavior 3/5
Annotations declare idempotency (idempotentHint: true), and the description adds the validation constraint that at least one field must be provided, but it does not disclose other behavioral traits such as permission requirements or what constitutes a successful response.
- Conciseness 5/5
Two front-loaded sentences with zero waste: the first establishes the operation and target fields, the second states the critical constraint. Every word earns its place.
- Completeness 4/5
Given the simple 3-parameter input schema with full coverage and the idempotency annotation, the description is nearly complete. It could be improved by mentioning the return value or success behavior, but it adequately covers the essential invocation constraints.
- Parameters 4/5
While the schema has 100% coverage with basic descriptions, the description adds crucial semantic information that the schema lacks: the validation rule that at least one of the three optional parameters must be provided.
- Purpose 5/5
The description states a specific verb ('Update') and resource ('Listing Bureau account profile') and explicitly lists the fields being operated on, clearly distinguishing this from sibling read operations like lb_account_get.
- Usage Guidelines 3/5
The description provides a critical usage constraint ('At least one field required'), which prevents invalid invocations, but does not explicitly address when to use this tool versus sibling alternatives like lb_account_get.
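The "at least one field required" rule credited above can also be encoded in the input schema itself rather than only stated in prose. A minimal sketch, assuming hypothetical field names (name, email, company); the server's actual profile fields may differ.

```python
# Hedged sketch: express "at least one field required" as a JSON Schema
# anyOf constraint, so validators reject empty updates before they reach
# the server. Field names are illustrative assumptions.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string"},
        "company": {"type": "string"},
    },
    "anyOf": [
        {"required": ["name"]},
        {"required": ["email"]},
        {"required": ["company"]},
    ],
}

def at_least_one(payload: dict) -> bool:
    # Mirrors the anyOf constraint without pulling in a validator library.
    return any(key in payload for key in schema["properties"])

assert at_least_one({"email": "user@example.com"})
assert not at_least_one({})
```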
- Behavior 4/5
Annotations declare readOnlyHint=true; the description adds valuable behavioral context beyond this: default sort order (newest first), the pagination model, and the specific response fields included (order status, campaign, region, issue report status). No contradictions with the annotations.
- Conciseness 5/5
Two sentences with zero waste: the first establishes the operation and key behaviors (pagination, sorting), the second previews the response payload. Information is front-loaded and every clause earns its place.
- Completeness 4/5
For a simple 2-parameter list operation, the description adequately compensates for the missing output schema by enumerating returned fields (status, campaign, region, issue report status). Combined with the readOnly annotation and full schema coverage, it provides sufficient context for invocation.
- Parameters 3/5
The schema has 100% description coverage (default values, min/max constraints), so the baseline applies. The description mentions 'paginated' conceptually but doesn't add parameter-specific semantics (e.g., cursor vs. offset pagination) beyond what the schema already documents.
- Purpose 5/5
Clear verb 'List' + resource 'Listing Bureau orders' with scope indicators 'paginated, newest first' that distinguish it from siblings lb_orders_get (singular retrieval) and lb_orders_report_issue (mutation). The plural 'orders' and pagination signal bulk listing vs. individual fetch.
- Usage Guidelines 3/5
Provides implied usage through 'paginated' and the response field preview (status, campaign, region), indicating when order overviews are needed. However, it lacks explicit guidance on when to use lb_orders_get for specific order retrieval instead, and says nothing about filtering limitations.
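The pagination defaults and min/max constraints mentioned above can be sketched as offset-style parameters with schema-driven clamping. Parameter names (page, per_page) and bounds here are assumptions, not the server's documented values.

```python
# Hedged sketch: offset pagination parameters with defaults and bounds,
# matching the "paginated, newest first" behavior the description promises.
# The names and limits are illustrative assumptions.
PAGINATION = {
    "page": {"type": "integer", "minimum": 1, "default": 1},
    "per_page": {"type": "integer", "minimum": 1, "maximum": 100, "default": 20},
}

def clamp(value, spec):
    # Apply the schema's default, then its minimum/maximum bounds.
    if value is None:
        return spec["default"]
    value = max(spec["minimum"], value)
    return min(spec.get("maximum", value), value)

assert clamp(None, PAGINATION["per_page"]) == 20   # default applied
assert clamp(500, PAGINATION["per_page"]) == 100   # capped at maximum
assert clamp(0, PAGINATION["page"]) == 1           # raised to minimum
```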
- Behavior 4/5
While annotations declare readOnlyHint=true, the description adds critical behavioral context: the specific data domains returned (service execution, SERP, ARA, etc.) and performance characteristics ('14+ backend DB queries', '~1-2s' latency) that help agents understand operational impact. No contradictions with the annotations.
- Conciseness 5/5
Two sentences total: the first establishes purpose and data scope, the second provides a performance warning. Information is front-loaded, with no redundancy; every clause earns its place on functionality or operational characteristics.
- Completeness 4/5
Given that no output schema exists, the description compensates effectively by listing the 7+ specific data categories returned. The performance disclosure is essential for a potentially slow operation. Minor gap: it could mention response format or pagination, but the data-type enumeration provides substantial value.
- Parameters 3/5
Schema description coverage is 100% (ui_id: 'Project unique identifier', days: 'Number of days of history'), so the schema fully documents the parameters. The description provides contextual alignment ('daily stats' implies time-series data matching the days parameter) but does not add syntax or format details beyond the schema.
- Purpose 5/5
The description uses the specific verb 'Get' + resource 'daily stats' and explicitly enumerates the data categories returned (SFB, ATC, PGV, SERP rankings, ARA analytics, Brand Referral, Search Query), clearly distinguishing this from the sibling lb_projects_get, which likely retrieves project metadata rather than analytics.
- Usage Guidelines 3/5
The performance warning ('may be slow ~1-2s') provides implicit guidance about latency expectations, and the enumeration of data types suggests when to use the tool (when those specific metrics are needed). However, it lacks explicit contrast with lb_projects_get or guidance on when NOT to use this versus other project tools.
- Behavior 4/5
Annotations declare readOnlyHint=true; the description adds valuable behavioral context about the return payload (ASIN, keyword, active status, service volumes) since no output schema exists. Does not contradict the annotations.
- Conciseness 5/5
Two sentences, zero waste. The first establishes the operation and filtering capability; the second discloses the return fields. Perfectly sized and front-loaded.
- Completeness 4/5
For a 2-parameter read-only list tool without an output schema, the description adequately compensates by listing return fields. With the readOnly annotation covering the safety profile and the schema covering parameters, the description provides sufficient contextual completeness.
- Parameters 3/5
The schema has 100% description coverage ('Filter by Amazon region code', 'Filter by active status'). The description references 'optional filters' generally but does not add parameter-specific semantics beyond what the schema already provides. The baseline of 3 is appropriate given schema completeness.
- Purpose 5/5
The description uses the specific verb 'List' with resource 'Listing Bureau Amazon projects' and clearly distinguishes from siblings like lb_projects_get (single retrieval), lb_projects_create, lb_projects_update, and lb_projects_archive through the plural 'projects' and the list operation.
- Usage Guidelines 3/5
Mentions 'optional filters', implying when to use it (filtering needed), but lacks explicit when-not-to-use guidance or direct references to sibling alternatives like lb_projects_get for single-project retrieval versus this list operation.
- Behavior 4/5
Annotations declare readOnlyHint=true, so the description appropriately focuses on return value structure ('per-day service volumes') and behavioral constraints (the US-region limitation for SFB). Adds valuable context about the data fields returned (atc, sfb, pgv) without contradicting the safety annotations.
- Conciseness 5/5
Three efficient statements: purpose declaration, return value specification, and regional constraint. No redundant text. Information is front-loaded, with the core action in the first sentence.
- Completeness 4/5
Adequate for a single-parameter read operation. Despite the lack of an output schema, the description explains the return structure (per-day volumes with specific field codes). The regional limitation is documented. Domain terminology (atc, sfb, pgv) is specific but appropriate for the context.
- Parameters 3/5
With 100% schema description coverage ('Project unique identifier'), the schema carries the burden. The description mentions 'Listing Bureau project', providing domain context for the ui_id parameter, but does not elaborate on the identifier's format, validation, or acquisition beyond the schema documentation.
- Purpose 5/5
The description uses the specific verb 'Get' + resource 'schedule' + scope 'Listing Bureau project'. The naming convention (get vs. siblings lb_schedule_set and lb_schedule_quick_set) clearly distinguishes this as a read operation versus write operations.
- Usage Guidelines 3/5
Provides one usage constraint ('SFB is only available for US-region projects'), but lacks explicit guidance on when to use this versus the sibling tools lb_schedule_set or lb_schedule_quick_set. The distinction is implied by 'Get' versus 'Set' naming but not stated explicitly.
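The regional constraint disclosed above (SFB only for US-region projects) lends itself to a small validation sketch. The field names (atc, sfb, pgv) come from the description itself; the validation shape and region codes are assumptions.

```python
# Hedged sketch: enforce the documented constraint that SFB service
# volumes are only valid for US-region projects. The function signature
# and region-code convention are illustrative assumptions.
def validate_day(region: str, volumes: dict) -> bool:
    if volumes.get("sfb", 0) > 0 and region != "US":
        return False  # SFB is only available for US-region projects
    return True

assert validate_day("US", {"atc": 5, "sfb": 3, "pgv": 10})   # US may schedule SFB
assert validate_day("DE", {"atc": 5, "pgv": 10})             # non-US without SFB is fine
assert not validate_day("DE", {"sfb": 1})                    # non-US SFB rejected
```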
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the readOnlyHint annotation, the description adds valuable behavioral context: it confirms pagination behavior and specifically warns about legacy entries exceeding per_page count on the last page—a critical edge case for consumption logic.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose and pagination, second delivers a specific behavioral warning about legacy entries. Front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a read-only list operation with optional pagination parameters. The description covers the primary behavioral quirk (legacy entries), and with readOnlyHint annotated and high schema coverage, no significant gaps remain despite missing output schema.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameters are fully documented in the schema itself. The description mentions 'paginated' which contextually supports the parameters but adds no specific semantic details beyond the schema definitions.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with clear resource 'Listing Bureau wallet transaction history', and distinguishes itself from siblings like lb_wallet_get_balance and lb_wallet_topup through the 'transaction history' scope.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'paginated' keyword and the note 'Last page may include legacy entries' provide implicit usage guidance for handling large datasets, but the description lacks explicit when-to-use guidance or differentiation from lb_wallet_get_balance.
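The legacy-entries quirk called out above has a direct consequence for consumption logic: a client must not assume the last page honors per_page. A minimal Python sketch, with `fetch_page` standing in for the real lb_wallet_get_transactions call and the `page`/`per_page` parameter names assumed for illustration:

```python
def collect_transactions(fetch_page, per_page=50):
    """Accumulate all wallet transactions across pages.

    The last page may contain MORE than per_page items (legacy entries),
    so stop only on an empty page, never on len(page) < per_page.
    """
    collected, page_num = [], 1
    while True:
        page = fetch_page(page=page_num, per_page=per_page)
        if not page:
            break
        collected.extend(page)
        page_num += 1
    return collected

# Demo backend: two full pages plus a legacy-inflated last page of 60 items.
_pages = [list(range(50)), list(range(50, 100)), list(range(100, 160))]

def fake_fetch(page, per_page):
    return _pages[page - 1] if page <= len(_pages) else []
```

The loop pays one extra (empty) request in exchange for never truncating a legacy-inflated final page.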
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true, the description adds valuable behavioral context: 'Returns empty object if no plan is active.' This specific edge-case behavior helps the agent interpret responses correctly without trial and error.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence front-loads the core purpose; second sentence provides the critical edge-case behavior. No redundant or filler content.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple getter with no output schema. Covers the success case (get rates) and edge case (empty object). Could be improved by briefly describing the structure of returned rate data (e.g., whether it's a tiered pricing object or flat rate), but sufficient for tool selection.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters with 100% schema coverage (empty object). Description correctly focuses on behavior rather than inventing parameter documentation. Baseline 4 achieved as no parameters require semantic explanation.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' plus precise resource 'Listing Bureau service pricing rates' clearly defines scope. Distinct from sibling lb_estimate_cost (which calculates specific costs) and lb_account_get_subscription (which gets subscription status) by focusing on base pricing rates.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lacks explicit when-to-use guidance contrasting with lb_estimate_cost. Agent must infer this retrieves rate cards/pricing tiers while lb_estimate_cost calculates projected costs. No prerequisites or conditions mentioned beyond the no-plan edge case.
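Because the description says an empty object means no plan is active, a consumer should branch on that case rather than treat it as an error. A hedged sketch; the rate field names in the demo are invented for illustration, not taken from the real schema:

```python
def describe_rates(response):
    """Interpret a lb_account_get_service_rates response.

    The tool is documented to return an empty object when no plan is
    active, so {} means 'no plan', not a failure.
    """
    if not response:
        return "no active plan"
    return ", ".join(f"{key}={value}" for key, value in sorted(response.items()))
```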
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, and description confirms with 'Get' operation. Description adds valuable behavioral context by disclosing exactly what data fields are retrieved (plan label, fee, discount, wallet usage) beyond the annotation, though it omits caching or rate limit details.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action verb. Parenthetical enumeration of return fields is information-dense with zero waste. Every word earns its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple parameter-less read operation, the description adequately compensates for the missing output schema by enumerating return values (plan, fee, discount, wallet usage). Minor gap: it doesn't mention whether this returns the current subscription only or historical data.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, meeting baseline score of 4 per rubric. Schema is empty object with 100% coverage (vacuously true), requiring no additional parameter semantics from description.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' + clear resource 'Listing Bureau subscription info' + explicit scope differentiation via parenthetical list of returned fields (plan label, fee, discount, wallet usage), clearly distinguishing from sibling lb_account_get (general account) and lb_wallet_get_balance (wallet only).
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context through specificity of returned data (subscription financial details vs general account info), but lacks explicit when-to-use guidance or named alternatives contrasting with lb_account_get or lb_account_get_service_rates.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations only indicate non-idempotency (idempotentHint: false). The description adds crucial behavioral context: the 5-report limit per order and specific retry prohibitions on limit errors. This discloses rate-limiting behavior not evident from the annotation alone.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient two-sentence structure. The first sentence states purpose; the second delivers critical operational constraints without waste. Information is front-loaded and every clause earns its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter mutation tool with no output schema, the description adequately covers purpose, constraints, and error-handling behavior. Slight deduction for not describing success state or return value implications, though this is partially mitigated by the straightforward nature of the operation.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description repeats the character limit constraint ('1-1000 characters') already defined in the schema's minLength/maxLength, adding no new semantic meaning beyond the structured definition.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Report') and clear resource ('an issue with a Listing Bureau order'), precisely defining the tool's function. It effectively distinguishes itself from sibling tools like lb_orders_get (retrieval) and lb_feedback_submit (general feedback vs. specific issues).
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit operational constraints: 'Maximum 5 issue reports per order' defines a usage limit, and 'do not retry if you receive a limit error' gives clear when-not guidance. Lacks explicit differentiation from lb_feedback_submit, preventing a perfect score.
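The 5-report cap and the no-retry rule described above lend themselves to a client-side guard. An illustrative Python sketch; `send`, `IssueLimitError`, and the bookkeeping dict are hypothetical stand-ins, and only the cap and the no-retry rule come from the tool description:

```python
MAX_REPORTS_PER_ORDER = 5  # limit stated in the tool description

class IssueLimitError(Exception):
    """Stand-in for the server's limit error, which must NOT be retried."""

def report_issue_safely(send, order_id, text, reports_sent):
    """Guarded wrapper around a hypothetical lb_orders_report_issue call.

    Skips the call once the per-order cap is reached, and records a hard
    stop (rather than retrying) if the server returns a limit error.
    """
    if reports_sent.get(order_id, 0) >= MAX_REPORTS_PER_ORDER:
        return False  # cap reached: don't even attempt the call
    try:
        send(order_id, text)
    except IssueLimitError:
        reports_sent[order_id] = MAX_REPORTS_PER_ORDER  # never retry
        return False
    reports_sent[order_id] = reports_sent.get(order_id, 0) + 1
    return True
```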
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations declare destructiveHint: true, but the description adds essential behavioral context by clarifying this is a 'soft delete' rather than permanent destruction, and documents the specific recovery mechanism (recreation with same ASIN+keyword+region). This prevents the agent from assuming irreversible data loss.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first defines the action and scope, second provides critical recovery information. Every word earns its place. Front-loaded with the verb and resource identifier.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter destructive operation without output schema, the description adequately covers the action, soft-delete nature, and recovery path. Could marginally improve by mentioning success indicator or immediate side effects (e.g., visibility changes), but sufficient for invocation.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'ui_id' parameter ('Project unique identifier'). The description does not add parameter-specific semantics, but none are needed given the complete schema documentation. Baseline score applies.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Archive (soft delete)') and identifies the exact resource ('Listing Bureau Amazon project'). It clearly distinguishes from sibling tools like lb_projects_update or lb_projects_list by specifying the archive/soft-delete action.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it doesn't explicitly name alternatives, it provides crucial usage context by explaining that archived projects 'can be reactivated by creating a new project with the same ASIN+keyword+region.' This recovery guidance implicitly indicates when to use the tool (when temporary removal is acceptable) versus permanent deletion.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds crucial behavioral context beyond the idempotentHint annotation: explains the upsert-like reactivation logic, specific 201 return code for that case, and validation constraints (ASIN validity). Does not clarify behavior when project is already active or side effects like charging.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste. Front-loaded with primary action, followed by critical reactivation behavior, then validation rules. Every sentence earns its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 4 parameters and no output schema, description adequately covers the unique reactivation quirk and key validation rules. Missing explicit handling of active-project conflicts and return value structure, but sufficient for safe invocation given the schema.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds 'ASIN must be a valid Amazon ASIN' (validation semantics) and reinforces keyword length constraints, but adds no context for 'region' or 'expected_retail_price' parameters.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Create') + resource ('Listing Bureau Amazon project') clearly stated. Distinguishes from sibling tools (archive/get/update/list) by defining the creation operation and unique reactivation behavior.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context for when reactivation occurs vs. new creation ('If a project with the same ASIN+keyword+region was previously archived...'). However, lacks explicit guidance on when NOT to use (e.g., active project conflicts) or explicit sibling alternatives.
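The archive-then-recreate lifecycle described for lb_projects_archive and lb_projects_create can be made concrete with a toy in-memory model. This is purely illustrative: the key fields (ASIN+keyword+region), the soft-delete semantics, and the reactivation case (which the server reportedly signals with HTTP 201) come from the descriptions, while everything else is invented:

```python
class ProjectsModel:
    """Toy model of the documented lifecycle: archiving is a soft delete,
    and creating a project with the same ASIN+keyword+region reactivates
    the archived one instead of making a duplicate."""

    def __init__(self):
        self._active = {}  # (asin, keyword, region) -> active flag

    def create(self, asin, keyword, region):
        key = (asin, keyword, region)
        if key in self._active and not self._active[key]:
            self._active[key] = True
            return "reactivated"  # the case signaled with HTTP 201
        self._active[key] = True
        return "created"

    def archive(self, asin, keyword, region):
        self._active[(asin, keyword, region)] = False  # soft delete
```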
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint: true, confirming safe read behavior. The description adds valuable behavioral context beyond annotations by warning that the response 'May include a warning if data is temporarily unavailable,' alerting agents to potential degraded states not captured in structured metadata.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states core purpose immediately, second adds critical behavioral note about data availability warnings. Appropriate length for a simple read-only getter with no parameters.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter, read-only tool without an output schema, the description adequately covers the essentials by specifying return content type (credits and USD balance) and potential warning flags. Slightly short of perfect only because it doesn't elaborate on the structure of the balance object or exact currency formatting.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters. Per calibration rules, zero-parameter tools receive a baseline score of 4. The description correctly requires no additional parameter clarification since no arguments are needed to invoke the tool.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resource 'Listing Bureau wallet balance' and specifies scope '(credits and USD)'. This clearly distinguishes the tool from siblings like lb_wallet_get_transactions (which retrieves transaction history) and lb_wallet_topup (which adds funds).
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the specificity of 'wallet balance' versus siblings implies usage (retrieve current funds vs. history or adding funds), there is no explicit guidance on when to use this versus alternatives, prerequisites, or conditions that would warrant checking the balance first.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare destructive/idempotent hints; description reinforces this with 'Replaces any existing schedule.' Adds valuable behavioral context not in annotations: regional execution rate differences (lower outside US) and clarifies that SFB is US-only despite the tool accepting the parameter globally.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfect information density. Front-loaded with core action ('Set the full per-day schedule'), followed by critical warning ('Replaces any existing'), then structural details and constraints. No redundant words; every sentence earns its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a complex nested-object tool with destructive behavior. Covers domain constraints (regional limitations), data structure, and volume limits. Missing explicit mention of prerequisites (e.g., valid project ID from lb_projects_list) or authentication requirements, though these may be implicit.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds value by contextualizing the abbreviations (atc, sfb, pgv) as 'volumes,' explaining the entry structure ('each entry represents one day'), and emphasizing the complete replacement nature of the schedule parameter.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Set' (verb), 'full per-day schedule' (scope), 'Listing Bureau project' (resource). The 'full' qualifier distinguishes it from sibling lb_schedule_quick_set, while 'Replaces any existing schedule' clarifies the destructive update semantics.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical constraints: max 365 entries, regional restrictions (SFB US-only), and warns that it 'Replaces any existing schedule.' Deduction for not explicitly contrasting with lb_schedule_quick_set or mentioning prerequisite steps like retrieving current schedule first.
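The lb_schedule_set constraints noted above (max 365 entries, SFB restricted to US-region projects) are easy to check client-side before issuing the destructive replace. A minimal sketch; the atc/sfb/pgv entry keys follow the review's terminology and are otherwise an assumption:

```python
def validate_schedule(region, schedule):
    """Client-side checks mirroring the documented lb_schedule_set limits:
    at most 365 per-day entries, and SFB volume only for US-region projects.
    Each schedule entry is a dict of per-day volumes (atc, sfb, pgv)."""
    if len(schedule) > 365:
        raise ValueError("schedule exceeds 365 entries")
    if region != "US" and any(day.get("sfb", 0) > 0 for day in schedule):
        raise ValueError("SFB is only available for US-region projects")
```

Running this before the tool call avoids a wasted request and, more importantly, avoids replacing an existing schedule with a payload the server would reject.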
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true, the description adds valuable context about data sources accessed ('Fetches current rates and wallet balance'), computed outputs ('total cost, daily averages, and wallet sustainability'), and operational constraints ('SFB is US-region only', 'lower execution rate outside US') that annotations do not cover.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste. Front-loaded with purpose ('Estimate campaign cost'), followed by behavior, input patterns, and constraints. Each sentence delivers distinct, non-redundant information without verbosity.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex parameter interdependencies (7 params, alternative input modes, regional constraints) and absence of an output schema, the description comprehensively covers what is calculated and returned ('total cost, daily averages, and wallet sustainability'). It adequately addresses the tool's complexity without requiring additional structured fields.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description elevates this by explaining semantic relationships: the mutual exclusivity of uniform volumes versus schedule arrays, the dependency of retail_price on SFB usage, and the validation logic between region and SFB eligibility. These interdependencies are not evident from the schema alone.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific action ('Estimate campaign cost') and scope ('before committing'), clearly distinguishing this from sibling tools like lb_wallet_get_balance (simple retrieval) or lb_projects_create (mutation). It defines the computational nature of the tool versus raw data access.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on mutually exclusive input patterns ('either uniform daily volumes... or a per-day schedule array') and conditional requirements ('Include retail_price for accurate SFB costs', 'Pass region to validate SFB eligibility'). Lacks explicit sibling comparisons ('use X instead'), but clearly defines when specific parameters are required.
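The mutually exclusive input modes of lb_estimate_cost (uniform daily volumes OR a per-day schedule array, never both) can be enforced when assembling the request. A hedged sketch; the argument names `uniform`, `schedule`, and `retail_price` are illustrative, only the either/or rule and the SFB/retail-price relationship come from the description:

```python
def build_estimate_args(region, uniform=None, schedule=None, retail_price=None):
    """Assemble a lb_estimate_cost request honoring the documented
    either/or input modes: uniform volumes OR a per-day schedule array."""
    if (uniform is None) == (schedule is None):
        raise ValueError("pass exactly one of uniform volumes or a schedule array")
    args = {"region": region}
    if retail_price is not None:
        args["retail_price"] = retail_price  # improves SFB cost accuracy
    if uniform is not None:
        args["uniform"] = uniform
    else:
        args["schedule"] = schedule
    return args
```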
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare idempotentHint=true (safe retries). Description adds valuable behavioral context: the semantic mapping of boolean values (pause/resume) and the current implementation limitation ('Currently supports'), indicating this is a partial update capability. No contradiction with annotations.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence front-loads purpose and capability; second sentence provides the sibling distinction. Every word earns its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter update tool with no output schema, the description is complete. It covers purpose, behavioral scope, sibling alternatives, and the idempotentHint annotation covers safety. No gaps remain for this complexity level.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with parameters fully documented ('Project unique identifier', 'Set active status (true=resume, false=pause)'). Description reinforces the pause/resume semantics but does not add syntax, format details, or examples beyond what the schema provides. Baseline 3 appropriate for high schema coverage.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Update') and resource ('Listing Bureau project'), clearly defining scope as 'toggling the active status (pause/resume)'. Explicitly distinguishes from sibling tool lb_projects_archive by stating that archiving requires a different tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit alternative: 'To archive a project, use lb_projects_archive instead.' Also implies usage constraints with 'Currently supports toggling the active status,' guiding users away from attempting other update operations.
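The schema note 'true=resume, false=pause' is a small but easy-to-invert mapping, so a thin helper can make intent explicit. A sketch under the stated assumptions; the `ui_id`/`active` argument names follow the schema excerpts quoted in this review:

```python
def toggle_args(ui_id, action):
    """Map human intent onto lb_projects_update's boolean flag, per the
    schema note 'true=resume, false=pause'. Anything else is out of scope
    for this tool (archiving uses lb_projects_archive)."""
    if action not in ("pause", "resume"):
        raise ValueError("only pause/resume are supported; use lb_projects_archive to archive")
    return {"ui_id": ui_id, "active": action == "resume"}
```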
- Behavior 5/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure beyond annotations. It confirms the destructive nature ('clears any existing per-day schedule'), explains idempotency ('replaces it with uniform values'), documents default behavior ('All omitted fields default to 0'), and discloses regional execution constraints ('lower execution rate outside US').
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences with zero redundancy. Front-loaded with purpose, followed by critical behavioral warning, then operational constraints. Every sentence earns its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations covering safety profiles and 100% schema coverage, the description provides complete domain-specific context including regional restrictions, default behaviors, and the destructive scope—everything needed for safe invocation.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage, the description adds valuable aggregation ('All omitted fields default to 0') and conceptual framing ('uniform daily volumes') that explains the relationship between parameters. It also reinforces regional constraints that affect parameter applicability.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb phrase ('Quick-set uniform daily volumes') and identifies the resource ('Listing Bureau project'). It effectively distinguishes itself from sibling tools like lb_schedule_set (implied by 'Quick-set' and 'uniform' vs detailed/per-day scheduling).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides clear exclusion criteria via the WARNING about clearing existing schedules, effectively indicating when NOT to use this tool (when preserving existing per-day schedules is required). However, it stops short of explicitly naming the alternative tool (lb_schedule_set) to use instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
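The weighting described above can be sketched in a few lines. This is an illustrative reconstruction of the stated formula, not Glama's actual implementation; all function names are made up here, and the only numbers used are the weights and tier cutoffs quoted in the text.

```python
def tdqs(purpose, usage, behavior, params, concise, complete):
    """Tool Definition Quality Score: weighted mean of the six 1-5 dimensions."""
    return (0.25 * purpose + 0.20 * usage + 0.20 * behavior
            + 0.15 * params + 0.10 * concise + 0.10 * complete)

def definition_quality(tool_scores):
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS."""
    return 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)

def coherence(disambiguation, naming, count, completeness):
    """Server Coherence: four equally weighted 1-5 dimensions."""
    return (disambiguation + naming + count + completeness) / 4

def overall(def_quality, coh):
    """Overall quality: 70% definition quality + 30% coherence."""
    return 0.7 * def_quality + 0.3 * coh

def tier(score):
    """Map the overall score to a letter tier."""
    if score >= 3.5:
        return "A"
    if score >= 3.0:
        return "B"
    if score >= 2.0:
        return "C"
    if score >= 1.0:
        return "D"
    return "F"
```

For example, a server whose tools average 4.0 with a lowest TDQS of 3.3 gets a definition quality of 0.6 × 4.0 + 0.4 × 3.3 = 3.72; combined with a coherence of 4.5, the overall score is 0.7 × 3.72 + 0.3 × 4.5 ≈ 3.95, which lands in tier A.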
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/listingbureau/listingbureau-mcp'
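The endpoint path in the curl example pairs the server's owner and slug. A small helper can build the same URL; this is an illustrative sketch, not an official client, and the `server_endpoint` name is made up here.

```python
from urllib.parse import quote

GLAMA_MCP_API = "https://glama.ai/api/mcp/v1/servers"

def server_endpoint(owner: str, slug: str) -> str:
    """Build the MCP directory API URL for a server (illustrative helper)."""
    return f"{GLAMA_MCP_API}/{quote(owner)}/{quote(slug)}"

# server_endpoint("listingbureau", "listingbureau-mcp") reproduces
# the URL used in the curl example above.
```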
If you have feedback or need assistance with the MCP directory API, please join our Discord server.