Server Quality Checklist
- Disambiguation 3/5
Most tools have distinct purposes (e.g., check_stock_levels vs lookup_sku vs search_inventory target different access patterns). However, the natural_language_command tool overlaps significantly with list_orders, list_shipments, search_inventory, and others, creating ambiguity about when to use a specific structured tool versus the flexible NL interface.
- Naming Consistency 5/5
Excellent consistency throughout. All tools use snake_case with clear verb_noun patterns (get_, list_, check_, search_, optimize_, compare_). Verb usage is predictable: 'get' for specific items, 'list' for collections, 'search' for filtered queries.
- Tool Count 4/5
Twenty tools is slightly above the ideal range but reasonable given the domain complexity spanning inventory, orders, shipments, fleet, warehouse, and AI analytics. Each tool serves a distinct supply chain function without obvious redundancy.
- Completeness 4/5
Strong coverage across the supply chain lifecycle: inventory management, order tracking, shipment logistics, fleet optimization, and financial metrics. Minor gaps in explicit write/update operations (create_order, update_inventory), though natural_language_command appears to handle some mutations.
Average 3.6/5 across 20 of 20 tools scored. Lowest: 2.9/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
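A minimal glama.json, as a sketch (the schema URL and maintainers field follow the commonly documented format, but verify against Glama's current docs before committing):

```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["your-github-username"]
}
```

Place the file at the repository root; the maintainers entry is a placeholder for the actual GitHub username.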
- This server provides 20 tools. View schema
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It notes the 'AI-powered' nature (suggesting probabilistic outputs) and API key requirement, but fails to disclose output format, error handling, rate limits, data retention policies, or whether forecasts are cached. The 'Premium' label hints at cost implications but lacks specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
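Structured annotations could shoulder part of this burden. A hedged sketch using the hint fields defined in the MCP specification (the values shown are assumptions about this forecasting tool, not confirmed behavior):

```json
{
  "annotations": {
    "title": "Demand Forecasting",
    "readOnlyHint": true,
    "idempotentHint": false,
    "openWorldHint": true
  }
}
```

Even with hints like these, the description should still spell out caching, rate limits, and output format, since annotations are advisory.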
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description comprises three efficient sentences: capability statement, usage pattern, and operational constraints. The 'Premium tool' classification, while brief, serves a functional purpose for agent decision-making regarding authentication requirements. No significant redundancy or marketing fluff is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a two-parameter tool with complete schema documentation and no output schema, the description adequately covers the tool's purpose and access requirements. However, given the lack of annotations and output schema, the description should ideally explain return value structure (e.g., 'returns forecast data with confidence intervals') to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting both the 'query' parameter (with NL example) and 'hemisphere' parameter. The description does not add parameter-specific semantics beyond what the schema provides, meeting the baseline expectation for high-coverage schemas. No clarification is offered for the unconventional 'hemisphere' parameter name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the core function ('AI-powered demand forecasting') and specific domains covered (inventory needs, sales trends, supply chain patterns). It distinguishes this from sibling tools that handle current state (check_stock_levels) or optimization (optimize_route) by emphasizing 'future' forecasting and natural language interaction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Premium tool. Requires API key,' indicating an authentication prerequisite, but provides no guidance on when to use this versus the sibling 'natural_language_command' tool or other analytics tools like 'get_financial_metrics.' It does not clarify selection criteria or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
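One way to close this gap is selection guidance embedded in the description itself. A hypothetical rewrite (the wording is illustrative, not the server's actual text):

```json
{
  "description": "AI-powered demand forecasting for inventory needs, sales trends, and supply chain patterns. Use for future projections; use check_stock_levels for current quantities and natural_language_command only for requests no structured tool covers. Premium tool. Requires API key. Returns forecast data with confidence intervals."
}
```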
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses return fields (SKU, name, quantity, location, category) and the auth requirement (API key), but omits the safety profile (read-only vs. destructive), rate limits, and empty-result behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences with zero waste. Front-loaded with the core action (Search inventory items), followed by return values and requirements. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter search tool with no output schema, the description adequately covers the return payload structure by listing fields. However, it should clarify that all parameters are optional (required: 0) and elaborate on pagination behavior given the page/limit parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'filters', which loosely references the category parameter, but adds no syntax details, format examples, or semantic relationships between parameters beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Search) + resource (inventory items) + method (with filters). Distinguishes from 'lookup_sku' sibling by implying broader filtering capability, though it doesn't explicitly name the sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like 'lookup_sku' (for specific SKU retrieval) or 'check_stock_levels' (for quantity-focused queries). No prerequisites or exclusions mentioned beyond API key.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the 'Requires API key' authentication requirement, but omits other critical behavioral traits: read-only vs. destructive nature, data freshness, time-range scope for 'trends', and return format (since no output schema exists).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is optimally concise with two efficient sentences. The primary action ('Get order statistics') is front-loaded, followed by specific examples and the authentication requirement. No redundant or wasted language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, and the presence of numerous semantically similar siblings (get_kpi_dashboard, get_financial_metrics), the description is incomplete. It should clarify the output structure and explicitly differentiate this tool's aggregation scope from list/get alternatives.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the calibration rules, zero parameters establishes a baseline score of 4. The description neither adds nor removes value regarding parameters, maintaining this baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'order statistics and KPIs' and lists specific metrics (total orders, fulfillment rate, average value, trends). However, it does not differentiate from the sibling tool 'get_kpi_dashboard' or clarify the scope relative to 'list_orders', preventing a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_kpi_dashboard', 'get_financial_metrics', or 'list_orders'. It also lacks prerequisites (beyond the API key mention) or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the authentication requirement (API key) and implies a read-only operation via 'Get,' but omits other behavioral traits like error handling (what happens if the order ID is invalid), rate limits, or data freshness/caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately compact with two sentences totaling nine words. The structure is front-loaded with the purpose first, followed by the operational requirement. However, the second sentence ('Requires API key') is informationally thin compared to the first, slightly reducing the value density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, no nested objects) and lack of output schema, the description adequately covers the input but falls short by not describing the return value. Since no output schema exists to document what 'detailed status and tracking' actually contains, the description should ideally outline the response structure or key fields returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('Order ID to look up'), establishing a baseline of 3. The description adds minimal semantic value regarding the parameter—while it implies a single order lookup, it provides no additional format guidance, validation rules, or examples beyond what the schema already specifies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'detailed status and tracking for a specific order' using specific verbs and resource identification. It implicitly distinguishes from sibling list_orders by emphasizing 'specific order' (singular lookup vs. list), though it does not explicitly clarify when to use this versus track_shipment, which also involves tracking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides one usage constraint—'Requires API key'—indicating an authentication prerequisite. However, it lacks explicit guidance on when to use this tool versus siblings like track_shipment (package tracking) or list_orders (order enumeration), leaving the agent to infer based on the 'specific order' qualifier.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adds critical auth context (API key requirement) but lacks other behavioral details like pagination, rate limits, return format, or whether the hierarchy is returned as nested objects vs. flat list.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The hierarchy detail is front-loaded in the first sentence, and the API key requirement is clearly stated in the second.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description partially compensates by mentioning the hierarchical levels (zones, aisles, bins) which hints at return structure. However, it remains silent on actual return format, pagination, or filtering capabilities expected for a data retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the scoring rules, a zero-parameter schema establishes a baseline score of 4. The description appropriately does not invent parameter semantics where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'List warehouse storage locations' and specifies the hierarchical components (zones, aisles, bins) that distinguish it from flat inventory searches. However, it does not explicitly differentiate from sibling tools like search_inventory or check_stock_levels which might also involve locations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description only states 'Requires API key' as a prerequisite. It provides no guidance on when to use this tool versus alternatives like search_inventory, or under what conditions (e.g., when needing physical layout vs. stock counts).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of disclosing behavioral traits. It successfully notes the API key requirement and return value schema (order ID, status, items, total), but omits details on rate limits, pagination behavior, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste. Front-loaded with the core action ('List and search'), followed by return values and authentication requirements. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool with 100% schema coverage and no output schema, the description adequately compensates by documenting the return fields and authentication needs. However, it lacks usage context regarding sibling differentiation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'filters' which loosely maps to the status parameter, but adds no semantic detail beyond the schema regarding pagination (page/limit) or the status enum values.
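Documenting the enum values and pagination defaults directly in the schema would lift this above the baseline. A hypothetical fragment (the status values and defaults are invented for illustration, not taken from the server):

```json
{
  "status": {
    "type": "string",
    "enum": ["pending", "shipped", "delivered", "cancelled"],
    "description": "Filter orders by status; omit to return all statuses"
  },
  "page": { "type": "integer", "default": 1, "description": "1-based page index" },
  "limit": { "type": "integer", "default": 20, "description": "Max results per page" }
}
```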
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verbs ('List and search') and resource ('orders'), and mentions filtering capability. Distinguishes implicitly from sibling 'get_order_status' (single retrieval vs. list/search) but does not explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use this tool versus siblings like 'get_order_status' (for single orders) or 'get_order_statistics' (for aggregates). No mention of prerequisites or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the API key requirement and hints at returned fields (status, location, capacity), but fails to explicitly confirm read-only safety, pagination behavior, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the core action and resource; the second states the auth requirement. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter schema and lack of output schema, the description adequately compensates by indicating what data is returned (status, location, capacity) and noting the authentication requirement. It misses pagination details but is sufficient for a basic list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'status' as a returned attribute but does not add syntax details, valid formats, or usage guidance for the filter parameter beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists fleet vehicles and implies the data returned (status, location, capacity). It distinguishes from sibling 'get_fleet_stats' by focusing on individual vehicle records rather than aggregated statistics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus siblings like 'get_fleet_stats' or 'list_shipments'. The 'Requires API key' statement indicates a prerequisite but does not help with selection logic.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It compensates by describing return values ('full item details including stock, location, and pricing') and the auth requirement ('Requires API key'), but omits rate limits, error behaviors, and idempotency declarations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences with zero waste: action definition, return value disclosure, and auth requirement. Information is front-loaded with the primary verb and scoping.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (1 string parameter, no nested objects, no output schema), the description adequately covers the operation's contract. It explains what is returned in lieu of an output schema, though error handling (e.g., invalid SKU) remains unspecified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% ('SKU code to look up'), establishing a baseline of 3. The description reinforces this with 'by its SKU code' but adds no additional semantic context such as format constraints (e.g., alphanumeric length) or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Look up'), resource ('inventory item'), and method ('by its SKU code'). Implicitly distinguishes from sibling 'search_inventory' by emphasizing exact SKU lookup versus general search, though it could explicitly name the alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions 'Requires API key' as a prerequisite but provides no guidance on when to choose this tool versus siblings like 'search_inventory' (for partial matches) or 'check_stock_levels' (for stock-only queries). No explicit when/when-not conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds necessary context absent from annotations: flags it as a 'Premium tool' (cost implication) and 'Requires API key' (auth barrier). However, lacks essential safety disclosure—examples show both read ('show me') and write ('reserve') operations, but the description doesn't clarify mutability, idempotency, or failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences with zero redundancy: purpose declaration, illustrative examples, and operational constraints (premium/API key). Information is front-loaded and every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool but incomplete given the complexity. Missing output format description (no output schema exists) and safety profile disclosure. The 'CerebroChain AI Command Center' reference hints at external processing but doesn't specify latency, reliability, or whether results are deterministic.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by providing concrete, domain-relevant examples ('delayed shipments', 'SKU-1234') that illustrate the expected linguistic patterns and operational scope beyond the generic schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States a specific action ('Process') and resource ('natural language command') through a named system ('CerebroChain AI Command Center'). Concrete examples ('show me all delayed shipments', 'reserve 50 units') clarify scope. Differentiates from siblings by establishing itself as the natural language interface versus structured API tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Fails to provide critical guidance on when to use this 'Premium' tool versus the numerous specific siblings (e.g., check_stock_levels, list_shipments). No mention of cost trade-offs, latency implications, or accuracy differences between NL and structured tools. The examples imply usage but do not constrain it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context: 'real-time' nature, specific return data fields (compensating for missing output schema), and authentication requirements. However, it omits rate limits, error behaviors, and side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. Front-loaded with action and resource ('Get real-time tracking for a shipment route'), followed by specific data points and auth requirement. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no annotations or output schema, the description adequately compensates by previewing return values (location, progress, ETA, stops) and noting the API key requirement. Sufficient for agent invocation decisions despite missing error documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single 'route_id' parameter, establishing a baseline of 3. The description does not add syntax details, format constraints, or examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Get') + specific resource ('shipment route') with enumerated data points (location, progress, ETA, stops). Distinguishes from siblings like list_shipments (specific tracking vs. listing) and optimize_route (tracking vs. optimization), though it doesn't explicitly clarify the difference from get_order_status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
States 'Requires API key' indicating a prerequisite, but provides no guidance on when to select this tool versus alternatives like get_order_status or list_shipments. No mention of input prerequisites (e.g., valid route_id format) or error conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable context ('AI-powered' implies probabilistic/non-deterministic behavior, 'Premium' implies cost/complexity, 'API key' states auth requirement). However, lacks safety profile (read-only vs destructive), rate limits, or output format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste. Front-loaded with capability (AI-powered detection), followed by specific outputs (severity, recommendations), then operational constraints (Premium, API key). Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool with good schema coverage. Mentions key outputs (severity, recommendations) despite missing output schema. However, gaps remain regarding return structure, AI accuracy limitations, and specific premium usage terms.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the single 'scope' parameter. Description adds no parameter-specific guidance beyond what the schema enum ('warehouse', 'logistics', 'full-chain') already provides. Baseline 3 appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Identifies') and resource ('supply chain bottlenecks') with clear scope ('current and predicted' with 'severity and recommendations'). Implicitly distinguishes from siblings like get_optimization_recommendations through specific focus on bottlenecks, though lacks explicit sibling comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides operational constraints ('Premium tool', 'Requires API key') implying usage prerequisites, but lacks explicit when-to-use guidance or comparison to alternatives like forecast_demand or get_optimization_recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses critical authentication requirements ('Premium tool. Requires API key'), but fails to declare safety properties (read-only vs. destructive), rate limits for the premium tier, or the nature/format of AI-generated outputs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two highly efficient sentences. The first sentence front-loads the core function and scope, while the second sentence delivers critical constraint information (premium/API key). Zero redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with full schema coverage, the description covers the essential function and constraints. However, given the 'AI-generated' nature and 'Premium' status, gaps remain around output format expectations, rate limiting behavior, and safety profile that would help an agent invoke this confidently without annotations or output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with clear enum values and descriptions. The description adds contextual mapping by linking the 'area' parameter to business domains (warehouse operations maps to inventory/picking, logistics routes maps to fleet/shipping), providing helpful semantic context beyond the schema's technical 'Area to optimize' label. Baseline 3 with modest value added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'AI-generated optimization recommendations' and specifies three distinct domains (warehouse operations, logistics routes, supply chain efficiency). However, it does not explicitly differentiate from the sibling 'optimize_route' tool, which could cause confusion about whether to get recommendations versus execute optimizations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance through the specific domains mentioned (warehouse, logistics, supply chain), helping agents understand the scope. However, it lacks explicit when-to-use/when-not-to-use guidance and does not mention alternatives like 'optimize_route' for actual route optimization versus recommendation retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context by specifying 'real-time' data freshness and the API key requirement. However, it omits other behavioral details such as rate limits, error handling for missing keys, or data retention policies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with no filler. It is front-loaded with the action and resource ('Get real-time fleet KPIs'), followed by specific examples and the auth requirement. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the absence of an output schema, the description compensates by listing the specific KPI categories returned (active vehicles, utilization, etc.). With zero input parameters to document and no annotations to reference, the description covers the essential prerequisites (API key) and return value concepts adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters (empty object). According to the baseline rules, zero parameters warrants a baseline score of 4. The description correctly does not invent or reference any parameters, maintaining consistency with the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('fleet KPIs'), and enumerates specific metrics (active vehicles, utilization, fuel efficiency, maintenance status). This distinguishes it from siblings like 'list_vehicles' (which likely returns vehicle records rather than aggregated statistics) and 'get_order_statistics' (orders vs. fleet), though it does not explicitly contrast with 'get_kpi_dashboard'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states a prerequisite ('Requires API key') but provides no explicit guidance on when to use this tool versus alternatives like 'get_kpi_dashboard' or 'list_vehicles'. The usage is implied by the specific metrics listed, but lacks explicit when/when-not conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Successfully notes operational constraints (API key, premium status) and calculation factors. Missing safety classification (read-only vs state-modifying), rate limits, and output format disclosure expected for computation tools without annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tight sentences with zero redundancy. Front-loaded with core purpose ('AI-powered route optimization'), followed by specific factors, then operational constraints. Every sentence delivers distinct value (function, factors, cost, auth).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for the complexity given 100% schema coverage, but gaps remain due to missing output schema and annotations. Description omits return value structure (sequence? costs? map?) and safety hints (idempotent? read-only?) that would complete the operational picture.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description references 'capacity' and 'time windows' confirming input utilization but adds no syntax details, validation rules, or semantic relationships beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('calculates') and resource ('delivery route'), explicitly listing optimization factors (traffic, capacity, time windows, fuel costs). Clearly distinguishes from sibling 'compare_shipping_rates' (pricing vs routing) and 'get_optimization_recommendations' (advice vs calculation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides operational context via 'Premium tool' and 'Requires API key', implying cost and authentication prerequisites. However, lacks explicit when-to-use guidance versus alternatives like 'compare_shipping_rates' or exclusions for simple routing scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Compensates well by detailing return values ('quantity on hand, reserved, available') absent from output schema, and notes 'Requires API key' auth constraint. Could improve by mentioning if data is real-time or cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: (1) purpose and target, (2) return value disclosure, (3) auth requirement. Each sentence adds distinct value not present in structured fields. Front-loaded with the most critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read operation, description is nearly complete. Successfully compensates for missing output schema by enumerating return fields. Lacks only minor behavioral details like error conditions or rate limiting to achieve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('Inventory item ID'), establishing baseline 3. Description reinforces 'by ID' usage but doesn't add syntax details, format examples, or validation rules beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') + resource ('stock levels') + scope ('by ID'). Effectively distinguishes from sibling 'search_inventory' (which implies broad querying) and 'lookup_sku' (which focuses on SKU mapping rather than stock quantities).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context through 'by ID' phrasing, suggesting specific item lookup rather than search. However, lacks explicit guidance on when to use this versus 'search_inventory' or 'lookup_sku', or prerequisites like needing the item_id beforehand.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden and succeeds in disclosing return behavior ('Returns cheapest and fastest options') and critical auth context ('Free — no API key needed'). Missing error handling or rate limit details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste. Front-loaded with core action, followed by output behavior, then auth requirements. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output schema by describing return values ('cheapest and fastest'). Input schema is fully documented. Could enhance by noting international support (country codes exist in schema) or dimensional requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description mentions 'package' which loosely implies dimensional relevance, but adds no syntax guidance, validation rules, or relationships between parameters (e.g., that dimensions are optional but affect accuracy).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with 'Compare shipping rates' verb, explicit carrier enumeration (UPS, FedEx, USPS, DHL), and clear resource ('package'). Distinct from siblings like track_shipment (tracking status) and optimize_route (routing logic).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implicit usage is clear (use when comparing rates), but lacks explicit when-to-use guidance versus siblings like track_shipment or list_shipments. No mention of prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses 'Premium' tier and API key requirement (auth barrier), but omits safety profile (read-only vs destructive), rate limits, or response format details. 'Get' implies read-only but doesn't confirm it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first sentence defines function and scope; second states operational constraints. Front-loaded and appropriately sized for a simple getter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool without output schema, description adequately explains what data is returned (dashboard with targets vs actuals). Would benefit from explicit read-only declaration, but 'Get' provides sufficient context for selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters. Per scoring rules, baseline is 4 for zero-parameter tools. Description appropriately avoids inventing parameters where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Get') + resource ('KPI performance dashboard') with clear scope ('targets vs actuals for all key supply chain metrics'). The 'supply chain' focus effectively distinguishes it from sibling financial/fleet tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides prerequisite constraints ('Premium tool. Requires API key') but lacks explicit guidance on when to use this versus siblings like get_financial_metrics or get_order_statistics. The constraints imply usage limitations but don't specify selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully communicates the authentication behavior (no API key required) but fails to mention rate limits, side effects, or what format/status codes the health check returns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence establishes purpose; second provides critical auth context. Front-loaded structure with every sentence earning its place. No redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (no params, no nested objects) and lack of output schema, the description adequately covers the essential operational context (purpose + auth model). Minor gap: does not describe expected return values or health metric format, which would be helpful without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has zero input parameters with 100% schema coverage (trivially complete). Per calibration guidelines, zero-param tools baseline at 4. The description appropriately does not fabricate parameter semantics where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with clear resource 'CerebroChain platform health and service availability'. It clearly distinguishes from operational siblings (stock levels, orders, shipments) by focusing on platform meta-status rather than business data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance through 'Free — no API key needed', signaling it can be used without authentication credentials. However, lacks explicit when-to-use guidance (e.g., 'use before authenticated calls to verify connectivity') or when-not-to-use relative to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. It successfully notes the API key requirement and the 90-day time constraint, but omits other behavioral traits like pagination, rate limits, or data freshness/caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence using an em-dash to append details without repetition. Information is front-loaded ('Get shipment history with analytics') and every clause adds value (scope, data types, auth requirement).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While no output schema exists, the description compensates by enumerating the returned data categories (delivery rates, carrier performance, shipment history). It appropriately covers the content for a zero-parameter analytics tool, though it could specify the response format (e.g., JSON array vs summary object).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters, which establishes a baseline score of 4. The description correctly implies no filtering parameters are available by stating 'all shipments' and '90-day view' as fixed scope.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves 'shipment history with analytics' and specifies the exact scope (90-day view) and data categories returned (delivery rates, carrier performance). This effectively distinguishes it from siblings like track_shipment (individual tracking) and list_orders (different domain).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The '90-day view' and 'analytics' phrasing implies this is for historical reporting rather than real-time operations, but there is no explicit guidance on when to use this versus track_shipment for specific shipment tracking or list_orders for order-based queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses data freshness ('real-time'), access tier ('Premium tool'), and authentication requirements ('Requires API key'). It does not disclose rate limits, error behaviors, or data scope boundaries, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two highly efficient sentences with no waste. The first sentence front-loads the core value proposition and specific metric types; the second sentence provides critical access constraints. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (zero parameters) and lack of output schema, the description adequately covers the tool's purpose and return value categories. It compensates for the missing output schema by enumerating expected metric types. Minor gap: does not specify the scope/entity of the metrics (e.g., company-wide vs. departmental).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the evaluation rules, zero parameters establishes a baseline score of 4. The description appropriately does not fabricate parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('financial metrics') and lists concrete examples (revenue, margins, cash flow, profit/loss). It clearly distinguishes from operational siblings like check_stock_levels and track_shipment by specifying the financial domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides prerequisites ('Premium tool', 'Requires API key') but lacks explicit guidance on when to use this versus sibling tools. While the financial domain differs from the operational siblings (fleet, orders, shipping), there is no explicit comparison to get_kpi_dashboard or guidance on selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
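A pattern repeated throughout the Parameters evaluations above is the baseline rule: zero-parameter tools start at 4, and tools whose schema fully describes every parameter start at 3, with the description earning additional credit only for what it adds beyond the schema. The sketch below encodes just the two anchor points the report states; the function name and the handling of partial coverage are assumptions, not Glama's actual implementation:

```python
def parameters_baseline(num_params: int, all_described: bool) -> int:
    """Baseline Parameters score before crediting description value-add.

    Only the two cases stated in the report are encoded; behavior for
    partial schema coverage is not specified there.
    """
    if num_params == 0:
        # Zero-parameter tools baseline at 4: there is nothing to document.
        return 4
    if all_described:
        # 100% schema description coverage establishes a baseline of 3.
        return 3
    raise ValueError("partial schema coverage: baseline not specified in report")
```

From that baseline, the evaluations above award higher scores only when the description adds syntax details, value ranges, or parameter relationships that the schema alone cannot express.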
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
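The weighting above can be sketched directly. The weights and tier cutoffs come from this report; the function names, and the use of this report's own published figures (mean TDQS 3.6, minimum 2.9, coherence dimensions scored 3, 5, 4, 4) in place of the 20 raw per-tool scores, are illustrative assumptions:

```python
# Sketch of the published scoring formula; weights and cutoffs are from the
# report, function names and rounding are assumptions.

def server_tdqs(tool_scores):
    """Server-level Tool Definition Quality: 60% mean + 40% minimum."""
    mean = sum(tool_scores) / len(tool_scores)
    return 0.6 * mean + 0.4 * min(tool_scores)

def overall_score(tool_scores, coherence_scores):
    """Overall quality: 70% definition quality + 30% coherence (equal-weight mean)."""
    coherence = sum(coherence_scores) / len(coherence_scores)
    return 0.7 * server_tdqs(tool_scores) + 0.3 * coherence

def tier(score):
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"

# Illustration using this report's aggregate figures rather than raw scores:
tdqs = 0.6 * 3.6 + 0.4 * 2.9              # ~3.32
overall = 0.7 * tdqs + 0.3 * (3 + 5 + 4 + 4) / 4
print(round(overall, 2), tier(overall))
```

Note how the 40% minimum-score term makes a single poorly described tool drag the server-level score far more than a simple mean would.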