Server Quality Checklist
- Disambiguation 3/5
While Amazon and Shopify tools pair nicely (search/list + analyze/detail), maps_search and maps_leads overlap significantly—both query Google Maps and return Lead Quality Scores. An agent would struggle to determine whether to use the general business search or the lead-specific variant for the same task.
- Naming Consistency 3/5
All tools use a consistent [platform]_[resource] prefix pattern (amazon_, maps_, shopify_), but the suffixes mix verbs (search, analyze) and nouns (product, leads, products) inconsistently. This creates slight unpredictability in whether a tool performs an action or retrieves a resource.
- Tool Count 5/5
Six tools covering three distinct data sources (Amazon, Google Maps, Shopify) with two complementary operations each represents a well-scoped, focused surface. No bloat or obvious gaps in quantity for an intelligence API.
- Completeness 3/5
Amazon and Shopify have reasonable coverage (discovery + deep analysis), but Maps lacks a specific business lookup tool (e.g., 'get_business_details') to match amazon_product or shopify_analyze. Instead, it offers two list operations with unclear functional boundaries, creating a gap in the entity lifecycle.
Average 3.7/5 across 6 of 6 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
- This server provides 6 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
maps_leads
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions 'Lead Quality Score' filtering but fails to explain what this score represents (scale, calculation method), pagination behavior, rate limits, or side effects. Does not disclose that this is a read-only operation or potential costs associated with the Google Places API.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes core functionality and unique filtering mechanism; second sentence specifies optimal use case. Information is front-loaded and appropriately scoped.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 4-parameter search tool but gaps remain. No output schema exists, yet description does not hint at return format (list of businesses with scores?). Fails to mention API key prerequisites or error handling for invalid keys, which are critical for a third-party API integration tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description adds value by contextualizing 'min_score' as 'Lead Quality Score' in the main text, but does not elaborate on parameter interactions, syntax nuances, or the significance of the required 'google_key' parameter beyond schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Find') and resource ('qualified sales leads'), specifies the platform ('Google Maps'), and distinguishes from sibling tool 'maps_search' by emphasizing sales-specific functionality ('Lead Quality Score', 'outreach lists').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides positive usage context ('Best for building targeted outreach lists') but lacks explicit when-not-to-use guidance or comparison to alternatives like 'maps_search' for general business lookups. No mention of the Google Places API key requirement outside the schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shopify_analyze
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. It lists what data is extracted but omits safety profile (read-only vs. destructive), authentication requirements, rate limiting, caching behavior, or what happens with invalid/non-Shopify URLs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Highly efficient single-sentence structure. The em-dash enumeration of analysis targets delivers maximum information density with zero redundancy. Every element (products, pricing, apps, theme) earns its place in defining scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description effectively documents return value categories (pricing distribution, detected apps, etc.), giving agents clear expectations of analysis depth. Minor gap regarding error handling or edge cases prevents a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'url' parameter, the baseline is 3. The description adds minimal semantic value: it mentions 'any Shopify store' but supplies no format constraints, validation rules, or protocol requirements beyond the schema's example.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool analyzes Shopify stores with specific capabilities enumerated (products, pricing distribution, vendors, detected apps with examples, theme, collections). It effectively distinguishes from sibling 'shopify_products' by emphasizing comprehensive store analysis versus simple product listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the scope (comprehensive analysis vs. simple product retrieval) implicitly differentiates it from 'shopify_products', there are no explicit guidelines on when to prefer this tool over alternatives, prerequisites like store accessibility, or error conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shopify_products
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully communicates pagination behavior and the 250-item limit constraint. However, it omits authentication requirements, Shopify rate limiting, error handling behavior, and the read-only nature of the operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the core purpose (fetching paginated catalogs), while the second provides critical operational constraint (250 limit). Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a low-complexity tool with 3 simple parameters and complete schema coverage. However, given the absence of both annotations and output schema, the description should ideally disclose return format, authentication requirements, or error behavior to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the 250 limit mentioned in the schema but does not add syntax details, format examples, or semantic context beyond what the parameter descriptions already provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Fetch' with resource 'paginated product catalog' and scope 'any Shopify store'. It clearly distinguishes from sibling shopify_analyze (analysis vs. fetching) and amazon_product (single product vs. full catalog).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'Fetch' and 'paginated', suggesting use for catalog listing. However, it lacks explicit when-to-use guidance versus siblings like shopify_analyze or amazon_product, and omits prerequisites like store accessibility requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
amazon_product
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It adds valuable behavioral context by disclosing specific analyses performed (FBA fee estimate, profit margin, opportunity tier). However, missing operational details like error handling for invalid ASINs, caching behavior, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes scope and input method; second sentence lists specific analytical outputs. Every word earns its place—no filler or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool without output schema, the description adequately compensates by listing key return values (FBA fees, margins, opportunity tier). Would benefit from mentioning error cases (invalid ASIN) or default marketplace behavior, but sufficient for tool complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description mentions 'by ASIN' which reinforces the required parameter, but adds no additional semantic detail about marketplace selection or ASIN format beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Deep analysis'), resource ('Amazon product'), and distinguishing scope ('single' vs siblings, 'by ASIN'). The 'single' and 'ASIN' qualifiers clearly differentiate from amazon_search, while 'Amazon' differentiates from shopify tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance. The term 'single' implies distinction from amazon_search (likely for bulk/multiple), but lacks explicit guidance like 'use this when you have a specific ASIN; use amazon_search to discover products'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
maps_search
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It effectively discloses return format (Lead Quality Score 0-100 and outreach hints) but omits mutation safety, rate limits, or error behaviors. 'Search' implies read-only but this is not explicitly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states function, second states return value. Appropriately front-loaded with core action and sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter tool with no output schema, the description adequately compensates by detailing the return structure (Lead Quality Score and hints). Missing only minor details like the 'max' parameter behavior or API key prerequisites that could be mentioned explicitly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. The description implicitly maps 'type' to the 'query' parameter but adds no syntax details, examples, or clarification beyond what the schema already provides for the four parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Search' with clear resource 'Google Maps businesses' and scope 'by type and location'. The mention of 'Lead Quality Score' and 'outreach hints' distinguishes this from generic map searches and implies its lead-generation purpose versus sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives named, but the focus on 'Lead Quality Score' and 'outreach hints' implies usage for sales prospecting and lead generation contexts. Lacks explicit guidance on when to use versus 'maps_leads' sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
amazon_search
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the scoring algorithm (demand, rating gap, price) and return value format (0-100 score). However, it omits other behavioral traits like pagination behavior, result limits, or error handling (e.g., no results found).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence front-loads the core action. Second sentence efficiently explains the return value semantics. Every word earns its place; no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 flat parameters with full schema coverage) and lack of output schema, the description adequately compensates by explaining the return value structure and scoring methodology. Minor gap regarding error states or pagination, but sufficient for the complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both 'keyword' and 'marketplace' fully documented in the schema. The description mentions 'by keyword' but adds no additional semantic information (syntax constraints, format details) beyond what the schema already provides. Baseline score appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search'), resource ('Amazon products'), and scope ('by keyword'). It distinguishes from sibling tools like 'amazon_product' (implied to be a lookup rather than search) and non-Amazon tools through explicit domain mention. It also clarifies what the tool returns (Opportunity Score), adding crucial context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a use case through mentioning 'Opportunity Score' and its components (demand, rating gap, price), signaling this is for product research and market opportunity analysis. However, it lacks explicit guidance on when to use this versus 'amazon_product' or other siblings, and does not state prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%), yielding a per-tool Tool Definition Quality Score (TDQS). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
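The TDQS aggregation can be sketched in a few lines. This is a minimal illustration assuming only the dimension weights and the 60/40 mean/min blend stated above; the function names and any example inputs are hypothetical, not part of the actual implementation:

```python
# Sketch of per-tool and server-level Tool Definition Quality scoring.
# Dimension weights and the 60/40 mean/min blend follow the text above.

DIMENSION_WEIGHTS = {
    "purpose": 0.25,           # Purpose Clarity
    "usage_guidelines": 0.20,  # Usage Guidelines
    "behavior": 0.20,          # Behavioral Transparency
    "parameters": 0.15,        # Parameter Semantics
    "conciseness": 0.10,       # Conciseness & Structure
    "completeness": 0.10,      # Contextual Completeness
}

def tool_tdqs(scores: dict[str, int]) -> float:
    """Weighted 1-5 score for a single tool across the six dimensions."""
    return sum(DIMENSION_WEIGHTS[dim] * score for dim, score in scores.items())

def server_tdqs(per_tool_scores: list[dict[str, int]]) -> float:
    """60% mean + 40% minimum, so one poorly described tool drags the server down."""
    tdqs = [tool_tdqs(scores) for scores in per_tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
```

With one tool scoring 5 on every dimension and another scoring 3 everywhere, the plain mean would be 4.0, but the minimum term pulls the server-level score down to 3.6 — which is the point of the blend.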
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
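The 70/30 blend and the tier thresholds can likewise be sketched. Again a hedged illustration using only the weights and cutoffs stated above; the function names are hypothetical:

```python
def overall_score(definition_quality: float, coherence: float) -> float:
    """Blend the two components: 70% Tool Definition Quality, 30% Server Coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall 1-5 score to a letter tier; B and above is passing."""
    if score >= 3.5:
        return "A"
    if score >= 3.0:
        return "B"
    if score >= 2.0:
        return "C"
    if score >= 1.0:
        return "D"
    return "F"
```

For example, a definition-quality score of 4.0 combined with a coherence score of 3.0 blends to 3.7, landing in tier A.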