PriceAtlas MCP Server
Server Quality Checklist
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 8 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
- Behavior 2/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Submit' implies a write operation, the description does not clarify what happens upon submission (creating a new record vs. updating an existing one, validation rules), side effects, idempotency, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence (9 words) that front-loads the action. Every word earns its place with zero redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex write operation with 9 parameters, no annotations, and no output schema, the description is insufficient. It omits expected behavior (return value, success indication), prerequisites for valid submission, and the relationship between required parameters like barcode and observed_price.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all 9 parameters well documented in the schema. The description adds no semantic value beyond the schema, which is acceptable given the high-coverage baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Submit') and identifies the resource ('price observation') and context ('at a specific store'). It implicitly distinguishes itself from sibling read operations like get_prices, though it does not explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states what the tool does but provides no guidance on when to use it versus alternatives, prerequisites (e.g., verifying the store exists), or when not to use it. The schema mentions using list_stores, but the description itself lacks usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 2/5: With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention safety guarantees (though implied by 'List'), pagination, return format, or what 'available' signifies.
Conciseness 4/5: Two efficient, front-loaded sentences with no waste. Appropriately concise for the tool's simplicity, though bordering on underspecified.
Completeness 3/5: Adequate for the low complexity (1 optional parameter), but gaps remain regarding the return value structure and store availability criteria, especially since no output schema exists to compensate.
Parameters 3/5: Schema coverage is 100%, with the country parameter fully documented. The description adds minimal semantic value beyond the schema, merely confirming that the filter is optional.
Purpose 4/5: Clearly states the verb ('List') and resource ('stores') with scope ('available'), but lacks differentiation from sibling tools like list_countries or search_products.
Usage Guidelines 3/5: Provides implied usage by noting the filter is optional, indicating the tool can be called with no parameters to list all stores. However, it lacks explicit when-to-use guidance or comparisons to alternatives.
- Behavior 3/5: No annotations provided, so the description carries the full burden. It discloses the limited scope ('27 supported') and return fields ('currencies and regions'), but lacks safety indicators (read-only/destructive), rate limits, or caching behavior.
Conciseness 5/5: A single efficient sentence contains specific numeric constraints and return structure without redundancy. Every clause earns its place.
Completeness 4/5: For a zero-parameter tool without an output schema, the description adequately compensates by specifying the return payload structure (countries with currencies and regions). Only minor gaps remain regarding response format or pagination.
Parameters 4/5: Zero parameters are present, which establishes a baseline of 4 per the scoring rubric. No parameter description is needed in the text since the empty schema is self-documenting.
Purpose 4/5: Clear, specific verb ('List') and resource ('countries') with scope details ('27 supported', 'default currencies and regions'). However, it does not explicitly differentiate from siblings like list_stores or price-related tools that might require country codes.
Usage Guidelines 2/5: No explicit guidance on when to invoke this versus sibling tools, nor does it mention that the output can be used as input for other tools (e.g., get_prices). Users must infer usage from the tool name alone.
- Behavior 3/5: With no annotations provided, the description carries the full burden. It adds valuable behavioral context by disclosing the return structure (individual observations plus min/avg/count aggregations per country), which compensates for the missing output schema. However, it omits operational details such as error behavior (e.g., invalid barcode handling), rate limits, or caching behavior.
Conciseness 5/5: The description consists of three efficient sentences with zero waste: purpose declaration, filtering option, and return value disclosure. The information is front-loaded and logically sequenced, making it easy to parse.
Completeness 4/5: Given the absence of an output schema, the description appropriately discloses the return structure (observations and stats). For a read-only tool with simple parameters, this is nearly complete. It could be improved by mentioning error conditions (e.g., invalid barcode format or not found), but the essential information is present.
Parameters 3/5: With 100% schema description coverage, the baseline is 3. The description adds minimal semantic value beyond the schema, primarily labeling the country parameter as an 'optional filter', which clarifies its functional role. The barcode parameter semantics are adequately covered by the schema alone.
Purpose 4/5: The description uses a specific verb ('Get') and clearly identifies the resource (price observations and stats) and scope (for a product). It effectively distinguishes itself from the write-oriented sibling 'submit_price' and implies single-product lookup via the barcode parameter. However, it does not explicitly differentiate from the closely named sibling 'get_world_prices', leaving ambiguity about when to use which.
Usage Guidelines 3/5: The description implies usage through the phrase 'Optionally filter by country', suggesting when to apply the filter parameter. However, it provides no explicit guidance on when to use this tool versus close siblings like 'get_world_prices' (global vs. per-product) or 'search_products' (fuzzy search vs. barcode lookup).
- Behavior 3/5: No annotations provided, so the description carries the full burden. It earns credit for disclosing the external data source (Open Food Facts). However, it lacks details on rate limits, latency expectations, or the exact return structure that would fully compensate for the missing annotations.
Conciseness 5/5: Two sentences, zero waste. The first defines the operation; the second specifies the return value and source. Perfectly front-loaded and dense.
Completeness 4/5: Appropriate for a simple 2-parameter search tool. It mentions the return value ('matching products') and data source despite lacking an output schema. Sufficient for agent selection, though return structure details would elevate this further.
Parameters 3/5: Schema coverage is 100% (both parameters fully documented). The description adds the semantic context that 'query' is specifically for product names, but does not elaborate beyond schema constraints (min 2 chars, range 1-20). The baseline of 3 is appropriate.
Purpose 4/5: States the specific action (search), resource (products), and method (by name). The 'by name' clause helps distinguish it from sibling 'lookup_product' (likely ID/barcode-based), though explicit differentiation from price-focused siblings (get_prices) is absent.
Usage Guidelines 3/5: Provides implied usage via 'by name' (use when searching by product name), but lacks explicit when-not-to-use guidance or contrasts with alternatives like 'lookup_product' for barcode lookups.
- Behavior 3/5: Discloses the data source (Open Food Facts global database) and return format (price ranges across countries). However, as a read operation with no annotations, it should ideally confirm its read-only nature, caching behavior, or error handling for invalid barcodes, none of which are mentioned.
Conciseness 5/5: Two precisely worded sentences. The first establishes the action and source, the second clarifies the return value. No redundant phrases or tautologies; excellent information density.
Completeness 4/5: Adequate for a single-parameter lookup tool without an output schema. The description compensates for the missing output schema by stating what gets returned ('price ranges across countries'). A minor gap remains regarding error cases and the specific data format, but it is sufficient for tool selection.
Parameters 3/5: The schema has 100% coverage with a clear barcode description ('Product barcode (8-14 digits)'). The description references 'a product', which implicitly maps to the barcode parameter but adds no syntax, format, or constraint details beyond what the schema already provides. The baseline of 3 is appropriate given the complete schema.
Purpose 5/5: The description uses the specific verb 'Get' with the resource 'aggregated world prices' and clarifies the scope (global database, across countries). It effectively distinguishes itself from sibling 'get_prices' by emphasizing geographic breadth and aggregation.
Usage Guidelines 3/5: Provides implicit context through the 'world' and 'global database' wording, which suggests use for international price comparison, but lacks explicit guidance on when to choose this over 'get_prices' or 'lookup_product'. No prerequisites or exclusions are stated.
- Behavior 4/5: Strong disclosure given zero annotations: it reveals write behavior ('saves to database'), lists all 6 external API dependencies, and clarifies that this fetches 'fresh' (real-time) data. Missing: failure handling (what if one connector fails?), timeout behavior for multiple API calls, whether the operation is idempotent, and whether it overwrites existing DB records.
Conciseness 5/5: Two dense sentences, zero waste. The first front-loads the action and enumerates the specific connectors and scope; the second discloses the critical side effect (DB persistence). No redundancy with structured fields.
Completeness 3/5: Adequate for a 2-parameter tool, but gaps remain: no output schema exists, yet the description does not specify what the tool returns (price objects? a success boolean?). Given that this calls 6 external APIs, it should mention timeout risks or partial-failure behavior. However, the database persistence and connector enumeration provide sufficient context for basic invocation.
Parameters 3/5: Schema coverage is 100% (both barcode and country_code fully described in the JSON schema), establishing a baseline of 3. The description provides a contextual mapping ('for a product in a specific country') but adds no syntax, format, or constraint details beyond what the schema already provides.
Purpose 5/5: Excellent specificity: the verb 'Run' plus the resource 'price data connectors' with explicit scope (6 named sources: Open Food Facts, Kroger, etc.). The phrase 'Saves results to the database' clearly distinguishes this from sibling 'get_prices' (which presumably reads cached data) by establishing this as a fetch-and-persist operation.
Usage Guidelines 3/5: Implies a distinction from cached retrieval via 'fetch fresh prices' and 'saves to database', but provides no explicit when-to-use guidance versus siblings like get_prices or submit_price. No mention of prerequisites (e.g., the product must exist in the system) or when not to use it (e.g., rate limit concerns).
- Behavior 4/5: With no annotations provided, the description carries the full burden and successfully discloses the data source (Open Food Facts) and specific return values (name, brand, quantity, image). However, it omits error behavior (e.g., barcode not found) and rate limiting details.
Conciseness 5/5: Two sentences with zero waste. Front-loaded with the action, followed by the input specification and return value documentation. Every word earns its place.
Completeness 5/5: For a simple single-parameter tool with 100% schema coverage and no output schema, the description is complete. It adequately describes the input, action, data provenance, and return fields without verbosity.
Parameters 3/5: Schema coverage is 100%, with the barcode parameter already documented as 'Product barcode (8-14 digits)'. The description adds an 'EAN/UPC' clarification, providing minor additional semantic value, which meets the baseline expectation for high-coverage schemas.
Purpose 5/5: States the specific action (Look up), resource (product), and distinguishing input method (by barcode/EAN/UPC). The barcode specificity clearly differentiates this from sibling 'search_products', which likely performs text-based search.
Usage Guidelines 3/5: Specifies the input requirement (barcode), which implies when to use the tool, but provides no explicit guidance on when to choose this over 'search_products' or other alternatives, nor does it state prerequisites or exclusions.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
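Under the weights stated above, the scoring formula can be sketched as follows. This is a minimal illustration of the arithmetic described on this page; the function names, and any rounding or normalization details, are assumptions rather than Glama's actual implementation:

```python
# Dimension weights for Tool Definition Quality (TDQS), as stated above.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for a single tool."""
    return sum(DIMENSION_WEIGHTS[d] * scores[d] for d in DIMENSION_WEIGHTS)

def server_definition_quality(tool_scores: list) -> float:
    """60% mean TDQS + 40% minimum TDQS: one weak tool drags the score down."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(tool_scores: list, coherence_dims: list) -> float:
    """70% tool definition quality + 30% server coherence (four equal dimensions)."""
    coherence = sum(coherence_dims) / len(coherence_dims)
    return 0.7 * server_definition_quality(tool_scores) + 0.3 * coherence

def tier(score: float) -> str:
    """Map the overall score to a letter tier; B and above is passing."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

For example, a server whose tools all average 3.0 on TDQS but score 4.0 on every coherence dimension would land at 0.7 × 3.0 + 0.3 × 4.0 = 3.3, tier B.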
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/musaceylan/priceatlas-mcp'
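The same endpoint can be queried from Python. This is a sketch only: it assumes the endpoint returns a JSON document, and the `server_url` helper is illustrative rather than part of any official client:

```python
import json
import urllib.request

def server_url(owner: str, slug: str) -> str:
    # Mirrors the path shape used in the curl example above.
    return f"https://glama.ai/api/mcp/v1/servers/{owner}/{slug}"

if __name__ == "__main__":
    # Network call; assumes the response body is JSON.
    with urllib.request.urlopen(server_url("musaceylan", "priceatlas-mcp")) as resp:
        data = json.load(resp)
    # Print the first part of the pretty-printed response.
    print(json.dumps(data, indent=2)[:500])
```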
If you have feedback or need assistance with the MCP directory API, please join our Discord server.