PriceOf
Server Details
AI agents hallucinate SaaS pricing constantly. PriceOf fixes that with structured, verified data — 50+ products with confidence scores, freshness scores, and last-verified timestamps.
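As a rough illustration, a verified pricing record of this kind might carry fields like the following. This is a sketch inferred from the description above, not a published PriceOf schema; every field name here is an assumption.

```typescript
// Illustrative only: field names are inferred from the listing copy
// (confidence scores, freshness scores, last-verified timestamps),
// not taken from a published PriceOf schema.
interface VerifiedPricingRecord {
  product: string;         // product slug, e.g. "notion"
  plans: Array<{
    name: string;          // e.g. "pro", "team", "enterprise"
    price: number;         // assumed per-seat USD price
    features: string[];
  }>;
  confidenceScore: number; // how strongly the data point is corroborated
  freshnessScore: number;  // how recently the price was re-checked
  lastVerified: string;    // ISO 8601 timestamp of the last verification
}
```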
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
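Since the listing does not populate the URL field, the endpoint below is a placeholder. A minimal connection sketch over Streamable HTTP, using the official MCP TypeScript SDK, might look like this:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the URL field above is not populated in this listing.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp"),
);

const client = new Client({ name: "priceof-demo", version: "0.1.0" });
await client.connect(transport);

// Should list the five tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));
```

The per-tool snippets further down reuse this `client` instance rather than repeating the setup.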
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.2/5 across 5 of 5 tools scored.
Each tool has a distinct purpose: search, get pricing, compare, report incorrect, and request new product. No overlap or ambiguity.
All tools follow a consistent verb_noun pattern (e.g., search_products, get_pricing), making them predictable and easy to understand.
With 5 tools, the set is well-scoped for the domain of SaaS product pricing—neither too sparse nor too overwhelming.
Core operations (search, get, compare, report, request) are covered. A direct update of pricing data is missing, but the reporting mechanism fills that gap adequately.
Available Tools
5 tools

compare_products (Grade A)
Compare pricing across multiple SaaS products side-by-side. Supports team-size cost calculation.
| Name | Required | Description | Default |
|---|---|---|---|
| products | Yes | Product slugs to compare, e.g. ['notion', 'asana', 'linear'] | |
| team_size | No | Number of seats to calculate total cost | |
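Reusing the `client` from the connection sketch above, a call might look like this; the slugs come from the table's example, and the team size is arbitrary:

```typescript
// The review below suggests there may be min/max limits on `products`
// (e.g. 2-5 slugs); those constraints live in the schema, not here.
const comparison = await client.callTool({
  name: "compare_products",
  arguments: {
    products: ["notion", "asana", "linear"],
    team_size: 10, // optional: total cost is calculated for 10 seats
  },
});
```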
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavioral traits. It mentions side-by-side comparison and team-size calculation but omits constraints (e.g., max 5 products, min 2), error handling, or what happens when a product slug is invalid. Important behavioral details are left out.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, front-loaded with the core purpose. Every word earns its place. No extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, so the description should hint at return format. It does not. Constraints like max/min items are in schema but not highlighted. Given the complexity (2 params, no nested objects) and 100% schema coverage, the description is adequate but incomplete—missing output expectations and failure modes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds context ('side-by-side', 'team-size cost calculation') but does not enrich parameter meanings beyond what the schema already provides. The team_size parameter is explained in schema as 'Number of seats to calculate total cost', which aligns with the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it compares pricing across multiple SaaS products side-by-side and supports team-size cost calculation. This sharply distinguishes it from sibling tools like get_pricing (single product) and search_products (product discovery).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for multi-product comparison, but does not explicitly state when not to use this tool or suggest alternatives. The context from sibling tools provides some guidance, but the description itself lacks explicit usage boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pricing (Grade A)
Get current pricing for a SaaS product. Returns all plans with prices, features, and confidence scores.
| Name | Required | Description | Default |
|---|---|---|---|
| plan | No | Specific plan slug, e.g. 'pro', 'team', 'enterprise' | |
| product | Yes | Product slug, e.g. 'notion', 'slack', 'linear' | |
| billing_cycle | No | Filter by billing cycle | |
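A hedged example call, again reusing the `client` from the connection sketch; the product and plan slugs come from the table's examples, while the billing_cycle value is a guess since the enum is not documented here:

```typescript
const pricing = await client.callTool({
  name: "get_pricing",
  arguments: {
    product: "notion",       // required product slug
    plan: "pro",             // optional: narrow the result to one plan
    billing_cycle: "annual", // assumed enum value; not documented above
  },
});
```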
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavioral traits. It mentions 'returns all plans' and 'confidence scores' but lacks details on side effects (likely read-only), rate limits, authentication requirements, or whether filters affect the 'all plans' claim.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
At two short sentences, the description is concise and front-loaded with the core purpose. Every sentence adds value without superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description partially explains return values (plans, prices, features, confidence scores) but omits behavior with optional filters (plan, billing_cycle) and does not mention error conditions or pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 100% of parameters with descriptions. The tool description adds no further parameter details beyond what the schema already provides, so it meets the baseline without adding extra meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves current pricing for a SaaS product and specifies the return includes plans, prices, features, and confidence scores. This distinguishes it from sibling tools like compare_products or search_products which have different focuses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for pricing retrieval but provides no explicit guidance on when to use this tool versus alternatives such as compare_products or report_incorrect_pricing. No when-not-to-use or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
report_incorrect_pricing (Grade C)
Report incorrect or outdated pricing data for a product.
| Name | Required | Description | Default |
|---|---|---|---|
| product | Yes | Product slug | |
| description | Yes | What's wrong with the current pricing data | |
| report_type | No | | incorrect_price |
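An illustrative call with the same `client` as above; the discrepancy in the description field is made up, and the report_type value mirrors the default shown in the table:

```typescript
const report = await client.callTool({
  name: "report_incorrect_pricing",
  arguments: {
    product: "notion",
    description: "Listed plan price no longer matches the vendor site.", // illustrative
    report_type: "incorrect_price", // default per the table above
  },
});
```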
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It does not disclose side effects, confirmation behavior, or authentication requirements. The action of 'reporting' is vague: whether it submits a ticket, flags a record, or persists data is unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at nine words, but it could include more useful context without becoming verbose. Not overly long, but slightly under-specified for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should provide more context about return values, success/failure indicators, or what happens after reporting. It feels incomplete for a reporting action.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67%, but the tool description adds no additional meaning to parameters. The 'report_type' enum lacks a description in both schema and tool description, missing an opportunity to clarify the options.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Report incorrect or outdated pricing data for a product', with specific verb 'report' and resource 'pricing data'. It distinguishes from sibling tools (compare_products, get_pricing, request_product, search_products) which serve different functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description does not mention prerequisites, exclusions, or sibling tool comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
request_product (Grade B)
Request that a new SaaS product be added to the PriceOf database.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Product name, e.g. 'Airtable' | |
| reason | No | Why this product should be added | |
| website | Yes | Product website URL | |
| category | No | Product category guess | |
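A sketch using the table's own example product and the shared `client`; the reason and category values are illustrative:

```typescript
const request = await client.callTool({
  name: "request_product",
  arguments: {
    name: "Airtable",                         // example from the schema
    website: "https://airtable.com",
    reason: "Comes up often in comparisons",  // illustrative
    category: "no-code database",             // a guess, as the schema allows
  },
});
```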
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should provide behavioral context. It only says 'request' without indicating whether the addition is immediate, requires approval, or has other side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence that states the tool's purpose without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description lacks details on return values, process, or constraints, leaving gaps for a tool with 4 parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema already provides descriptions for all parameters (100% coverage), and the tool description adds no extra meaning, so the baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool requests the addition of a new SaaS product to the PriceOf database, distinguishing it from siblings like search_products or compare_products.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs others, no context about prerequisites or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_products (Grade C)
Search for SaaS products by name, category, or description.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query, e.g. 'CRM', 'project management', 'notion' | |
| max_results | No | Maximum results | |
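One more hedged call with the shared `client`; the query string is taken from the schema's examples and the result cap is arbitrary:

```typescript
const results = await client.callTool({
  name: "search_products",
  arguments: {
    query: "project management", // example from the schema
    max_results: 5,              // arbitrary cap
  },
});
```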
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of behavioral disclosure. It does not mention side effects, results format, pagination, or any constraints (e.g., rate limits). The description is too brief to inform the agent about underlying behavior beyond the basic search action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that is easy to parse. It avoids unnecessary words but could include a bit more detail (e.g., about the result set) without losing conciseness. Still, it is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, the description is incomplete. It does not describe the output format or what data is returned (e.g., product IDs, names, prices). Without an output schema, the agent lacks information about the tool's output, which is important for downstream decisions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already explains both parameters. The tool description adds minimal value by mentioning searchable fields (name, category, description) but does not go beyond the schema's examples. Baseline score of 3 is appropriate since schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Search for SaaS products' and the search criteria 'by name, category, or description'. It is specific enough to understand the tool's purpose, but does not explicitly differentiate from sibling tools like compare_products or get_pricing, which could confuse an agent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., compare_products). It lacks context about typical use cases or scenarios, leaving the agent to infer usage from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!