PriceOf: Server Details

AI agents hallucinate SaaS pricing constantly. PriceOf fixes that with structured, verified data — 50+ products with confidence scores, freshness scores, and last-verified timestamps.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.2/5 across 5 of 5 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct purpose: search, get pricing, compare, report incorrect pricing, and request a new product. No overlap or ambiguity.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern (e.g., search_products, get_pricing), making them predictable and easy to understand.

Tool Count: 5/5

With 5 tools, the set is well-scoped for the domain of SaaS product pricing—neither too sparse nor too overwhelming.

Completeness: 4/5

Core operations (search, get, compare, report, request) are covered. Direct updating of pricing data is missing, but the reporting mechanism fills that gap adequately.

Available Tools

5 tools
compare_products: A

Compare pricing across multiple SaaS products side-by-side. Supports team-size cost calculation.

Parameters (JSON Schema)
  products (required): Product slugs to compare, e.g. ['notion', 'asana', 'linear']
  team_size (optional): Number of seats to calculate total cost
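
For concreteness, here is a minimal sketch of calling this tool with the official MCP TypeScript SDK over Streamable HTTP. The server URL is not shown on this page, so the endpoint below is a placeholder; the slugs come from the schema examples.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the real server URL is not shown on this page.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "priceof-demo", version: "1.0.0" });
await client.connect(transport);

// Compare three products for a 10-seat team, using slugs from the schema examples.
const comparison = await client.callTool({
  name: "compare_products",
  arguments: { products: ["notion", "asana", "linear"], team_size: 10 },
});
console.log(comparison.content); // content blocks; the exact shape is undocumented
```
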
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully convey behavioral traits. It mentions side-by-side comparison and team-size calculation but omits constraints (e.g., max 5 products, min 2), error handling, or what happens when a product slug is invalid. Important behavioral details are left out.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
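
One way to close this gap without bloating the prose is the ToolAnnotations block from the MCP spec (readOnlyHint, destructiveHint, idempotentHint, openWorldHint). Below is a sketch of what the server could declare for compare_products; the hint values and the 2-to-5-slug wording are assumptions about a tool that appears to be read-only.

```typescript
// Hypothetical server-side declaration: ToolAnnotations disclose behavioral
// traits that the prose description currently omits. All values are assumed.
const compareProductsTool = {
  name: "compare_products",
  description:
    "Compare pricing across multiple SaaS products side-by-side. " +
    "Supports team-size cost calculation. Accepts 2 to 5 product slugs.",
  annotations: {
    readOnlyHint: true,    // assumption: comparison does not modify any data
    destructiveHint: false,
    idempotentHint: true,  // assumption: same inputs yield the same comparison
    openWorldHint: false,  // assumption: confined to PriceOf's own catalog
  },
};
```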

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences, front-loaded with the core purpose. Every word earns its place. No extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should hint at return format. It does not. Constraints like max/min items are in schema but not highlighted. Given the complexity (2 params, no nested objects) and 100% schema coverage, the description is adequate but incomplete—missing output expectations and failure modes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds context ('side-by-side', 'team-size cost calculation') but does not enrich parameter meanings beyond what the schema already provides. The team_size parameter is explained in schema as 'Number of seats to calculate total cost', which aligns with the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it compares pricing across multiple SaaS products side-by-side and supports team-size cost calculation. This sharply distinguishes it from sibling tools like get_pricing (single product) and search_products (product discovery).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for multi-product comparison, but does not explicitly state when not to use this tool or suggest alternatives. The context from sibling tools provides some guidance, but the description itself lacks explicit usage boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pricing: A

Get current pricing for a SaaS product. Returns all plans with prices, features, and confidence scores.

Parameters (JSON Schema)
  product (required): Product slug, e.g. 'notion', 'slack', 'linear'
  plan (optional): Specific plan slug, e.g. 'pro', 'team', 'enterprise'
  billing_cycle (optional): Filter by billing cycle
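
Assuming the client from the compare_products sketch above is already connected, a filtered call might look like this. The slug and plan values come from the schema examples; the text-content handling is a guess, since the tool publishes no output schema.

```typescript
// Fetch only the monthly 'pro' plan for Notion. Both filters are optional;
// omitting them should return all plans, per the tool description.
const pricing = await client.callTool({
  name: "get_pricing",
  arguments: {
    product: "notion",        // required slug, from the schema examples
    plan: "pro",              // optional plan filter
    billing_cycle: "monthly", // optional; valid cycle values are undocumented
  },
});

// No output schema is published, so treating the result as text is a guess.
const blocks = pricing.content as Array<{ type: string; text?: string }>;
for (const block of blocks) {
  if (block.type === "text") console.log(block.text);
}
```
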
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully convey behavioral traits. It mentions 'returns all plans' and 'confidence scores' but lacks details on side effects (likely read-only), rate limits, authentication requirements, or whether filters affect the 'all plans' claim.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

At two short sentences, the description is concise and front-loaded with the core purpose. Every sentence adds value without superfluous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description partially explains return values (plans, prices, features, confidence scores) but omits behavior with optional filters (plan, billing_cycle) and does not mention error conditions or pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with descriptions. The tool description adds no further parameter details beyond what the schema already provides, so it meets the baseline without adding extra meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves current pricing for a SaaS product and specifies the return includes plans, prices, features, and confidence scores. This distinguishes it from sibling tools like compare_products or search_products which have different focuses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for pricing retrieval but provides no explicit guidance on when to use this tool versus alternatives such as compare_products or report_incorrect_pricing. No when-not-to-use or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

report_incorrect_pricing: C

Report incorrect or outdated pricing data for a product.

Parameters (JSON Schema)
  product (required): Product slug
  description (required): What's wrong with the current pricing data
  report_type (optional): no schema description; default: incorrect_price

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It does not disclose side effects, confirmation behavior, or authentication requirements. The action of 'reporting' is vague: whether it submits a ticket, flags a record, or persists data is unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at nine words, but it could include more useful context without becoming verbose. Not overly long, but slightly under-specified for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should provide more context about return values, success/failure indicators, or what happens after reporting. It feels incomplete for a reporting action.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67%, but the tool description adds no additional meaning to parameters. The 'report_type' enum lacks a description in both schema and tool description, missing an opportunity to clarify the options.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
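
A fix the review points at: give report_type a description and spell out its options. A hypothetical enriched JSON Schema fragment follows; only the incorrect_price default is confirmed by this page, and the other enum members are illustrative.

```typescript
// Hypothetical enriched input schema for report_incorrect_pricing. Only the
// 'incorrect_price' default appears on this page; the other enum members and
// all description wording are illustrative.
const reportInputSchema = {
  type: "object",
  properties: {
    product: { type: "string", description: "Product slug" },
    description: {
      type: "string",
      description: "What's wrong with the current pricing data",
    },
    report_type: {
      type: "string",
      enum: ["incorrect_price", "outdated_price", "missing_plan"],
      default: "incorrect_price",
      description: "Category of problem being reported",
    },
  },
  required: ["product", "description"],
} as const;
```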

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Report incorrect or outdated pricing data for a product', with the specific verb 'report' and the resource 'pricing data'. It is clearly distinguished from the sibling tools (compare_products, get_pricing, request_product, search_products), which serve different functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not mention prerequisites, exclusions, or sibling tool comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

request_product: B

Request that a new SaaS product be added to the PriceOf database.

Parameters (JSON Schema)
  name (required): Product name, e.g. 'Airtable'
  website (required): Product website URL
  reason (optional): Why this product should be added
  category (optional): Product category guess
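
Against the connected client from the earlier sketch, a request might look like the following; name and website are required, and all argument values beyond the schema's 'Airtable' example are illustrative.

```typescript
// Submit a request to add Airtable. name and website are required;
// reason and category are optional context for the maintainers.
const addition = await client.callTool({
  name: "request_product",
  arguments: {
    name: "Airtable",                             // from the schema example
    website: "https://airtable.com",              // required URL; value illustrative
    reason: "Frequently requested by our agents", // optional; illustrative
    category: "no-code database",                 // optional guess; illustrative
  },
});
```
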
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must provide the behavioral context. It only says 'request', without indicating whether the addition is immediate, requires approval, or has side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One concise sentence that states the tool's purpose without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description lacks details on return values, process, or constraints, leaving gaps for a tool with 4 parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides descriptions for all parameters (100% coverage), and the tool description adds no extra meaning, so the baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool requests the addition of a new SaaS product to the PriceOf database, distinguishing it from siblings like search_products or compare_products.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus others, and no context about prerequisites or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_products: C

Search for SaaS products by name, category, or description.

Parameters (JSON Schema)
  query (required): Search query, e.g. 'CRM', 'project management', 'notion'
  max_results (optional): Maximum results
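
The usage-guidelines critique below suggests the intended flow: discover a slug with search_products, then hand it to get_pricing. A sketch of that handoff, again against the connected client from earlier; since no output schema is published, the slug-extraction step is a guess about the response format.

```typescript
// Step 1: search by keyword to discover candidate products.
const search = await client.callTool({
  name: "search_products",
  arguments: { query: "project management", max_results: 5 },
});

// Step 2: no output schema is published, so treating results as text and
// reading a slug out of them is a guess about the response format.
const found = search.content as Array<{ type: string; text?: string }>;
const text = found.flatMap((b) => (b.type === "text" && b.text ? [b.text] : [])).join("\n");
console.log(text);

// Step 3 (hypothetical): once a slug such as 'linear' has been identified,
// fetch its full pricing with the dedicated tool.
const linearPricing = await client.callTool({
  name: "get_pricing",
  arguments: { product: "linear" },
});
```
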
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It does not mention side effects, results format, pagination, or any constraints (e.g., rate limits). The description is too brief to inform the agent about underlying behavior beyond the basic search action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that is easy to parse. It avoids unnecessary words but could include a bit more detail (e.g., about the result set) without losing conciseness. Still, it is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, the description is incomplete. It does not describe the output format or what data is returned (e.g., product IDs, names, prices). Without an output schema, the agent lacks information about the tool's output, which is important for downstream decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already explains both parameters. The tool description adds minimal value by mentioning searchable fields (name, category, description) but does not go beyond the schema's examples. Baseline score of 3 is appropriate since schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'Search for SaaS products' and the search criteria 'by name, category, or description'. It is specific enough to understand the tool's purpose, but does not explicitly differentiate from sibling tools like compare_products or get_pricing, which could confuse an agent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., compare_products). It lacks context about typical use cases or scenarios, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
