cyclesite-mcp-server
Server Details
Search, value, sell, and trust-check used bikes on Cyclesite — the UK's used-bicycle marketplace.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4.3/5 across all 33 tools.
Most tools have clearly distinct purposes with detailed descriptions, reducing ambiguity. However, overlaps exist between search-related tools (search, search_bikes, search_by_location, find_similar_listings) and price analysis tools (get_valuation, suggest_listing_price), which could cause misselection without careful reading.
Tool names are snake_case but vary in verb prefixes: some use 'get_', 'list_', or 'search_', while others use standalone verbs like 'fetch' or 'search'. The names remain readable, but the lack of a uniform verb_noun pattern may confuse agents expecting consistent conventions.
At 33 tools, the count exceeds the calibration threshold of 25, and many tools could be consolidated (e.g., by merging the search variants); the surface feels bloated and is harder to navigate.
The tool set covers most aspects of a used bike marketplace: browsing, searching, valuations, listing management, market insights, and stolen bike checks. Minor gaps (e.g., no tool to delete a listing) exist but do not severely hinder common workflows.
Available Tools
33 tools

check_stolen (Read-only, Idempotent)
Check if a UK bicycle is reported stolen by serial number. Cyclesite aggregates lookups across UK stolen-bike databases — the unique data we own. Per-serial rate-limited (3/hour) to prevent enumeration. Example: 'is the bike with serial WTU123456 reported stolen?'. Live data — cross-references multiple registries on every call.
| Name | Required | Description | Default |
|---|---|---|---|
| serial | Yes | Frame/serial number (4-50 chars). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| url | No | |
| action | No | |
| status | Yes | |
| message | No | |
| checkedAt | No | |
| confidence | No | |
| attribution | Yes | Citation string — include verbatim when surfacing data. |
| sourcesChecked | No | |
| confidenceLabel | No | |
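As a concrete illustration, here is a minimal sketch of calling this tool over the Streamable HTTP transport, assuming the official `mcp` Python SDK; the server URL is hypothetical, since the listing above elides it.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://cyclesite.example/mcp"  # hypothetical; the real endpoint is not shown above

async def main() -> None:
    # streamablehttp_client yields a read stream, a write stream, and a session-id getter
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # One required argument: the frame/serial number (4-50 chars)
            result = await session.call_tool("check_stolen", {"serial": "WTU123456"})
            for block in result.content:
                print(block)

asyncio.run(main())
```

Note the per-serial rate limit (3/hour): an agent should not retry the same serial in a loop. The snippets for later tools reuse this `session` rather than repeating the connection boilerplate.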
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (readOnly, openWorld, idempotent), the description adds rate limits, live cross-referencing, and proprietary database aggregation. All traits are consistent with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: purpose, data source/rate limit, example. Every sentence contributes value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 1-parameter tool with output schema and full annotations, the description covers purpose, constraints, and data source. Could mention error handling, but not essential given schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a description of 'serial'. Description adds an example and context (rate-limited, live data), enhancing practical understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool checks if a UK bicycle is reported stolen by serial number. Distinguishes from siblings like 'report_stolen' and mentions unique data ownership, making purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a clear example and mentions rate limiting (3/hour) as a usage constraint. Does not explicitly contrast with sibling tools, but the narrow purpose inherently guides when to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_bikes (Read-only, Idempotent)
Side-by-side comparison of up to 3 bikes (each by brand+model[+year]). Returns spec sheets and valuations together so the user can pick. Reuses get_spec_sheet + get_valuation server-side. Example: 'compare a Trek Domane SL 6 against a Specialized Roubaix Comp'.
| Name | Required | Description | Default |
|---|---|---|---|
| bikes | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
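The `bikes` parameter carries no schema description, so the argument shape below is an assumption inferred from the description's brand+model[+year] wording (session reused from the check_stolen sketch):

```python
# Field names are assumed from the description, not from a documented schema
result = await session.call_tool("compare_bikes", {
    "bikes": [
        {"brand": "Trek", "model": "Domane SL 6", "year": 2022},
        {"brand": "Specialized", "model": "Roubaix Comp"},  # year is optional
    ],
})
```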
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, and open-world. Description adds that it internally calls get_spec_sheet and get_valuation, returning combined results. This provides useful behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences plus an example. No redundant information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Description covers input structure, limit of 3 bikes, and combined output purpose. Output schema exists, so no need to detail return fields. Slight gap: no mention of error handling or behavior when bikes are not found, but overall adequate for a composite read tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, but description explains the 'bikes' parameter: up to 3 bikes, each identified by brand, model, and optional year. Provides an example that clarifies usage. Adds meaning beyond the bare schema structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it performs side-by-side comparison of up to 3 bikes, using brand+model+year. Includes an example, making purpose explicit. Differentiates from sibling tools like get_spec_sheet and get_valuation by noting it reuses them server-side.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explains when to use (comparing bikes with spec sheets and valuations) and gives an example. Implicitly informs that individual tools exist for separate needs, but does not explicitly state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
draft_listing (Read-only)
Sell-side helper: turn a seller's raw facts into a polished Cyclesite listing draft (title, description, suggested price, photo plan). Does NOT publish — for actual publication use publish_listing (requires OAuth). Useful for previewing what a listing would look like. Example: 'help me draft a listing for my 2021 Specialized Allez, very good condition, in Bristol'.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | | |
| year | No | | |
| brand | Yes | | |
| model | Yes | | |
| groupset | No | | |
| willShip | No | | |
| condition | No | | |
| frameSize | No | | |
| knownIssues | No | Honest declaration of any issues. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
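Since only knownIssues carries a schema description, a hedged example of the full argument set may help; all values are illustrative, and the shapes of the undescribed parameters are assumptions:

```python
# All values illustrative; only knownIssues has a documented meaning
draft = await session.call_tool("draft_listing", {
    "brand": "Specialized",
    "model": "Allez",
    "year": 2021,
    "condition": "very good",      # assumed free text or enum
    "frameSize": "56cm",           # assumed format
    "groupset": "Shimano Tiagra",
    "city": "Bristol",
    "willShip": True,
    "knownIssues": "small scratch on top tube",
})
```

Nothing is published by this call; actual publication goes through publish_listing and requires OAuth.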
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description clarifies that the tool only creates a draft and does not publish, which aligns with the readOnlyHint annotation. The openWorldHint is noted from annotations but not mentioned in the description; however, the draft nature implies potential hallucination. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences plus an example, with no unnecessary words. It efficiently communicates purpose, usage distinction, and provides a concrete usage scenario.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters and the presence of output schema and annotations, the description covers high-level purpose and sibling distinction but lacks parameter guidance and explicit mention of open-world behavior. It is adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only 11% schema coverage (only knownIssues has a description), the description should compensate but does not. It names only 'brand' and 'model' via the example, leaving seven parameters with no added meaning. The example gives context but not enough to cover all parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'turn a seller's raw facts into a polished Cyclesite listing draft (title, description, suggested price, photo plan).' It uses specific verbs and resources, and explicitly distinguishes it from publish_listing, a sibling tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description contrasts the tool with publish_listing, stating it does NOT publish and is useful for previewing. An example is provided. It could explicitly state when not to use (e.g., for final publication) but the sibling distinction serves that purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch (Read-only, Idempotent)
OpenAI deep-research / company-knowledge compatibility. Fetch the full document for a Cyclesite listing id (returned by the search tool). Returns { id, title, text, url, metadata } — text is a plain-prose summary of the listing's description and specs, suitable for direct quoting in deep-research answers.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Document id from search results. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| url | Yes | |
| text | Yes | |
| title | Yes | |
| metadata | No | |
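Because `id` must come from the search tool, a short chaining sketch clarifies the intended flow; the search tool's argument shape and the result layout are assumptions, since neither is documented in this section:

```python
# Argument and result shapes assumed; only the id-from-search contract is documented
hits = await session.call_tool("search", {"query": "Trek Domane SL 6"})
first_id = hits.structuredContent["results"][0]["id"]   # assumed result layout
doc = await session.call_tool("fetch", {"id": first_id})
# doc carries { id, title, text, url, metadata } per the output schema above
```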
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true and idempotentHint=true, so the description adds value by specifying the return fields and that text is a plain-prose summary. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with key purpose, no fluff, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and output schema mentioned, the description covers purpose, return structure, and use case. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter 'id' with schema description 'Document id from search results.' The description does not add additional meaning beyond what the schema provides; schema coverage is 100% so baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches the full document for a Cyclesite listing id, with specific return fields and a note about plain-prose summary suitable for deep-research. It differentiates from sibling search tools by indicating it is used after search results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates usage after the search tool ('returned by the search tool'), but does not explicitly mention when not to use or alternative tools like get_listing_detail.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_similar_listings (Read-only, Idempotent)
Given a Cyclesite listing slug, return up to 5 similar active listings (same category, ±25% price, same brand or frame size weighted higher). Use when the user is interested in one bike and wants alternatives. Example: 'show me bikes like that one'.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Source listing URL slug. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| listings | Yes | |
| attribution | Yes | Citation string — include verbatim when surfacing data. |
| resultsCount | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint. The description adds valuable behavioral details: returns up to 5 results, matching criteria (same category, ±25% price, brand or frame size weighted higher), and an example query. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with two sentences and an example query. Every sentence adds value and the key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown but indicated), the description adequately covers the tool's logic and usage. It is complete for a simple similarity lookup with one parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'slug' is documented in the schema as 'Source listing URL slug.' The description reinforces this but adds minimal new semantic value. Schema description coverage is 100%, hence baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds similar active listings given a slug, with specific criteria. It distinguishes from siblings like search or get_listing_detail by focusing on similarity based on category, price, and brand/frame size.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Use when the user is interested in one bike and wants alternatives' and provides an example. It implies appropriate usage but does not explicitly exclude scenarios or compare to alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_buying_guide (Read-only, Idempotent)
Search Cyclesite's expert buying guides (24+ articles by cycling-journalism authors). Returns up to 3 matching guides with title, excerpt, difficulty, reading time, and URL. Use for educational queries that don't need live inventory. Example: 'how do I choose a bike size?', 'tips for buying a used e-bike'.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | 1-5, default 3. | |
| query | Yes | What to search for (e.g. "first road bike", "bike sizing"). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
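A hedged invocation sketch, continuing the session from the check_stolen example:

```python
# Educational query; no live inventory is touched
guides = await session.call_tool("get_buying_guide", {
    "query": "bike sizing",
    "limit": 3,  # optional, 1-5, default 3
})
```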
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds details beyond annotations: returns up to 3 guides, lists returned fields, mentions source (24+ articles by cycling-journalism authors). No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with example, front-loaded key purpose. Every word is useful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given output schema exists and description covers return fields, the tool is fully described for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%. Description adds example queries and clarifies limit range (1-5, default 3) beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool searches buying guides with specific fields returned. It distinguishes from sibling tools like search_bikes by indicating it's for educational queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use for educational queries that don't need live inventory' and provides example queries. Does not explicitly state when not to use or name alternatives, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_depreciation (Read-only, Idempotent)
Brands ranked by how well (or poorly) they hold their value, from Cyclesite's UK sold-price corpus. Returns top N brands by % retained vs new RRP. Example: 'which bike brands hold their value best?'.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | |
| limit | No | 1-20, default 10. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnly/Idempotent hints. The description adds context about the data source (Cyclesite's UK sold-price corpus) and calculation (% retained vs new RRP), going beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. Front-loads the core purpose and provides an immediate usage example.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with 2 optional parameters, the description covers the essential aspects: what it returns, how to use it, and example context. Output schema exists, so return values need no further explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 50% of parameters (limit has description, sort has enum). The description adds value by referencing 'top N' and 'best or worst', but does not fully compensate for missing schema details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns brands ranked by depreciation, a specific verb ('get', 'ranked') and resource ('brands'), and distinguishes from siblings like list_brands and get_valuation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear usage example ('which bike brands hold their value best?') and implies the tool is for value retention questions. However, it does not explicitly say when to prefer alternatives or give when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_listing_detail (Read-only, Idempotent)
Full details for a specific bike listing on Cyclesite — specs, condition, frame number presence, photos, delivery, seller's city. Provide the URL slug returned by search_bikes or get_recent_listings. Example: after the user says 'tell me more about that 2022 Trek Domane', call this with the slug from the prior result.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Listing URL slug (e.g. "used-trek-domane-sl-6-2022"). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark the tool as readOnly, openWorld, and idempotent. The description adds valuable behavioral context: the return value includes full details (specs, condition, frame number, photos, delivery, seller's city) and the source of the slug. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that front-load the purpose and follow with concrete usage guidance and an example. Every sentence adds value, no filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown but indicated true), the description does not need to detail return values. It fully covers purpose, data returned, source of required parameter, and a realistic invocation example. With 29 siblings, it is clear when to use this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the parameter 'slug' described as a listing URL slug. The description adds provenance: the slug comes from 'search_bikes' or 'get_recent_listings', which aids the agent in chaining calls. This additional context justifies above baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('get') and resource ('listing detail') and enumerates the exact data returned (specs, condition, frame number, photos, delivery, seller's city). It clearly distinguishes from sibling tools like 'search_bikes' by requiring a slug from prior results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides an explicit usage scenario: after a user expresses interest in a specific bike, use the slug from a prior result. It implies when to use but does not explicitly mention when not to use or list alternatives, though the context of 29 siblings makes the purpose clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_health (Read-only, Idempotent)
Buyer's-vs-seller's market signal for the UK used-bike market — should the user buy or sell now? Composite indicator from days-to-sell, asking-vs-sold-price spread, and inventory levels. Example: 'is now a good time to buy a road bike?'. Refreshed nightly.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
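Parameter-less tools are invoked with an empty arguments object; a one-line sketch with the session from the check_stolen example:

```python
# No inputs; returns the composite buy/sell signal described above
signal = await session.call_tool("get_market_health", {})
```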
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, openWorldHint, and idempotentHint. The description adds value by explaining the composite indicator components (days-to-sell, spread, inventory) and stating 'Refreshed nightly.' This provides update frequency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: two sentences plus an example. Every word adds value, and the information is front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and an output schema exists, the description provides sufficient conceptual context. It explains the composite indicator and includes an example, but does not detail the output format (delegated to the schema). A slight improvement would be to mention the intended audience or typical use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so the description carries full burden. It clearly explains what the tool returns conceptually (a composite market signal), which is essential for an agent to understand the output without depending on parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's purpose: 'Buyer's-vs-seller's market signal for the UK used-bike market — should the user buy or sell now?' It specifies the verb (signal), resource (UK used-bike market), and scope. This distinguishes it from siblings like get_market_index and get_price_trends.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an example ('is now a good time to buy a road bike?') and states when to use (to decide buy/sell). However, it does not explicitly mention when not to use or provide alternatives among siblings, which is a minor gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_index (Read-only, Idempotent)
Current UK used-bike market prices by category, from Cyclesite's nightly index. Returns median + range per category (road, mtb, gravel, e-bike, etc.). Example: 'how does the UK used-bike market look right now?'. Refreshed nightly from completed sales.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, openWorld, idempotent. Description adds useful context: data refreshed nightly, returns median + range per category, source from completed sales.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no wasted words, front-loaded with key information about data and purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully covers what the tool does, its data source, update frequency, and output structure. With an output schema present, no further detail needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters, so baseline 4 applies. Description adds no parameter info, but none needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns current UK used-bike market prices by category from Cyclesite's nightly index. Distinguishes from siblings like get_market_health, get_price_trends, get_depreciation by focusing on the broad index.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an example query that implies when to use (asking about overall market). Does not explicitly mention when not to use or list alternatives, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_model_info (Read-only, Idempotent)
Cyclesite catalogue entry for a brand+model: category, year range, AI-generated description, key specs, market summary. Reference data — refreshed when models are added or specs change. Example: 'tell me about the Specialized Allez'.
| Name | Required | Description | Default |
|---|---|---|---|
| brand | Yes | | |
| model | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and openWorldHint. The description adds context by stating it is 'reference data — refreshed when models are added or specs change', which explains the openWorldHint (data may change over time) and the read-only nature. It also mentions 'AI-generated description', implying non-deterministic generation but idempotent for same inputs. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences plus an example, all of which add value: the first sentence states the tool's purpose and output content, the second explains data freshness, and the example clarifies usage. No redundant or unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown), the description does not need to document return values. It covers the tool's input (brand, model), output content categories, and update behavior. The mention of 'AI-generated description' is a notable behavioral trait. The description is sufficiently complete for a reference data tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has two string parameters (brand, model) with 0% description coverage. The description partially compensates by providing an example ('Specialized Allez'), which indicates the parameters but does not specify valid values, formatting, or constraints. Given the low coverage, more explicit parameter guidance would be needed for a higher score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns a catalogue entry for a brand and model, listing specific content categories (category, year range, AI-generated description, key specs, market summary). It distinguishes from sibling tools like get_listing_detail or get_spec_sheet by focusing on static reference data, and the example reinforces its purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies it is reference data that refreshes when models or specs change, indicating it is for static model information. The example 'tell me about the Specialized Allez' provides a clear usage scenario. However, it does not explicitly contrast with siblings like get_spec_sheet or search_bikes, which could be alternative tools for more detailed or dynamic queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_my_enquiries (Read-only, Idempotent)
Show buyer enquiries on the authenticated user's Cyclesite listings. Requires OAuth scope listings:read. Example: 'any messages about my Trek?'.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | 1-20. | |
| listingId | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
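A hedged sketch; note that the OAuth token is supplied by the transport or gateway when the connection is established, not as a tool argument:

```python
# Requires OAuth scope listings:read on the connection itself
enquiries = await session.call_tool("get_my_enquiries", {
    "limit": 10,            # 1-20
    # "listingId": "...",   # optional filter; id format is undocumented
})
```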
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent behavior. The description adds the OAuth scope requirement, which is beyond annotations. No contradictions, and the example provides operational context. Missing details like pagination or filtering behavior are minor given the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: two sentences and an example. Every sentence adds value with no redundancy. Purpose is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description doesn't need to explain return values. It covers the tool's purpose and auth requirement. However, it lacks comparison to siblings like 'respond_to_enquiry' or guidance on when no enquiries exist, which would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema describes limit (1-20) but not listingId. The tool description's example hints at filtering by listing but doesn't explicitly explain parameters. With 50% schema coverage, the description adds no additional meaning beyond the schema, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool shows buyer enquiries on the authenticated user's listings, with a specific example. It distinguishes from siblings like make_enquiry and respond_to_enquiry by focusing on listing/viewing rather than creating or responding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions required OAuth scope and provides an example, implying usage context. However, it lacks explicit guidance on when to use this tool versus related siblings (e.g., when to view vs. respond) or conditions like no enquiries exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_price_trends (Read-only, Idempotent)
UK used-bike price trends over the last N months by category, from Cyclesite's index series. Example: 'how have road-bike prices changed in 2026?'. Monthly data, refreshed at month-end.
| Name | Required | Description | Default |
|---|---|---|---|
| months | No | 1-24. | |
| category | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
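A hedged sketch; the category value is an assumption, since the schema leaves that parameter undescribed:

```python
trends = await session.call_tool("get_price_trends", {
    "months": 12,          # 1-24
    "category": "road",    # assumed enum value
})
```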
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true. The description adds 'Monthly data, refreshed at month-end', which details data freshness beyond annotations. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states purpose, second gives example and data freshness. No redundant words; front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, output schema present), the description covers scope (UK, used-bike), data source, refresh cycle, and example. Doesn't detail edge cases, but output schema handles return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers 50% of parameters (months has description '1-24.', category has none). The description adds 'by category' and 'from Cyclesite's index series', providing context but not enumerating category values. It partially compensates for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'UK used-bike price trends over the last N months by category, from Cyclesite's index series', with a concrete example. It distinguishes from siblings like 'get_market_index' which covers broader market health, and 'get_depreciation' which focuses on depreciation curves.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides an example query ('how have road-bike prices changed in 2026?') and notes monthly refresh, giving clear usage context. However, it does not explicitly state when to avoid this tool vs alternatives like 'get_market_index' or 'get_depreciation'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_recent_listings (Read-only, Idempotent)
What's new on Cyclesite right now — up to 10 of the freshest active UK listings, refreshed every 15 minutes. Use when the user asks 'what's new today?' or 'any new road bikes this week?' rather than for a specific filter. Optional category + maxPrice filters. Live data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | How many listings to return (1-10, default 10). | |
| category | No | | |
| maxPrice | No | Maximum price in GBP. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| listings | Yes | |
| attribution | Yes | Citation string — include verbatim when surfacing data. |
| resultsCount | Yes | |
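Since this tool documents both its filters and a structured output, a sketch that also reads the required output fields (structuredContent is assumed to mirror the output schema above):

```python
fresh = await session.call_tool("get_recent_listings", {
    "limit": 5,            # 1-10, default 10
    "category": "road",    # assumed enum value
    "maxPrice": 1500,      # GBP
})
data = fresh.structuredContent or {}   # assumed to mirror the output schema
print(data.get("resultsCount"))
print(data.get("attribution"))         # include verbatim when surfacing data
```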
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only, open-world, and idempotent. The description adds behavioral context: refreshed every 15 minutes, up to 10 listings, active UK listings, live data. This extra detail is valuable but not necessary given the strong annotations, hence 4.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with key information, and every sentence adds value. No redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple recent listings tool with strong annotations and an existing output schema, the description covers all necessary context: what it returns, freshness, limit, and optional filters. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 67% of parameters with descriptions; category has enum values. The description mentions 'optional category + maxPrice filters' but adds no new semantic detail beyond the schema. With high schema coverage, baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as retrieving recent listings (up to 10, freshest active UK listings) and distinguishes it from specific search tools by giving example user queries like 'what's new today?' It explicitly states the resource and verb.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: use when the user asks 'what's new today?' or 'any new road bikes this week?' rather than for a specific filter. This tells the agent when to use this tool and when to consider alternatives (sibling tools like search).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_size_guide (Read-only, Idempotent)
Frame-size recommendation for a rider's height and bike category, sourced from Cyclesite's real UK listings (riders' declared heights against frame sizes they bought). Falls back to industry-standard charts when the dataset is thin. Example: 'I'm 178cm — what road-bike size do I need?'.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | | |
| heightCm | Yes | Rider height in centimetres (120-220). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint, openWorldHint, and idempotentHint. Description adds valuable context: data from Cyclesite listings, fallback to industry charts. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tight sentences plus example. No fluff. Front-loaded with purpose and data source. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With output schema present, description covers source, fallback, and example. No missing behavioral or contextual details for a simple lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage 50% (heightCm described, category only enum). Description mentions height and bike category but adds no detail on category values or optionality. Baseline 3 with slight extra context from example.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it provides frame-size recommendations based on height and bike category, sourced from real UK listings with fallback. Verb+resource ('recommend size') is specific and distinct from siblings like 'get buying guide'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Example query implies typical use case, but no explicit when-not-to-use or alternatives. Context shows no direct sibling conflicts, so guidance is adequate though not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_spec_sheet (Read-only, Idempotent)
Aggregated spec sheet for a brand+model[+year], derived from Cyclesite's live UK inventory plus the catalogue record. Returns the most-common frame material, wheel size, groupset, brakes, weight, and (for e-bikes) motor and battery specs. Example: 'what groupset does a Canyon Endurace usually have?'.
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | | |
| brand | Yes | | |
| model | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly, openWorld, idempotent. Description adds context: data is aggregated from live inventory and catalogue, returns most-common values, with no contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus an example, no fluff. Purpose, data source, and output fields are front-loaded. Efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description sufficiently covers the tool's purpose, inputs, and output. The UK inventory scope is mentioned, and the example aids understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% coverage; the description mentions brand+model[+year] only implicitly. No format, constraints, or explanation for the year parameter. The example partly compensates, but not sufficiently.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool aggregates a spec sheet for a brand and model, optionally narrowed by year, specifies data sources, and lists returned fields. The example query clarifies typical use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance. The example implies a use case but does not distinguish from siblings like get_model_info or get_buying_guide.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_valuation (Read-only, Idempotent)
What a used UK bike is worth right now — Cyclesite's flagship tool. Returns median, range, condition breakdown, confidence level, 90-day price trend, and comparable active listings. Sourced from real completed UK sales (sold-price data, refreshed nightly), not asking prices. The data Cyclesite is uniquely the source for. Example: 'what's a 2022 Trek Domane SL 6 worth?'.
| Name | Required | Description | Default |
|---|---|---|---|
| brand | Yes | | |
| model | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| url | No | |
| summary | Yes | One-sentence summary safe to quote verbatim. |
| confidence | No | |
| priceTrend | No | |
| attribution | Yes | Citation string — include verbatim when surfacing data. |
| maxPriceGbp | No | Price in GBP. |
| minPriceGbp | No | Price in GBP. |
| avgDaysToSell | No | |
| medianPriceGbp | No | Price in GBP. |
| recentSalesCount | No | |
| conditionBreakdown | No | |
| activeListingsCount | No | |
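A sketch of the flagship call, reading the documented output fields; structuredContent is assumed to mirror the output schema above:

```python
valuation = await session.call_tool("get_valuation", {"brand": "Trek", "model": "Domane SL 6"})
data = valuation.structuredContent or {}
# summary and attribution are the only required fields; quote both verbatim
print(data.get("summary"))
print(data.get("medianPriceGbp"), data.get("minPriceGbp"), data.get("maxPriceGbp"))
print(data.get("attribution"))
```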
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, openWorldHint. The description adds significant behavioral details: data source (real completed UK sales, not asking prices), refresh rate (nightly), and output contents (median, range, condition breakdown, confidence level, 90-day trend, comparable listings). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is three sentences plus an example, front-loading the primary purpose. Every sentence adds value: what it returns, data quality, uniqueness. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (documenting return values) and comprehensive annotations, the description completes the picture with critical context about data provenance and update frequency. An agent fully understands when to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 2 required parameters (brand, model) with no descriptions (0% coverage). The description does not explain these parameters beyond the example, so it adds no semantic value. Given low coverage, the description should compensate but fails to do so.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides the current worth of a used UK bike, listing specific outputs (median, range, condition breakdown, etc.) and emphasizing it uses real sold-price data. It distinguishes from siblings like 'get_depreciation' and 'suggest_listing_price' by calling itself the 'flagship tool' and highlighting its unique data source.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for quick valuations of used UK bikes with an example ('what's a 2022 Trek Domane SL 6 worth?'), but it does not explicitly state when to use this tool vs. alternatives or provide exclusions. Still, the context is clear enough for an agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
grade_listing_qualityARead-onlyIdempotentInspect
Categorical quality grade for a Cyclesite listing (excellent / good / fair / weak) plus up to 2 wins and 2 flags. Helps a buyer assess trustworthiness; helps a seller self-audit. Example: 'is this listing trustworthy?' (provide the slug). Note: returns the categorical judgement only, not the underlying score (intentional to avoid gaming).
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that only categorical judgment is returned, not the underlying score (to avoid gaming), which is important behavioral context beyond annotations. Consistent with readOnlyHint, openWorldHint, and idempotentHint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, an example, and a note—all relevant and front-loaded. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers core behavior and constraints. Output schema exists, so the lack of detailed return values is acceptable. Minor missing details (e.g., what 'flags' and 'wins' mean) are not critical given schema coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one required parameter 'slug' with no schema description; the description mentions 'provide the slug' in an example but does not define what a slug is. Coverage is 0%, so the description partially compensates but could be more precise (e.g., 'listing slug').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a categorical quality grade (excellent/good/fair/weak) plus wins and flags, distinguishing it from siblings like get_listing_detail or check_stolen. The verb 'grade' is explicit and specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description helps a buyer assess trustworthiness and a seller self-audit, with an example query. It does not explicitly exclude cases or name alternatives, but the sibling list implies specialized use. Slightly clearer guidance on when not to use it would help.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_brandsARead-onlyIdempotentInspect
Paginated UK bike-brand catalogue from Cyclesite, ordered by stock level. Use to validate a brand name, surface options to a user, or paginate the catalogue. Stock counts are returned as bands (none / 1-5 / 6-25 / 26-100 / 100+) — Cyclesite doesn't expose precise per-brand inventory. Example: 'what brands of e-bike are available?'.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Optional search filter (case-insensitive contains). | |
| limit | No | 1-50, default 25. | |
| offset | No | 0-500. |
Output Schema
| Name | Required | Description |
|---|---|---|
| limit | No | |
| total | No | |
| brands | No | |
| offset | No | |
| hasMore | No | |
| attribution | No | Citation string — include verbatim when surfacing data. |
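Since the description highlights pagination and the schema caps offset at 500, a sketch of paging the whole catalogue may help; it assumes a connected `client` like the one in the get_valuation sketch, and structured results shaped as the output schema suggests:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Page through the brand catalogue 50 at a time (the documented maximum),
// stopping when hasMore is false or the documented offset ceiling (500) is hit.
async function allBrands(client: Client): Promise<unknown[]> {
  const brands: unknown[] = [];
  for (let offset = 0; offset <= 500; offset += 50) {
    const res = await client.callTool({
      name: "list_brands",
      arguments: { limit: 50, offset },
    });
    const page = (res as { structuredContent?: unknown }).structuredContent as
      | { brands?: unknown[]; hasMore?: boolean }
      | undefined;
    brands.push(...(page?.brands ?? []));
    if (!page?.hasMore) break;
  }
  return brands;
}
```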
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral details beyond the annotations: orders by stock level, returns stock counts as bands, and notes the limitation that Cyclesite does not expose precise inventory. This fully informs the agent about what to expect, and there is no contradiction with the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences, each adding essential information. The main purpose is front-loaded, followed by use cases, then a key behavioral detail and an example. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 optional parameters, an output schema, and helpful annotations, the description covers all necessary aspects: what it lists, ordering, filtering, pagination, stock data format, and a concrete example. It leaves no significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so baseline is 3. The description enhances meaning by clarifying that the catalogue is ordered by stock level (affects q and ordering) and that stock counts are bands, adding context that the schema alone does not convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool as a paginated UK bike-brand catalogue with specific verb (list) and resource (brands). It implicitly distinguishes from sibling tools like list_models_for_brand by focusing on brands, and provides an example usage that clarifies its role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states three use cases: validate a brand name, surface options to a user, or paginate the catalogue. This gives clear context, but it does not mention when not to use or name alternative tools, so it misses the highest bar.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_models_for_brandARead-onlyIdempotentInspect
Models for a brand on Cyclesite (paginated). Returns model names, year ranges, in-stock flag. Example: "what Trek road bikes are available?" → list_brands(q:"Trek") → list_models_for_brand(brandSlug:"trek", category:"road").
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | 1-50, default 25. | |
| offset | No | 0-500. | |
| category | No | ||
| brandSlug | Yes | Brand slug from list_brands (lowercase, hyphenated). |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
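The example in the description implies a two-step chain; a sketch of that flow follows, assuming a connected `client` and brand records that carry a `slug` field (an assumption, since the brands output of list_brands does not enumerate fields here):

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// "what Trek road bikes are available?": resolve the brand slug, then list
// its road models, mirroring the chain shown in the description.
async function trekRoadModels(client: Client) {
  const brandsRes = await client.callTool({
    name: "list_brands",
    arguments: { q: "Trek" },
  });
  // Assumption: each brand record carries a lowercase, hyphenated `slug`.
  const brands = ((brandsRes as { structuredContent?: unknown }).structuredContent as
    | { brands?: { slug?: string }[] }
    | undefined)?.brands ?? [];
  const brandSlug = brands[0]?.slug ?? "trek";

  return client.callTool({
    name: "list_models_for_brand",
    arguments: { brandSlug, category: "road" },
  });
}
```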
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent. Description adds pagination behavior and output fields. Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences plus an example, front-loaded with purpose and output. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool and presence of an output schema, description adequately covers input, pagination, and output fields. Complete for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema descriptions cover 75% of parameters; the description adds minimal extra meaning (e.g., the origin of brandSlug). The category enum lacks a description, and the tool description does not compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists models for a brand on Cyclesite, paginated, returning specific fields (names, year ranges, in-stock). Differentiates from sibling tools like list_brands and get_model_info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an explicit example workflow showing when to use it after list_brands, with category filtering. No explicit when-not-to-use or alternatives, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_my_listingsARead-onlyIdempotentInspect
Show the authenticated user's Cyclesite listings (draft / active / sold). Requires OAuth scope listings:read. Example: 'how are my listings doing?'.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | 1-20. | |
| status | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations indicating readOnly, idempotent, and openWorld, the description adds the required OAuth scope, increasing transparency. However, it does not discuss other behavioral aspects like pagination or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences cover purpose, scope, auth, and usage example with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity and presence of output schema, the description adequately covers key aspects. It could mention result pagination or filtering behavior, but is largely sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions 'draft / active / sold' which aligns with the status enum, adding marginal value. The limit parameter is not elaborated. With 50% schema description coverage, the description does not significantly compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Show the authenticated user's Cyclesite listings' with a specific verb and resource, and distinguishes from sibling tools like search or get_listing_detail by emphasizing personal listings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It mentions OAuth scope requirement and provides an example utterance, but does not explicitly contrast with alternatives or specify when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
make_enquiryAInspect
Send an enquiry to a Cyclesite seller on the buyer's behalf — Cyclesite becomes the messaging layer for the AI conversation. Per-buyer-per-listing daily cap (2/day) prevents spam. The seller is emailed; the buyer's reply appears via get_my_enquiries. Requires OAuth scope enquiries:respond (note: the scope name is shared with seller-side replies). Example: 'message the seller of that Trek and ask if they'd take £1,400 collection only in Manchester next Saturday'.
| Name | Required | Description | Default |
|---|---|---|---|
| message | Yes | The buyer's question to the seller (10-2000 chars). | |
| listingId | Yes | Bike ID from search_bikes / get_listing_detail. |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
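The description says `listingId` should come from search_bikes or get_listing_detail; a sketch of that handoff, assuming a connected `client` authorised for `enquiries:respond` and listing records that expose an `id` field (the field name is an assumption):

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Find a listing, then message its seller: two calls, as the description directs.
async function enquireAboutTrek(client: Client) {
  const search = await client.callTool({
    name: "search_bikes",
    arguments: { brand: "Trek", city: "Manchester", maxPrice: 2000 },
  });
  // Assumption: each listing in the structured result carries an `id`.
  const listings = ((search as { structuredContent?: unknown }).structuredContent as
    | { listings?: { id?: string }[] }
    | undefined)?.listings ?? [];
  const listingId = listings[0]?.id;
  if (!listingId) throw new Error("no matching listing");

  // Message must be 10-2000 chars; the per-buyer-per-listing cap is 2/day.
  return client.callTool({
    name: "make_enquiry",
    arguments: {
      listingId,
      message: "Would you take £1,400, collection only, in Manchester next Saturday?",
    },
  });
}
```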
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses email notification, buyer reply retrieval via get_my_enquiries, and rate limiting. Annotations already indicate non-destructive write; description adds useful context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences front-loading purpose, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage, behavioral details, and parameter sources. Output schema exists, so return values are not needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with descriptions. The description adds that listingId comes from search results and reinforces message constraints. Adds value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool sends an enquiry on behalf of a buyer, with a specific verb and resource. It distinguishes from sibling tools like 'respond_to_enquiry' and 'get_my_enquiries' by explaining the flow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions daily cap (2/day) and required OAuth scope, and provides a concrete example. Implicitly contrasts with seller-side reply tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mark_as_soldAIdempotentInspect
Mark a Cyclesite listing as sold (optionally with the final sale price). Requires OAuth scope listings:manage. Example: 'mark my Trek Domane as sold for £1,750'.
| Name | Required | Description | Default |
|---|---|---|---|
| listingId | Yes | ||
| salePriceGbp | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate mutability (readOnlyHint=false) and idempotency. Description adds OAuth scope requirement but does not address irreversible nature or state change details beyond the action itself.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus an example utterance, with no wasted words. Information is front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations, the existence of an output schema, and only 2 parameters, the description covers the main action, the optional parameter, and the scope. It lacks context about whose listings can be marked sold, but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Parameter schema coverage is 0%, but the description notes that salePriceGbp is optional and gives an example with currency. It does not detail listingId beyond it being required, though the example clarifies its role.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Mark' and resource 'listing' as sold, with an example that clarifies the action. It clearly distinguishes from sibling tools like reserve_listing or publish_listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description states required OAuth scope and provides a concrete example, giving clear context for when to use. However, it does not explicitly exclude alternatives or mention when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
publish_listingAIdempotentInspect
Publish a Cyclesite listing on the user's behalf. Multi-step: first call (no draftId) returns a phone-friendly photo upload URL; once 3+ photos are uploaded, the next call returns either step:'live' (during launch promo, no fee) or step:'payment_required' with a Stripe Checkout URL for the £10.99 listing fee. Idempotent — keep calling with the same draftId until step:'live'. Requires OAuth scope listings:publish. Example flow: user says 'sell my Trek Domane' → call publish_listing → assistant directs user to upload URL → user uploads → call again → step:'live' → seller receives a confirmation email with a 24h undo link.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | UK city. | |
| year | No | ||
| brand | No | ||
| model | No | ||
| title | No | Listing title (10-160 chars). | |
| draftId | No | Returned by a previous call. Omit on first call. | |
| category | No | ||
| groupset | No | ||
| priceGbp | No | ||
| willShip | No | ||
| condition | No | ||
| frameSize | No | ||
| description | No | Listing description (30-5000 chars). | |
| frameSerial | No | Frame/serial number — runs a stolen-bike check before publish. | |
| knownIssues | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| step | Yes | |
| draftId | No | |
| message | Yes | |
| listingUrl | No | |
| paymentUrl | No | |
| photosNeeded | No | |
| listingFeeGbp | No | Price in GBP. |
| photoUploadUrl | No |
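Because the tool is explicitly idempotent and multi-step, a sketch of the keep-calling loop the description prescribes may help. It assumes a connected `client` with the `listings:publish` scope, structured results matching the output schema, and an arbitrary polling interval (the server does not specify one):

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

type PublishStep = {
  step: string;            // 'live', 'payment_required', or an upload step
  draftId?: string;
  photoUploadUrl?: string; // returned on the first call, per the description
  paymentUrl?: string;     // Stripe Checkout URL when step is 'payment_required'
};

// Keep calling with the same draftId until step is 'live' or 'payment_required',
// exactly as the description instructs. Retrying with draftId alone assumes the
// draft stores the listing fields from the first call.
async function publishUntilDone(client: Client, draft: Record<string, unknown>) {
  let draftId: string | undefined;
  for (;;) {
    const res = await client.callTool({
      name: "publish_listing",
      arguments: draftId ? { draftId } : draft, // omit draftId on the first call
    });
    const out = (res as { structuredContent?: unknown }).structuredContent as
      | PublishStep
      | undefined;
    if (!out) throw new Error("expected structured output");
    draftId = out.draftId ?? draftId;

    if (out.step === "live" || out.step === "payment_required") return out;

    // Otherwise direct the user to out.photoUploadUrl (3+ photos required),
    // wait, and call again. The 30s interval is an assumption.
    await new Promise((resolve) => setTimeout(resolve, 30_000));
  }
}
```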
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Disclosures far exceed annotations: multi-step behavior, photo upload requirement, conditional payment step, idempotency, and OAuth scope. No contradiction with annotations (readOnlyHint=false, openWorldHint=true, idempotentHint=true, destructiveHint=false).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose, followed by a concise step-by-step explanation and an example flow. Every sentence adds value with no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite high parameter count and low schema coverage, the description fully explains the multi-step workflow, upload process, payment step, and required OAuth scope. The output schema exists, so return values need not be detailed here.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is only 33%, so the description must compensate. It adds meaning for draftId (omit on first call, returned later) and frameSerial (stolen bike check), and character limits for title and description. However, many parameters like brand, model, and year remain unexplained beyond the schema, which is a gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool publishes a Cyclesite listing and outlines the multi-step process. It distinguishes from sibling tools like draft_listing by detailing the specific flow involving photo upload and payment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage instructions: first call without draftId, then subsequent calls with draftId until step:'live'. It also mentions OAuth scope and idempotency, but does not explicitly state when not to use this tool versus alternatives like draft_listing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recommend_bike_for_budgetARead-onlyIdempotentInspect
Curated picks from Cyclesite's live UK inventory for a budget and intent. Prefers higher-engagement listings. Returns up to 5 picks with a one-line rationale each. Example queries: 'a road bike for £1,500 for weekend rides', 'best e-MTB I can buy under £3,000', 'commuter bike in London under £400'.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | UK city to focus on (optional). | |
| limit | No | 1-10, default 5. | |
| useCase | No | Free-text intent (e.g. "commuting", "weekend trail", "first road bike"). | |
| category | No | ||
| budgetGbp | Yes | Maximum budget in GBP. |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Disclosures beyond annotations: prefers higher-engagement listings, returns up to 5 picks with rationale. Annotations already declare readOnlyHint=true and idempotentHint=true, but description adds curation behavior, which is helpful. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences: main purpose, behavior (prefers high-engagement, up to 5, rationale), and examples. Front-loaded, no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers core functionality and return format (up to 5 with rationale). With output schema present, return values are handled. Missing edge cases or failure modes, but sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 80%, so the baseline is 3. The description adds no new parameter-level details beyond the schema, though the example queries demonstrate typical combinations. Adequate, but not exceptional.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies a clear verb (recommend), resource (bikes from Cyclesite's live UK inventory), and constraints (budget and intent). It distinguishes from sibling tools like search_bikes by emphasizing curated picks with rationale, limited to 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides example queries that illustrate ideal use cases (e.g., 'a road bike for £1,500 for weekend rides'). However, it lacks explicit guidance on when not to use this tool versus alternatives, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
report_stolenARead-onlyIdempotentInspect
Step-by-step guidance for reporting a stolen UK bike: police, insurance, listing alerts. Returns a 5-step checklist plus the official Cyclesite report URL. Example: 'my bike was just stolen, what do I do?'.
| Name | Required | Description | Default |
|---|---|---|---|
| brand | No | ||
| model | No | ||
| serial | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and openWorldHint, indicating a safe read-only operation. The description adds that it returns a checklist and URL, consistent with no side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one sentence plus an example, concise and front-loaded. It could, however, be slightly more structured to separate what the tool does from the example.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description explains the return value (checklist and URL) adequately. It lacks information about parameter usage but is sufficient overall for a guidance tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has three parameters (brand, model, serial) with no schema descriptions (0% coverage). The description does not explain how these parameters are used or whether they are needed, adding no meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Step-by-step guidance for reporting a stolen UK bike' with specific outputs (checklist and URL) and an example query. It distinguishes from sibling tools like check_stolen which is for checking stolen status, not reporting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The example 'my bike was just stolen, what do I do?' implies when to use this tool. However, it does not explicitly state when not to use it or mention alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reserve_listingAIdempotentInspect
Hold a Cyclesite listing for 24 hours so other buyers can't claim it while the user decides. Optional refundable deposit via Stripe (returned if the user doesn't proceed; applied to the bike if they do). The first UK marketplace where a buyer can COMMIT inside an AI conversation. Requires OAuth scope listings:manage. Example: 'put a hold on that Trek for me, I want to view it Saturday'.
| Name | Required | Description | Default |
|---|---|---|---|
| listingId | Yes | ||
| depositGbp | No | Optional refundable deposit (£10-£500). When set, user pays via Stripe Checkout. |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations include `idempotentHint: true`, `destructiveHint: false`. The description adds value by explaining the 24-hour hold, refundable deposit, and Stripe integration, going beyond annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus an example, front-loaded with purpose and key details. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description adequately covers behavior and parameters. It could mention the output, but the schema handles that.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (depositGbp has description, listingId does not). The description compensates by explaining listingId implicitly and adding context about deposit range and payment method via Stripe Checkout.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool holds a Cyclesite listing for 24 hours to prevent others from claiming, with optional deposit. It distinguishes from siblings like 'publish_listing' and 'draft_listing' by focusing on reservation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear use case with an example ('put a hold on that Trek for me'), but does not explicitly mention when not to use or alternatives, though context makes it sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
respond_to_enquiryAInspect
Reply to a buyer enquiry on the authenticated user's listing. Requires OAuth scope enquiries:respond. Example: 'reply to that enquiry — say it's still available, collection only'.
| Name | Required | Description | Default |
|---|---|---|---|
| answer | Yes | Reply text (1-2000 chars). | |
| enquiryId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=false (modification), destructiveHint=false, and idempotentHint=false. The description adds 'Reply to a buyer enquiry' which implies a non-destructive write. No extra behavioral traits are disclosed beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no extraneous words. The purpose is front-loaded, and the example is compact. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description adequately covers purpose, authentication, and gives an example. An output schema exists (though not shown), so return values are covered. It could mention that the tool only works on the user's own listings, but 'authenticated user's listing' implies that.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (answer has a description, enquiryId does not). The description does not add detail beyond the schema for the parameters, but the example implicitly shows usage. The description could elaborate on enquiryId.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Reply' and resource 'buyer enquiry on the authenticated user's listing.' It distinguishes from sibling tools like 'make_enquiry' (create) and 'get_my_enquiries' (list).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions the required OAuth scope 'enquiries:respond' and provides a concrete example of how to use the tool. It does not explicitly state when not to use it, but the example and scope provide good guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_searchAInspect
Subscribe the user to alerts for new Cyclesite listings matching a filter — the AI assistant will then proactively notify them when a matching bike appears (price drop or fresh listing). Requires OAuth scope listings:read (read-only on data, but this is technically a write — it creates a SavedSearch row on the user's account). Examples: 'let me know when a Trek Domane SL 6 in Manchester under £2,000 appears', 'alert me to any e-MTB drops below £2,500 in Yorkshire'. Each user is capped at 50 active alerts.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | UK city to focus on. | |
| name | No | Optional human-readable name (e.g. "Trek Domane in Manchester"). | |
| brand | No | ||
| model | No | ||
| category | No | ||
| maxPrice | No | ||
| minPrice | No | ||
| condition | No | ||
| alertFrequency | No | How often to send digest. Default: instant. |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
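A sketch of creating one alert from the first example utterance, assuming a connected `client` holding the `listings:read` scope; the argument names map directly onto the input schema above, and the `alertFrequency` value simply restates the documented default:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// "let me know when a Trek Domane SL 6 in Manchester under £2,000 appears".
// Creates a SavedSearch row on the user's account (a write, despite the read scope).
async function saveTrekAlert(client: Client) {
  return client.callTool({
    name: "save_search",
    arguments: {
      name: "Trek Domane in Manchester",
      brand: "Trek",
      model: "Domane SL 6",
      city: "Manchester",
      maxPrice: 2000,
      alertFrequency: "instant", // the documented default, stated explicitly
    },
  });
}
```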
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool is a write operation (creates a SavedSearch row), requires OAuth scope, and has a proactive notification behavior. It adds context beyond annotations by explaining the alert mechanism and user cap, with no contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise yet informative, structured with a clear purpose, behavior explanation, authentication note, examples, and a limit. Every sentence serves a purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity with 9 parameters and its action of creating alerts, the description covers purpose, behavior, authentication, rate limiting, and provides examples. It is sufficiently complete for an agent to understand and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description provides examples that implicitly map to parameters like brand, model, and price, but does not add detailed meaning to each parameter. With only 33% schema description coverage, the description partially compensates through usage examples but does not fully clarify all parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool subscribes users to alerts for new listings matching a filter, specifying it will proactively notify about price drops or fresh listings. It distinguishes itself from sibling tools by focusing on alert creation rather than one-time searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Examples illustrate typical user queries, and the description mentions OAuth scope and a cap of 50 active alerts. However, it lacks explicit guidance on when not to use this tool or direct comparison with alternatives like search functions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
searchARead-onlyIdempotentInspect
OpenAI deep-research / company-knowledge compatibility. Search Cyclesite's active UK used-bike listings by free-text query (matches title, brand, model). Returns the canonical OpenAI shape: { results: [{ id, title, url }] }. Use the id to call fetch() for the full document.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Free-text query. |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
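Since this tool exists for the deep-research pattern, a sketch of the search-then-fetch two-step it describes; it assumes a connected `client` and that the sibling `fetch` tool accepts an `id` argument (the conventional shape for this pattern, though its schema is not reproduced here):

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// search returns { results: [{ id, title, url }] }; fetch(id) retrieves the full document.
async function searchThenFetch(client: Client, query: string) {
  const res = await client.callTool({ name: "search", arguments: { query } });
  const { results = [] } = ((res as { structuredContent?: unknown }).structuredContent ?? {}) as {
    results?: { id: string; title: string; url: string }[];
  };

  // Pull the full document for the top hit, per the description's guidance.
  if (!results[0]) return null;
  return client.callTool({ name: "fetch", arguments: { id: results[0].id } });
}
```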
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, and idempotent behavior. The description adds that it returns the canonical OpenAI shape with `id`, `title`, `url`, and suggests a follow-up call. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each with a distinct purpose: compatibility/scope, matching behavior, and output format/next step. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter search tool with an output schema, the description covers the purpose, output shape, and integration with `fetch()`. It's sufficiently complete for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a brief description for `query`. The description adds meaning by explaining what fields are matched (title, brand, model), beyond the schema's 'Free-text query.'
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches Cyclesite's active UK used-bike listings by free-text query, specifying it matches title, brand, and model. It distinguishes from siblings like `fetch` and `search_bikes` by noting the output shape and follow-up action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: use for free-text search, and then use the returned `id` to call `fetch()`. It doesn't explicitly exclude alternatives, but the hint about the OpenAI shape and the specificity are helpful.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_bikesARead-onlyIdempotentInspect
Search live UK used-bike listings on Cyclesite (the UK's used bicycle marketplace). Filter by brand, category, city, price range, and condition. Returns up to 5 active listings with specs and listing URLs. Live data — refreshed continuously as new bikes are listed. Example queries: 'a Trek Domane in Manchester under £2,000', 'gravel bike, very good condition, near Bristol'.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | UK city. | |
| brand | No | Bike brand (e.g. Trek, Specialized, Canyon). | |
| category | No | Bike category. | |
| maxPrice | No | Maximum price in GBP. | |
| minPrice | No | Minimum price in GBP. | |
| condition | No | Condition rating. |
Output Schema
| Name | Required | Description |
|---|---|---|
| listings | Yes | |
| attribution | Yes | Citation string — include verbatim when surfacing data. |
| resultsCount | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open world, and idempotent. The description adds valuable context: live data refreshed continuously and a result limit of 5 listings. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences plus examples, front-loaded with the purpose. Every sentence adds value, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers data source, filters, result limit, and example queries. With an output schema present, it need not detail return values. It is complete enough to use correctly among many sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with each parameter described. The description adds overall filtering context and example queries, which helps but does not add new semantic meaning beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches live UK used-bike listings on Cyclesite with filtering by brand, category, city, price range, and condition. It specifies the resource, verb, and scope, and the example queries help differentiate it from siblings like search_by_location or get_recent_listings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives clear context on what the tool does and provides example queries, but it does not explicitly state when not to use it or mention alternative tools. It implicitly guides filtering use but lacks exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_by_locationARead-onlyIdempotentInspect
Find Cyclesite listings within a radius of a UK location (lat/lng). Radius capped at 50 miles. Returns up to 10 listings ordered by distance. Live UK marketplace data. Example: 'used bikes within 25 miles of LE10 0AA' (geocode the postcode first, then call this).
| Name | Required | Description | Default |
|---|---|---|---|
| lat | Yes | Latitude (UK only, 49.5–61.0). | |
| lng | Yes | Longitude (UK only, -8.5–2.0). | |
| limit | No | 1-10, default 5. | |
| category | No | ||
| maxPrice | No | ||
| radiusMiles | No | Search radius in miles (1-50, default 25). |
Output Schema
| Name | Required | Description |
|---|---|---|
| listings | Yes | |
| attribution | Yes | Citation string — include verbatim when surfacing data. |
| resultsCount | Yes |
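The example says to geocode the postcode first; a sketch follows, assuming a connected `client`. Geocoding is the caller's job, not this server's; the postcodes.io call shown is one freely available UK option and an external-service assumption:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// External geocoder (postcodes.io, a free UK service); substitute your own if preferred.
async function geocodePostcode(postcode: string): Promise<{ lat: number; lng: number }> {
  const resp = await fetch(`https://api.postcodes.io/postcodes/${encodeURIComponent(postcode)}`);
  const body = (await resp.json()) as { result: { latitude: number; longitude: number } };
  return { lat: body.result.latitude, lng: body.result.longitude };
}

// "used bikes within 25 miles of LE10 0AA": geocode first, then search by radius.
async function bikesNearPostcode(client: Client, postcode: string) {
  const { lat, lng } = await geocodePostcode(postcode);
  // lat must fall within 49.5-61.0 and lng within -8.5-2.0 (UK only); radius caps at 50 miles.
  return client.callTool({
    name: "search_by_location",
    arguments: { lat, lng, radiusMiles: 25, limit: 10 },
  });
}
```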
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent behavior. The description adds important details like radius cap, result limit, ordering, and live data source, enhancing transparency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences plus an example. Every part adds value with no redundancy. Front-loaded with key action and constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description covers operation, constraints, and provides a concrete example. It is complete for an agent to understand and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67%; the description adds context for the lat/lng and radius constraints but does not explain category or maxPrice. It only partially complements the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds Cyclesite listings by UK lat/lng with a radius, specifying constraints like cap and ordering. It distinguishes from sibling search tools by focusing on location-based search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a usage example and implies geocoding is a prerequisite, but does not explicitly compare to alternatives like search or search_bikes. The guidance is clear for the core use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
suggest_listing_priceARead-onlyIdempotentInspect
For a seller about to list: suggested ask, floor, and ceiling for their bike's brand+model[+condition] on the UK market. Same Cyclesite sold-price corpus as get_valuation but framed as seller guidance. Example: 'I'm selling a 2021 Specialized Allez in good condition — what should I ask?'.
| Name | Required | Description | Default |
|---|---|---|---|
| brand | Yes | ||
| model | Yes | ||
| condition | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint, openWorldHint, idempotentHint. Description adds behavioral context: uses same sold-price corpus as get_valuation and targets UK market. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus an example; front-loaded with purpose. No wasted words, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters, no schema descriptions, annotations, and an output schema (assumed), the description covers domain (UK), output types, and data source. Missing detail on condition default or brand/model format, but sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage. The description explains brand, model, and condition (optional) in context and mentions the outputs (ask, floor, ceiling). The example clarifies parameter usage. It covers the key parameters but omits formats and default behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it suggests ask, floor, and ceiling for a bike's brand+model+condition for UK market sellers. It distinguishes itself from the sibling get_valuation by noting same corpus but framed as seller guidance, and provides an example.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It states the usage context ('for a seller about to list') and contrasts with get_valuation. This implicitly suggests when to use the tool, but there is no explicit when-not-to-use guidance or mention of other alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.