coinmarketcap-mcp
Server Details
CoinMarketCap - 30 tools for data, metrics, and on-chain analytics
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: junct-bot/coinmarketcap-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
30 tools

get_v1_blockchain_statistics_latest (Grade: B)
Statistics Latest — Returns the latest blockchain statistics data for 1 or more blockchains. Bitcoin, Litecoin, and Ethereum are currently supported. Additional blockchains will be made available on a regular basis. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 15 seconds. Plan credit use: 1 call credit per request. CMC equivalent pages: Our blockchain explorer pages like blockchain.coinmarketcap.com/. Returns:
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | One or more comma-separated cryptocurrency CoinMarketCap IDs to return blockchain data for. Pass `1,2,1027` to request all currently supported blockchains. | |
| slug | No | Alternatively pass a comma-separated list of cryptocurrency slugs. Pass `bitcoin,litecoin,ethereum` to request all currently supported blockchains. | |
| symbol | No | Alternatively pass one or more comma-separated cryptocurrency symbols. Pass `BTC,LTC,ETH` to request all currently supported blockchains. | |
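The three identifier parameters above are alternatives: exactly one of `id`, `slug`, or `symbol` should be supplied per request. A minimal sketch of building such a request URL, assuming the public CoinMarketCap REST path that this MCP tool appears to wrap; the `build_url` helper is ours, not part of the server:

```python
from urllib.parse import urlencode

# Assumed REST path for this endpoint; the MCP tool wraps the HTTP call.
BASE = "https://pro-api.coinmarketcap.com/v1/blockchain/statistics/latest"

def build_url(id=None, slug=None, symbol=None):
    """Build the request URL; exactly one of id/slug/symbol may be set."""
    provided = {k: v for k, v in
                {"id": id, "slug": slug, "symbol": symbol}.items() if v}
    if len(provided) != 1:
        raise ValueError("pass exactly one of id, slug, or symbol")
    return f"{BASE}?{urlencode(provided)}"

print(build_url(symbol="BTC,LTC,ETH"))
# -> https://pro-api.coinmarketcap.com/v1/blockchain/statistics/latest?symbol=BTC%2CLTC%2CETH
```

Note that `urlencode` percent-encodes the commas (`%2C`); the API accepts either form.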
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and delivers substantial operational context: update frequency (every 15 seconds), credit cost (1 per request), supported chains (Bitcoin/Litecoin/Ethereum), and plan restrictions (Enterprise only). This gives the agent critical cost and rate-limiting information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded with purpose first, but structure is cluttered with Markdown strikethrough formatting (~~Basic~~) for plan tiers and ends with a hanging 'Returns:' that promises but fails to deliver return value documentation. The plan tier list consumes space without aiding tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema, the description ends with 'Returns:' and provides no information about what data is returned (hash rate? difficulty? block height?). This is a critical gap for agent decision-making. Operational metadata is complete, but the missing return specification severely limits usefulness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with clear examples (1,2,1027 for IDs; BTC,LTC,ETH for symbols). The description mentions '1 or more blockchains' which aligns with the comma-separated parameter pattern, but adds no additional semantic detail beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Returns the latest blockchain statistics data for 1 or more blockchains' and distinguishes itself from sibling cryptocurrency tools by referencing 'blockchain explorer pages' rather than market data. However, it opens with 'Statistics Latest —' which appears to be a title artifact rather than descriptive prose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides operational constraints (Enterprise plan required, 1 credit per call, 15-second cache) and maps to 'CMC equivalent pages' giving usage context. However, it lacks explicit guidance on when to choose this over sibling tools like get_v1_cryptocurrency_quotes_latest or get_v1_globalmetrics_quotes_latest.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_airdrop (Grade: A)
Airdrop — Returns information about a single airdrop available on CoinMarketCap. Includes the cryptocurrency metadata. This endpoint is available on the following API plans: - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Data is updated only as needed, every 30 seconds. Plan credit use: 1 API call credit per request no matter query size. CMC equivalent pages: Our free airdrops page coinmarketcap.com/airdrop/. Returns: { data: { $key: { id: string, project_name: string, description: string, status: stri
PREREQUISITE: You MUST first call a listing/map endpoint to resolve asset identifiers (id, slug, symbol) before calling data endpoints.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Airdrop Unique ID. This can be found using the Airdrops API. | |
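The PREREQUISITE above implies a two-step flow: list airdrops first to resolve an ID, then fetch the single record. A runnable sketch of that sequencing, where `call_tool` is a stand-in for your MCP client's invocation method (stubbed here with hypothetical data so the flow executes):

```python
def call_tool(name, arguments):
    # Stub: a real client would forward this over the MCP transport.
    if name == "get_v1_cryptocurrency_airdrops":
        return {"data": [{"id": "a1b2c3", "project_name": "Example Drop"}]}
    if name == "get_v1_cryptocurrency_airdrop":
        return {"data": {arguments["id"]: {"project_name": "Example Drop"}}}
    raise KeyError(name)

# Step 1: call the listing tool to discover a valid airdrop ID.
listing = call_tool("get_v1_cryptocurrency_airdrops", {"limit": 1})
airdrop_id = listing["data"][0]["id"]

# Step 2: fetch the single airdrop by the resolved ID.
detail = call_tool("get_v1_cryptocurrency_airdrop", {"id": airdrop_id})
```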
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden and delivers substantial operational context: cache frequency (30 seconds), credit cost (1 per request), plan tiers, and equivalent web pages. However, the return value description is truncated ('status: stri') and lacks error behavior details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with clear purpose statement, but suffers from formatting clutter (markdown bullet lists for plans) and abrupt truncation in the Returns section. The prerequisite is well-separated at the end. Could be tighter but information density is reasonable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive operational metadata provided (plans, credits, caching) for an API endpoint. Includes prerequisite workflow guidance. Only gap is the truncated return value schema description; otherwise complete for a data retrieval tool without output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (single 'id' parameter fully documented). The description references that IDs 'can be found using the Airdrops API' but this merely repeats the schema description. Since schema fully documents the parameter, baseline 3 is appropriate with minimal additive value from description text.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Returns information about a single airdrop' with specific verb (Returns) and resource (airdrop information). It distinguishes itself from sibling tool 'get_v1_cryptocurrency_airdrops' (plural) by explicitly referencing 'single airdrop' and includes scope context like 'Includes the cryptocurrency metadata'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite workflow: 'You MUST first call a listing/map endpoint to resolve asset identifiers... before calling data endpoints.' Also discloses API plan requirements (Hobbyist through Enterprise), guiding when the tool is available for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_airdrops (Grade: B)
[DISCOVERY] Airdrops — Returns a list of past, present, or future airdrops which have run on CoinMarketCap. This endpoint is available on the following API plans: - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Data is updated only as needed, every 30 seconds. Plan credit use: 1 API call credit per request no matter query size. CMC equivalent pages: Our free airdrops page coinmarketcap.com/airdrop/. Returns: { data: { id: string, project_name: string, description: string, status: string, coin: { id: unknown, name:
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Filter airdrops by one cryptocurrency CoinMarketCap ID. Example: 1 | |
| slug | No | Alternatively filter airdrops by a cryptocurrency slug. Example: "bitcoin" | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| status | No | The status of airdrops to filter by. | |
| symbol | No | Alternatively filter airdrops by one cryptocurrency symbol. Example: "BTC". | |
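The `start`/`limit` pair above uses a 1-based offset, so client-side pagination needs a small amount of bookkeeping. A sketch of walking all pages, where `fetch_page` is a hypothetical wrapper around the airdrops tool (stubbed with 25 fake records so the loop is runnable):

```python
# Hypothetical stand-in for the tool's backing data.
FAKE_AIRDROPS = [{"id": str(i)} for i in range(25)]

def fetch_page(start, limit):
    # `start` is 1-based per the API docs; convert to a 0-based slice.
    return FAKE_AIRDROPS[start - 1 : start - 1 + limit]

def fetch_all(page_size=10):
    results, start = [], 1
    while True:
        page = fetch_page(start, page_size)
        results.extend(page)
        if len(page) < page_size:  # a short page means we reached the end
            return results
        start += page_size
```

A real client would replace `fetch_page` with a tool call passing `{"start": start, "limit": page_size}`.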
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, but description discloses critical operational behaviors: cache/update frequency (30s), credit consumption (1 per request), and plan restrictions. Lacks details on pagination limits or error states, but covers cost/timing transparency well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description ends mid-sentence ('coin: { id: unknown, name:'), representing a critical structural failure. While operational details are useful, the truncation and verbose markdown formatting for API plans reduce structural quality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists and the description is truncated attempting to document return values. Missing sibling differentiation and full return documentation despite having 6 parameters and complex operational metadata to convey.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all 6 parameters (id, slug, limit, start, status, symbol). The description mentions no parameter semantics, but with complete schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States it 'Returns a list of past, present, or future airdrops' with specific verb and resource scope. However, fails to distinguish from sibling tool `get_v1_cryptocurrency_airdrop` (singular), leaving ambiguity about list vs single-fetch use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Documents operational constraints like API plan requirements and credit costs, but provides no guidance on when to use this listing endpoint versus the singular `get_v1_cryptocurrency_airdrop` or filtering strategies.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_categories (Grade: A)
[DISCOVERY] Categories — Returns information about all coin categories available on CoinMarketCap. Includes a paginated list of cryptocurrency quotes and metadata from each category. This endpoint is available on the following API plans: - Free - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Data is updated only as needed, every 30 seconds. Plan credit use: 1 API call credit per request + 1 call credit per 200 cryptocurrencies returned (rounded up) and 1 call credit per convert option beyond the first. CMC equivalent pages: Our free airdrops page [co
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Filter categories by one or more comma-separated cryptocurrency CoinMarketCap IDs. Example: 1,2 | |
| slug | No | Alternatively filter categories by a comma-separated list of cryptocurrency slugs. Example: "bitcoin,ethereum" | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| symbol | No | Alternatively filter categories by one or more comma-separated cryptocurrency symbols. Example: "BTC,ETH". | |
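The credit formula quoted in the description ("1 API call credit per request + 1 call credit per 200 cryptocurrencies returned (rounded up) and 1 call credit per convert option beyond the first") can be sketched as arithmetic. This is one plausible reading of that wording, not an official billing calculator:

```python
import math

def estimate_credits(num_cryptos, num_converts=1):
    """Estimate call credits: base + per-200 volume + extra converts."""
    base = 1                                   # 1 credit per request
    volume = math.ceil(num_cryptos / 200)      # 1 per 200 results, rounded up
    converts = max(0, num_converts - 1)        # first convert option is free
    return base + volume + converts

# e.g. 500 cryptocurrencies with convert="USD,EUR,BTC":
print(estimate_credits(500, num_converts=3))   # -> 6
```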
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure beyond annotations (which are absent). Explicitly details cache/update frequency (30 seconds), complex credit consumption formula (1 credit + 1 per 200 cryptocurrencies + 1 per convert option), and plan availability tiers. Also notes pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with clear structural markers (bold headers for API plans, cache frequency, credit use). Every sentence provides actionable operational context. Minor deduction for the truncated final sentence ('[co'), though this appears to be a copy-paste error rather than verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description adequately summarizes the return value (category metadata and quotes). The comprehensive coverage of cost and caching behavior compensates somewhat for the lack of output schema details, though specific return field descriptions would strengthen it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all 5 parameters (id, slug, limit, start, symbol). The description does not add semantic detail beyond the schema, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it 'Returns information about all coin categories' and includes a 'paginated list of cryptocurrency quotes and metadata.' The plural 'categories' and '[DISCOVERY]' tag effectively distinguish it from the singular sibling get_v1_cryptocurrency_category, indicating this is for bulk listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides rich operational context (API plans, credit costs, cache frequency) that informs usage decisions, but lacks explicit sibling comparison. It does not state when to use the plural 'categories' endpoint versus the singular 'category' endpoint, leaving agents to infer from the 'all' and 'paginated' descriptors.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_category (Grade: A)
Category — Returns information about a single coin category available on CoinMarketCap. Includes a paginated list of the cryptocurrency quotes and metadata for the category. This endpoint is available on the following API plans: - Free - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Data is updated only as needed, every 30 seconds. Plan credit use: 1 API call credit per request + 1 call credit per 200 cryptocurrencies returned (rounded up) and 1 call credit per convert option beyond the first. CMC equivalent pages: Our Cryptocurrency Cate
PREREQUISITE: You MUST first call a listing/map endpoint to resolve asset identifiers (id, slug, symbol) before calling data endpoints.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The Category ID. This can be found using the Categories API. | |
| limit | No | Optionally specify the number of coins to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of coins to return. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
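Since the table states that `convert` and `convert_id` are mutually exclusive, a client can catch the conflict before spending a call credit. A small pre-flight check; the helper name is ours:

```python
def validate_quote_args(args):
    """Reject argument dicts that set both convert and convert_id."""
    if "convert" in args and "convert_id" in args:
        raise ValueError("use either convert or convert_id, not both")
    return args

# Valid: symbol-based conversion only.
validate_quote_args({"id": "1", "convert": "BTC,USD"})
```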
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent coverage given no annotations: details cache/update frequency (30 seconds), pagination behavior, and precise credit cost calculation (1 per request + 1 per 200 cryptos + 1 per convert option). Missing only error handling or retry behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded with bold headers, but the plan list is verbose (6 bullet items) and the description appears truncated ('Our Cryptocurrency Cate'), preventing a higher score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a read tool without output schema: covers prerequisites, behavioral costs, update frequency, and what data is returned (quotes, metadata). The truncation slightly mars completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions already in the schema. The main description adds no additional parameter-specific semantics (e.g., valid formats, examples) beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States it 'Returns information about a single coin category' with paginated quotes/metadata, distinguishing it from the plural sibling 'get_v1_cryptocurrency_categories'. However, it could more explicitly contrast the singular lookup vs. list use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the prerequisite: 'You MUST first call a listing/map endpoint to resolve asset identifiers before calling data endpoints.' This provides clear sequencing guidance, though it lacks explicit 'when not to use' alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_info (Grade: A)
[DISCOVERY] Metadata — Returns all static metadata available for one or more cryptocurrencies. This information includes details like logo, description, official website URL, social links, and links to a cryptocurrency's technical documentation. This endpoint is available on the following API plans: - Basic - Startup - Hobbyist - Standard - Professional - Enterprise Cache / Update frequency: Static data is updated only as needed, every 30 seconds. Plan credit use: 1 call credit per 100 cryptocurrencies returned (rounded up). CMC equivalent pages: Cryptocurrency detail page metadata like [c
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | One or more comma-separated CoinMarketCap cryptocurrency IDs. Example: "1,2" | |
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `urls,logo,description,tags,platform,date_added,notice,status` to include all auxiliary fields. | |
| slug | No | Alternatively pass a comma-separated list of cryptocurrency slugs. Example: "bitcoin,ethereum" | |
| symbol | No | Alternatively pass one or more comma-separated cryptocurrency symbols. Example: "BTC,ETH". At least one "id" *or* "slug" *or* "symbol" is required for this request. | |
| address | No | Alternatively pass in a contract address. Example: "0xc40af1e4fecfa05ce6bab79dcd8b373d2e436c4e" | |
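The `aux` parameter takes a comma-separated subset of the field names listed in its description. A sketch of assembling and sanity-checking the arguments before the call; the field list is copied from the table above, and `build_args` is a hypothetical helper:

```python
# Auxiliary field names as documented for the aux parameter.
ALL_AUX = "urls,logo,description,tags,platform,date_added,notice,status".split(",")

def build_args(symbol, aux_fields):
    """Build tool arguments, rejecting unknown aux field names."""
    unknown = [f for f in aux_fields if f not in ALL_AUX]
    if unknown:
        raise ValueError(f"unknown aux fields: {unknown}")
    return {"symbol": symbol, "aux": ",".join(aux_fields)}

print(build_args("BTC,ETH", ["urls", "logo"]))
# -> {'symbol': 'BTC,ETH', 'aux': 'urls,logo'}
```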
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden and delivers strong operational context: specifies cache/update frequency (30 seconds), credit consumption model (1 per 100 cryptocurrencies), and API plan availability. Does not disclose error behavior or 'not found' handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear section headers (DISCOVERY tag, Cache/Update frequency, Plan credit use). Front-loaded with purpose. Minor verbosity in listing all 7 API plans individually; text appears truncated at end but content up to that point is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete given no output schema exists. Describes returnable data types (logo, description, social links, technical docs) and covers operational constraints (credits, caching). Does not describe response structure or pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage documenting all 5 parameters (id, aux, slug, symbol, address) with examples. Description adds minimal semantic value beyond schema, noting only 'one or more cryptocurrencies' which aligns with the comma-separated patterns already documented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Returns') with explicit resource ('static metadata') and distinguishes effectively from siblings like 'listings' and 'quotes' tools by emphasizing static attributes (logo, description, URLs) vs dynamic market data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through '[DISCOVERY] Metadata' label and enumeration of returned fields (logos, descriptions), but lacks explicit guidance on when to choose this over 'listings_latest' or 'quotes' endpoints for dynamic data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_listings_historical (Grade: A)
Listings Historical — Returns a ranked and sorted list of all cryptocurrencies for a historical UTC date. Technical Notes - This endpoint is identical in format to our /cryptocurrency/listings/latest endpoint but is used to retrieve historical daily ranking snapshots from the end of each UTC day. - Daily snapshots reflect market data at the end of each UTC day and may be requested as far back as 2013-04-28 (as supported by your plan's historical limits). - The required "date" parameter can be passed as a Unix timestamp or ISO 8601 date but on
| Name | Required | Description | Default |
|---|---|---|---|
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `platform,tags,date_added,circulating_supply,total_supply,max_supply,cmc_rank,num_market_pairs` to include all auxiliary fields. | |
| date | Yes | date (Unix or ISO 8601) to reference day of snapshot. | |
| sort | No | What field to sort the list of cryptocurrencies by. | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| sort_dir | No | The direction in which to order cryptocurrencies against the specified sort. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| cryptocurrency_type | No | The type of cryptocurrency to include. | |
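The required `date` parameter accepts either a Unix timestamp or an ISO 8601 date. A small normalizer that converts either form to the ISO date string, assuming (as the description implies) that timestamps are interpreted as UTC days; the helper is ours:

```python
from datetime import datetime, timezone

def normalize_date(value):
    """Return an ISO 8601 date string for a Unix timestamp or ISO input."""
    if isinstance(value, (int, float)):  # Unix timestamp, seconds since epoch
        return datetime.fromtimestamp(value, tz=timezone.utc).date().isoformat()
    return value  # assume it is already an ISO 8601 string

# The earliest supported snapshot mentioned in the description:
print(normalize_date(1367107200))  # -> 2013-04-28
```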
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses that snapshots reflect 'market data at the end of each UTC day,' mentions plan-based historical limits, and clarifies the date parameter accepts multiple formats. Missing standard API behavioral details like rate limiting or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structured with a clear summary followed by technical notes, but the final sentence is abruptly truncated ("The required \"date\" parameter can be passed as a Unix timestamp or ISO 8601 date but on"). This incomplete thought significantly detracts from the structure. The opening 'Listings Historical —' is slightly redundant with the tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should ideally detail return values; instead, it references the sibling endpoint's format ('identical in format to our... latest endpoint'), which is helpful but indirect. With 9 parameters and complex pagination/sorting options, the description adequately covers the core concept but leaves operational gaps (error handling, response structure).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description adds that the date parameter can be passed as 'Unix timestamp or ISO 8601 date,' which complements the schema. However, the sentence is cut off mid-thought ('but on'), and no additional semantic context is provided for the other 8 parameters beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Returns a ranked and sorted list of all cryptocurrencies for a historical UTC date.' It explicitly distinguishes from sibling tool get_v1_cryptocurrency_listings_latest by noting this retrieves 'historical daily ranking snapshots' versus current data, providing specific verb, resource, and scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description effectively differentiates when to use this tool versus the 'latest' endpoint, stating it is 'identical in format... but is used to retrieve historical daily ranking snapshots.' It also notes the date range limits ('as far back as 2013-04-28'), helping users understand historical constraints. Could be improved with explicit 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
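The Unix-or-ISO flexibility noted above is easy to normalize client-side before calling the historical listings tool. A minimal sketch, assuming the `date` parameter name from the quoted description and a plain YYYY-MM-DD target form (a reasonable guess for a daily-snapshot endpoint, not a verified requirement):

```python
from datetime import datetime, timezone

def normalize_date(value):
    """Normalize a Unix timestamp (int/float) or an ISO 8601 string
    to a plain YYYY-MM-DD date string for a daily-snapshot request."""
    if isinstance(value, (int, float)):
        dt = datetime.fromtimestamp(value, tz=timezone.utc)
    else:
        dt = datetime.fromisoformat(value)
    return dt.strftime("%Y-%m-%d")
```

Normalizing up front means an agent never has to reason about which of the two accepted formats it holds.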
get_v1_cryptocurrency_listings_latest (B)
Listings Latest — Returns a paginated list of all active cryptocurrencies with latest market data. The default "market_cap" sort returns cryptocurrency in order of CoinMarketCap's market cap rank (as outlined in our methodology) but you may configure this call to order by another market ranking field. Use the "convert" option to return market values in multiple fiat and cryptocurrency conversions in the same call. You may sort against any of the following: market_cap: CoinMarketCap's market cap rank as outlined in our methodology. market_cap_strict: A strict market cap sort (latest tra
| Name | Required | Description | Default |
|---|---|---|---|
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `num_market_pairs,cmc_rank,date_added,tags,platform,max_supply,circulating_supply,total_supply,market_cap_by_total_supply,volume_24h_reported,volume_7d,volume_7d_reported,volume_30d,volume_30d_reported,is_market_cap_included_in_calc` to include all auxiliary fields. | |
| tag | No | The tag of cryptocurrency to include. | |
| sort | No | What field to sort the list of cryptocurrencies by. | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| sort_dir | No | The direction in which to order cryptocurrencies against the specified sort. | |
| price_max | No | Optionally specify a threshold of maximum USD price to filter results by. | |
| price_min | No | Optionally specify a threshold of minimum USD price to filter results by. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| market_cap_max | No | Optionally specify a threshold of maximum market cap to filter results by. | |
| market_cap_min | No | Optionally specify a threshold of minimum market cap to filter results by. | |
| volume_24h_max | No | Optionally specify a threshold of maximum 24 hour USD volume to filter results by. | |
| volume_24h_min | No | Optionally specify a threshold of minimum 24 hour USD volume to filter results by. | |
| cryptocurrency_type | No | The type of cryptocurrency to include. | |
| circulating_supply_max | No | Optionally specify a threshold of maximum circulating supply to filter results by. | |
| circulating_supply_min | No | Optionally specify a threshold of minimum circulating supply to filter results by. | |
| percent_change_24h_max | No | Optionally specify a threshold of maximum 24 hour percent change to filter results by. | |
| percent_change_24h_min | No | Optionally specify a threshold of minimum 24 hour percent change to filter results by. |
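The convert/convert_id mutual exclusion noted in the table is exactly the kind of constraint worth enforcing client-side before spending a call credit. A hedged sketch (the helper and its defaults are illustrative, not part of any published client; only the parameter names come from the table above):

```python
def build_listings_params(start=1, limit=100, sort="market_cap",
                          convert=None, convert_id=None, **filters):
    """Assemble listings/latest query parameters, enforcing the
    documented rule that convert and convert_id cannot be combined."""
    if convert is not None and convert_id is not None:
        raise ValueError("convert and convert_id are mutually exclusive")
    params = {"start": start, "limit": limit, "sort": sort}
    if convert is not None:
        params["convert"] = convert        # e.g. "USD,BTC"
    if convert_id is not None:
        params["convert_id"] = convert_id  # e.g. "2781,1"
    # Threshold filters (price_min, volume_24h_max, ...) pass through.
    params.update({k: v for k, v in filters.items() if v is not None})
    return params
```

Failing fast here is cheaper than letting the server reject the request after the credit is counted.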
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full burden. It discloses pagination behavior, the default 'market_cap' sort, and 'convert' functionality with credit costs. However, it fails to clarify the read-only safety profile, rate limits, authentication requirements, or return data structure (no output schema is present).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with clear purpose statement, but opens with redundant 'Listings Latest —' header-style text. The inline enumeration of sort options is useful but verbose; description is also truncated mid-sentence ('latest tra'), indicating structural incompleteness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 19 parameters and no output schema, description should explain return structure or provide richer behavioral context. While it covers sorting and conversion features, it is cut off and leaves filtering parameters (price_min/max, volume thresholds) without descriptive context beyond the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% but 'sort' parameter lacks enum definitions. Description compensates by enumerating valid sort values (market_cap, market_cap_strict) and explaining their semantic differences, adding value beyond the schema which only states 'What field to sort...' without listing options.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States it 'Returns a paginated list of all active cryptocurrencies with latest market data' — specific about resource (cryptocurrencies), operation (paginated list), and data freshness (latest). Implicitly distinguishes from sibling 'historical' tool via 'latest', though could explicitly clarify difference from 'quotes' tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes configuration capabilities (sorting, convert) but provides no explicit guidance on when to select this tool over alternatives like 'get_v1_cryptocurrency_quotes_latest' or 'get_v1_cryptocurrency_map'. No 'when not to use' exclusions provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_listings_new (A)
Listings New — Returns a paginated list of most recently added cryptocurrencies. This endpoint is available on the following API plans: - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 60 seconds. Plan credit use: 1 call credit per 200 cryptocurrencies returned (rounded up) and 1 call credit per convert option beyond the first. CMC equivalent pages: Our "new" cryptocurrency page coinmarketcap.com/new/ NOTE: Use this endpoint if you need a sorted and paginated list of all recently added cryptocurrencies. Re
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| sort_dir | No | The direction in which to order cryptocurrencies against the specified sort. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. |
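The credit formula quoted in the description (1 credit per 200 cryptocurrencies returned, rounded up, plus 1 per convert option beyond the first) is concrete enough to estimate costs before calling. A small sketch of that arithmetic:

```python
import math

def estimate_credits(num_results, num_convert_options=1):
    """Estimate call credits per the quoted pricing: 1 credit per
    200 results (rounded up), plus 1 credit for each convert
    option beyond the first."""
    return math.ceil(num_results / 200) + max(num_convert_options - 1, 0)
```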
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong operational disclosure given no annotations: specifies cache frequency (60s), credit costs (1 per 200 results, extra for convert), and API plan requirements (Startup+). This adds substantial value beyond structured fields. Could mention rate limits or auth method, but coverage is good for a data endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded but formatting is messy with markdown bullets and headers breaking flow. Ends abruptly ('* Re') suggesting truncation. Contains useful details but could be tightened; some repetition between credit explanation in text and schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers operational context well (plans, credits, caching) but lacks description of return data structure since no output schema exists. For a paginated listing endpoint with 5 parameters, should briefly indicate what fields (id, symbol, date_added) or data shape is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, so baseline applies. Description mentions credit implications for 'convert' parameter, but this merely echoes the schema description ('Each additional convert option...requires an additional call credit'). No additional semantic value added for pagination parameters (limit/start) or sort_dir.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns 'most recently added cryptocurrencies' (distinct from current market data), distinguishing it from sibling 'listings_latest'. References the specific 'new' page on CoinMarketCap. However, it doesn't explicitly contrast the 'latest' and 'new' listing semantics, which could confuse users given the similar sibling names.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides positive guidance ('Use this endpoint if you need...') specifying when to use it (for recently added coins). However, it lacks negative guidance or explicit alternatives: it doesn't clarify when to use 'listings_latest' instead (for current market-cap-sorted data vs. newly listed coins).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_map (A)
[DISCOVERY] CoinMarketCap ID Map — Returns a mapping of all cryptocurrencies to unique CoinMarketCap ids. Per our Best Practices we recommend utilizing CMC ID instead of cryptocurrency symbols to securely identify cryptocurrencies with our other endpoints and in your own application logic. Each cryptocurrency returned includes typical identifiers such as name, symbol, and token_address for flexible mapping to id. By default this endpoint returns cryptocurrencies that have actively tracked markets on supported exchanges. You may receive a map of all inactive cryptocurrencies by passing `listing_s
| Name | Required | Description | Default |
|---|---|---|---|
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `platform,first_historical_data,last_historical_data,is_active,status` to include all auxiliary fields. | |
| sort | No | What field to sort the list of cryptocurrencies by. | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| symbol | No | Optionally pass a comma-separated list of cryptocurrency symbols to return CoinMarketCap IDs for. If this option is passed, other options will be ignored. | |
| listing_status | No | Only active cryptocurrencies are returned by default. Pass `inactive` to get a list of cryptocurrencies that are no longer active. Pass `untracked` to get a list of cryptocurrencies that are listed but do not yet meet methodology requirements to have tracked markets available. You may pass one or more comma-separated values. |
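The note that `symbol` causes the other options to be ignored is a subtle parameter interaction worth mirroring client-side. A hedged sketch (the drop-everything-else behavior follows the schema note above; the helper itself is hypothetical):

```python
def build_map_params(symbol=None, listing_status=None, start=None,
                     limit=None, sort=None, aux=None):
    """Build /cryptocurrency/map query params. When symbol is passed
    the endpoint ignores the other options, so drop them explicitly
    rather than send a misleading request."""
    if symbol is not None:
        return {"symbol": symbol}
    params = {"listing_status": listing_status, "start": start,
              "limit": limit, "sort": sort, "aux": aux}
    return {k: v for k, v in params.items() if v is not None}
```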
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses default filtering (active-only with tracked markets) and partial filtering options (inactive via parameter), and hints at return structure (name, symbol, token_address). However, lacking annotations, it omits critical operational details like rate limits, authentication requirements, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with a logical flow from purpose to best practices to default behavior, but it ends abruptly mid-sentence ('passing `listing_s'), leaving the structure incomplete. Otherwise sentences are purposeful and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately compensates for missing output schema by describing key returned fields (name, symbol, token_address) and explaining the ID mapping concept. The truncation prevents a higher score, but overall coverage is solid for a discovery endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema provides 100% coverage with detailed descriptions for all 6 parameters. The description adds minimal supplemental context for parameters (only partially describing listing_status before truncation), meeting the baseline expectation when schema coverage is comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns a mapping of cryptocurrencies to unique CoinMarketCap IDs using specific verbs and resources. The '[DISCOVERY]' prefix and explicit mention of identifying cryptocurrencies for use with 'other endpoints' effectively distinguishes this mapping tool from sibling data/price endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly recommends utilizing CMC IDs over cryptocurrency symbols per 'Best Practices,' providing clear guidance on when to use this tool (for secure identification in other endpoints). Minor gap: lacks explicit 'when not to use' or direct comparisons to sibling tools like get_v1_cryptocurrency_info.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_marketpairs_latest (A)
Market Pairs Latest — Lists all active market pairs that CoinMarketCap tracks for a given cryptocurrency or fiat currency. All markets with this currency as the pair base or pair quote will be returned. The latest price and volume information is returned for each market. Use the "convert" option to return market values in multiple fiat and cryptocurrency conversions in the same call. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 1 minute. Plan credit use: 1 cal
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | A cryptocurrency or fiat currency by CoinMarketCap ID to list market pairs for. Example: "1" | |
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `num_market_pairs,category,fee_type,market_url,currency_name,currency_slug,price_quote,notice,cmc_rank,effective_liquidity,market_score,market_reputation` to include all auxiliary fields. | |
| slug | No | Alternatively pass a cryptocurrency by slug. Example: "bitcoin" | |
| sort | No | Optionally specify the sort order of markets returned. By default we return a strict sort on 24 hour reported volume. Pass `cmc_rank` to return a CMC methodology based sort where markets with excluded volumes are returned last. | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| symbol | No | Alternatively pass a cryptocurrency by symbol. Fiat currencies are not supported by this field. Example: "BTC". A single cryptocurrency "id", "slug", *or* "symbol" is required. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| category | No | The category of trading this market falls under. Spot markets are the most common but options include derivatives and OTC. | |
| fee_type | No | The fee type the exchange enforces for this market. | |
| sort_dir | No | Optionally specify the sort direction of markets returned. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| matched_id | No | Optionally include one or more fiat or cryptocurrency IDs to filter market pairs by. For example `?id=1&matched_id=2781` would only return BTC markets that matched: "BTC/USD" or "USD/BTC". This parameter cannot be used when `matched_symbol` is used. | |
| matched_symbol | No | Optionally include one or more fiat or cryptocurrency symbols to filter market pairs by. For example `?symbol=BTC&matched_symbol=USD` would only return BTC markets that matched: "BTC/USD" or "USD/BTC". This parameter cannot be used when `matched_id` is used. |
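Two constraints from the table above are easy to violate: exactly one of id/slug/symbol must identify the currency, and matched_id cannot be combined with matched_symbol. A pre-flight validator sketch (parameter names mirror the table; the function itself is illustrative):

```python
def validate_market_pairs_query(id=None, slug=None, symbol=None,
                                matched_id=None, matched_symbol=None):
    """Raise early on request shapes the endpoint would reject."""
    if sum(v is not None for v in (id, slug, symbol)) != 1:
        raise ValueError("pass exactly one of id, slug, or symbol")
    if matched_id is not None and matched_symbol is not None:
        raise ValueError("matched_id and matched_symbol cannot be combined")
```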
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden effectively. It discloses cache frequency (1 minute), credit cost (cut off mid-phrase at '1 cal'), API tier restrictions, and data scope (returns price/volume for each market). It clarifies that the tool accepts currency as base OR quote, which is important behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose but contains visual clutter (markdown strikethroughs for plan tiers) that reduces readability. The operational metadata (plans, credits) is useful but could be more concise. Every sentence adds value, though the formatting detracts slightly from structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 14 parameters, no annotations, and no output schema, the description provides appropriate operational context (caching, credits, tiers) and clearly explains the data returned. It appropriately delegates parameter documentation to the well-covered schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value beyond the schema, though it does provide usage context for the 'convert' option ('Use the convert option to return market values in multiple... conversions'). The schema handles all heavy lifting for parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Lists all active market pairs' for a given currency and specifies that markets are returned where the currency is either base or quote. It distinguishes from quote/listings endpoints by focusing on individual market pairs. However, it does not explicitly differentiate from the sibling 'get_v1_exchange_marketpairs_latest' (which queries by exchange rather than currency).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides operational constraints (requires Standard+ plan, 1 credit cost, 1-minute cache frequency) and mentions when to use the 'convert' option. However, it lacks guidance on when to choose this over siblings like 'get_v1_cryptocurrency_quotes_latest' (aggregated data) or the exchange-specific variant.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_ohlcv_historical (B)
OHLCV Historical — Returns historical OHLCV (Open, High, Low, Close, Volume) data along with market cap for any cryptocurrency using time interval parameters. Currently daily and hourly OHLCV periods are supported. Volume is only supported with daily periods at this time. Technical Notes - Only the date portion of the timestamp is used for daily OHLCV so it's recommended to send an ISO date format like "2018-09-19" without time for this "time_period". - One OHLCV quote will be returned for every "time_period" between your "time_start" (exclusive) and "time_end" (inclusive). - If a "time_st
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | One or more comma-separated CoinMarketCap cryptocurrency IDs. Example: "1,1027" | |
| slug | No | Alternatively pass a comma-separated list of cryptocurrency slugs. Example: "bitcoin,ethereum" | |
| count | No | Optionally limit the number of time periods to return results for. The default is 10 items. The current query limit is 10000 items. | |
| symbol | No | Alternatively pass one or more comma-separated cryptocurrency symbols. Example: "BTC,ETH". At least one "id" *or* "slug" *or* "symbol" is required for this request. | |
| convert | No | By default market quotes are returned in USD. Optionally calculate market quotes in up to 3 fiat currencies or cryptocurrencies. | |
| interval | No | Optionally adjust the interval that "time_period" is sampled. See main endpoint description for available options. | |
| time_end | No | Timestamp (Unix or ISO 8601) to stop returning OHLCV time periods for (inclusive). Optional, if not passed we'll default to the current time. Only the date portion of the timestamp is used for daily OHLCV so it's recommended to send an ISO date format like "2018-09-19" without time. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| time_start | No | Timestamp (Unix or ISO 8601) to start returning OHLCV time periods for. Only the date portion of the timestamp is used for daily OHLCV so it's recommended to send an ISO date format like "2018-09-19" without time. | |
| time_period | No | Time period to return OHLCV data for. The default is "daily". See the main endpoint description for details. | |
| skip_invalid | No | Pass `true` to relax request validation rules. When requesting records on multiple cryptocurrencies an error is returned if any invalid cryptocurrencies are requested or a cryptocurrency does not have matching records in the requested timeframe. If set to true, invalid lookups will be skipped allowing valid cryptocurrencies to still be returned. |
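The exclusive-start/inclusive-end convention in the technical notes determines how many daily quotes a window yields, which is an easy off-by-one. A sketch of the counting rule for daily periods:

```python
from datetime import date

def expected_daily_quotes(time_start, time_end):
    """Count daily OHLCV quotes for a window where time_start is
    exclusive and time_end is inclusive, per the notes above."""
    return max((time_end - time_start).days, 0)
```

Requesting 2018-09-19 through 2018-09-22 therefore yields three quotes (the 20th, 21st, and 22nd), not four.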
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses important behavioral traits not inferable from schema alone: time_start is exclusive while time_end is inclusive, volume is restricted to daily periods only, and date formatting recommendations for daily OHLCV. Carries full burden due to lack of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses a logical structure with a bold 'Technical Notes' header, but suffers from apparent truncation ('If a time_st') and repeats schema details verbatim instead of adding semantic context, reducing the value per sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The truncated ending and the absence of an output schema leave critical gaps for an 11-parameter tool; with no output schema to fall back on, the description itself should cover response structure, pagination behavior, and error conditions, but it does not.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, baseline is 3. Description adds value by explaining cross-parameter interactions (e.g., how time_period affects timestamp parsing) and exclusive/inclusive bounds, but also duplicates information already present in parameter descriptions (e.g., ISO date format recommendations).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns historical OHLCV (Open, High, Low, Close, Volume) data with market cap for cryptocurrencies using time intervals, distinguishing it from 'latest' siblings via the explicit 'historical' label and technical focus on time periods.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Documents specific constraints (daily/hourly periods supported, volume only for daily) that guide valid usage, but fails to explicitly differentiate when to use this versus similar siblings like get_v1_cryptocurrency_quotes_historical.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_ohlcv_latest (A)
OHLCV Latest — Returns the latest OHLCV (Open, High, Low, Close, Volume) market values for one or more cryptocurrencies for the current UTC day. Since the current UTC day is still active these values are updated frequently. You can find the final calculated OHLCV values for the last completed UTC day along with all historic days using /cryptocurrency/ohlcv/historical. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 5 minutes. Additional OHLCV intervals and 1 minute updates
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | One or more comma-separated cryptocurrency CoinMarketCap IDs. Example: 1,2 | |
| symbol | No | Alternatively pass one or more comma-separated cryptocurrency symbols. Example: "BTC,ETH". At least one "id" *or* "symbol" is required. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| skip_invalid | No | Pass `true` to relax request validation rules. When requesting records on multiple cryptocurrencies an error is returned if any invalid cryptocurrencies are requested or a cryptocurrency does not have matching records in the requested timeframe. If set to true, invalid lookups will be skipped allowing valid cryptocurrencies to still be returned. |
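The all-or-nothing default of `skip_invalid` is worth modeling when batching symbols. A hedged, client-side mirror of the documented semantics (the real validation happens server-side; this sketch only shows the decision logic):

```python
def resolve_symbols(requested, known, skip_invalid=False):
    """Mirror the documented skip_invalid behavior: by default any
    unknown symbol fails the whole batch; with skip_invalid=True,
    unknown symbols are dropped and the valid ones proceed."""
    unknown = [s for s in requested if s not in known]
    if unknown and not skip_invalid:
        raise ValueError(f"invalid cryptocurrencies: {', '.join(unknown)}")
    return [s for s in requested if s in known]
```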
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses update frequency ('Every 5 minutes'), API plan restrictions (Startup+ only), and that values are frequently updated because the day is active. Minor gap: the trailing fragment 'Additional OHLCV intervals and 1 minute updates' is incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loads the essential purpose well, but the heavy markdown formatting for the API plan list disrupts flow, and the text ends with a sentence fragment ('Additional OHLCV intervals and 1 minute updates'). Every sentence earns its place except that final, incomplete thought.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, yet the description adequately explains the return concept (OHLCV values), temporal scope, caching behavior, and plan requirements. Sufficient for a read-only market data tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions (e.g., convert logic, skip_invalid behavior), so the description appropriately does not duplicate this information. Baseline 3 is correct.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specifically states it returns 'latest OHLCV (Open, High, Low, Close, Volume) market values' for the 'current UTC day' and explicitly distinguishes from the sibling historical endpoint by naming '/cryptocurrency/ohlcv/historical' for completed days.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the alternative: 'You can find the final calculated OHLCV values for the last completed UTC day along with all historic days using /cryptocurrency/ohlcv/historical,' clearly demarcating this tool's scope (incomplete day) vs. the sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_priceperformancestats_latestBInspect
Price Performance Stats — Returns price performance statistics for one or more cryptocurrencies including launch price ROI and all-time high / all-time low. Stats are returned for an all_time period by default. UTC yesterday and a number of rolling time periods may be requested using the time_period parameter. Utilize the convert parameter to translate values into multiple fiats or cryptocurrencies using historical rates. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency:
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | One or more comma-separated cryptocurrency CoinMarketCap IDs. Example: 1,2 | |
| slug | No | Alternatively pass a comma-separated list of cryptocurrency slugs. Example: "bitcoin,ethereum" | |
| symbol | No | Alternatively pass one or more comma-separated cryptocurrency symbols. Example: "BTC,ETH". At least one "id" *or* "slug" *or* "symbol" is required for this request. | |
| convert | No | Optionally calculate quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| convert_id | No | Optionally calculate quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| time_period | No | Specify one or more comma-delimited time periods to return stats for. `all_time` is the default. Pass `all_time,yesterday,24h,7d,30d,90d,365d` to return all supported time periods. All rolling periods have a rolling close time of the current request time. For example `24h` would have a close time of now and an open time of 24 hours before now. *Please note: `yesterday` is a UTC period and currently does not support `high` and `low` timestamps.* | |
| skip_invalid | No | Pass `true` to relax request validation rules. When requesting records on multiple cryptocurrencies an error is returned if no match is found for 1 or more requested cryptocurrencies. If set to true, invalid lookups will be skipped allowing valid cryptocurrencies to still be returned. |
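The `time_period` parameter accepts a comma-delimited list drawn from a fixed set of values. A small sketch that validates requests against the documented set before sending them (the helper itself is hypothetical):

```python
# Supported values per the time_period parameter description above.
SUPPORTED_PERIODS = {"all_time", "yesterday", "24h", "7d", "30d", "90d", "365d"}

def time_period_param(*periods):
    """Join requested periods into the comma-delimited time_period
    value, rejecting anything outside the documented set."""
    if not periods:
        return "all_time"  # documented default
    bad = [p for p in periods if p not in SUPPORTED_PERIODS]
    if bad:
        raise ValueError(f"unsupported time_period values: {bad}")
    return ",".join(periods)
```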
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It effectively discloses default behavior (all_time period), rolling period calculation logic, and that convert uses 'historical rates'. However, it ends with a dangling header ('Cache / Update frequency:') and omits output format details and error behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is generally front-loaded with key return values first. However, it includes verbose API plan markup (strikethrough text) that adds noise, and terminates with an incomplete 'Cache / Update frequency:' header, which is a structural flaw.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 7-parameter tool with no output schema, the description adequately covers the main return values (ROI, ATH, ATL) and key parameters. However, gaps remain regarding the output structure and the incomplete cache frequency information leaves operational details unclear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. The description adds significant value beyond the schema by explaining that time_period rolling periods calculate from 'now' backwards, noting the UTC 'yesterday' limitation (no high/low timestamps), and clarifying that convert uses historical rates—context not present in the JSON schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns 'price performance statistics' including specific metrics like 'launch price ROI and all-time high / all-time low', which distinguishes it from sibling quote/listing tools. However, it lacks explicit contrast with similar data-fetching tools like get_v1_cryptocurrency_quotes_latest.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks explicit guidance on when to use this tool versus alternatives (e.g., quotes vs. performance stats). While it lists API plan availability (Basic through Enterprise), it does not specify usage prerequisites or decision criteria for selecting this over sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_quotes_historicalBInspect
Quotes Historical — Returns an interval of historic market quotes for any cryptocurrency based on time and interval parameters. Technical Notes - A historic quote for every "interval" period between your "time_start" and "time_end" will be returned. - If a "time_start" is not supplied, the "interval" will be applied in reverse from "time_end". - If "time_end" is not supplied, it defaults to the current time. - At each "interval" period, the historic quote that is closest in time to the requested time will be returned. - If no historic quotes are available in a given "interval" period up un
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | One or more comma-separated CoinMarketCap cryptocurrency IDs. Example: "1,2" | |
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `price,volume,market_cap,quote_timestamp,is_active,is_fiat,search_interval` to include all auxiliary fields. | |
| count | No | The number of interval periods to return results for. Optional; required if "time_start" and "time_end" are not both supplied. The default is 10 items. The current query limit is 10000. | |
| symbol | No | Alternatively pass one or more comma-separated cryptocurrency symbols. Example: "BTC,ETH". At least one "id" *or* "symbol" is required for this request. | |
| convert | No | By default market quotes are returned in USD. Optionally calculate market quotes in up to 3 other fiat currencies or cryptocurrencies. | |
| interval | No | Interval of time to return data points for. See details in endpoint description. | |
| time_end | No | Timestamp (Unix or ISO 8601) to stop returning quotes for (inclusive). Optional, if not passed, we'll default to the current time. If no "time_start" is passed, we return quotes in reverse order starting from this time. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| time_start | No | Timestamp (Unix or ISO 8601) to start returning quotes for. Optional, if not passed, we'll return quotes calculated in reverse from "time_end". | |
| skip_invalid | No | Pass `true` to relax request validation rules. When requesting records on multiple cryptocurrencies an error is returned if no match is found for 1 or more requested cryptocurrencies. If set to true, invalid lookups will be skipped allowing valid cryptocurrencies to still be returned. |
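The Technical Notes describe a reverse lookup: when no `time_start` is given, the interval is applied backwards from `time_end`. A sketch of the implied window, assuming a few representative interval codes (the mapping here is illustrative; the endpoint description defines the full set of valid intervals):

```python
from datetime import datetime, timedelta, timezone

# Illustrative subset of interval codes; see the endpoint description
# for the complete list.
INTERVALS = {
    "5m": timedelta(minutes=5),
    "1h": timedelta(hours=1),
    "1d": timedelta(days=1),
    "7d": timedelta(days=7),
}

def implied_window(count=10, interval="1d", time_end=None):
    """Compute the time window the API would cover when only count
    and interval are supplied: the interval is applied in reverse
    from time_end (defaulting to now), per the Technical Notes."""
    end = time_end or datetime.now(timezone.utc)
    start = end - count * INTERVALS[interval]
    return start, end
```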
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description explains interval resolution logic (closest quote selection, reverse calculation) and default behaviors well. However, the text is truncated mid-sentence ('up un'), cutting off what appears to be the handling of empty intervals, and it omits rate limit and auth context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Structured with a clear Technical Notes section, but marred by apparent truncation mid-sentence. The first sentence restates the tool name's function ('Quotes Historical'), which is slightly redundant but acceptable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
10 parameters with full schema coverage and no output schema. Description covers core temporal logic but truncation leaves gap in error handling documentation. Lacks output structure description which would be valuable given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage (baseline 3), description adds valuable semantic context beyond individual parameter definitions: it clarifies temporal dependencies between time_start, time_end, count, and interval parameters (reverse lookup logic, defaults), which the schema documents individually but not relationally.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Returns' and resource 'historic market quotes' with scope 'interval...based on time'. Distinguishes from 'latest' siblings via temporal focus, though could explicitly differentiate from ohlcv_historical sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Technical Notes provide implicit usage patterns for handling time_start/time_end omissions and interval logic. However, lacks explicit guidance on when to choose this tool over get_v1_cryptocurrency_quotes_latest or get_v1_cryptocurrency_ohlcv_historical alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_quotes_latestAInspect
Quotes Latest — Returns the latest market quote for 1 or more cryptocurrencies. Use the "convert" option to return market values in multiple fiat and cryptocurrency conversions in the same call. This endpoint is available on the following API plans: - Basic - Startup - Hobbyist - Standard - Professional - Enterprise Cache / Update frequency: Every 60 seconds. Plan credit use: 1 call credit per 100 cryptocurrencies returned (rounded up) and 1 call credit per convert option beyond the first. CMC equivalent pages: Latest market data pages for specific cryptocurrencies like [coin
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | One or more comma-separated cryptocurrency CoinMarketCap IDs. Example: 1,2 | |
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `num_market_pairs,cmc_rank,date_added,tags,platform,max_supply,circulating_supply,total_supply,market_cap_by_total_supply,volume_24h_reported,volume_7d,volume_7d_reported,volume_30d,volume_30d_reported,is_active,is_fiat` to include all auxiliary fields. | |
| slug | No | Alternatively pass a comma-separated list of cryptocurrency slugs. Example: "bitcoin,ethereum" | |
| symbol | No | Alternatively pass one or more comma-separated cryptocurrency symbols. Example: "BTC,ETH". At least one "id" *or* "slug" *or* "symbol" is required for this request. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| skip_invalid | No | Pass `true` to relax request validation rules. When requesting records on multiple cryptocurrencies an error is returned if no match is found for 1 or more requested cryptocurrencies. If set to true, invalid lookups will be skipped allowing valid cryptocurrencies to still be returned. |
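A request to this endpoint can be assembled following the standard CoinMarketCap Pro API conventions (`pro-api.coinmarketcap.com` host, `X-CMC_PRO_API_KEY` auth header); whether this MCP wrapper exposes those directly or manages credentials upstream is an assumption here. A sketch that builds, but does not send, the request:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Standard CMC Pro API host; the MCP gateway may proxy this instead.
BASE = "https://pro-api.coinmarketcap.com"

def quotes_latest_request(api_key, symbol="BTC,ETH", convert="USD"):
    """Build a GET request for the latest quotes. The
    X-CMC_PRO_API_KEY header is CoinMarketCap's documented auth
    mechanism for direct API access."""
    qs = urlencode({"symbol": symbol, "convert": convert})
    return Request(f"{BASE}/v1/cryptocurrency/quotes/latest?{qs}",
                   headers={"X-CMC_PRO_API_KEY": api_key})
```

Passing the request to `urllib.request.urlopen` (or an equivalent HTTP client) would execute the call and consume credits per the rules above.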
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant operational weight by disclosing cache/update frequency (every 60 seconds), credit consumption rules (1 credit per 100 cryptocurrencies, per convert option), and API plan availability. This provides crucial cost and freshness context beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses clear markdown section headers (**Cache / Update frequency:**, **Plan credit use:**) for scannable structure. However, the 'Quotes Latest —' prefix restates the name, and the description ends abruptly mid-sentence ('[coin'), indicating incomplete content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description provides excellent operational context (caching, credits, plans) that compensates partially. However, it lacks description of the return data structure or error behavior patterns, which would be helpful for a 7-parameter data retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage across all 7 parameters, the schema does the heavy lifting. The description adds minimal semantic value beyond the schema, though it does provide context for the 'convert' parameter's purpose. Baseline 3 is appropriate given the schema's completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Returns the latest market quote for 1 or more cryptocurrencies' with a specific verb and resource. The 'latest' qualifier distinguishes it from the sibling 'get_v1_cryptocurrency_quotes_historical' tool, and 'quotes' differentiates it from 'listings' or 'info' endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage guidance for the 'convert' option ('Use the 'convert' option to return market values in multiple fiat...'), but lacks explicit guidance on when to choose this tool over siblings like historical quotes or listings_latest. It does not specify prerequisites or when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_trending_gainerslosersAInspect
[DISCOVERY] Trending Gainers & Losers — Returns a paginated list of all trending cryptocurrencies, determined and sorted by the largest price gains or losses. You may sort against any of the following: percent_change_24h: 24 hour trading price percentage change for each currency. This endpoint is available on the following API plans: - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 10 minutes. Plan credit use: 1 call credit per 200 cryptocurrencies returned (rounded up) and 1 call credit per convert option beyond the first. CMC equivalent pages: Our cr
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | What field to sort the list of cryptocurrencies by. | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| sort_dir | No | The direction in which to order cryptocurrencies against the specified sort. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| time_period | No | Adjusts the overall window of time for the biggest gainers and losers. |
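The credit rule above (1 credit per 200 cryptocurrencies returned, rounded up, plus 1 per convert option beyond the first) can be estimated before calling. The additive reading of the convert surcharge is an assumption from the description's wording; verify against actual billing:

```python
def estimated_credits(num_results, converts=1, per_credit=200):
    """Rough call-credit estimate: ceil(num_results / per_credit)
    plus one credit per convert option beyond the first (additive
    interpretation of the documented rule)."""
    base = -(-num_results // per_credit)  # ceiling division
    return base + max(0, converts - 1)
```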
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full disclosure burden and does so effectively: it specifies the 10-minute cache frequency, API tier requirements (Startup and above), and the credit consumption model (1 credit per 200 cryptocurrencies returned and per convert option beyond the first). Only error behavior and rate limit details are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with distinct sections for purpose, sort options, plan requirements, and operational constraints. Every sentence delivers actionable information. Minor deduction for the truncated ending ('Our cr') indicating potentially incomplete documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 7-parameter tool with no annotations or output schema: covers pagination behavior, credit costs, and caching. However, the cutoff text leaves documentation incomplete, and without an output schema, the description should ideally detail the return structure more than the brief 'paginated list' mention.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds meaningful context beyond schema by specifying valid sort option example (percent_change_24h) and explaining the credit cost implications of using multiple convert options, which aids in cost-aware parameter selection.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool returns a paginated list of trending cryptocurrencies 'determined and sorted by the largest price gains or losses', providing specific verb, resource, and scope that distinguishes it from sibling tools like trending_latest or listings_latest.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage context through the gainers/losers sorting logic and example sort field (percent_change_24h), and clarifies operational constraints (API plans, credits). However, it lacks explicit guidance on when to choose this over sibling tools like trending_mostvisited or listings_latest.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_trending_latestAInspect
Trending Latest — Returns a paginated list of all trending cryptocurrency market data, determined and sorted by CoinMarketCap search volume. This endpoint is available on the following API plans: - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 10 minutes. Plan credit use: 1 call credit per 200 cryptocurrencies returned (rounded up) and 1 call credit per convert option beyond the first. CMC equivalent pages: Our cryptocurrency Trending page coinmarketcap.com/trending-cryptocurrencies/. Ret
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| time_period | No | Adjusts the overall window of time for the latest trending coins. |
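The `start`/`limit` pair above supports client-side pagination with a 1-based offset. A small generator sketching how an agent might walk a full trending list (the helper is illustrative, not part of the API):

```python
def pages(total, limit=100):
    """Yield (start, limit) pairs for walking a paginated list of
    `total` items using the 1-based start offset documented above."""
    for offset in range(1, total + 1, limit):
        yield offset, min(limit, total - offset + 1)
```

Each yielded pair maps directly onto the `start` and `limit` parameters; since credits accrue per 200 items returned, larger `limit` values do not by themselves change the credit cost.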
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries significant behavioral load: details cache frequency (10 min), credit consumption rules (per 200 cryptos and per convert option), and plan availability. Missing explicit safety classification (read-only) and detailed return structure, but the cost/freshness transparency is valuable operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Text is literally truncated (ends with 'Ret'), a clear structural flaw. While information density is high (the plan, caching, and credit sections all earn their place), the abrupt ending and the dense markdown formatting (bold headers, bullets) packed into a single description string hurt readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong on input-side operational details (plans, credits, caching) but weak on describing output given no output schema exists. States it returns 'market data' but doesn't specify fields or structure, leaving agents blind to return payload details despite 5 parameters being well-documented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds value by contextualizing 'limit' with pagination logic and credit costs (200 items per credit), and explaining the cost implication of multiple 'convert' options. Elevates understanding beyond raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Returns) and resource (paginated list of trending cryptocurrency market data) with clear sorting criteria (CoinMarketCap search volume). Implicitly distinguishes from sibling tools like trending_gainerslosers and trending_mostvisited via the 'search volume' specificity, though explicit comparative guidance is absent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides operational constraints (API plan tiers, credit costs) that implicitly guide usage rate and eligibility, but lacks explicit 'when to use this vs sibling X' guidance. The credit cost details (1 per 200 items) help agents decide pagination strategy.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_cryptocurrency_trending_mostvisitedAInspect
Trending Most Visited — Returns a paginated list of all trending cryptocurrency market data, determined and sorted by traffic to coin detail pages. This endpoint is available on the following API plans: - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 10 minutes. Plan credit use: 1 call credit per 200 cryptocurrencies returned (rounded up) and 1 call credit per convert option beyond the first. CMC equivalent pages: The CoinMarketCap “Most Visited” trending list. [coinmarketcap.com/most-viewed-pages/](https://coinmarketcap.com/most-viewed-pages/)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| time_period | No | Adjusts the overall window of time for most visited currencies. |
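Because `start` is a 1-based offset rather than a page number, translating a conventional page index takes one small calculation. A hypothetical helper:

```python
def page_to_start(page, limit):
    """Translate a 1-based page number into the 1-based `start`
    offset these trending endpoints expect."""
    return (page - 1) * limit + 1
```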
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and succeeds in revealing costs (credit system), caching behavior (10-minute updates), and pagination. It appropriately warns about credit multipliers for convert options. It could improve by mentioning idempotency or error response patterns, but the operational transparency is strong.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses clear markdown structure with bold headers for API plans, cache frequency, and credit usage, making it scannable. The information is dense and valuable. It is slightly verbose with the CMC URL reference at the end, but every section serves a distinct purpose for agent decision-making.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 5-parameter input with no output schema and no annotations, the description appropriately focuses on the billing/plan constraints and update frequency rather than return value speculations. It could note that all parameters are optional (0 required), but the credit cost transparency makes it contextually complete for a paid API endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although schema coverage is 100% (baseline 3), the description adds crucial semantic context by explaining that convert options incur additional call credits and that the result count drives credit consumption (1 per 200 cryptocurrencies). These billing semantics aren't present in the schema definitions. However, it misses specifying valid enum values for time_period.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a 'paginated list of all trending cryptocurrency market data, determined and sorted by traffic to coin detail pages.' This specific verb-resource combination (returns + traffic-sorted trending data) clearly distinguishes it from siblings like trending_gainerslosers (price change) and listings_latest (general market data) by emphasizing the 'Most Visited' traffic metric.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides excellent operational constraints including API plan requirements (Startup through Enterprise), cache frequency (10 minutes), and credit cost calculations (1 credit per 200 results). However, it lacks explicit guidance on when to choose this over sibling trending endpoints (e.g., trending_latest vs. trending_mostvisited), relying only on the traffic metric implication.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_exchange_infoAInspect
[DISCOVERY] Metadata — Returns all static metadata for one or more exchanges. This information includes details like launch date, logo, official website URL, social links, and market fee documentation URL. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Static data is updated only as needed, every 30 seconds. Plan credit use: 1 call credit per 100 exchanges returned (rounded up). CMC equivalent pages: Exchange detail page metadata like [coinmarketcap.com/exchanges/binance/](https://coinmark
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | One or more comma-separated CoinMarketCap cryptocurrency exchange ids. Example: "1,2" | |
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `urls,logo,description,date_launched,notice,status` to include all auxiliary fields. | |
| slug | No | Alternatively, one or more comma-separated exchange names in URL friendly shorthand "slug" format (all lowercase, spaces replaced with hyphens). Example: "binance,gdax". At least one "id" *or* "slug" is required. |
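The 'at least one "id" or "slug"' constraint lives only in the slug parameter's description, so it is worth enforcing before spending a call credit. A minimal sketch (the helper name is illustrative):

```python
def exchange_info_params(ids=None, slug=None, aux=None):
    """Validate the documented id/slug constraint, then build query params."""
    if not (ids or slug):
        raise ValueError('At least one "id" or "slug" is required.')
    params = {}
    if ids:
        params["id"] = ids        # e.g. "1,2"
    if slug:
        params["slug"] = slug     # e.g. "binance,gdax"
    if aux:
        params["aux"] = aux       # e.g. "urls,logo,description"
    return params
```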
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent behavioral disclosure given no annotations. Covers update frequency ('every 30 seconds'), cost model ('1 call credit per 100 exchanges'), and plan eligibility restrictions (Basic through Enterprise). This is critical operational context for an API with rate limiting and credit systems.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose but becomes verbose with the multi-line API plan list. Markdown formatting in a string makes it slightly hard to read. The final sentence is truncated ('https://coinmarketc'), disrupting completeness and suggesting poor formatting.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately describes returned fields (logo, URLs, launch date). Covers authentication tiers, costs, and caching. Loses a point for the truncated CMC URL and could mention pagination behavior if applicable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with clear examples. The main description mentions 'one or more exchanges' which aligns with the comma-separated schema format, but adds minimal semantic detail beyond what the schema already provides. Baseline 3 is appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Returns all static metadata for one or more exchanges' and lists specific fields (launch date, logo, website URL). The '[DISCOVERY]' prefix is non-standard but the core purpose is specific. It implicitly distinguishes from dynamic data siblings (quotes/listings) by emphasizing 'static metadata'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage by specifying 'static metadata', it lacks explicit guidance on when to use this versus sibling tools like get_v1_exchange_quotes_latest or get_v1_exchange_listings_latest. It does not state what to use when you need dynamic market data instead of descriptive metadata.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_exchange_listings_latestBInspect
Listings Latest — Returns a paginated list of all cryptocurrency exchanges including the latest aggregate market data for each exchange. Use the "convert" option to return market values in multiple fiat and cryptocurrency conversions in the same call. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 1 minute. Plan credit use: 1 call credit per 100 exchanges returned (rounded up) and 1 call credit per convert option beyond the first. CMC equivalent pages: Our l
| Name | Required | Description | Default |
|---|---|---|---|
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `num_market_pairs,traffic_score,rank,exchange_score,effective_liquidity_24h,date_launched,fiats` to include all auxiliary fields. | |
| sort | No | What field to sort the list of exchanges by. | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| category | No | The category for this exchange. | |
| sort_dir | No | The direction in which to order exchanges against the specified sort. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| market_type | No | The type of exchange markets to include in rankings. This field is deprecated. Please use "all" for accurate sorting. |
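The stated pricing (1 credit per 100 exchanges, rounded up, plus 1 per convert option beyond the first) can be estimated before calling; a sketch under that reading of the docs:

```python
import math

def listings_credit_cost(limit, convert=""):
    """Estimate call credits for a listings request with the given options."""
    symbols = [s for s in convert.split(",") if s]
    extra_converts = max(len(symbols) - 1, 0)  # first conversion is included
    return math.ceil(limit / 100) + extra_converts
```

For example, 250 exchanges with a single conversion costs 3 credits, the same as 100 exchanges converted into three currencies.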
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure of operational characteristics absent from annotations: cache/update frequency (1 minute), pagination model (per 100 exchanges), and precise credit consumption formula (1 per 100 exchanges + 1 per convert option). Also clarifies plan restrictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Contains valuable operational information but suffers from awkward structure: opens with apparent header text ('Listings Latest —'), embeds heavy markdown formatting (strikethrough lists), and ends with incomplete sentence ('Our l') indicating truncation. Information density is high but presentation is disorganized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides strong operational context (costs, caching) but incomplete data context: it lacks a description of the response structure/fields even though no output schema exists. The truncated ending removes a potentially important 'CMC equivalent pages' reference. Adequate but not complete given the tool's 9-parameter complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds meaningful context for 'convert' parameter (cost implications and multi-currency capability) and implies pagination via credit structure, but does not significantly enhance semantics for other 8 parameters beyond schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (returns paginated list) and resource (cryptocurrency exchanges with latest aggregate market data). Distinguishes from historical siblings by emphasizing 'latest' data, though lacks explicit differentiation from get_v1_exchange_quotes_latest or get_v1_exchange_marketpairs_latest.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides detailed API plan availability and credit cost guidelines, but contains no guidance on when to select this tool over sibling exchange endpoints like get_v1_exchange_info or get_v1_exchange_map. No 'use when' or 'instead of' signals provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_exchange_mapAInspect
[DISCOVERY] CoinMarketCap ID Map — Returns a paginated list of all active cryptocurrency exchanges by CoinMarketCap ID. We recommend using this convenience endpoint to lookup and utilize our unique exchange id across all endpoints as typical exchange identifiers may change over time. As a convenience you may pass a comma-separated list of exchanges by slug to filter this list to only those you require or the aux parameter to slim down the payload. By default this endpoint returns exchanges that have at least 1 actively tracked market. You may receive a map of all inactive cryptocurrencies by passing
| Name | Required | Description | Default |
|---|---|---|---|
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `first_historical_data,last_historical_data,is_active,status` to include all auxiliary fields. | |
| slug | No | Optionally pass a comma-separated list of exchange slugs (lowercase URL friendly shorthand name with spaces replaced with dashes) to return CoinMarketCap IDs for. If this option is passed, other options will be ignored. | |
| sort | No | What field to sort the list of exchanges by. | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| crypto_id | No | Optionally include one fiat or cryptocurrency ID to filter market pairs by. For example `?crypto_id=1` would only return exchanges that have BTC. | |
| listing_status | No | Only active exchanges are returned by default. Pass `inactive` to get a list of exchanges that are no longer active. Pass `untracked` to get a list of exchanges that are registered but do not currently meet methodology requirements to have active markets tracked. You may pass one or more comma-separated values. |
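Two behaviors called out in the parameter table are easy to get wrong: slug filtering causes all other options to be ignored, and listing_status accepts comma-separated values. A sketch that encodes those documented rules:

```python
def exchange_map_params(slug=None, listing_status=None, start=1, limit=100):
    """Build query params for the ID map endpoint, honoring slug precedence."""
    if slug:
        # per the docs, other options are ignored when slug is passed
        return {"slug": slug}
    params = {"start": start, "limit": limit}
    if listing_status:
        params["listing_status"] = listing_status  # e.g. "inactive,untracked"
    return params
```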
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Contains a factual error stating 'inactive cryptocurrencies' instead of 'inactive exchanges' in the final sentence, which could confuse the agent given sibling cryptocurrency tools exist. The sentence is also incomplete ('by passing'). With no annotations provided, the description fails to disclose rate limits, authentication requirements, or response structure, though it does note the default behavior of returning only actively tracked markets.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably front-loaded with the [DISCOVERY] tag and main purpose, but contains minor redundancy ('As a convenience... convenience endpoint'). The incomplete final sentence ('You may receive a map of all inactive cryptocurrencies by passing') significantly detracts from structural integrity and leaves the reader hanging.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema provided, the description inadequately compensates by failing to describe the response structure (what fields are returned for each exchange). The incomplete sentence regarding inactive exchanges and the cryptocurrency/exchange terminology error leave critical gaps in the documentation. For a 7-parameter discovery tool with no required fields, more context on typical usage patterns would be expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, the description adds valuable semantic context: it explains that slug filtering helps target 'only those you require' and that aux helps 'slim down the payload.' It also clarifies the temporal stability issue ('typical exchange identifiers may change over time') motivating the use of CoinMarketCap IDs. However, it misses the opportunity to explain the crypto_id parameter's purpose (filtering exchanges by traded assets).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Returns a paginated list of all active cryptocurrency exchanges by CoinMarketCap ID' and distinguishes itself from sibling tools like get_v1_cryptocurrency_map and get_v1_fiat_map by focusing specifically on exchange identifiers. The [DISCOVERY] tag and explicit mention of the resource type (exchanges) make the purpose immediately clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on when to use: 'We recommend using this convenience endpoint to lookup and utilize our unique exchange id across all endpoints.' It explains the dependency relationship (get IDs here before using other exchange endpoints) and mentions practical use cases like filtering by slug or using aux to slim payloads. Could be improved by explicitly naming specific sibling tools that require these IDs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_exchange_marketpairs_latestAInspect
Market Pairs Latest — Returns all active market pairs that CoinMarketCap tracks for a given exchange. The latest price and volume information is returned for each market. Use the "convert" option to return market values in multiple fiat and cryptocurrency conversions in the same call. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 60 seconds. Plan credit use: 1 call credit per 100 market pairs returned (rounded up) and 1 call credit per convert option beyond th
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | A CoinMarketCap exchange ID. Example: "1" | |
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `num_market_pairs,category,fee_type,market_url,currency_name,currency_slug,price_quote,effective_liquidity,market_score,market_reputation` to include all auxiliary fields. | |
| slug | No | Alternatively pass an exchange "slug" (URL friendly all lowercase shorthand version of name with spaces replaced with hyphens). Example: "binance". One "id" *or* "slug" is required. | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| category | No | The category of trading this market falls under. Spot markets are the most common but options include derivatives and OTC. | |
| fee_type | No | The fee type the exchange enforces for this market. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| matched_id | No | Optionally include one or more comma-delimited fiat or cryptocurrency IDs to filter market pairs by. For example `?matched_id=2781` would only return BTC markets that matched: "BTC/USD" or "USD/BTC" for the requested exchange. This parameter cannot be used when `matched_symbol` is used. | |
| matched_symbol | No | Optionally include one or more comma-delimited fiat or cryptocurrency symbols to filter market pairs by. For example `?matched_symbol=USD` would only return BTC markets that matched: "BTC/USD" or "USD/BTC" for the requested exchange. This parameter cannot be used when `matched_id` is used. |
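This endpoint scatters three constraints across parameter descriptions: one of id or slug is required, convert cannot be combined with convert_id, and matched_id cannot be combined with matched_symbol. Validating them client-side avoids wasted credits; a sketch:

```python
def market_pairs_params(exchange_id=None, slug=None, convert=None,
                        convert_id=None, matched_id=None, matched_symbol=None):
    """Enforce the documented parameter constraints before the request."""
    if not (exchange_id or slug):
        raise ValueError('One "id" or "slug" is required.')
    if convert and convert_id:
        raise ValueError('"convert" and "convert_id" cannot be combined.')
    if matched_id and matched_symbol:
        raise ValueError('"matched_id" and "matched_symbol" cannot be combined.')
    pairs = {"id": exchange_id, "slug": slug, "convert": convert,
             "convert_id": convert_id, "matched_id": matched_id,
             "matched_symbol": matched_symbol}
    return {k: v for k, v in pairs.items() if v}  # drop unset options
```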
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates the cache/update frequency (every 60 seconds), credit consumption model (1 per 100 pairs rounded up, extra for convert options), and plan availability. It implies read-only behavior through 'Returns,' though it lacks explicit safety guarantees or error scenarios.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description embeds useful operational details but suffers from a severe truncation ('1 call credit per `convert` option beyond th'). The markdown table formatting for API plans is informationally dense but disrupts readability. The content is front-loaded with purpose, but the abrupt ending loses the credit calculation details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 11 parameters, no output schema, and no annotations, this complex retrieval tool requires substantial description support. The text covers operational context (plans, credits, cache) adequately but provides minimal description of the return structure (only mentions 'price and volume'), leaving significant gaps in explaining what data fields the agent will receive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions the 'convert' option specifically and its credit implications, but this largely repeats the schema-level parameter descriptions. It does not add significant syntactic guidance, validation rules, or format examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Returns all active market pairs that CoinMarketCap tracks for a given exchange' with 'latest price and volume information,' specifying the resource (market pairs) and scope (exchange-level). It distinguishes from the sibling get_v1_cryptocurrency_marketpairs_latest by emphasizing 'for a given exchange,' though it lacks explicit contrast with that alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides operational constraints (API plan requirements: Basic through Enterprise, credit costs: 1 per 100 pairs) and mentions the 'convert' option use case. However, it fails to specify the logical constraint that one of 'id' or 'slug' is required, and does not clarify when to prefer this over get_v1_cryptocurrency_marketpairs_latest.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_exchange_quotes_historicalAInspect
Quotes Historical — Returns an interval of historic quotes for any exchange based on time and interval parameters. Technical Notes - A historic quote for every "interval" period between your "time_start" and "time_end" will be returned. - If a "time_start" is not supplied, the "interval" will be applied in reverse from "time_end". - If "time_end" is not supplied, it defaults to the current time. - At each "interval" period, the historic quote that is closest in time to the requested time will be returned. - If no historic quotes are available in a given "interval" period up until the next
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | One or more comma-separated exchange CoinMarketCap ids. Example: "24,270" | |
| slug | No | Alternatively, one or more comma-separated exchange names in URL friendly shorthand "slug" format (all lowercase, spaces replaced with hyphens). Example: "binance,kraken". At least one "id" *or* "slug" is required. | |
| count | No | The number of interval periods to return results for. Optional; required if neither "time_start" nor "time_end" is supplied. The default is 10 items. The current query limit is 10000. | |
| convert | No | By default market quotes are returned in USD. Optionally calculate market quotes in up to 3 other fiat currencies or cryptocurrencies. | |
| interval | No | Interval of time to return data points for. See details in endpoint description. | |
| time_end | No | Timestamp (Unix or ISO 8601) to stop returning quotes for (inclusive). Optional, if not passed, we'll default to the current time. If no "time_start" is passed, we return quotes in reverse order starting from this time. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| time_start | No | Timestamp (Unix or ISO 8601) to start returning quotes for. Optional, if not passed, we'll return quotes calculated in reverse from "time_end". |
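The time_start/time_end/count interaction described in the technical notes can be mirrored client-side to predict the window a request will cover. A sketch assuming a fixed-length interval (the actual "interval" values are enumerated in the endpoint docs, not here):

```python
from datetime import datetime, timedelta, timezone

def resolve_window(time_start=None, time_end=None, count=10,
                   interval=timedelta(days=1)):
    """Mirror the documented defaults: a missing time_end falls back to now;
    a missing time_start is derived by applying the interval in reverse
    from time_end, count times."""
    end = time_end or datetime.now(timezone.utc)
    start = time_start or end - interval * count
    return start, end
```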
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden effectively. The technical notes explain detailed behaviors: interval sampling logic, reverse-time calculation when time_start is omitted, the current-time default for time_end, and the closest-quote selection algorithm.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Structured with clear purpose statement followed by bulleted technical notes, but the description is truncated mid-sentence ('If no historic quotes are available...'). Some repetition of 'interval' and 'quotes' reduces efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers query logic extensively for an 8-parameter tool, but the truncation leaves the handling of missing quotes unexplained. Without output schema, completeness relies on description, which is cut off at a critical behavioral edge case.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds substantial value by explaining parameter interdependencies: specifically, how time_start, time_end, and interval interact to produce forward or reverse chronological results, and default behaviors not obvious from isolated parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action 'Returns' and resource 'historic quotes for any exchange' with clear scope via 'interval of historic quotes' and time parameters. Distinguishes from sibling cryptocurrency tools by specifying 'exchange', though it doesn't explicitly differentiate from get_v1_exchange_quotes_latest.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Technical notes provide implicit usage guidance explaining parameter interactions (time_start/time_end/interval combinations) and reverse-order behavior. However, lacks explicit guidance on when to choose this tool over get_v1_exchange_quotes_latest or cryptocurrency quote alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_exchange_quotes_latestAInspect
Quotes Latest — Returns the latest aggregate market data for 1 or more exchanges. Use the "convert" option to return market values in multiple fiat and cryptocurrency conversions in the same call. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 60 seconds. Plan credit use: 1 call credit per 100 exchanges returned (rounded up) and 1 call credit per convert option beyond the first. CMC equivalent pages: Latest market data summary for specific exchanges like [co
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | One or more comma-separated CoinMarketCap exchange IDs. Example: "1,2" | |
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `num_market_pairs,traffic_score,rank,exchange_score,liquidity_score,effective_liquidity_24h` to include all auxiliary fields. | |
| slug | No | Alternatively, pass a comma-separated list of exchange "slugs" (URL friendly all lowercase shorthand version of name with spaces replaced with hyphens). Example: "binance,gdax". At least one "id" *or* "slug" is required. | |
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. |
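The listing never states how authentication is supplied; on CoinMarketCap's Pro API it is conventionally the `X-CMC_PRO_API_KEY` header (a convention from CMC's own docs, not this document), which matters here because the MCP gateway manages that credential. A sketch of a direct call setup:

```python
def quotes_latest_request(api_key, ids=None, slug=None, convert=None):
    """Return (headers, params) for an exchange quotes-latest request."""
    if not (ids or slug):
        raise ValueError('At least one "id" or "slug" is required.')
    headers = {"X-CMC_PRO_API_KEY": api_key,  # header name per CMC convention
               "Accept": "application/json"}
    params = {k: v for k, v in
              {"id": ids, "slug": slug, "convert": convert}.items() if v}
    return headers, params
```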
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full burden. It successfully discloses cache/update frequency (60 seconds), plan availability (all tiers, Basic through Enterprise), and credit consumption rules (1 per 100 exchanges, 1 per extra convert). It omits authentication requirements and error behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with 'Quotes Latest' (redundant) before the clear purpose statement. Contains valuable but verbose plan/credit tables that push length. Text is cut off at end ('like [co'), indicating truncation issues. Information density is good but structure could be tighter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks output schema, and description provides only vague 'aggregate market data' without detailing response structure or fields. However, it compensates partially with operational details (credits, caching) crucial for API usage. Given 5 parameters with complex credit calculations, completeness is adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description mentions 'convert' option usage and '1 or more exchanges' which maps to id/slug parameters, but doesn't add semantic value beyond the detailed schema descriptions (e.g., no format examples for comma-separated values beyond schema).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it 'Returns the latest aggregate market data for 1 or more exchanges' with specific verb and resource, though it opens with redundant 'Quotes Latest'. It distinguishes from cryptocurrency-focused siblings by specifying 'exchanges', but doesn't clarify when to use this vs the historical quotes endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides useful operational constraints (plan tiers, credit costs, cache frequency) that inform usage decisions, but lacks explicit guidance on when to prefer this over get_v1_exchange_quotes_historical or get_v1_exchange_info. The convert option usage is mentioned but not contrasted with convert_id.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_fiat_mapAInspect
[DISCOVERY] CoinMarketCap ID Map — Returns a mapping of all supported fiat currencies to unique CoinMarketCap ids. Per our Best Practices we recommend utilizing CMC ID instead of currency symbols to securely identify assets with our other endpoints and in your own application logic. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Mapping data is updated only as needed, every 30 seconds. Plan credit use: 1 API call credit per request no matter query size. CMC equivalent pages: No equivalent,
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | What field to sort the list by. | |
| limit | No | Optionally specify the number of results to return. Use this parameter and the "start" parameter to determine your own pagination size. | |
| start | No | Optionally offset the start (1-based index) of the paginated list of items to return. | |
| include_metals | No | Pass `true` to include precious metals. |
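The `start`/`limit` pagination described in the table can be sketched as follows. Note that `start` is a 1-based offset and `include_metals` is passed as the string `true`; the default `limit` of 100 here is illustrative, and valid `sort` values are not documented in the schema, so this sketch passes `sort` through unchecked.

```python
def fiat_map_params(start=1, limit=100, include_metals=False, sort=None):
    """Assemble query params for the fiat map endpoint.

    `start` is a 1-based index per the parameter table; the limit
    default is illustrative, not documented API behavior.
    """
    params = {"start": start, "limit": limit}
    if sort is not None:
        params["sort"] = sort          # valid values undocumented in schema
    if include_metals:
        params["include_metals"] = "true"
    return params

# Page 2 of 100-item pages starts at item 101 (1-based offset)
page2 = fiat_map_params(start=101, limit=100)
```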
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and succeeds well. It discloses cache/update frequency (30 seconds), plan credit cost (1 credit per request), and API plan availability constraints—all critical operational constraints for an agent to make informed invocation decisions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose but contains verbose boilerplate (the exhaustive API plan list from Basic to Enterprise) that adds bulk without aiding tool selection. Structure with markdown headers is clear, but the plan enumeration consumes excessive tokens.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a discovery/mapping endpoint without output schema, the description adequately covers what is returned (fiat-to-ID mapping), operational constraints (credits, caching), and usage recommendations. Missing only specific details about response structure/format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description does not add parameter-specific semantics beyond what the schema provides (e.g., valid values for 'sort' field, or that 'include_metals' accepts string 'true').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Returns a mapping of all supported fiat currencies to unique CoinMarketCap ids'—specific verb, resource, and scope. It explicitly distinguishes itself from sibling tool get_v1_cryptocurrency_map by specifying 'fiat currencies' rather than cryptocurrencies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on when and why to use: 'recommend utilizing CMC ID instead of currency symbols to securely identify assets with our other endpoints.' This establishes the tool's role in a workflow (discovery before other API calls) and best practices, though it doesn't explicitly mention when not to use it or name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_globalmetrics_quotes_historicalBInspect
Quotes Historical — Returns an interval of historical global cryptocurrency market metrics based on time and interval parameters. Technical Notes - A historic quote for every "interval" period between your "time_start" and "time_end" will be returned. - If a "time_start" is not supplied, the "interval" will be applied in reverse from "time_end". - If "time_end" is not supplied, it defaults to the current time. - At each "interval" period, the historic quote that is closest in time to the requested time will be returned. - If no historic quotes are available in a given "interval" period up
| Name | Required | Description | Default |
|---|---|---|---|
| aux | No | Optionally specify a comma-separated list of supplemental data fields to return. Pass `btc_dominance,active_cryptocurrencies,active_exchanges,active_market_pairs,total_volume_24h,total_volume_24h_reported,altcoin_market_cap,altcoin_volume_24h,altcoin_volume_24h_reported,search_interval` to include all auxiliary fields. | |
| count | No | The number of interval periods to return results for. Optional; required if neither "time_start" nor "time_end" is supplied. The default is 10 items. The current query limit is 10000. | |
| convert | No | By default market quotes are returned in USD. Optionally calculate market quotes in up to 3 other fiat currencies or cryptocurrencies. | |
| interval | No | Interval of time to return data points for. See details in endpoint description. | |
| time_end | No | Timestamp (Unix or ISO 8601) to stop returning quotes for (inclusive). Optional, if not passed, we'll default to the current time. If no "time_start" is passed, we return quotes in reverse order starting from this time. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. | |
| time_start | No | Timestamp (Unix or ISO 8601) to start returning quotes for. Optional, if not passed, we'll return quotes calculated in reverse from "time_end". |
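The Technical Notes above describe non-obvious window logic: with no `time_start`, points are computed in reverse from `time_end`, and `time_end` defaults to the current time. A minimal sketch of that timestamp logic, assuming the documented default `count` of 10; the function is illustrative, not part of the API.

```python
from datetime import datetime, timedelta, timezone

def interval_points(step, count=10, time_start=None, time_end=None):
    """Reproduce the documented window logic for illustration.

    - time_end defaults to "now" when omitted.
    - Without time_start, `count` points are taken in reverse
      from time_end (the API default count is 10).
    - With both bounds, one point per `step` from start to end.
    """
    end = time_end or datetime.now(timezone.utc)
    if time_start is None:
        return [end - i * step for i in range(count)][::-1]
    points, t = [], time_start
    while t <= end:
        points.append(t)
        t += step
    return points

end = datetime(2024, 1, 10, tzinfo=timezone.utc)
pts = interval_points(timedelta(days=1), count=3, time_end=end)
# Three daily points ending at 2024-01-10, computed in reverse
```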
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full behavioral disclosure burden. It explains time/interval logic well (reverse order, closest quote selection), but cuts off mid-sentence ('If no historic quotes are available...'). Missing rate limits, auth requirements, and error behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Starts with descriptive fragment 'Quotes Historical'. Technical notes are structured as bullet points (dashes) earning clarity points. However, the description is truncated mid-sentence at the end, significantly impacting completeness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, yet the description fails to describe return values (what data fields come back in the 'historic quotes'). The truncation leaves the 'interval period up' clause incomplete. For a 7-parameter tool with complex time logic, this is inadequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage, providing baseline 3. The Technical Notes add valuable semantic context about parameter interactions (time_start/time_end/interval relationships, reverse calculation logic) that the schema doesn't convey, warranting a higher score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Returns an interval of historical global cryptocurrency market metrics' (specific verb+resource), and uses 'global' to distinguish from sibling get_v1_cryptocurrency_quotes_historical (specific coins) and 'historical' to distinguish from get_v1_globalmetrics_quotes_latest. However, it lacks explicit differentiation guidance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus get_v1_globalmetrics_quotes_latest or get_v1_cryptocurrency_quotes_historical. The technical notes explain parameter mechanics but not usage context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_globalmetrics_quotes_latestAInspect
Quotes Latest — Returns the latest global cryptocurrency market metrics. Use the "convert" option to return market values in multiple fiat and cryptocurrency conversions in the same call. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: Every 5 minutes. Plan credit use: 1 call credit per call and 1 call credit per convert option beyond the first. CMC equivalent pages: The latest aggregate global market stats ticker across all CMC pages like [coinmarketcap.com](https://coinmarket
| Name | Required | Description | Default |
|---|---|---|---|
| convert | No | Optionally calculate market quotes in up to 120 currencies at once by passing a comma-separated list of cryptocurrency or fiat currency symbols. Each additional convert option beyond the first requires an additional call credit. A list of supported fiat options can be found [here](#section/Standards-and-Conventions). Each conversion is returned in its own "quote" object. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. |
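The table states that `convert` and `convert_id` cannot be combined. A query builder can enforce that constraint before the call is made, saving a wasted request (and credit); the function name is illustrative, not part of the API.

```python
def global_metrics_params(convert=None, convert_id=None):
    """Assemble params, enforcing the documented constraint that
    `convert` and `convert_id` are mutually exclusive."""
    if convert is not None and convert_id is not None:
        raise ValueError("convert and convert_id are mutually exclusive")
    params = {}
    if convert is not None:
        params["convert"] = ",".join(convert)              # symbols
    if convert_id is not None:
        params["convert_id"] = ",".join(str(i) for i in convert_id)  # CMC IDs
    return params

# convert_id=1,2781 is the documented ID-form equivalent of convert=BTC,USD
by_id = global_metrics_params(convert_id=[1, 2781])
```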
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure despite no annotations: explicitly states 5-minute cache/update frequency, detailed credit cost structure (1 credit per call plus additional credits per convert option), and API plan availability requirements. These operational characteristics are critical for an AI agent to make informed invocation decisions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information density is high with useful operational sections (plans, cache, credits), but the description suffers from abrupt truncation ('https://coinmarket'), redundant leading dash notation, and heavy markdown list formatting that fragments readability. Every sentence earns its place, but structural issues detract.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides strong operational context (plan tiers, credit costs, update frequency) and partial behavioral context (CMC equivalent pages), but lacks description of return value structure since no output schema exists. Only vague references to 'market metrics' and 'aggregate stats' without describing what fields or data structure is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents the convert and convert_id parameters. The description mentions the convert option usage but adds minimal semantic value beyond what the schema already provides, essentially matching the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies returning 'latest global cryptocurrency market metrics' and distinguishes this from sibling tools like get_v1_cryptocurrency_quotes_latest by emphasizing 'global' scope and 'aggregate' stats. However, the leading 'Quotes Latest —' prefix is redundant with the tool name, and the abrupt truncation of the CMC URL slightly undermines clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description provides operational constraints (API plan tiers, credit costs) but offers no guidance on when to select this tool versus alternatives like get_v1_globalmetrics_quotes_historical or get_v1_cryptocurrency_quotes_latest. The 'Use the convert option' text describes parameter mechanics, not selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_key_infoAInspect
[DISCOVERY] Key Info — Returns API key details and usage stats. This endpoint can be used to programmatically monitor your key usage compared to the rate limit and daily/monthly credit limits available to your API plan. You may use the Developer Portal's account dashboard as an alternative to this endpoint. This endpoint is available on the following API plans: - Basic - Hobbyist - Startup - Standard - Professional - Enterprise Cache / Update frequency: No cache, this endpoint updates as requests are made with your key. Plan credit use: No API credit cost. Requests to this endpoint do contribu
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden admirably: it discloses 'No cache' with real-time updates, 'No API credit cost,' and plan availability restrictions (Basic through Enterprise). The truncation at the end ('do contribu') slightly limits completeness but substantial behavioral traits are covered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is logically front-loaded (purpose → use case → alternative → plan requirements → technical specs), but the bulleted list of six API plans is verbose. The apparent text truncation at the end ('do contribu') suggests editing or length issues.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description compensates well by detailing what information is returned (usage stats, rate limits, credit limits), behavioral characteristics, and access constraints. Sufficient for a simple metadata endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters (100% coverage of empty set), which per evaluation rules establishes a baseline score of 4. No additional parameter documentation is needed or expected.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Returns API key details and usage stats' with a specific verb and resource. The '[DISCOVERY]' prefix and monitoring context effectively distinguish this from siblings, which are all cryptocurrency market data endpoints (listings, quotes, exchanges, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context ('programmatically monitor your key usage') and explicitly mentions the Developer Portal's account dashboard as an alternative, helping the agent choose between API access vs manual UI checking.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_v1_tools_priceconversionBInspect
Price Conversion — Convert an amount of one cryptocurrency or fiat currency into one or more different currencies utilizing the latest market rate for each currency. You may optionally pass a historical timestamp as time to convert values based on historical rates (as your API plan supports). Technical Notes - Latest market rate conversions are accurate to 1 minute of specificity. Historical conversions are accurate to 1 minute of specificity outside of non-USD fiat conversions which have 5 minute specificity. - You may reference a current list of all supported cryptocurrencies via the c
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | The CoinMarketCap currency ID of the base cryptocurrency or fiat to convert from. Example: "1" | |
| time | No | Optional timestamp (Unix or ISO 8601) to reference historical pricing during conversion. If not passed, the current time will be used. If passed, we'll reference the closest historic values available for this conversion. | |
| amount | Yes | An amount of currency to convert. Example: 10.43 | |
| symbol | No | Alternatively the currency symbol of the base cryptocurrency or fiat to convert from. Example: "BTC". One "id" *or* "symbol" is required. | |
| convert | No | Pass up to 120 comma-separated fiat or cryptocurrency symbols to convert the source amount to. | |
| convert_id | No | Optionally calculate market quotes by CoinMarketCap ID instead of symbol. This option is identical to `convert` outside of ID format. Ex: convert_id=1,2781 would replace convert=BTC,USD in your query. This parameter cannot be used when `convert` is used. |
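The table encodes two constraints worth enforcing client-side: `amount` is required, and one "id" *or* "symbol" must identify the base currency. A minimal validator sketch, assuming the id/symbol requirement is exclusive (the description does not say what happens if both are sent); the helper name is illustrative.

```python
def price_conversion_params(amount, id=None, symbol=None,
                            convert=None, time=None):
    """Validate and assemble price-conversion params.

    Treats `id`/`symbol` as mutually exclusive, which is an
    assumption; the API description only says one is required.
    """
    if (id is None) == (symbol is None):
        raise ValueError('pass exactly one of "id" or "symbol"')
    params = {"amount": amount}
    if id is not None:
        params["id"] = str(id)        # CoinMarketCap currency ID, e.g. "1"
    else:
        params["symbol"] = symbol     # e.g. "BTC"
    if convert is not None:
        params["convert"] = ",".join(convert)
    if time is not None:
        params["time"] = time         # Unix or ISO 8601 timestamp
    return params

params = price_conversion_params(10.43, symbol="BTC", convert=["USD", "EUR"])
```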
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden and succeeds in key areas: specifies rate accuracy ('1 minute of specificity', '5 minute specificity' for non-USD fiat), mentions multi-currency output support ('one or more'), and notes API plan limitations for historical data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded and structured (purpose → historical option → technical notes), but the description is truncated ('via the c'), leaving the reference to sibling cryptocurrency listing tools incomplete. Technical notes are dense run-on sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists and description fails to characterize return values (conversion results format). Additionally, truncation leaves guidance about supported currencies unfinished. Should specify return structure for a conversion tool with 6 parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing baseline 3. Description mentions 'time' parameter in prose but does not add significant semantic detail beyond schema (e.g., format examples, conversion syntax) that would raise score above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Convert') and resource ('amount of one cryptocurrency or fiat currency') with specific scope ('into one or more different currencies'). Distinguishes from quote-fetching siblings by emphasizing amount-based conversion, though could explicitly name quote tools as alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides usage guidance for historical vs current rates ('optionally pass a historical timestamp') and notes API plan constraints. Lacks explicit comparison to sibling quote tools (e.g., when to use conversion vs get_v1_cryptocurrency_quotes_latest).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.