truss44-mcp-crypto-price
Server Details
Provides real-time cryptocurrency price data and market analysis.
- Status: Healthy
- Transport: Streamable HTTP
- Repository: truss44/mcp-crypto-price
- GitHub Stars: 39
- Server Listing: Crypto Price & Market Analysis MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
7 tools

get-crypto-price (Get Crypto Price) · Read-only · Idempotent
Get real-time price, 24-hour change percentage, trading volume, and market cap for any cryptocurrency by symbol or name.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Cryptocurrency symbol or name (e.g. BTC or Bitcoin) | |
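Assuming the server is reached over the standard MCP JSON-RPC transport, a call to this tool can be sketched as follows. `build_tool_call` is an illustrative client-side helper, not part of the server:

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body for an MCP tools/call invocation."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Look up Bitcoin; per the schema, "BTC" and "Bitcoin" should both work.
payload = build_tool_call("get-crypto-price", {"symbol": "BTC"})
```

The same helper applies to every tool on this page; only the tool name and arguments change.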
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly, idempotent, and openWorld hints. The description adds valuable context not in annotations: it specifies the data returned (price, 24h change, volume, market cap), clarifies the temporal nature ('real-time'), and notes flexible lookup (symbol or name). It does not mention potential rate limits or external API dependencies implied by openWorldHint, preventing a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence (19 words) that front-loads the action ('Get real-time...'). Every clause earns its place: specifies data types (price, change, volume, cap), lookup method (symbol/name), and temporal scope. No redundancy with title or schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with rich annotations, the description is complete. Critically, it compensates for the missing output schema by enumerating the four return data points (price, 24h change, volume, market cap). Could mention pagination or fuzzy matching behavior for name lookups to reach 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the symbol parameter fully documented ('Cryptocurrency symbol or name'). The description mirrors this with 'by symbol or name' but adds no additional semantic value regarding format validation, case sensitivity, or disambiguation rules. Baseline 3 is appropriate when schema carries full documentation burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with clear resource (real-time cryptocurrency data including price, 24h change, volume, and market cap). It distinguishes from siblings by emphasizing 'real-time' and 'by symbol or name' (specific asset lookup), implicitly contrasting with get-historical-analysis (past data), get-market-analysis (broad trends), and get-top-assets (aggregated rankings).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through 'real-time' and specific data points, suggesting when current asset data is needed, but lacks explicit guidance on when NOT to use this (e.g., 'use get-historical-analysis for past trends instead') or explicit alternative recommendations. The agent must infer differentiation from siblings based on data type descriptors.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-exchanges (Get Exchanges) · Read-only · Idempotent
Get top cryptocurrency exchanges ranked by 24h volume. Optionally pass an exchangeId (e.g. 'binance', 'coinbase') to get details for a specific exchange including volume, trading pairs, and market share.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of top exchanges to return when listing (1-50) | 10 |
| exchangeId | No | Optional: specific exchange ID to look up (e.g. 'binance', 'coinbase', 'kraken') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds useful context: it specifies the ranking criterion (24h volume) and the type of details returned (volume, trading pairs, market share) for specific exchanges, which goes beyond annotations. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded: the first sentence states the primary purpose, and the second explains the optional parameter. Every sentence earns its place by providing essential information without redundancy. It is appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and 100% schema coverage, the description is mostly complete. It explains what the tool does and the optional parameter. However, without an output schema, it could benefit from more detail on return values (e.g., format of exchange details). The annotations help fill gaps, but some behavioral context is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (limit and exchangeId). The description adds minimal semantic value: it mentions 'exchangeId' with examples but does not provide additional meaning beyond what the schema already covers. Baseline score of 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get top cryptocurrency exchanges ranked by 24h volume' (specific verb+resource) and distinguishes it from siblings by focusing on exchange data rather than prices, historical analysis, or assets. The optional parameter for specific exchange details further clarifies its dual functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: get top exchanges by volume or details for a specific exchange. However, it does not explicitly state when to use this tool versus alternatives like 'get-market-analysis' or 'get-top-assets', which might overlap in providing market-related data. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-historical-analysis (Get Historical Analysis) · Read-only · Idempotent
Get historical price data for a cryptocurrency with trend analysis including period high/low, price change percentage, and volatility metrics over a customizable timeframe.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Number of days of historical data to retrieve (1-30) | |
| symbol | Yes | Cryptocurrency symbol or name (e.g. BTC or Bitcoin) | |
| interval | No | Data interval: m1=1min, m5=5min, m15=15min, m30=30min, h1=1hr, h2=2hr, h6=6hr, h12=12hr, d1=daily | h1 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses specific calculated metrics returned (high/low, price change percentage, volatility) beyond what annotations provide. Annotations already cover safety (readOnly, idempotent), so the description's value is in explaining the analysis type. Missing: data freshness, rate limits, or error behavior for invalid symbols.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence front-loaded with action verb and subject. Efficiently lists four specific metric types without waste. Every clause serves to define scope or outputs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description effectively compensates by listing key metrics returned (volatility, price change). Annotations cover safety hints. Minor gap: no mention of behavior when requesting data for non-existent symbols or date ranges exceeding available history.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline of 3. Description mentions 'customizable timeframe' which loosely references the days and interval parameters, adding minimal semantic context beyond the schema definitions. No additional syntax guidance provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies the resource (historical price data) and specific outputs (period high/low, price change percentage, volatility metrics). Distinguishes this as a trend analysis tool versus siblings like get-crypto-price (likely current) and get-top-assets (lists), though it does not explicitly name siblings to contrast.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through the phrase 'historical price data with trend analysis,' suggesting use for past performance rather than current prices. However, lacks explicit guidance on when to prefer this over get-market-analysis or how to select appropriate timeframes (days/interval combinations).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-market-analysis (Get Market Analysis) · Read-only · Idempotent
Get detailed market analysis for a cryptocurrency including the top 5 exchanges by volume, price per exchange, and volume distribution percentages.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Cryptocurrency symbol or name (e.g. BTC or Bitcoin) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With annotations covering safety (readOnlyHint, destructiveHint) and idempotency, description adds valuable behavioral context by specifying the 'top 5' limitation and exact data components returned (price per exchange, volume distribution percentages). This helps the agent understand the scope and granularity of the response.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 19 words. Every clause earns its place: action (Get), subject (market analysis), scope (cryptocurrency), and specifics (top 5 exchanges, price, volume distribution). No redundancy. Front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (1 param, no nested objects) and absence of output schema, description appropriately compensates by detailing exactly what the analysis includes (top 5 exchanges, price per exchange, volume distribution). Combined with rich annotations, this is complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the 'symbol' parameter fully documented in the schema (type, minLength, description with examples). Description does not mention parameters, but with complete schema coverage, baseline 3 is appropriate — no additional semantic explanation needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' paired with clear resource 'market analysis for a cryptocurrency'. Uniquely distinguishes from siblings by specifying exact outputs: 'top 5 exchanges by volume, price per exchange, and volume distribution percentages' — clearly different from simple price lookups (get-crypto-price) or temporal analysis (get-historical-analysis).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage through specificity of returned data (exchange-level breakdowns), but lacks explicit 'when to use' guidance or named alternatives. Does not specify prerequisites or when to prefer get-crypto-price for simple queries versus this tool for exchange analysis.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-rates (Get Currency Rates) · Read-only · Idempotent
Get USD-based conversion rates for fiat currencies and cryptocurrencies. Optionally pass a slug (e.g. 'euro', 'us-dollar', 'bitcoin') to look up a single rate.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | No | Optional: rate slug to fetch a single rate (e.g. 'us-dollar', 'euro', 'bitcoin') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (read-only, open-world, idempotent, non-destructive). The description adds minimal context about optional slug usage but does not disclose rate limits, auth needs, or response format details, providing only marginal value beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences that are front-loaded and zero waste—first sentence states the core purpose, second explains optional parameter usage efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity, rich annotations, and no output schema, the description is mostly complete but could better explain return values or error handling. It adequately covers the tool's scope and usage without being exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the optional 'slug' parameter. The description adds examples ('euro', 'us-dollar', 'bitcoin') but no additional syntax or format details beyond what the schema provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get USD-based conversion rates') and resources ('fiat currencies and cryptocurrencies'), and distinguishes it from siblings by focusing on conversion rates rather than prices, exchanges, or analysis tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage ('Optionally pass a slug... to look up a single rate'), but does not explicitly state when to use this tool versus alternatives like 'get-crypto-price' or 'get-top-assets', missing explicit sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-technical-analysis (Get Technical Analysis) · Read-only · Idempotent
Get the latest technical indicators for a cryptocurrency including SMA, EMA, RSI, MACD, and VWAP to assess momentum, trend direction, and overbought/oversold conditions.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Cryptocurrency symbol or name (e.g. BTC or Bitcoin) | |
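The description's "overbought/oversold" framing maps to the conventional RSI thresholds of 70 and 30. A downstream agent might interpret the returned RSI like this; the thresholds are the common convention, not logic from this server:

```python
def classify_rsi(rsi: float) -> str:
    """Classify an RSI reading using the conventional 70/30 thresholds."""
    if not 0 <= rsi <= 100:
        raise ValueError("RSI is bounded to [0, 100]")
    if rsi >= 70:
        return "overbought"
    if rsi <= 30:
        return "oversold"
    return "neutral"
```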
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds value by specifying the types of indicators returned (SMA, EMA, etc.) and their analytical purposes, which are not covered by annotations. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads key information (getting technical indicators) and lists specific indicators and purposes without unnecessary words. Every part of the sentence contributes to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (technical analysis with multiple indicators), annotations provide good coverage (read-only, non-destructive, etc.), and schema covers the single parameter fully. However, without an output schema, the description could benefit from more detail on return values (e.g., format of indicators). It's mostly complete but has a minor gap in output specification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'symbol' well-documented in the schema. The description does not add further details about the parameter beyond implying it's for cryptocurrencies. Baseline score of 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get the latest technical indicators') and resources ('for a cryptocurrency'), listing specific indicators (SMA, EMA, RSI, MACD, VWAP) and analytical goals (assess momentum, trend direction, overbought/oversold conditions). It distinguishes from siblings like get-crypto-price (price only) and get-historical-analysis (historical data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for technical analysis of cryptocurrencies, but does not explicitly state when to use this tool versus alternatives like get-market-analysis or get-historical-analysis. It provides context (assessing momentum, trend, etc.) but lacks explicit guidance on exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-top-assets (Get Top Assets) · Read-only · Idempotent
Get the top cryptocurrencies ranked by market cap, with real-time price, 24-hour change percentage, and market cap for each asset.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of top assets to return, ranked by market cap (1-50) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable context beyond annotations: specifies 'real-time' data freshness and enumerates return fields (price, 24-hour change percentage, market cap). Since annotations already cover safety profile (readOnly/idempotent), the description focuses on data semantics, though it could mention rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, zero waste. Front-loaded with action and resource, followed immediately by return data specification. Every clause serves a distinct purpose (ranking criteria, data freshness, return fields).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one optional parameter and no output schema, the description is complete. It compensates for missing output schema by listing the three data fields returned. Could mention the default limit of 10, though schema covers this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (limit parameter fully documented with range 1-50). The description entirely omits parameter discussion, but with complete schema self-documentation, baseline 3 is appropriate—the agent doesn't need redundant description text.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides specific verb ('Get') + resource ('top cryptocurrencies') + ranking criteria ('ranked by market cap'), and explicitly distinguishes from siblings by indicating this returns a ranked list view versus get-crypto-price (likely single asset) or the analysis tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the scope is clear (top assets by market cap), there are no explicit when-to-use guidelines or contrasts with sibling tools like get-crypto-price. The agent must infer that this is for ranked lists versus individual lookups.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
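Before publishing, a quick local sanity check of the manifest against the structure shown above can catch typos. This sketch checks only the fields documented here; Glama's actual verifier may enforce more:

```python
import json
import re

def check_glama_manifest(text: str) -> list[str]:
    """Lightweight sanity check for a /.well-known/glama.json manifest.
    Returns a list of problems; an empty list means the documented
    fields look plausible."""
    problems = []
    doc = json.loads(text)
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("unexpected or missing $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty array")
    else:
        for m in maintainers:
            email = m.get("email", "") if isinstance(m, dict) else ""
            if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
                problems.append(f"invalid maintainer email: {email!r}")
    return problems
```

Remember that the email must also match your Glama account, which no local check can verify.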
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.