
AgentHC Market Intelligence

Server Details

Market intelligence for AI agents. Real-time data, cross-market analysis, and regime detection.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
get_crypto_data

Crypto market data: prices, dominance, sentiment, and cycle analysis.

Parameters (JSON Schema)
- format (optional; default: full): Response format. 'agent' returns actionable signals, urgency, delta tracking, and suggested follow-ups.
- include_prompt (optional): no description provided.

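The parameter table above maps directly onto an MCP `tools/call` request. Below is a minimal sketch of the JSON-RPC payload a client would send; the tool name and the 'agent' format value come from this page, while the request `id` is an arbitrary placeholder and the server endpoint is not shown here.

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call request.
# Tool name and the 'agent' format value are taken from this page;
# the id is an arbitrary placeholder chosen by the client.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_crypto_data",
        "arguments": {
            # 'agent' returns actionable signals, urgency, delta tracking,
            # and suggested follow-ups; omit for the default 'full'.
            "format": "agent",
        },
    },
}

payload = json.dumps(request)
print(payload)
```

The same envelope works for all four tools on this page; only `params.name` and `params.arguments` change.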
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the types of data returned but doesn't cover critical aspects like whether this is a read-only operation, potential rate limits, authentication needs, or how the data is sourced. This leaves significant gaps in understanding the tool's behavior and constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that lists key data types without unnecessary detail. It is front-loaded with the core purpose and avoids redundancy, though it could be slightly more structured to highlight usage or parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of providing crypto market data with multiple data types, no annotations, no output schema, and incomplete parameter documentation, the description is inadequate. It doesn't explain return values, error handling, or behavioral traits, leaving the agent with insufficient context to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter-specific information beyond what the input schema provides. With 50% schema description coverage (only the 'format' parameter is described), the 'include_prompt' parameter is undocumented in both the schema and description. The baseline is 3 because the schema covers half the parameters, but the description doesn't compensate for the gap, offering no additional semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool provides 'crypto market data' with specific data types (prices, dominance, sentiment, cycle analysis), which clarifies its purpose beyond just the name. However, it doesn't distinguish this from sibling tools like 'get_market_snapshot' or 'get_news_sentiment', which might overlap in providing market-related data, leaving some ambiguity about its unique role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives like the sibling tools. It doesn't specify contexts, prerequisites, or exclusions, leaving the agent to infer usage based on the data types mentioned, which is insufficient for clear decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_economic_calendar

Economic event calendar with upcoming releases and data analysis.

Parameters (JSON Schema)
- format (optional; default: full): Response format. 'agent' returns actionable signals, urgency, delta tracking, and suggested follow-ups.
- include_prompt (optional): no description provided.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'data analysis' but fails to detail critical traits such as data sources, update frequency, rate limits, authentication needs, or error handling, leaving significant gaps in transparency.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. It avoids redundancy but could be more structured by explicitly separating functionality from usage context.


Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of economic data, lack of annotations, no output schema, and incomplete parameter coverage, the description is inadequate. It omits essential details like response format examples, data granularity, or error cases, failing to provide a complete context for effective tool use.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%, with one parameter ('format') well-documented and the other ('include_prompt') lacking a description. The tool description adds no parameter-specific information beyond what the schema provides, so it meets the baseline score without compensating for the coverage gap.


Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool provides an 'economic event calendar with upcoming releases and data analysis,' which clarifies it retrieves economic calendar data. However, it doesn't specify the exact scope (e.g., global, specific regions) or differentiate from sibling tools like 'get_market_snapshot' or 'get_news_sentiment,' leaving the purpose somewhat vague.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description lacks context on prerequisites, timing (e.g., real-time vs. historical), or comparisons to sibling tools, offering no explicit or implied usage instructions.


get_market_snapshot

Real-time market snapshot: indices, volatility, yields, commodities, crypto, sentiment, and market regime.

Parameters (JSON Schema)
- format (optional; default: full): Response format. 'agent' returns actionable signals, urgency, delta tracking, and suggested follow-ups.
- include_prompt (optional): Include formatted text for LLM prompt injection.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'real-time' but doesn't clarify latency, rate limits, authentication needs, or data sources. The description lacks details on what 'snapshot' entails—e.g., whether it's a one-time fetch or supports streaming—leaving significant gaps in understanding tool behavior.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that lists key data categories upfront, making it easy to scan. It avoids redundancy and stays focused on the tool's purpose, though it could be slightly more structured by separating core functionality from data details.


Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of market data and lack of annotations or output schema, the description is incomplete. It doesn't address output format, error handling, or real-time constraints, leaving the agent with insufficient context to use the tool effectively. The description should provide more behavioral and operational details to compensate for missing structured data.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter-specific information beyond what's in the input schema, which has 100% coverage. It doesn't explain how 'format' options affect the output or when to use 'include_prompt'. Since the schema fully documents parameters, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides a 'real-time market snapshot' and lists specific data categories (indices, volatility, yields, commodities, crypto, sentiment, market regime), which gives a concrete sense of scope. However, it doesn't explicitly differentiate from sibling tools like get_crypto_data or get_news_sentiment, leaving some ambiguity about overlap.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools, specify use cases, or indicate prerequisites. The agent must infer usage from the data categories listed, which is insufficient for clear decision-making.


get_news_sentiment

Market news sentiment analysis with breaking news detection.

Parameters (JSON Schema)
- format (optional; default: full): Response format. 'agent' returns actionable signals, urgency, delta tracking, and suggested follow-ups.
- include_prompt (optional): no description provided.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'sentiment analysis' and 'breaking news detection,' implying read-only operations, but fails to detail critical aspects such as data sources, update frequency, rate limits, authentication needs, or error handling. This leaves significant gaps in understanding how the tool behaves in practice, making it inadequate for informed use.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is front-loaded with the core purpose, making it easy to parse. However, it could be slightly more informative without sacrificing brevity, such as hinting at output structure or use cases.


Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involving sentiment analysis and news detection), lack of annotations, no output schema, and incomplete parameter documentation, the description is insufficient. It does not explain what the tool returns (e.g., sentiment scores, news headlines), how results are structured, or any limitations. This leaves the agent with inadequate context to effectively invoke or interpret outputs.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (one of two parameters has a description). The description does not add any parameter-specific information beyond what the schema provides—it does not explain the 'format' options (e.g., differences between 'full' and 'compact') or the purpose of 'include_prompt'. Since the schema covers half the parameters, the baseline score of 3 is applied, as the description does not compensate for the coverage gap.


Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool performs 'Market news sentiment analysis with breaking news detection,' which provides a general purpose (analyzing sentiment and detecting breaking news). However, it lacks specificity about what resources it analyzes (e.g., news sources, timeframes) and does not clearly distinguish it from sibling tools like get_market_snapshot, which might also involve market data analysis. It avoids tautology but remains vague in scope.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives. It does not mention sibling tools like get_crypto_data or get_economic_calendar, nor does it specify contexts where sentiment analysis is preferred over raw data or calendar events. There is no indication of prerequisites, exclusions, or comparative use cases, leaving the agent without direction.

