Glama

Server Details

Real-time curated crypto news for AI agents with sentiment, recaps, and search.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: cryptobriefing/gloria-mcp
GitHub Stars: 0
Server Listing
gloria-mcp

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.2/5 across 7 of 7 tools scored. Lowest: 3.4/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes: get_categories lists categories, get_latest_news fetches headlines, get_news_item retrieves specific items, get_news_recap provides summaries, search_news searches by keyword, and get_ticker_summary offers paid ticker summaries. However, get_enriched_news overlaps with get_news_item as both provide detailed news content, which could cause confusion despite the paid/free distinction.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with 'get_' or 'search_' prefixes, using snake_case throughout. This predictability makes it easy for agents to understand and select tools based on their naming conventions.

Tool Count: 5/5

With 7 tools, the server is well-scoped for its crypto news domain. Each tool serves a clear function, from listing categories to retrieving news items and summaries, without being overly sparse or bloated.

Completeness: 4/5

The toolset covers core news operations like listing, retrieving, searching, and summarizing, with both free and paid tiers. A minor gap exists in the lack of update or delete tools, but this is reasonable for a read-only news service, and agents can work around this limitation.

Available Tools

7 tools
get_categories (Grade: A)

List all available news categories with their recap timeframes.

Returns category codes that can be used with get_latest_news, get_news_recap,
and other tools. Each category includes its code, display name, and how
frequently recaps are generated.
Parameters (JSON Schema): No parameters

Output Schema (JSON Schema):
- result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior by specifying what data is returned (category codes, display names, recap frequencies) and how the output can be used with other tools. It does not cover potential limitations like rate limits or authentication needs, but provides solid operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by supporting details about returns and usage. Every sentence earns its place by adding necessary context without redundancy, making it efficiently structured and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, output schema exists), the description is complete. It explains the purpose, output format, and how to use the results with sibling tools, covering all essential aspects without needing to detail return values since an output schema is present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so the baseline is 4. The description appropriately does not discuss parameters, as none exist, and instead focuses on the output's utility, which adds value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('List') and resource ('all available news categories'), including additional details about recap timeframes. It distinguishes from siblings by focusing on category metadata rather than news content retrieval, unlike tools like get_latest_news or search_news.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by explaining that the returned category codes can be used with specific sibling tools (get_latest_news, get_news_recap, and others). However, it does not explicitly state when NOT to use it or name direct alternatives, which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
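For reference, an MCP `tools/call` request for get_categories can be sketched as follows. This is a minimal illustration assuming a plain JSON-RPC 2.0 transport; the `tools_call` helper is hypothetical and not part of this server:

```python
import json

def tools_call(name, arguments=None, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request body for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments or {}},
    })

# get_categories takes no arguments; its result lists the category
# codes that can then be passed to get_latest_news or get_news_recap.
request_body = tools_call("get_categories")
```

The same helper shape would apply to any of the seven tools, with `arguments` filled in per tool.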

get_enriched_news (Grade: A)

Get enriched news with full AI-generated context and analysis (paid via x402).

This premium endpoint returns the complete news data including:
- long_context: Detailed AI-generated context about the news event
- short_context: Brief contextual summary
- Full entity analysis and token mentions

Payment is handled via the x402 protocol using USDC on Base network.
This tool returns the payment endpoint and instructions.
Parameters (JSON Schema): No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a paid service using x402 protocol with USDC on Base network, returns payment endpoint and instructions, and details the content included (long_context, short_context, entity analysis). It doesn't cover aspects like rate limits, error handling, or response format, but provides substantial context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and key features. Each sentence adds value: the first states the purpose and payment, the second lists included data, and the third explains payment handling. There's minimal waste, though it could be slightly more structured by separating payment details from content details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (premium service with payment) and no output schema, the description is moderately complete. It covers the payment mechanism and data included, but lacks details on response format, error cases, or usage limits. With no annotations and no output schema, it should do more to fully inform an agent, such as specifying what the 'payment endpoint and instructions' entail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description adds value by explaining the tool's premium nature and payment mechanism, which isn't captured in the schema. It doesn't need to compensate for any gaps, and the baseline for 0 parameters is 4, as it provides relevant contextual information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get enriched news with full AI-generated context and analysis.' It specifies the verb 'get' and resource 'enriched news' with distinguishing features like AI-generated context and analysis. However, it doesn't explicitly differentiate from sibling tools like 'get_news_item' or 'get_news_recap' in terms of what makes 'enriched' unique beyond payment aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning it's a 'premium endpoint' with payment via x402 protocol, suggesting it should be used when enriched analysis is needed and payment is acceptable. However, it lacks explicit guidance on when to use this tool versus alternatives like 'get_news_item' or 'search_news', and doesn't specify prerequisites or exclusions beyond payment.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_latest_news (Grade: A)

Get the latest curated crypto news headlines.

Returns real-time news items with headline, sentiment, categories, and sources.
Use the category parameter to filter by topic (e.g. 'bitcoin', 'defi', 'ai').
Call get_categories first to see all available category codes.

Args:
    category: Filter by category code (e.g. 'bitcoin', 'ethereum', 'defi', 'ai').
              Omit to get news across all categories.
    limit: Number of items to return (1-10, default 5).
Parameters (JSON Schema):
- limit (optional)
- category (optional)

Output Schema (JSON Schema):
- result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: returns 'real-time news items' (timeliness), includes specific data fields (headline, sentiment, categories, sources), and mentions a default limit. However, it doesn't cover rate limits, authentication needs, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by return details, parameter usage, and sibling reference. Every sentence adds value with zero wasted words, and the Args section is well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but with output schema), the description is complete enough. It covers purpose, usage, parameters, and references siblings. Since an output schema exists, it doesn't need to explain return values in detail, making this well-balanced.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate. It adds significant meaning beyond the schema: explains what 'category' filters (by topic), provides examples ('bitcoin', 'defi', 'ai'), clarifies that omitting it returns all categories, and explains 'limit' range (1-10) and default (5) which aren't in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the latest curated crypto news headlines.' It specifies the verb ('Get'), resource ('latest curated crypto news headlines'), and distinguishes from siblings by focusing on 'latest' and 'curated' rather than searching, enriching, or getting categories/items/recaps.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use alternatives: 'Call get_categories first to see all available category codes.' It also specifies when to omit the category parameter ('Omit to get news across all categories'), offering clear context for usage decisions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
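The documented constraints on get_latest_news (limit in the 1-10 range with a default of 5; category optional, meaning all categories when omitted) can be enforced client-side before a call is made. `latest_news_args` is a hypothetical helper name, not part of the server:

```python
def latest_news_args(category=None, limit=5):
    """Assemble arguments for get_latest_news, enforcing the documented
    limit range (1-10, default 5). Omitting category requests news
    across all categories."""
    if not 1 <= limit <= 10:
        raise ValueError("limit must be between 1 and 10")
    args = {"limit": limit}
    if category is not None:
        args["category"] = category  # a code from get_categories, e.g. 'bitcoin'
    return args
```

Validating before the call avoids a round trip to the server for arguments that the description already rules out.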

get_news_item (Grade: A)

Get a specific news item by its ID.

Returns the full free-tier details for a single news item including
headline, sentiment, categories, sources, and tweet URL.

Args:
    id: The news item ID (returned in results from get_latest_news or search_news).
Parameters (JSON Schema):
- id (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that it returns 'full free-tier details' including specific fields like headline and sentiment, which adds useful context beyond just reading data. However, it lacks details on error handling, rate limits, or authentication needs, leaving behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by return details and parameter explanation in a structured 'Args' section. Every sentence adds value without redundancy, making it efficient and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is largely complete: it states purpose, usage, return content, and parameter semantics. However, it could improve by mentioning output format or error cases, but the absence of an output schema makes this less critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate fully. It does so by explaining the 'id' parameter's purpose ('The news item ID') and its source ('returned in results from get_latest_news or search_news'), adding essential meaning not present in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a specific news item') and resource ('by its ID'), distinguishing it from siblings like get_latest_news (list) or search_news (query-based). It explicitly mentions the verb and target resource without redundancy.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: to retrieve a single news item by ID, as indicated by 'Get a specific news item by its ID.' It also hints at alternatives by noting the ID comes from get_latest_news or search_news, but does not explicitly state when not to use it or compare to all siblings like get_enriched_news.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_news_recap (Grade: A)

Get an AI-generated news recap/summary for a specific category.

Returns a concise narrative summarizing the most important recent news
for the given category. Great for getting up to speed quickly.

Args:
    category: Category code (required). Use get_categories to see options.
              Popular choices: 'crypto', 'bitcoin', 'ethereum', 'defi', 'ai', 'macro'.
    timeframe: Time window for the recap. Use '1h' for crypto/macro (updated hourly),
               '8h' or '24h' for other categories. Default '12h'.
Parameters (JSON Schema):
- category (required)
- timeframe (optional, default: 12h)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns a 'concise narrative summarizing the most important recent news' and mentions timeframes for updates (e.g., 'updated hourly' for crypto/macro). However, it lacks details on behavioral traits such as rate limits, authentication needs, error handling, or whether the recap is cached or real-time, which are important for a tool generating AI summaries.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by details on the return value and usage, then parameter explanations. Every sentence adds value, with no redundant information, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does a good job covering the basics: purpose, parameters, and usage. However, it lacks details on the output format (e.g., structure of the 'concise narrative'), error cases, or limitations (e.g., category availability), which would enhance completeness for an AI tool generating summaries.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate fully. It adds significant meaning beyond the schema: it explains that 'category' is a code with examples ('crypto', 'bitcoin', etc.) and references 'get_categories' for options, and specifies that 'timeframe' is a time window with default '12h' and usage guidelines (e.g., '1h' for crypto/macro, '8h' or '24h' for others). This provides essential context not in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get an AI-generated news recap/summary for a specific category.' It specifies the verb ('Get'), resource ('news recap/summary'), and scope ('AI-generated', 'specific category'). However, it doesn't explicitly differentiate from sibling tools like 'get_latest_news' or 'get_enriched_news' beyond mentioning it's a 'recap/summary'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'Great for getting up to speed quickly' and includes usage guidance for parameters (e.g., 'Use get_categories to see options' for category, timeframes for different categories). It implies alternatives by referencing sibling tools ('get_categories'), but doesn't explicitly state when to choose this over tools like 'get_latest_news' or 'search_news'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
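The timeframe guidance in the get_news_recap description ('1h' for the hourly-updated crypto/macro categories, the '12h' server default otherwise) could be encoded as a small client-side rule. `recap_args` and `HOURLY_CATEGORIES` are hypothetical names, and the category list is an assumption drawn only from the two examples the description names:

```python
HOURLY_CATEGORIES = ("crypto", "macro")  # described as updated hourly

def recap_args(category, timeframe=None):
    """Assemble arguments for get_news_recap: default to '1h' for
    hourly-updated categories and the server default '12h' otherwise,
    unless the caller passes an explicit timeframe."""
    if timeframe is None:
        timeframe = "1h" if category in HOURLY_CATEGORIES else "12h"
    return {"category": category, "timeframe": timeframe}
```

An explicit timeframe such as '8h' or '24h' always wins over the heuristic.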

get_ticker_summary (Grade: A)

Get a 24-hour AI-generated summary for any crypto ticker or topic (paid via x402).

Returns decision-grade bullet points combining Gloria's curated news with real-time web search. Designed for fund managers and trading agents.

Payment is handled via the x402 protocol using USDC on Base network. This tool returns the payment endpoint and instructions.

Parameters (JSON Schema): No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: the payment mechanism (x402 protocol with USDC on Base), that it returns payment endpoint/instructions, and the tool's output format ('decision-grade bullet points combining Gloria's curated news with real-time web search'). It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three focused paragraphs: first states the core functionality, second describes the output quality and target audience, third explains the payment mechanism. Every sentence adds value with zero redundant information, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool with no annotations and no output schema, the description provides substantial context about functionality, payment requirements, and output format. However, it doesn't describe what happens when payment fails, whether there are usage limits, or provide examples of the returned payment instructions, leaving some operational gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the baseline would be 4 even with no parameter information. The description appropriately doesn't discuss parameters since none exist, focusing instead on the tool's functionality and payment requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a 24-hour AI-generated summary for any crypto ticker or topic' with specific verb ('Get'), resource ('summary'), and scope ('24-hour', 'crypto ticker or topic'). It distinguishes from sibling tools like get_latest_news or search_news by focusing on AI-generated summaries rather than raw news retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Designed for fund managers and trading agents') and mentions the payment requirement ('paid via x402'), which serves as a prerequisite. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools for different use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_news (Grade: B)

Search curated crypto news by keyword.

Searches across all news items for matching content. Returns headlines,
sentiment, categories, and sources.

Args:
    query: Search keyword or phrase (e.g. 'ETF', 'SEC', 'Uniswap').
    limit: Number of results to return (1-5, default 5).
Parameters (JSON Schema):
- limit (optional)
- query (required)

Output Schema (JSON Schema):
- result (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions what the tool returns (headlines, sentiment, categories, sources) but doesn't cover important aspects like rate limits, authentication requirements, error conditions, or whether results are paginated. The description is insufficient for a tool with no annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. The Args section is well-structured but could be more integrated with the main description. There's minimal wasted text, though the second sentence slightly repeats the first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which handles return values), no annotations, and simple parameters, the description is moderately complete. It covers the basic purpose and parameters but lacks behavioral context and sibling differentiation that would be helpful for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description provides meaningful semantic context for both parameters. It explains 'query' as a search keyword/phrase with examples ('ETF', 'SEC', 'Uniswap'), and 'limit' as the number of results with range (1-5) and default value (5). This compensates well for the schema's lack of descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Search') and resource ('curated crypto news'), and distinguishes it from siblings by specifying it searches across all news items for matching content. This is more specific than generic search tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the six sibling tools listed (get_categories, get_enriched_news, get_latest_news, get_news_item, get_news_recap, get_ticker_summary). It doesn't mention alternatives, prerequisites, or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
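The search_news constraints (non-empty query; limit in the documented 1-5 range, default 5) can likewise be checked before calling. `search_args` is a hypothetical client-side helper, not part of the server:

```python
def search_args(query, limit=5):
    """Assemble arguments for search_news: require a non-empty keyword
    or phrase and clamp limit to the documented 1-5 range (default 5)."""
    if not query or not query.strip():
        raise ValueError("query must be a non-empty keyword or phrase")
    return {"query": query, "limit": max(1, min(limit, 5))}
```

Clamping (rather than raising) on limit is a design choice; raising, as in the get_latest_news sketch, would surface caller bugs more loudly.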

