Gate News MCP
Server Details
Gate News MCP for crypto news, structured events, announcements, and social sentiment.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: gate/gate-mcp
- GitHub Stars: 27
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 8 of 8 tools scored.
The tools are mostly distinct with clear boundaries; each targets a specific news source or data type (e.g., events, social sentiment, web search). Some overlap exists between search_news, search_ugc, and search_x, since all three search content and differ mainly by platform, which could cause minor confusion if the descriptions are not read carefully.
Tool names follow a consistent snake_case pattern with a clear structure: prefix (e.g., news_events, news_feed) followed by a verb_noun combination (e.g., get_event_detail, search_news). This predictability makes it easy for agents to understand and navigate the toolset.
With 8 tools, the count is well-scoped for a news-focused server, covering various aspects like events, feeds, and searches. Each tool serves a specific purpose without redundancy, making the set efficient and manageable for typical news-related tasks.
The toolset provides comprehensive coverage for news retrieval across events, social media, and web sources, with clear cross-references to avoid dead ends. A minor gap is the lack of tools for creating or updating news content, but as a read-only server focused on information gathering, it effectively supports agent workflows.
Available Tools
8 tools

news_events_get_event_detail (Destructive)
[Read] Full detail for one event_id only. Filtered event list or timeline -> get_latest_events. Unknown id returns not found.
| Name | Required | Description | Default |
|---|---|---|---|
| event_id | Yes | Required event_id from get_latest_events items (spec §7.10)—opaque digest id, not a trading pair slug or headline hash. |
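The chaining the description implies (news_events_get_latest_events supplies the event_id that this tool consumes) can be sketched as a small client-side helper. This is a hypothetical helper, not part of the server; the `items` and `event_id` key names are taken from the get_latest_events output schema below.

```python
def pick_event_detail_args(latest_events_result: dict) -> dict:
    """Build the argument dict for news_events_get_event_detail from a
    news_events_get_latest_events result (first item's event_id)."""
    items = latest_events_result.get("items", [])
    if not items:
        raise ValueError("no events returned; nothing to look up")
    # event_id is an opaque digest id, not a trading-pair slug or headline hash.
    return {"event_id": items[0]["event_id"]}
```

Passing an unknown id is safe in the sense documented above: the tool returns "not found" rather than failing destructively.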
Output Schema
| Name | Required | Description |
|---|---|---|
| title | No | |
| content | No | |
| summary | No | |
| symbols | No | |
| event_id | Yes | |
| strength | No | |
| direction | No | |
| duration_ms | Yes | |
| occurred_at | No | |
| news_coverage | No | |
| impact_analysis | No | |
| community_reaction | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies that only one event_id is processed, explains the error behavior for unknown IDs, and references the source of valid IDs. While annotations cover safety aspects (destructiveHint: true, etc.), the description provides operational details that help the agent understand how to use the tool correctly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with three information-dense sentences that each serve distinct purposes: stating the core function, providing alternative guidance, and describing error behavior. No wasted words, and the most critical information appears first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's single parameter, comprehensive annotations, and the presence of an output schema, the description provides exactly what's needed: clear purpose, usage guidelines, and behavioral context. The output schema will handle return values, so the description appropriately focuses on when and how to use the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents the single required parameter. The description reinforces that it's 'for one event_id only' and references 'get_latest_events items' as the source, but doesn't add significant semantic value beyond what's in the schema. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Full detail for one event_id only') and resource ('event'), distinguishing it from sibling tools like 'get_latest_events' for filtered lists or timelines. It uses precise language that goes beyond just restating the name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool ('Full detail for one event_id only') and when to use alternatives ('Filtered event list or timeline -> get_latest_events'). It also provides exclusion guidance ('Unknown id returns not found'), giving complete context for proper tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
news_events_get_latest_events (Destructive)
[Read] Filtered event list or timeline; each row includes event_id. One event_id detail -> get_event_detail. Headline/news feed -> search_news.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | Optional comma-separated tickers e.g. BTC,ETH (spec §7.10: coin) for digest items; not the same as search_news tickers-only heat mode. | |
| limit | No | Page size (spec §7.10: limit); default 20, max 100. | |
| cursor | No | Optional pagination cursor. | |
| end_time | No | Absolute end (ISO8601 or Unix sec/ms). Mutually exclusive with time_range. | |
| event_type | No | Optional filter on structured digest event_type (spec §7.10). Not search_news headline similarity/heat. | |
| start_time | No | Absolute start (ISO8601 or Unix sec/ms). Mutually exclusive with time_range; pair with end_time or let server fill the other bound. | |
| time_range | No | Relative window for event digest list (spec §7.10: time_range): 1h / 24h / 7d. Mutually exclusive with start_time/end_time; omit all for last 24h or server default when time filter disabled. Not CPI/Fed macro series—use macro indicator tools. |
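The time-filter constraints in the table (time_range mutually exclusive with start_time/end_time; limit default 20, max 100) can be enforced client-side before issuing the call. A minimal sketch, assuming a hypothetical argument-builder helper rather than any function the server provides:

```python
def build_latest_events_args(coin=None, limit=20, time_range=None,
                             start_time=None, end_time=None,
                             event_type=None, cursor=None):
    """Build an argument dict for news_events_get_latest_events,
    enforcing the documented mutual exclusivity and limit cap."""
    if time_range is not None and (start_time is not None or end_time is not None):
        raise ValueError("time_range is mutually exclusive with start_time/end_time")
    if limit > 100:
        raise ValueError("limit max is 100")
    args = {"limit": limit}
    if coin:
        args["coin"] = coin              # comma-separated tickers, e.g. "BTC,ETH"
    if time_range:
        args["time_range"] = time_range  # one of "1h" / "24h" / "7d"
    if start_time:
        args["start_time"] = start_time  # ISO8601 or Unix sec/ms
    if end_time:
        args["end_time"] = end_time
    if event_type:
        args["event_type"] = event_type
    if cursor:
        args["cursor"] = cursor
    return args

# Relative window over two coins:
build_latest_events_args(coin="BTC,ETH", time_range="24h")
```

Omitting every time field falls back to the last 24h or the server default, per the table above.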
Output Schema
| Name | Required | Description |
|---|---|---|
| coin | No | |
| count | Yes | |
| items | Yes | |
| limit | Yes | |
| total | Yes | |
| end_time | No | |
| event_type | No | |
| start_time | No | |
| time_range | No | |
| duration_ms | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds some behavioral context beyond annotations: it mentions that results include 'event_id' and hints at a 'filtered' list. However, annotations already cover key traits (readOnlyHint=false, destructiveHint=true, etc.), and the description doesn't fully disclose implications like what 'destructiveHint: true' means in practice (e.g., data modification risks) or rate limits. With annotations present, the bar is lower, but the description adds only moderate value, warranting a 3.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: it uses only two sentences with zero wasted words. The first sentence states the purpose and key output detail, while the second provides clear usage guidelines. Every sentence earns its place, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, annotations, and an output schema), the description is reasonably complete. It clarifies the tool's role vs. siblings and output structure. With an output schema present, it doesn't need to explain return values. However, it could better address behavioral aspects implied by annotations (e.g., destructive nature) to be fully comprehensive, so it scores a 4.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not add any parameter-specific information beyond what the input schema provides. Since schema description coverage is 100%, the baseline score is 3. The description mentions 'filtered' but doesn't elaborate on how parameters like 'coin' or 'event_type' affect filtering, so it doesn't enhance parameter semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Filtered event list or timeline; each row includes event_id.' It specifies the verb ('Filtered') and resource ('event list or timeline'), and distinguishes from sibling tools by mentioning 'get_event_detail' for details and 'search_news' for headlines. However, it doesn't explicitly differentiate from all siblings like 'news_feed_get_exchange_announcements' or 'news_feed_search_ugc', keeping it at a 4 rather than a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool vs. alternatives: it directs to 'get_event_detail' for detailed event information and 'search_news' for headline/news feed purposes. This helps the agent choose appropriately. However, it doesn't explicitly state when not to use this tool (e.g., vs. other siblings like 'news_feed_get_social_sentiment') or include prerequisites, so it scores a 4 instead of a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
news_feed_get_exchange_announcements (Destructive)
[Read] Venue-published exchange notices: listings, delistings, maintenance. Media rumors or general crypto headlines -> search_news.
| Name | Required | Description | Default |
|---|---|---|---|
| to | No | Window end Unix sec inclusive; forwarded downstream and filtered locally. | |
| coin | No | Comma-separated tickers (spec §7.6: coin); omit if empty. | |
| from | No | Window start Unix sec; omit if <= 0; MCP also filters locally after fetch. | |
| limit | No | Max rows; omit if unset or <= 0; cap 100 when set. | |
| query | No | Optional text filter on official venue notices; omit if empty. For general crypto media headlines use search_news—not a substitute for venue-published listing API. | |
| exchange | No | Venue id for API platform (spec §7.6: exchange); used when platform is empty; not merged with query. | |
| platform | No | URL param platform; wins over exchange; if empty falls back to exchange. | |
| announcement_type | No | listing / delisting / maintenance / all; omit if empty. |
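The omission and precedence rules above (platform wins over exchange; from omitted when <= 0; limit capped at 100 when set) can be mirrored in a client-side builder. This is an illustrative sketch: the helper name is hypothetical, and it simplifies by collapsing platform/exchange into the single resolved `platform` value sent upstream.

```python
def build_announcement_args(platform="", exchange="", coin="", query="",
                            announcement_type="", from_ts=0, to_ts=None,
                            limit=0):
    """Argument builder for news_feed_get_exchange_announcements,
    mirroring the documented rules."""
    args = {}
    venue = platform or exchange       # platform takes precedence over exchange
    if venue:
        args["platform"] = venue
    if coin:
        args["coin"] = coin            # comma-separated tickers; omit if empty
    if query:
        args["query"] = query          # text filter on official venue notices
    if announcement_type:
        args["announcement_type"] = announcement_type  # listing/delisting/maintenance/all
    if from_ts > 0:                    # "omit if <= 0"
        args["from"] = from_ts
    if to_ts is not None:
        args["to"] = to_ts             # window end, Unix sec inclusive
    if limit > 0:
        args["limit"] = min(limit, 100)  # "cap 100 when set"
    return args
```

For example, listings on one venue: `build_announcement_args(exchange="gate", announcement_type="listing")` sends only the resolved venue and the type filter.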
Output Schema
| Name | Required | Description |
|---|---|---|
| to | No | |
| coin | No | |
| from | No | |
| count | Yes | |
| items | Yes | |
| limit | No | |
| query | No | |
| total | Yes | |
| exchange | No | |
| platform | No | |
| duration_ms | Yes | |
| announcement_type | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds some behavioral context beyond annotations: it clarifies the scope ('venue-published exchange notices') and contrasts with search_news. However, annotations already provide key hints (readOnlyHint=false, destructiveHint=true, openWorldHint=true, idempotentHint=false), so the description doesn't need to repeat these. It doesn't add details like rate limits or authentication needs, but doesn't contradict annotations either.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: a single sentence states the purpose and usage guidelines, with zero wasted words. Every element (purpose, scope, alternative tool) earns its place efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, annotations covering key behavioral hints, and an output schema), the description is complete enough. It clarifies the tool's niche versus siblings, and with annotations and schema providing detailed parameter and behavioral information, no additional explanation of return values or low-level behavior is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all 8 parameters thoroughly. The description doesn't add parameter-specific semantics beyond what's in the schema, but it does reinforce the distinction between 'venue-published exchange notices' (for this tool) and 'general crypto media headlines' (for search_news), which contextualizes parameters like 'query' and 'announcement_type'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Read venue-published exchange notices') and resources ('listings, delistings, maintenance'), and explicitly distinguishes it from its sibling tool search_news by contrasting 'venue-published exchange notices' with 'media rumors or general crypto headlines'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: it specifies 'Venue-published exchange notices' as the proper use case and directs users to 'search_news' for 'Media rumors or general crypto headlines,' clearly differentiating between sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
news_feed_get_social_sentiment (Destructive)
[Read] Aggregate per-coin social sentiment for a time range: overall sentiment, positive/negative split, mention count, and sample tweets. X/Twitter post search or tweet-level evidence -> search_x. Multi-platform social thread search -> search_ugc.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | Tickers e.g. BTC or BTC,ETH for per-coin aggregates: overall sentiment, positive/negative split, mention count, sample tweets (top_tweets order); omit defaults to BTC server-side. X/Twitter post search or tweet-level evidence -> search_x. Multi-platform social thread search -> search_ugc. | |
| time_range | No | 1h / 24h (default) / 7d window for per-coin sentiment aggregation (overall sentiment, positive/negative split, mention count). |
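With only two optional parameters, a call to this tool is compact. A minimal sketch of the request, assuming the standard MCP `tools/call` JSON-RPC 2.0 envelope; transport, session setup, and the server URL are omitted:

```python
import json

# Hypothetical tools/call payload; omitting "coin" would fall back to the
# server-side default (BTC), and "24h" is already the default window.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "news_feed_get_social_sentiment",
        "arguments": {"coin": "BTC,ETH", "time_range": "24h"},
    },
}
print(json.dumps(request, indent=2))
```

The response body would carry the fields listed in the output schema below (overall_sentiment, sentiment_distribution, mention_count, top_tweets).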
Output Schema
| Name | Required | Description |
|---|---|---|
| coin | No | |
| time_range | No | |
| top_tweets | Yes | |
| duration_ms | Yes | |
| mention_count | Yes | |
| sentiment_label | Yes | |
| overall_sentiment | Yes | |
| sentiment_label_raw | No | |
| sentiment_distribution | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds some behavioral context beyond annotations, such as specifying the aggregation includes 'sample tweets' and the time range for sentiment aggregation. However, it does not address the annotations' implications (e.g., destructiveHint: true, idempotentHint: false) or provide details on rate limits, authentication needs, or error handling. With annotations covering basic traits, the description adds moderate value but lacks depth in behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core functionality and then providing usage guidelines. However, it includes some redundancy with the schema (e.g., repeating 'per-coin aggregates') and could be slightly more streamlined, though it remains efficient and informative without unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregation with sentiment analysis), rich annotations, and the presence of an output schema, the description is mostly complete. It covers purpose, usage guidelines, and key outputs, but could improve by addressing the behavioral implications of annotations (e.g., destructive nature) or providing more context on limitations. Overall, it is sufficient but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not add significant meaning beyond the input schema, which has 100% coverage and already details the parameters (coin and time_range). The description repeats some schema information (e.g., 'per-coin aggregates') but does not provide additional syntax, format, or usage nuances. With high schema coverage, the baseline score of 3 is appropriate as the schema handles most of the parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Aggregate per-coin social sentiment') and resources ('overall sentiment, positive/negative split, mention count, and sample tweets'), and explicitly distinguishes it from sibling tools ('X/Twitter post search or tweet-level evidence -> search_x. Multi-platform social thread search -> search_ugc'). This provides precise differentiation from alternatives like search_x and search_ugc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives, stating 'X/Twitter post search or tweet-level evidence -> search_x. Multi-platform social thread search -> search_ugc.' This clearly directs users to other tools for specific use cases, making it easy to choose the right tool among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
news_feed_search_news (Destructive)
[Read] Search the platform news index for headlines, news items, and briefing-style result lists. Open-web research with synthesized answers and cited external pages -> web_search. Event catalog with event_id -> get_latest_events.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | Comma-separated tickers mapped to API tickers; sent only when query is empty (omitted when query is set). Heat mode supports news-item / briefing-style lists on the platform index—not open-web synthesis (web_search). | |
| lang | No | MCP-only filter on metadata.lang or top-level lang; not sent to upstream API. | |
| page | No | Page number mapped to API page; default 1. | |
| limit | No | Page size mapped to API page_size; default 10, max 100. | |
| query | No | Non-empty: similarity mode (no tickers; default similarity_score 0.6). Empty: heat mode (top_total_score 1; optional coin/tickers). Platform news index for headlines, news items, and briefing-style result lists—not open-web research with synthesized answers and cited external pages (web_search) or event catalog with event_id (get_latest_events). | |
| sort_by | No | e.g. time (default); similarity mode with top_total_score 0 may still apply MCP local time sort. | |
| end_time | No | End time mapped to API to (Unix sec). Ignored when time_range is set. End-only defaults start to 24h before end. | |
| platform | No | Source platform name for API platform (e.g. panews, theblock). | |
| start_time | No | Start time (ISO8601 or Unix sec/ms) mapped to API from. Ignored when time_range is set. Start-only defaults end to now; both empty defaults last 7d when time_range is also empty. | |
| time_range | No | Optional preset window: 1h / 24h (default when preset value invalid) / 7d / 30d. When non-empty, overrides start_time/end_time and maps to API from/to (Unix epoch seconds). Omit to use start_time/end_time or implicit last-7d when both are empty. | |
| platform_type | No | Legacy: maps to API platform when platform is omitted; omit platform when value is all. | |
| top_total_score | No | Only when query empty (0 or 1); non-empty query forces similarity mode. | |
| similarity_score | No | Similarity threshold; default 0.6 when query set; when query empty only sent if explicitly set. |
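The query parameter's mode switch (non-empty query forces similarity mode with tickers dropped and a 0.6 default threshold; empty query selects heat mode with top_total_score 1 and an optional coin filter) is the least obvious interaction in this table. A hypothetical client-side builder, sketched under those documented rules:

```python
def build_search_news_args(query="", coin="", similarity_score=None,
                           top_total_score=None, limit=10):
    """Mode selection for news_feed_search_news:
    non-empty query -> similarity mode; empty query -> heat mode."""
    args = {"limit": min(limit, 100)}
    if query:
        # Similarity mode: coin/tickers are omitted even if provided.
        args["query"] = query
        args["similarity_score"] = (
            0.6 if similarity_score is None else similarity_score
        )
    else:
        # Heat mode: ranked list; coin filter allowed.
        args["top_total_score"] = (
            1 if top_total_score is None else top_total_score
        )
        if coin:
            args["coin"] = coin
    return args
```

So `build_search_news_args(query="ETF approval", coin="BTC")` silently drops the coin filter, matching the "sent only when query is empty" rule on the coin parameter.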
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | |
| coin | No | |
| from | Yes | |
| lang | No | |
| page | No | |
| count | Yes | |
| items | Yes | |
| limit | No | |
| query | No | |
| total | Yes | |
| sort_by | No | |
| end_time | No | |
| platform | No | |
| page_size | No | |
| start_time | No | |
| time_range | No | |
| duration_ms | Yes | |
| platform_type | No | |
| top_total_score | Yes | |
| similarity_score | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=false, openWorldHint=true, idempotentHint=false, and destructiveHint=true. The description adds some behavioral context about the search modes (heat mode vs similarity mode) and what types of content it returns, but doesn't elaborate on the destructive nature or open-world implications beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and scope, the second provides explicit usage guidelines with alternatives. Every sentence earns its place, though the first sentence is somewhat dense with multiple concepts.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (13 parameters), the comprehensive 100% schema description coverage, the presence of an output schema, and rich annotations, the description provides complete contextual information. It covers purpose, scope, and usage guidelines adequately for the available structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 13 parameters thoroughly. The description adds some high-level context about search modes (heat vs similarity) and platform index scope, but doesn't provide additional parameter semantics beyond what's in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches 'the platform news index for headlines, news items, and briefing-style result lists' with specific verbs and resources. It explicitly distinguishes from sibling tools web_search (open-web research) and get_latest_events (event catalog), providing excellent differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'Open-web research with synthesized answers and cited external pages -> web_search. Event catalog with event_id -> get_latest_events.' This gives clear when-not-to-use scenarios with named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
news_feed_search_ugc (Destructive)
[Read] Reddit/Discord/Telegram/YouTube-style UGC: non-empty query uses vector API; coin without query uses OpenSearch. Both empty invalid. X/Twitter narrative -> search_x; headlines -> search_news. Not macro economic statistics; not structured event list -> get_latest_events.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | Ticker filter; with query filters results; without query required with index for OpenSearch list mode. | |
| limit | No | Max items; default 10, max 50. | |
| query | No | Optional NL query; non-empty uses vector search. May combine with coin; both empty is invalid. | |
| domain | No | UGC topical bucket: crypto / defi / finance / macro / ai_agent / web3_dev / all (default all). macro = social discussion about macro themes, not CPI/Fed/unemployment statistics (use macro data tools) or get_latest_events. | |
| channel | No | Optional source channel e.g. r/ethereum or handle. | |
| sort_by | No | relevance (default) / upvotes / recent. | |
| platform | No | reddit / discord / telegram / youtube / all (default all). | |
| time_range | No | 1h / 24h / 7d (default) / 30d / all. | |
| quality_tier | No | A (default, high quality) / B / all. |
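The description's backend split (non-empty query uses the vector API; coin without query uses OpenSearch list mode; both empty is invalid) can be validated before the call. A sketch with a hypothetical helper name, using the defaults from the table above:

```python
def build_search_ugc_args(query="", coin="", platform="all", domain="all",
                          time_range="7d", quality_tier="A", limit=10):
    """Validate and build arguments for news_feed_search_ugc.
    query -> vector API; coin without query -> OpenSearch list mode."""
    if not query and not coin:
        raise ValueError("search_ugc: query and coin cannot both be empty")
    args = {
        "platform": platform,          # reddit/discord/telegram/youtube/all
        "domain": domain,              # crypto/defi/finance/macro/ai_agent/web3_dev/all
        "time_range": time_range,      # 1h/24h/7d/30d/all
        "quality_tier": quality_tier,  # A (high quality) / B / all
        "limit": min(limit, 50),       # default 10, max 50
    }
    if query:
        args["query"] = query          # vector search path
    if coin:
        args["coin"] = coin            # without query: OpenSearch list mode
    return args
```

Calling it with neither query nor coin raises immediately, which saves a round trip that the server would reject anyway.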
Output Schema
| Name | Required | Description |
|---|---|---|
| coin | No | |
| count | Yes | |
| items | Yes | |
| limit | No | |
| query | No | |
| total | Yes | |
| domain | No | |
| channel | No | |
| sort_by | No | |
| platform | No | |
| time_range | No | |
| duration_ms | Yes | |
| quality_tier | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false, openWorldHint=true, idempotentHint=false, destructiveHint=true. The description adds context beyond annotations by specifying the search behavior (vector API vs. OpenSearch based on query presence) and invalid input conditions ('Both empty invalid'), which are not covered by annotations. However, it does not fully explain the implications of destructiveHint=true or openWorldHint=true, leaving some behavioral traits implicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with key information (search behavior and usage guidelines) in a single, dense sentence, followed by exclusions. It is appropriately sized with no wasted words, but the structure could be slightly improved by separating the search logic from the sibling tool distinctions for better readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (9 parameters, annotations, and an output schema), the description is complete enough. It covers purpose, usage guidelines, and behavioral context, while the output schema handles return values. The annotations provide safety and operational hints, and the description fills in gaps like search modes and exclusions, making it well-rounded for the tool's scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents all 9 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, such as implying the interaction between 'coin' and 'query' for search modes, but does not provide significant additional meaning. With high schema coverage, the baseline score of 3 is appropriate as the description adds some value but not extensive param details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for user-generated content (UGC) from specific platforms (Reddit/Discord/Telegram/YouTube) and distinguishes it from siblings by explicitly naming alternatives: 'X/Twitter narrative -> search_x; headlines -> search_news'. It specifies what it does not cover: 'Not macro economic statistics; not structured event list -> get_latest_events', making the purpose specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs. alternatives: 'X/Twitter narrative -> search_x; headlines -> search_news' and exclusions: 'Not macro economic statistics; not structured event list -> get_latest_events'. It also details usage conditions: 'non-empty query uses vector API; coin without query uses OpenSearch. Both empty invalid.', offering clear context for selection.
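The query/coin interaction quoted above can be illustrated with a hypothetical MCP `tools/call` request. This is a sketch only: the JSON-RPC envelope follows the standard MCP shape, the full tool name is assumed to carry the `news_feed_` prefix seen on the other tools in this listing, and the argument value is illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "news_feed_search_ugc",
    "arguments": { "query": "ETH staking sentiment" }
  }
}
```

Per the quoted description, a non-empty `query` like this would take the vector API path; passing only `coin` would instead use OpenSearch, and omitting both is invalid.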
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
news_feed_search_x
[Read] Search and analyze X/Twitter discussions for a topic, with tweet-level evidence and cited posts. Aggregate social mood, sentiment score, or positive/negative split -> get_social_sentiment. Open-web pages -> web_search. Multi-platform social search -> search_ugc.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | Platform fallback: maps to tickers for feed-style items when xAI path is unused—tweet-level X evidence remains the primary tool goal; per-coin sentiment KPIs -> get_social_sentiment. | |
| days | No | xAI only: lookback days; default 7, min 1. | |
| lang | No | Answer language: zh (default) / en / auto. Also used by platform fallback as MCP local lang filter. | |
| page | No | Platform fallback: page number. | |
| limit | No | Platform fallback: page_size, default 10. | |
| model | No | xAI only: override configured Grok model id. | |
| query | No | X/Twitter topic for tweet-level evidence and cited posts; English recommended. xAI: empty returns no results. Aggregate social mood, sentiment score, or positive/negative split -> get_social_sentiment. Open-web pages -> web_search. Multi-platform social search -> search_ugc. Platform fallback matches search_news.query semantics. | |
| sort_by | No | Platform fallback: sort field. | |
| end_time | No | Platform fallback: maps to to. | |
| platform | No | Platform fallback: platform. | |
| start_time | No | Platform fallback: maps to from (Unix sec). | |
| time_range | No | Preferred recency window for search_x: 1h / 24h (default) / 7d. Takes precedence over days when set. | |
| platform_type | No | Platform fallback: maps when platform omitted. | |
| allowed_handles | No | xAI only: include these X handles without @, max 10; mutually exclusive with excluded_handles. | |
| top_total_score | No | Platform fallback: heat vs similarity. | |
| excluded_handles | No | xAI only: exclude these handles, max 10. | |
| similarity_score | No | Platform fallback: similarity threshold. | |
| enable_image_understanding | No | xAI only: analyze images in posts. | |
| enable_video_understanding | No | xAI only: analyze video in posts. |
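As a sketch of how the xAI path described above might be invoked, here is a hypothetical `tools/call` request. The JSON-RPC envelope is the standard MCP shape; the argument values (including the handle) are illustrative, not taken from the listing.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "news_feed_search_x",
    "arguments": {
      "query": "BTC ETF inflows",
      "time_range": "24h",
      "lang": "en",
      "allowed_handles": ["example_handle"]
    }
  }
}
```

Note that `time_range` takes precedence over `days` when both are set, and `allowed_handles` is mutually exclusive with `excluded_handles`, per the parameter table above.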
Output Schema
| Name | Required | Description |
|---|---|---|
| coin | No | |
| days | No | |
| lang | No | |
| count | Yes | |
| items | Yes | |
| model | No | |
| query | No | |
| total | Yes | |
| source | No | |
| content | Yes | Same as summary for legacy clients; tweet-level X/Twitter evidence—not headline index (search_news). |
| summary | Yes | xAI: synthesized narrative from X/Twitter discussions with tweet-level evidence (same as content; always present). Not open-web synthesis with cited external pages (web_search). Not per-coin sentiment KPIs over a time range (get_social_sentiment). |
| to_date | No | |
| platform | No | |
| from_date | No | |
| disclaimer | No | Fixed disclaimer on xAI success; empty on platform fallback. |
| key_points | Yes | xAI: bullet points from cited posts; empty array if none. Not briefing-style platform news lists (search_news). |
| duration_ms | Yes | |
| cited_tweets | Yes | xAI: tweet-level evidence and cited posts; fields depend on model and citations. |
| platform_type | No | |
| allowed_handles | No | |
| sentiment_label | Yes | xAI: bullish / bearish / neutral for the discussion; empty if unknown. Not per-coin sentiment label KPIs (get_social_sentiment). |
| sentiment_score | Yes | xAI: 0-100 tone for this X/Twitter topic; JSON null if unknown; pairs with sentiment_label. Not per-coin aggregate positive/negative split (get_social_sentiment). |
| excluded_handles | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it clarifies the tool's focus on 'tweet-level evidence and cited posts' and mentions platform fallback behavior. While annotations provide structural hints (readOnlyHint: false, destructiveHint: true, etc.), the description enriches understanding of what the tool actually does. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: purpose statement, usage guidelines, and sibling tool references. It's front-loaded with the core function and avoids unnecessary elaboration. The only minor inefficiency is the bracketed '[Read]' prefix, which adds little value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (19 parameters, annotations, output schema), the description is complete. It clearly states the purpose, provides usage guidelines, and references siblings. With comprehensive schema coverage and an output schema, the description doesn't need to explain parameters or return values, focusing instead on higher-level context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 19 parameters thoroughly. The description doesn't add significant parameter-specific details beyond what's in the schema, though it reinforces the primary use of the 'query' parameter. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search and analyze X/Twitter discussions for a topic, with tweet-level evidence and cited posts.' It specifies the verb ('search and analyze'), resource ('X/Twitter discussions'), and scope ('tweet-level evidence and cited posts'), and distinguishes it from siblings by explicitly naming alternatives (get_social_sentiment, web_search, search_ugc).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'Aggregate social mood, sentiment score, or positive/negative split -> get_social_sentiment. Open-web pages -> web_search. Multi-platform social search -> search_ugc.' This clearly defines the tool's niche and when to choose other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
news_feed_web_search
[Read] Search the open web and return a synthesized answer with cited external pages. Built-in headline lookup, news-item search, or briefing-style news list -> search_news. X/Twitter-only discussion or tweet evidence -> search_x.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | Optional ticker or project (e.g. BTC, ETH) to focus the open-web synthesized answer—not platform news index tickers (search_news). | |
| lang | No | Answer language: zh (default) / en / auto. | |
| mode | No | Answer length: analysis (default, fuller synthesized text) or brief (~100 chars). Not chart/indicator technical analysis (RSI/MACD). | |
| limit | No | Max cited external pages in the answer; default 5, max 10. | |
| query | No | Required NL question: search the open web and return a synthesized answer with cited external pages—not built-in headline lookup, news-item search, or briefing-style news list (search_news), nor X/Twitter-only discussion or tweet-level evidence (search_x). | |
| time_range | No | Recency window for open-web synthesis with cited external pages: 1h / 24h (default) / 7d / 30d. Not DeFi/TVL dashboard metrics (use platform-metrics tools). |
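A hypothetical `tools/call` request for this tool might look like the following. The JSON-RPC envelope is the standard MCP shape; the argument values are illustrative, chosen to match the defaults and limits in the parameter table above.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "news_feed_web_search",
    "arguments": {
      "query": "What drove today's crypto market moves?",
      "coin": "BTC",
      "mode": "brief",
      "time_range": "24h",
      "limit": 5
    }
  }
}
```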
Output Schema
| Name | Required | Description |
|---|---|---|
| coin | No | |
| lang | No | |
| mode | No | |
| count | Yes | |
| items | Yes | |
| model | No | |
| query | No | |
| total | Yes | |
| source | No | |
| summary | Yes | |
| disclaimer | No | |
| key_points | Yes | |
| time_range | No | |
| duration_ms | Yes | |
| cited_sources | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=true, openWorldHint=true, and idempotentHint=false. The description adds valuable context about what the tool does (synthesized answers with citations) and what it doesn't do (platform news, Twitter-only). It doesn't contradict the annotations, but it could explain why a search operation carries a destructive hint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (two sentences) and front-loaded with the core purpose. Every sentence earns its place by providing essential differentiation from sibling tools. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, open-world synthesis), the description is complete enough. It clearly states the tool's purpose, distinguishes from siblings, and the output schema exists (so return values needn't be explained). Annotations provide safety/behavioral hints, and schema covers parameters fully.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description doesn't add significant semantic details beyond what's in schema descriptions (e.g., it repeats the query parameter's purpose). Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search the open web and return a synthesized answer with cited external pages.' It distinguishes from siblings by explicitly naming alternatives (search_news for built-in news, search_x for Twitter-only). The verb 'search' and resource 'open web' are specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs. alternatives: 'Built-in headline lookup, news-item search, or briefing-style news list -> search_news. X/Twitter-only discussion or tweet evidence -> search_x.' This clearly defines boundaries and sibling tool relationships.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.