Zipp
Server Details
Multi-language crypto news with editorial sentiment + importance scoring; cites original publisher.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: deficlow/zipp-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 6 of 6 tools scored.
Each tool has a clearly distinct purpose: breaking news, featured stories, latest news, single post detail, category listing, and full-text search. No two tools overlap in functionality.
All tools follow a consistent verb_noun pattern using snake_case, e.g., get_breaking, list_categories, search. Even 'search' fits as a concise single verb.
With 6 tools, the server is well-scoped for a crypto news API. It covers essential operations without being too sparse or overly granular.
The tool surface covers key news retrieval needs: listing categories, getting recent/breaking/featured news, searching, and fetching full posts. A minor gap is the lack of a dedicated tool for older news beyond 24h, but search can partially address this.
Available Tools
6 tools
get_breaking
Breaking news only — last 24 hours, importance score ≥ 75. Lower volume than get_latest but every item is market-moving. Use for 'what's the most important crypto news right now?'.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | en-US |
| limit | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
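Neither parameter carries a schema description (as the review below notes), so the following is only a minimal sketch of a JSON-RPC tools/call request, assuming lang takes a locale tag (en-US is the tabled default) and limit caps the number of items returned; the argument values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_breaking",
    "arguments": { "lang": "en-US", "limit": 5 }
  }
}
```

Since both parameters are optional, an empty arguments object should also be accepted.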
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It explains the filtering criteria (time and importance score) and the nature of results (market-moving). It does not mention auth or rate limits, but such details are not expected for a simple read tool. It adds value beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose, and every sentence adds value. No wasted words, perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters with defaults, output schema exists), the description covers the core filtering and usage guidance well. However, it omits any mention of the parameters, which slightly reduces completeness. The existence of an output schema means return value explanation is not required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning no parameter descriptions exist in the schema. The tool description does not explain the 'lang' or 'limit' parameters, which would help an agent set them appropriately. The description should have compensated for the low coverage but did not.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides breaking news filtered by last 24 hours and importance score ≥ 75, and it distinguishes itself from the sibling 'get_latest' by noting lower volume but higher impact. This is specific and actionable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit context for when to use this tool (for the most important news) and contrasts it with 'get_latest'. However, it does not explicitly say when not to use it or name alternatives beyond that contrast.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_featured
Editor-picked feature stories (is_featured=TRUE). No time window. Use when the user wants curated highlights rather than recency-sorted news.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | en-US |
| limit | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
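A hedged example call under the same assumptions about lang and limit, which the schema again leaves undescribed; the values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_featured",
    "arguments": { "lang": "en-US", "limit": 3 }
  }
}
```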
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses 'No time window' and filtering logic, but lacks details on pagination, limit behavior, or authorization. With no annotations, this is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with key information, every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With an output schema, return values are covered. Purpose and usage are well specified, but parameter documentation is missing, leaving a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain the parameters (lang, limit). This leaves the agent guessing about their meaning, even though the tool is simple.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves editor-picked feature stories with 'is_featured=TRUE', distinguishing it from siblings like recency-sorted news tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when the user wants curated highlights rather than recency-sorted news', providing clear context and implicit differentiation from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_latest
Latest news from the last 24 hours. Optionally scoped to a category. Returns posts ordered newest-first. Use for 'what's new today?' or 'what happened in DeFi today?'.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | en-US |
| limit | No | | |
| category | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
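A sketch of a category-scoped request; the 'defi' slug is hypothetical (valid slugs would come from list_categories), and the lang/limit readings are inferred rather than documented:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_latest",
    "arguments": { "lang": "en-US", "limit": 10, "category": "defi" }
  }
}
```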
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses time window (last 24 hours) and ordering (newest-first). Without annotations, this carries full burden. It doesn't mention safety (but read-only is implied). Output schema covers return format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: core function, ordering, example use cases. No filler, well front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three optional parameters, an existing output schema, and no annotations, the description covers purpose, scope, ordering, and example usage. It lacks explicit defaults for limit/lang, but those are in the schema. Good for a list endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description only adds meaning for the 'category' parameter ('Optionally scoped to a category'). Does not mention 'lang' or 'limit', leaving gaps. Moderate compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves latest news from the last 24 hours, optionally scoped by category, with ordering. The verb 'get' and resource 'latest news' are specific, and the time scope distinguishes it from siblings like get_breaking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage examples ('what's new today?', 'what happened in DeFi today?') which clearly indicate when to use this tool. Does not explicitly exclude alternatives but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_post
Full detail of a single post — title, summary, full body, all categories, hashtags, source attribution. Accepts either a slug (from a previous tool call) or a numeric id.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | en-US |
| slug_or_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
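An illustrative request using a made-up slug of the kind a prior get_latest call might return:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_post",
    "arguments": { "slug_or_id": "bitcoin-etf-inflows-hit-record" }
  }
}
```

Per the description, a numeric id should work in place of the slug; whether it must be passed as a JSON number or a string is not documented.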
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the output contents but lacks behavioral details such as authentication requirements, read-only nature, or error handling. The mention of 'full detail' implies completeness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that conveys the core function and input options without unnecessary words. It is front-loaded with the key purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema, the description appropriately focuses on input and output summary. It is complete for a simple retrieval tool, but missing guidance on the 'lang' parameter and any usage caveats.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaning to the required parameter 'slug_or_id' by explaining it can be a slug or numeric id. However, it does not mention the optional 'lang' parameter, leaving its purpose unclear given 0% schema description coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full detail of a single post, listing specific fields (title, summary, full body, categories, etc.) and the input methods (slug or numeric id). This distinguishes it from sibling tools like get_latest or get_breaking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains that the input can be a slug from a previous tool call or a numeric id, providing clear guidance on argument choices. However, it does not explicitly compare with siblings or state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_categories
List the full Zipp taxonomy (7 main groups × 5 leaves = 35 categories total). Use to discover valid category slugs for the search / get_latest tools.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | en-US |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
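A minimal request; whether lang localizes the category labels or filters the set is undocumented (the review below flags this ambiguity), so the value is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "list_categories",
    "arguments": { "lang": "en-US" }
  }
}
```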
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully cover behavioral traits. It mentions the output structure (35 categories, 7 groups) but does not explicitly state that this is a read-only, non-destructive operation. The safety profile is implied but not confirmed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. The first sentence defines the output precisely, and the second provides immediate usage context. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no required params, output schema exists), the description covers the core function and usage. Missing only a brief note on the 'lang' parameter's effect, which would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must add meaning for the 'lang' parameter. It does not mention the parameter at all, leaving the agent unsure whether 'lang' filters categories or just the returned labels.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'full Zipp taxonomy' with specifics (7 main groups × 5 leaves = 35 categories). It explicitly differentiates its purpose from siblings by stating it is used to discover valid category slugs for 'search' and 'get_latest' tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage guidance: 'Use to discover valid category slugs for the search / get_latest tools.' It implies a preparatory role, but does not explicitly state when not to use it (e.g., for direct content retrieval).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search
Full-text search across Zipp's news catalogue. Returns recent matching stories ordered by recency (with relevance as a tiebreaker). Use for questions like 'what's happening with Bitcoin ETFs?' or 'find news about Solana hacks'.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | en-US |
| limit | No | | |
| query | Yes | | |
| category | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
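A sketch combining the required query with the optional filters; the 'security' category slug is hypothetical (discoverable via list_categories), and the lang/limit semantics are inferred rather than documented:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {
      "query": "Solana hacks",
      "lang": "en-US",
      "limit": 10,
      "category": "security"
    }
  }
}
```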
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behaviors: full-text search, recency-ordering with relevance tiebreaker. Since no annotations exist, the description carries the full burden, and it covers the essential behavior adequately, though it omits details like pagination or output format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences that are direct and front-loaded with the purpose. Every sentence provides actionable information without repetition or unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters and an output schema, the description covers the core functionality but lacks detail on parameter constraints (e.g., language format, maximum limit) and edge cases. It is minimally sufficient for an agent with schema knowledge.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should compensate by explaining parameters. It only implies the 'query' parameter via examples and provides no context for 'lang,' 'limit,' or 'category,' leaving their semantics unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it performs 'full-text search across Zipp's news catalogue,' identifies the resource and action, and distinguishes from sibling tools that return specific collections (e.g., get_latest, get_breaking).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides concrete examples like 'what's happening with Bitcoin ETFs?' that illustrate when to use this tool. It implicitly contrasts with siblings by focusing on broad queries, though it lacks explicit when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.