Server Details

Multi-language crypto news with editorial sentiment + importance scoring; cites original publisher.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: deficlow/zipp-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: breaking news, featured stories, latest news, single post detail, category listing, and full-text search. No two tools overlap in functionality.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern using snake_case, e.g., get_breaking, list_categories, search. Even 'search' fits as a concise single verb.

Tool Count: 5/5

With 6 tools, the server is well-scoped for a crypto news API. It covers essential operations without being too sparse or overly granular.

Completeness: 4/5

The tool surface covers key news retrieval needs: listing categories, getting recent/breaking/featured news, searching, and fetching full posts. A minor gap is the lack of a dedicated tool for older news beyond 24h, but search can partially address this.

Available Tools

6 tools
get_breaking: A

Breaking news only — last 24 hours, importance score ≥ 75. Lower volume than get_latest but every item is market-moving. Use for 'what's the most important crypto news right now?'.

Parameters (JSON Schema)

Name | Required | Description | Default
lang | No | | en-US
limit | No | |

Output Schema

No output parameters.
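As a hedged sketch of how an MCP client might invoke this tool: the JSON-RPC envelope below follows the standard MCP `tools/call` shape, and the `limit` semantics are an assumption, since the schema carries no parameter descriptions.

```python
import json

# Hypothetical MCP tools/call request for get_breaking.
# 'lang' defaults to "en-US" per the parameter table; 'limit' is assumed
# to cap the number of returned items (the schema does not document it).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_breaking",
        "arguments": {"lang": "en-US", "limit": 5},
    },
}

print(json.dumps(request, indent=2))
```

Because every returned item is pre-filtered to importance ≥ 75, an agent would call this with a small `limit` rather than post-filtering a larger `get_latest` result.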

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It explains the filtering criteria (time and importance score) and the nature of results (market-moving). It does not mention auth or rate limits, but such details are not expected for a simple read tool. It adds value beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose, and every sentence adds value. No wasted words, perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters with defaults, output schema exists), the description covers the core filtering and usage guidance well. However, it omits any mention of the parameters, which slightly reduces completeness. The existence of an output schema means return value explanation is not required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning no parameter descriptions exist in the schema. The tool description does not explain the 'lang' or 'limit' parameters, which would help an agent set them appropriately. The description should have compensated for the low coverage but did not.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides breaking news filtered by last 24 hours and importance score ≥ 75, and it distinguishes itself from the sibling 'get_latest' by noting lower volume but higher impact. This is specific and actionable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (for the most important news) and contrasts it with 'get_latest'. However, it does not explicitly state when not to use it or cover alternative scenarios beyond that contrast.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_latest: A

Latest news from the last 24 hours. Optionally scoped to a category. Returns posts ordered newest-first. Use for 'what's new today?' or 'what happened in DeFi today?'.

Parameters (JSON Schema)

Name | Required | Description | Default
lang | No | | en-US
limit | No | |
category | No | |

Output Schema

No output parameters.
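A hedged sketch of a category-scoped call: the envelope follows the standard MCP `tools/call` shape, and the slug "defi" is purely illustrative; valid slugs would come from `list_categories`.

```python
import json

# Hypothetical tools/call request for get_latest, scoped to one category.
# 'category' is optional; omitting it returns latest news across all
# categories, newest-first, per the tool description.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_latest",
        "arguments": {"lang": "en-US", "limit": 10, "category": "defi"},
    },
}

print(json.dumps(request, indent=2))
```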

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses the time window (last 24 hours) and ordering (newest-first). Without annotations, the description carries the full burden. It does not mention safety, though read-only behavior is implied. The output schema covers the return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: core function, ordering, example use cases. No filler, well front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given three optional parameters, an existing output schema, and no annotations, the description covers purpose, scope, ordering, and example usage. It lacks explicit defaults for limit/lang, but those are in the schema. Good for a list endpoint.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description only adds meaning for the 'category' parameter ('Optionally scoped to a category'). It does not mention 'lang' or 'limit', leaving gaps; the compensation is moderate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves latest news from the last 24 hours, optionally scoped by category, with ordering. The verb 'get' and resource 'latest news' are specific, and the time scope distinguishes it from siblings like get_breaking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage examples ('what's new today?', 'what happened in DeFi today?') that clearly indicate when to use this tool. It does not explicitly rule out alternatives, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_post: A

Full detail of a single post — title, summary, full body, all categories, hashtags, source attribution. Accepts either a slug (from a previous tool call) or a numeric id.

Parameters (JSON Schema)

Name | Required | Description | Default
lang | No | | en-US
slug_or_id | Yes | |

Output Schema

No output parameters.
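Since `slug_or_id` accepts either a slug string or a numeric id, a client might normalize its argument before calling. A sketch, with the string coercion of numeric ids as an assumption; the example slug and id are hypothetical.

```python
def make_get_post_arguments(slug_or_id, lang="en-US"):
    """Build the arguments dict for a get_post call.

    slug_or_id may be a slug taken from a previous tool call (e.g. a
    get_latest result) or a numeric post id.
    """
    if isinstance(slug_or_id, int):
        # Assumed: the server accepts the numeric id serialized as a string.
        slug_or_id = str(slug_or_id)
    return {"slug_or_id": slug_or_id, "lang": lang}

# Both forms produce a valid arguments payload:
print(make_get_post_arguments("bitcoin-etf-approved"))
print(make_get_post_arguments(12345))
```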

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the output contents but lacks behavioral details such as authentication requirements, read-only nature, or error handling. The mention of 'full detail' implies completeness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that conveys the core function and input options without unnecessary words. It is front-loaded with the key purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema, the description appropriately focuses on inputs and a summary of the output. It is complete for a simple retrieval tool, but it is missing guidance on the 'lang' parameter and any usage caveats.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning to the required 'slug_or_id' parameter by explaining it can be a slug or a numeric id. However, it does not mention the optional 'lang' parameter, leaving its purpose unclear despite 0% schema description coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves full detail of a single post, listing specific fields (title, summary, full body, categories, etc.) and the input methods (slug or numeric id). This distinguishes it from sibling tools like get_latest or get_breaking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that the input can be a slug from a previous tool call or a numeric id, providing clear guidance on argument choices. However, it does not explicitly compare with siblings or state when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories: A

List the full Zipp taxonomy (7 main groups × 5 leaves = 35 categories total). Use to discover valid category slugs for the search / get_latest tools.

Parameters (JSON Schema)

Name | Required | Description | Default
lang | No | | en-US

Output Schema

No output parameters.
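The 7 × 5 taxonomy described above can be sanity-checked client-side. A sketch against a hypothetical response shape; the group and slug names are placeholders, since no output schema is published.

```python
# Hypothetical shape of a list_categories response: 7 main groups,
# each with 5 leaf category slugs (placeholder names).
taxonomy = {
    f"group-{g}": [f"group-{g}-leaf-{leaf}" for leaf in range(1, 6)]
    for g in range(1, 8)
}

total = sum(len(leaves) for leaves in taxonomy.values())
assert len(taxonomy) == 7 and total == 35  # 7 main groups x 5 leaves

# An agent would pick a leaf slug from here and pass it as the
# 'category' argument to get_latest or search.
valid_slugs = {slug for leaves in taxonomy.values() for slug in leaves}
print(total, len(valid_slugs))
```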

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully cover behavioral traits. It mentions the output structure (35 categories, 7 groups) but does not explicitly state that this is a read-only, non-destructive operation. The safety profile is implied but not confirmed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. The first sentence defines the output precisely, and the second provides immediate usage context. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no required parameters, output schema exists), the description covers the core function and usage. It is missing only a brief note on the 'lang' parameter's effect, which would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must add meaning for the 'lang' parameter. It does not mention the parameter at all, leaving the agent unsure whether 'lang' filters categories or just the returned labels.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and the resource 'full Zipp taxonomy' with specifics (7 main groups × 5 leaves = 35 categories). It explicitly differentiates its purpose from siblings by stating it is used to discover valid category slugs for 'search' and 'get_latest' tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage guidance: 'Use to discover valid category slugs for the search / get_latest tools.' It implies a preparatory role, but does not explicitly state when not to use it (e.g., for direct content retrieval).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
