
wikifeed

Server Details

Wikifeed MCP — wraps Wikimedia Feed API (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-wikifeed
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.3/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose targeting different types of Wikipedia content: featured articles, most-read articles, historical events, and picture of the day. There is no overlap in functionality, making it easy for an agent to select the correct tool based on the desired information type.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with descriptive noun phrases (featured_article, most_read, on_this_day, picture_of_day). The naming is uniform and predictable, making the tool set easy to navigate and understand.

Tool Count: 5/5

With 4 tools, this server is well-scoped for providing Wikipedia-based content feeds. Each tool serves a unique and valuable purpose, and the count is appropriate for the domain without being too sparse or overwhelming.

Completeness: 5/5

The tool set comprehensively covers key Wikipedia feed types: featured content, popularity metrics, historical data, and visual media. There are no obvious gaps for the server's purpose of delivering Wikipedia feeds, as it includes all major content categories users might request.

Available Tools

4 tools
featured_article: B

Get Wikipedia's featured article for a specific date.

Parameters

day (required): Two-digit day number (e.g., "01", "15")
year (required): Four-digit year (e.g., "2024")
month (required): Two-digit month number (e.g., "01", "12")
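The page does not show the server's upstream call, but the parameter format maps cleanly onto the Wikimedia Feed API's featured-content path. A minimal sketch, assuming that endpoint and the English Wikipedia; `featured_article_url` is a hypothetical helper, not part of this server:

```python
from datetime import date

# Assumed upstream: the Wikimedia Feed API's featured-content endpoint.
FEED_BASE = "https://api.wikimedia.org/feed/v1/wikipedia/en"

def featured_article_url(year: str, month: str, day: str) -> str:
    """Build a featured-content URL from the tool's string parameters.

    The tool expects zero-padded strings ("01", not "1"), so reject
    anything that is not exactly four/two/two digits, and let
    datetime.date catch impossible dates such as 2024-02-30.
    """
    if not (year.isdigit() and month.isdigit() and day.isdigit()):
        raise ValueError("year, month, and day must be digit strings")
    if (len(year), len(month), len(day)) != (4, 2, 2):
        raise ValueError('use "YYYY", "MM", "DD" (e.g., "2024", "01", "15")')
    date(int(year), int(month), int(day))  # raises ValueError if invalid
    return f"{FEED_BASE}/featured/{year}/{month}/{day}"
```

Validating before the request matters here because the schema only gives format examples; an unpadded "1" would otherwise produce a malformed path and an opaque upstream error.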
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't describe behavioral traits such as whether it's read-only, potential rate limits, error handling (e.g., for invalid dates), or the format of the returned article (e.g., text, HTML, summary). This leaves significant gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It directly communicates the tool's function in a structured and easily digestible manner.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the output looks like (e.g., article title, content, links), potential limitations (e.g., date range constraints), or error scenarios. This makes it inadequate for full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with clear descriptions for all three parameters (year, month, day) including format examples. The description adds no parameter semantics beyond what the schema provides, so it earns the baseline score of 3: the schema carries the documentation, and the description neither compensates for gaps nor adds value on top of it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resource ('Wikipedia's featured article') with precise scope ('for a specific date'). It distinguishes from sibling tools like 'most_read', 'on_this_day', and 'picture_of_day' by focusing exclusively on featured articles rather than other content types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a user wants Wikipedia's featured article for a particular date, but it doesn't explicitly state when to use this tool versus alternatives like 'on_this_day' (which covers historical events) or 'picture_of_day' (which provides images). No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

most_read: B

Get the most-read Wikipedia articles for a specific date.

Parameters

day (required): Two-digit day number (e.g., "01", "15")
year (required): Four-digit year (e.g., "2024")
month (required): Two-digit month number (e.g., "01", "12")
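Since every date-taking tool on this server wants the same zero-padded string triple, a caller would typically derive the arguments from a real date rather than hand-format them. A small sketch; `feed_date_args` is a hypothetical helper name:

```python
from datetime import date, timedelta

def feed_date_args(d: date) -> dict:
    """Format a date as the string arguments these tools expect:
    a four-digit year plus zero-padded two-digit month and day."""
    return {
        "year": f"{d.year:04d}",
        "month": f"{d.month:02d}",
        "day": f"{d.day:02d}",
    }

# Pageview-based rankings typically lag by a day, so "most read" is
# usually queried for a recent past date rather than today.
yesterday = date.today() - timedelta(days=1)
args = feed_date_args(yesterday)
```

Going through `datetime.date` also guarantees the arguments describe a real calendar date, which the string schema alone cannot enforce.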
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves data ('Get'), implying a read-only operation, but doesn't specify whether it requires authentication, has rate limits, returns structured data, or handles errors. For a tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every element ('Get', 'most-read Wikipedia articles', 'for a specific date') earns its place by contributing essential information. There is no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 required parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on behavioral traits, usage context, and output format. Without annotations or output schema, the agent has incomplete information about what the tool returns or how it behaves.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with all three parameters (year, month, day) clearly documented in the input schema. The description adds no additional parameter semantics beyond implying a date context. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'most-read Wikipedia articles' with the specific context 'for a specific date'. It distinguishes itself from siblings like 'featured_article' or 'picture_of_day' by focusing on popularity metrics rather than curated content. However, it doesn't explicitly contrast with 'on_this_day', which also involves date-based queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites like date availability or historical limits, nor does it suggest when other tools like 'featured_article' might be more appropriate. The agent must infer usage solely from the tool name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

on_this_day: B

Get historical events, births, deaths, and holidays that occurred on a given month and day across all years.

Parameters

day (required): Two-digit day number (e.g., "01", "15", "31")
month (required): Two-digit month number (e.g., "01" for January, "12" for December)
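Unlike the other three tools, on_this_day takes no year, because results span all years. Assuming the Wikimedia Feed API's onthisday endpoint as the upstream (whose path also carries a type segment such as "all" or "births" that this tool does not expose), a sketch with a hypothetical `on_this_day_url` helper:

```python
# Assumed upstream: the Wikimedia Feed API's onthisday endpoint.
FEED_BASE = "https://api.wikimedia.org/feed/v1/wikipedia/en"

def on_this_day_url(month: str, day: str) -> str:
    """Build an on-this-day URL from month and day alone. The "all"
    type segment covers events, births, deaths, selected anniversaries,
    and holidays in a single response."""
    if (len(month), len(day)) != (2, 2) or not (month + day).isdigit():
        raise ValueError('use zero-padded "MM" and "DD" strings')
    return f"{FEED_BASE}/onthisday/all/{month}/{day}"
```

Note that "02"/"29" is a valid input here: with no year in the query, there is no leap-day check to fail.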
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves data but does not describe response format, error handling, rate limits, or authentication needs. For a read-only tool with no annotations, this leaves significant gaps in understanding its operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose without unnecessary details. It is front-loaded with the core functionality and uses clear, direct language, making it easy for an agent to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 simple parameters) and high schema coverage, the description is adequate for basic understanding. However, with no output schema and no annotations, it lacks details on return values and behavioral traits, which could hinder an agent's ability to use the tool effectively in more complex scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with both parameters ('month' and 'day') fully documented in the input schema. The description adds no additional parameter semantics beyond implying the tool uses these inputs to filter historical data. This meets the baseline of 3 when the schema handles parameter documentation effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and the resource ('historical events, births, deaths, and holidays'), with precise scope ('on a given month and day across all years'). It distinguishes itself from sibling tools like 'featured_article', 'most_read', and 'picture_of_day' by focusing on historical data retrieval rather than current or featured content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, exclusions, or specific contexts where this tool is preferred over sibling tools. The agent must infer usage based solely on the purpose statement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

picture_of_day: B

Get Wikipedia's picture of the day for a specific date, including title, description, and image URL.

Parameters

day (required): Two-digit day number (e.g., "01", "15")
year (required): Four-digit year (e.g., "2024")
month (required): Two-digit month number (e.g., "01", "12")
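The review below notes there is no output schema, even though the description promises title, description, and image URL. A hedged sketch of extracting those three fields, assuming the field names used in the Wikimedia Feed API's featured-content response; the server's actual output shape may differ:

```python
def parse_picture_of_day(feed: dict):
    """Pull title, description text, and full-size image URL out of a
    featured-content response. The "image"/"description"/"source" field
    names are assumptions based on the Wikimedia Feed API; adjust them
    to the server's actual output."""
    image = feed.get("image")
    if not image:
        return None  # not every date is guaranteed to carry an image
    return {
        "title": image.get("title"),
        "description": (image.get("description") or {}).get("text"),
        "image_url": (image.get("image") or {}).get("source"),
    }
```

Defensive `.get()` access with a `None` fallback is deliberate: with no output schema published, a caller cannot assume any of these keys exist.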
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states what the tool does (retrieves data) but lacks behavioral details such as rate limits, error handling, authentication needs, or whether it's a read-only operation. It doesn't disclose any traits beyond the basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and key details (title, description, image URL). Every word earns its place with no redundancy or waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 required parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks completeness in behavioral transparency and usage guidelines, leaving gaps for an AI agent to infer operational details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the parameters (year, month, day with formats). The description adds no additional meaning beyond implying date specificity, which is already clear from the schema. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('Wikipedia's picture of the day'), specifying what information is retrieved (title, description, image URL). It distinguishes from siblings like 'featured_article' or 'most_read' by focusing on the picture of the day, though it doesn't explicitly contrast them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for a specific date ('for a specific date'), suggesting when to use it, but provides no explicit guidance on when not to use it or alternatives (e.g., vs. 'on_this_day' for historical events). Usage context is implied but not detailed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
