Glama

ProfessionalWiki-mediawiki-mcp-server

Server Details

Enable Large Language Model clients to interact seamlessly with any MediaWiki wiki. Perform action…

Status: Healthy
Transport: Streamable HTTP


Available Tools

16 tools
add-wiki (Grade: B)
Destructive

Adds a new wiki to the MCP resources from a URL.

Parameters (JSON Schema)
- wikiUrl (required): Any URL from the target wiki (e.g. https://en.wikipedia.org/wiki/Main_Page)
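For reference, a tool invocation of this kind would be framed as an MCP `tools/call` JSON-RPC request. The following is a sketch with an illustrative argument value, not output captured from this server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "add-wiki",
    "arguments": {
      "wikiUrl": "https://en.wikipedia.org/wiki/Main_Page"
    }
  }
}
```

Note that, as the Behavior review observes, the description does not say what happens if the same wiki is added twice.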
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructiveHint=true, but the description does not explain what makes an 'add' operation destructive (e.g., overwriting existing wiki configurations, storage implications). No disclosure of idempotency, validation behavior, or error cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Information is front-loaded with action verb, and every phrase ('to the MCP resources', 'from a URL') contributes necessary scoping without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter tool with complete schema coverage, but incomplete given the unexplained destructive annotation and lack of output specification. Should clarify the persistent state change given destructiveHint=true.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed parameter description including format example. The tool description mentions 'from a URL' which aligns with but does not extend beyond the schema documentation, warranting baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb ('Adds'), resource ('wiki'), and mechanism ('from a URL') with destination context ('MCP resources'). However, it fails to differentiate from sibling 'set-wiki', which likely configures the active wiki versus registering a new one.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like 'set-wiki' or 'remove-wiki', nor does it mention prerequisites such as URL accessibility or authentication requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create-page (Grade: B)
Destructive

Creates a wiki page with the provided content.

Parameters (JSON Schema)
- title (required): Wiki page title
- source (required): Page content in the format specified by the contentModel parameter
- comment (optional): Reason for creating the page
- contentModel (optional): Type of content on the page (default: wikitext)
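A sketch of a corresponding `tools/call` request, with illustrative title and wikitext content (the optional comment and contentModel fields are included to show their placement):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "create-page",
    "arguments": {
      "title": "Example page",
      "source": "== Overview ==\nThis page was created via the MCP server.",
      "comment": "Initial creation",
      "contentModel": "wikitext"
    }
  }
}
```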
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description aligns with annotations (destructiveHint: true, readOnlyHint: false) by stating 'Creates'. It mentions 'provided content' which connects to the source parameter, but adds no further context about side effects, failure modes (e.g., duplicate titles), reversibility, or authorization requirements beyond what the annotations indicate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single sentence with no filler words or redundancy. However, 'with the provided content' is slightly implied by the existence of content parameters, making it marginally less information-dense than ideal, though not wasteful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 4-parameter tool with complete schema coverage, but minimal given the destructive nature of the operation. Lacks critical context such as error handling for existing pages, the relationship to 'update-page', and expected behavior on success. Meets minimum viability but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema carries the full burden of parameter documentation. The description mentions 'provided content' which loosely references the 'source' parameter, but adds no syntactic details, examples, or clarifications beyond the schema definitions. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific verb ('Creates') and resource ('wiki page'), clearly indicating the tool's function. However, it does not explicitly differentiate from sibling tools like 'add-wiki' (which creates a wiki instance vs. a page) or 'update-page', potentially causing confusion about which to use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like 'update-page' (e.g., when a page already exists) or prerequisites (e.g., whether the target wiki must exist). No error conditions or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete-page (Grade: C)
Destructive

Deletes a wiki page.

Parameters (JSON Schema)
- title (required): Wiki page title
- comment (optional): Reason for deleting the page
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare destructiveHint=true and readOnlyHint=false, establishing this is a mutating destruction operation. The description adds no behavioral context beyond this—omitting whether deletion is permanent (vs. recoverable via undelete-page), if it cascades to page history, or audit implications of the 'comment' parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at three words with no redundancy. However, for a destructive operation, this brevity leaves critical safety information unstated, slightly undermining its effectiveness despite the efficient structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a destructive tool with a recovery sibling (undelete-page) and no output schema, the description is insufficient. It omits reversibility details, permission requirements, and the relationship between 'comment' and audit trails—critical gaps for a destructive operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions ('Wiki page title', 'Reason for deleting the page'), so the structured fields carry the burden. The description text adds no parameter semantics beyond what the schema provides, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a clear verb ('Deletes') and resource ('wiki page'), distinguishing it from 'remove-wiki' (which removes entire wikis). However, it fails to differentiate from 'undelete-page' or clarify whether this is a hard or soft deletion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus alternatives like 'update-page' (for archival) or 'undelete-page' (for recovery). Does not mention prerequisites like admin permissions or warn about permanent data loss despite the destructive nature.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-category-members (Grade: A)
Read-only

Gets all members in the category. Returns only page IDs, namespaces, and titles.

Parameters (JSON Schema)
- types (optional): Types of members to include
- category (required): Category name
- namespaces (optional): Namespace IDs to filter by
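A sketch of a `tools/call` request for this tool. The values are illustrative, and note that the schema does not document whether the category name needs a "Category:" prefix; the bare form is assumed here:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get-category-members",
    "arguments": {
      "category": "Physics",
      "namespaces": [0]
    }
  }
}
```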
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations indicate readOnlyHint=true, the description adds valuable behavioral context by specifying exactly what fields are returned ('only page IDs, namespaces, and titles'). This compensates for the lack of an output schema by setting clear expectations about the response structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero waste. The first sentence front-loads the action (gets members), and the second immediately clarifies the return value scope. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 100% schema coverage, simple parameter types, and read-only annotations, the description is sufficiently complete. It compensates for the missing output schema by describing the return fields. However, it lacks notes on pagination behavior for large categories, preventing a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents all three parameters (category, types, namespaces). The description does not add additional semantic context—such as whether the category name should include the 'Category:' prefix or examples of namespace IDs—so it meets the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Gets all members in the category' with a specific verb and resource. It implicitly distinguishes from siblings like search-page and get-page by focusing on category traversal, though it doesn't explicitly mention wiki categories or contrast with alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like search-page or get-page. It does not mention prerequisites (e.g., whether the category must exist) or when users should prefer category traversal over text search.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-file (Grade: B)
Read-only

Returns information about a file, including links to download the file in thumbnail, preview, and original formats.

Parameters (JSON Schema)
- title (required): File title
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive behavior. The description adds valuable context about what specifically gets returned (thumbnail, preview, and original format links) beyond the annotations. However, it omits error handling behavior, rate limits, or whether the title parameter requires exact matching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, dense sentence that front-loads the core action and efficiently details the three download formats without redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately compensates for the missing output schema by describing the return structure (links to various formats). However, given the ambiguity of 'title' as a lookup key in a wiki context, it should specify whether partial matches work or exact titles are required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'title' parameter, the schema carries the semantic burden. The description does not add supplementary context about the parameter (e.g., case sensitivity, file extension requirements), warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific verb ('Returns') and resource ('file'), clearly indicating this is a retrieval operation that distinguishes from upload-file and upload-file-from-url siblings. It further specifies the return content (download links in multiple formats), giving precise scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like search-page-by-prefix when the exact title is unknown, nor does it mention prerequisites or error conditions (e.g., file not found).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-page (Grade: A)
Read-only

Returns a wiki page. Use metadata=true to retrieve the revision ID required by update-page. Set content="none" to fetch only metadata without content.

Parameters (JSON Schema)
- title (required): Wiki page title
- content (optional): Type of content to return (default: source)
- metadata (optional): Whether to include metadata (page ID, revision info, license) in the response
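A sketch of a `tools/call` request following the workflow the description recommends: fetching only metadata in order to obtain the revision ID needed by update-page. The page title is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get-page",
    "arguments": {
      "title": "Main Page",
      "metadata": true,
      "content": "none"
    }
  }
}
```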
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only safety (readOnlyHint: true), while the description adds valuable workflow context linking to update-page and explains the content filtering behavior. It does not describe error handling (e.g., missing pages) or authentication requirements, leaving minor behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero redundancy: sentence one states purpose, sentence two explains metadata parameter workflow, sentence three explains content parameter optimization. Perfectly front-loaded and information-dense.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with 100% schema coverage and safety annotations, the description is complete. It covers the essential workflow relationship with update-page (critical for this domain) and parameter optimization patterns without needing to specify output schema details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage, the description adds significant semantic value by explaining the functional purpose of parameters: metadata is for obtaining revision IDs needed for updates, and content='none' enables metadata-only retrieval. This contextual 'why' complements the schema's 'what'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with 'Returns a wiki page,' providing a specific verb and resource. It effectively distinguishes from siblings like get-file, get-revision, and get-page-history by specifying 'wiki page' and from update-page through the explicit workflow mention (revision ID retrieval).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use specific parameters: 'Use metadata=true to retrieve the revision ID required by update-page' establishes the prerequisite workflow with a sibling tool, and 'Set content=none to fetch only metadata' clarifies the metadata-only retrieval pattern.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-page-history (Grade: A)
Read-only

Returns information about the latest revisions to a wiki page, in segments of 20 revisions, starting with the latest revision. The response includes API routes for the next oldest, next newest, and latest revision segments.

Parameters (JSON Schema)
- title (required): Wiki page title
- filter (optional): Filter that returns only revisions with certain tags. Only one filter is supported per request.
- newerThan (optional): Revision ID of the newest revision to return
- olderThan (optional): Revision ID of the oldest revision to return
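A sketch of a `tools/call` request that pages backward through history from a known revision ID (the title and ID are illustrative; in practice the ID would come from a previous response):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get-page-history",
    "arguments": {
      "title": "Main Page",
      "olderThan": 123456
    }
  }
}
```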
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only safety (readOnlyHint: true), but the description adds valuable behavioral context: pagination size (20 revisions), ordering (latest first), and response structure details (includes API routes for next oldest/newest segments). This provides crucial context for navigating results that annotations do not cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two tightly constructed sentences with zero waste. The first front-loads the core purpose and pagination behavior; the second explains the response navigation structure. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description adequately explains the response structure (includes API routes for pagination). Combined with annotations indicating read-only behavior, this provides sufficient context for a history-retrieval tool, though explicit differentiation from get-revision would improve it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all four parameters (title, filter, newerThan, olderThan). The description does not add additional parameter semantics beyond what the schema provides, meeting the baseline expectation for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Returns information about') and resource ('latest revisions to a wiki page') and clearly implies distinction from siblings like get-revision (single revision) and get-page (current content) through the pagination details ('segments of 20 revisions'). However, it does not explicitly name sibling tools to contrast against.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through pagination constraints ('segments of 20', 'starting with the latest') but lacks explicit guidance on when to use this versus get-revision or get-page, and does not mention prerequisites like the page needing to exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-revision (Grade: C)
Read-only

Returns a revision of a wiki page.

Parameters (JSON Schema)
- content (optional): Type of content to return (default: source)
- metadata (optional): Whether to include metadata (revision ID, page ID, page title, user ID, user name, timestamp, comment, size, delta, minor, HTML URL) in the response
- revisionId (required): Revision ID
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true. The description adds no behavioral context beyond this, such as the fact that revisions are immutable, what the default 'source' content represents (wikitext), or that metadata must be explicitly requested to see author/timestamp information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficient and front-loaded with the verb, containing no wasted words. However, it is arguably too minimal for the tool's complexity, lacking necessary qualifying clauses about scope or prerequisites.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of multiple related read tools (get-page, get-page-history) and the non-obvious relationship between pages and revisions, the description should explain that this retrieves a specific historical version. It fails to provide this conceptual framework or explain the output structure despite no output schema being present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured data adequately documents all parameters. The description adds no parameter-specific guidance, but the baseline score of 3 applies since the schema carries the full semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic action (returns) and resource (revision), but fails to distinguish this tool from siblings like 'get-page' (which likely returns the current version) or 'get-page-history' (which returns a list of revisions). It does not clarify that this retrieves a specific historical snapshot by ID.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus 'get-page' or 'get-page-history'. It does not mention that users typically obtain the revisionId parameter from the page history, or when to prefer 'source' vs 'html' content types.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remove-wiki (Grade: B)
Destructive

Removes a wiki from the MCP resources.

Parameters (JSON Schema)
- uri (required): MCP resource URI of the wiki to remove (e.g. mcp://wikis/en.wikipedia.org)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructiveHint=true, so the safety profile is covered. The description adds valuable scope context ('from MCP resources') clarifying this removes the resource reference rather than deleting wiki content. However, it omits reversibility (can it be re-added?), side effects on existing references, and confirmation requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy. Information density is high and the description is appropriately front-loaded with the action and target.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter destructive operation with no output schema. The MCP resources scope is specified, but the description could improve by noting whether the operation is reversible (via add-wiki) or permanent, and confirming it doesn't affect the actual wiki data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents the 'uri' parameter including its format example. The description adds no additional parameter semantics beyond the schema, meeting the baseline expectation for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Removes' with clear resource target 'wiki' and scope 'from MCP resources'. This distinguishes it from sibling delete-page (which deletes content) by specifying the MCP resource registry context. However, it doesn't explicitly contrast with add-wiki or set-wiki siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. Given siblings include delete-page (content deletion) and add-wiki (registration), the description should explicitly clarify this deregisters the wiki from MCP without deleting content, and when to prefer this over other operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search-page (Grade: A)
Read-only

Search wiki page titles and contents for the provided search terms, and returns matching pages.

Parameters (JSON Schema):
- limit (optional): Maximum number of search results to return
- query (required): Search terms
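A short sketch of how a client might assemble the tools/call arguments for search-page. The argument keys come from the schema above; since the listing documents no default for limit, the sketch omits the key entirely (rather than sending null) to leave the default to the server.

```python
def build_search_page_call(query, limit=None):
    """Assemble tools/call params for search-page.

    'query' is required; 'limit' is optional and omitted entirely
    when not given, since the schema lists no default value.
    """
    arguments = {"query": query}
    if limit is not None:
        arguments["limit"] = limit
    return {"name": "search-page", "arguments": arguments}
```

For instance, build_search_page_call("edit conflict", limit=5) requests at most five full-text matches, while build_search_page_call("edit conflict") defers the result count to the server.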
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds valuable behavioral context that the search covers both titles and contents (not just metadata), but omits details about ranking algorithms, result formatting, or pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of 15 words with zero redundancy. Front-loaded with action verb ('Search'), immediately identifies scope ('titles and contents'), and mentions return value. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a straightforward read-only search tool with two simple parameters and safety annotations provided, the description is adequate. It mentions the return value ('matching pages') despite the absence of an output schema, though it could briefly clarify result structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for both 'query' and 'limit' parameters. The description mentions 'search terms' and implies multiple results ('matching pages'), but adds no syntax details, wildcard support, or default values beyond what the schema provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Search', 'returns') and clearly identifies the resource (wiki page titles and contents). It distinguishes from sibling 'search-page-by-prefix' by explicitly stating it searches both titles and contents, implying full-text search capability versus prefix-only matching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear scope (titles and contents) which implicitly suggests when to use this over title-only alternatives, but lacks explicit guidance such as 'use search-page-by-prefix for title prefix matching' or when to prefer get-page for exact matches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search-page-by-prefix (Grade: C)
Read-only

Performs a prefix search for page titles.

Parameters (JSON Schema):
- limit (optional): Maximum number of results to return
- prefix (required): Search prefix
- namespace (optional): Namespace to search
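A hedged sketch of the tools/call arguments for an autocomplete-style prefix lookup. Only prefix is required; the namespace value in the usage example (0 for the main article namespace) is a MediaWiki convention, not something the schema itself documents.

```python
def build_prefix_search_call(prefix, namespace=None, limit=None):
    """Assemble tools/call params for search-page-by-prefix.

    Only 'prefix' is required; 'namespace' and 'limit' are included
    only when supplied. The schema does not document accepted
    namespace values, so callers must know the wiki's numbering.
    """
    arguments = {"prefix": prefix}
    if namespace is not None:
        arguments["namespace"] = namespace
    if limit is not None:
        arguments["limit"] = limit
    return {"name": "search-page-by-prefix", "arguments": arguments}
```

For example, build_prefix_search_call("Albert Ein", namespace=0, limit=5) matches titles starting with "Albert Ein" in the main namespace.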
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true (indicating a safe read operation), the description adds no behavioral context beyond what the name and annotations already provide. It does not describe the return format (titles vs page objects), pagination behavior, or namespace filtering implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely brief at six words, with no wasted text. However, given the lack of output schema and the need to distinguish from similar tools, it may be overly terse—missing an opportunity to front-load key behavioral constraints or return value hints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with three parameters and no output schema, the description meets minimum viability by explaining the core operation. However, gaps remain: it does not describe what the tool returns (titles, IDs, or full page objects), nor does it explain pagination limits or namespace behavior, which are important given the 'limit' parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents all three parameters (prefix, limit, namespace). The description mentions none of them, but the baseline score of 3 applies here since the schema carries the semantic burden completely.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a 'prefix search for page titles,' specifying the verb (search), resource (page titles), and matching logic (prefix). However, it does not differentiate from the sibling 'search-page' tool, which likely performs full-text search rather than prefix matching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It fails to mention that it should be used for autocomplete-style title matching rather than content search, and does not reference the sibling 'search-page' tool as an alternative for full-text queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set-wiki (Grade: A)
Destructive

Sets the wiki to use for the current session. You MUST call this tool when interacting with a new wiki.

Parameters (JSON Schema):
- uri (required): MCP resource URI of the wiki to use (e.g. mcp://wikis/en.wikipedia.org)
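Since set-wiki takes a single MCP resource URI, a client can derive it from the wiki's domain. The sketch below assumes the mcp://wikis/<domain> format from the schema's example; if the server registers wikis under a different key, the URI should instead be taken from add-wiki's output.

```python
def build_set_wiki_call(domain):
    """Assemble tools/call params for set-wiki.

    Assumes the mcp://wikis/<domain> URI format shown in the
    schema's example; this is an inference from that one example,
    not a documented guarantee.
    """
    return {
        "name": "set-wiki",
        "arguments": {"uri": f"mcp://wikis/{domain}"},
    }
```

Per the description's imperative, this call must precede any interaction with a newly targeted wiki in the session.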
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructiveHint: true, but the description fails to explain what gets destroyed (e.g., cached page data, session context) or the side effects of switching wikis. It adds no behavioral context beyond the annotation flags.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first establishes purpose, second provides usage imperative. Zero redundancy, front-loaded, and appropriately sized for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers basic purpose and trigger condition, but omits explanation of destructive behavior, session persistence scope, and whether calling this invalidates previous get-page caches. Adequate but incomplete for a stateful session tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage ('MCP resource URI...'). Description adds no supplementary parameter semantics, but baseline 3 is appropriate since schema fully documents the single URI parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Sets'), resource ('wiki'), and scope ('current session'). Distinguishes from siblings like add-wiki/remove-wiki by emphasizing session activation vs. repository management.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states mandatory trigger condition ('You MUST call this tool when interacting with a new wiki'). Lacks explicit contrast with add-wiki to clarify when to register vs. activate a wiki.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

undelete-page (Grade: C)
Destructive

Undeletes a wiki page.

Parameters (JSON Schema):
- title (required): Wiki page title
- comment (optional): Reason for undeleting the page
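A minimal sketch of the tools/call arguments for undelete-page, using only the two parameters in the schema. The optional comment is included only when provided, so any server-side default log message is not overridden.

```python
def build_undelete_call(title, comment=None):
    """Assemble tools/call params for undelete-page.

    'title' is required; 'comment' is an optional reason that is
    omitted from the arguments when not supplied.
    """
    arguments = {"title": title}
    if comment is not None:
        arguments["comment"] = comment
    return {"name": "undelete-page", "arguments": arguments}
```

Note the prerequisite the description leaves implicit: the page must currently be in a deleted state for this call to make sense.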
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare this as destructive (destructiveHint: true) and not read-only, the description adds no behavioral context beyond this—such as whether undeletion restores full history, overwrites existing pages with the same title, requires specific permissions, or is idempotent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficient and front-loaded with the key action. While extremely brief, it avoids redundancy and clearly states the tool's primary function without filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive mutation operation with no output schema, the description is insufficient. It lacks critical context such as error conditions (e.g., what happens if the page isn't deleted), side effects, or the relationship to the 'delete-page' sibling tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is appropriately met. The description itself does not mention the 'title' or 'comment' parameters, but the schema adequately documents both, so no additional compensation is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Undeletes') and resource ('wiki page'), making the core function immediately clear. However, it does not explicitly differentiate from sibling tools like 'create-page' (which makes a new page) or clarify that the page must have been previously deleted.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., using 'create-page' if the page never existed), nor does it mention prerequisites like the page being in a deleted state or potential conflicts with existing pages of the same name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update-page (Grade: B)
Destructive

Updates a wiki page. Replaces the existing content of a page with the provided content.

Parameters (JSON Schema):
- title (required): Wiki page title
- source (required): Page content in the same content model as the existing page
- comment (optional): Summary of the edit
- latestId (required): Revision ID used as the base for the new source
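The required latestId implies an optimistic-locking workflow: read the page, remember its revision id, then submit that id alongside the full replacement source so the server can detect edit conflicts. A sketch under that assumption (how the revision id is obtained, e.g. from a prior page read, is not documented in the listing):

```python
def build_update_call(title, source, latest_id, comment=None):
    """Assemble tools/call params for update-page.

    'latestId' is the revision the new source was based on; sending
    a stale id presumably lets the server reject the edit as a
    conflict, though the listing does not document the failure mode.
    """
    arguments = {
        "title": title,
        "source": source,        # full replacement content, not a diff
        "latestId": latest_id,
    }
    if comment is not None:
        arguments["comment"] = comment
    return {"name": "update-page", "arguments": arguments}
```

Because the tool replaces content wholesale, a careful client should always regenerate source from the revision named by latest_id rather than from an older copy.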
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare destructiveHint=true and readOnlyHint=false. The description confirms the replacement behavior but adds no context about revision history preservation, conflict resolution when latestId is stale, or side effects beyond what annotations indicate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with minimal redundancy. The first sentence slightly restates the tool name, but the second efficiently captures the destructive replacement nature. Appropriately front-loaded with the key action (replace).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate given 100% schema coverage and present annotations, but gaps remain for a destructive mutation tool: no description of return values, conflict behavior when latestId mismatches, or whether previous revisions remain accessible.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'provided content' (referencing source parameter) but adds no semantic detail about the optimistic locking mechanism implied by latestId or edit summary purpose beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Updates a wiki page' and specifically notes it 'Replaces the existing content,' distinguishing it from sibling tools create-page (new pages) and delete-page (removal). The verb-resource combination is explicit and specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use update-page versus create-page (e.g., 'use this only when the page exists'), or prerequisites like requiring the latestId parameter for optimistic locking. Missing explicit alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upload-file (Grade: A)
Destructive

Uploads a file to the wiki from the local disk. Note: This tool is not available in the hosted (Cloudflare Workers) environment.

Parameters (JSON Schema):
- text (required): Wikitext on the file page
- title (required): File title
- comment (optional): Reason for uploading the file
- filepath (required): File path on the local disk
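Because this tool reads from the local disk (and is unavailable in the hosted Cloudflare Workers environment), a client can fail fast on a bad path before issuing the call. A hedged sketch of that client-side guard plus the argument assembly:

```python
import os

def build_upload_file_call(filepath, title, text, comment=None):
    """Assemble tools/call params for upload-file.

    Raises FileNotFoundError up front if the path is not a regular
    local file, since the tool uploads from the local disk; whether
    the server overwrites an existing file title is undocumented.
    """
    if not os.path.isfile(filepath):
        raise FileNotFoundError(f"upload-file needs a local file, got: {filepath!r}")
    arguments = {"filepath": filepath, "title": title, "text": text}
    if comment is not None:
        arguments["comment"] = comment
    return {"name": "upload-file", "arguments": arguments}
```

In the hosted environment, upload-file-from-url is the applicable sibling instead.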
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true, and the description adds critical behavioral context regarding deployment environment restrictions not captured in annotations. However, it omits details about conflict resolution (overwrites vs. errors) or authentication requirements typical for destructive operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences total with zero waste. The first sentence establishes purpose and scope; the second provides a critical environment constraint. Information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter destructive operation without output schema, the description adequately covers core functionality and environment constraints. Minor gap regarding idempotency or overwrite behavior keeps it from a 5, but sufficient for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description mentions 'local disk' which reinforces the filepath parameter's purpose, but does not add syntax details, format expectations, or semantic relationships between parameters (e.g., how title relates to filepath).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Uploads a file to the wiki from the local disk' with specific verb (uploads), resource (file), and destination (wiki). The phrase 'from the local disk' effectively distinguishes this tool from the sibling 'upload-file-from-url'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit constraint 'not available in the hosted (Cloudflare Workers) environment' which functions as a clear when-not guideline. Implicitly guides users toward the URL variant by emphasizing 'local disk,' though it does not explicitly name the sibling alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upload-file-from-url (Grade: B)
Destructive

Uploads a file to the wiki from a web URL.

Parameters (JSON Schema):
- url (required): URL of the file to upload
- text (required): Wikitext on the file page
- title (required): File title
- comment (optional): Reason for uploading the file
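A sketch of the tools/call arguments for the URL variant. The scheme check is a client-side sanity guard of our own; the listing does not document which URL forms or file formats the server actually accepts.

```python
from urllib.parse import urlparse

def build_upload_from_url_call(url, title, text, comment=None):
    """Assemble tools/call params for upload-file-from-url.

    Performs a basic http(s) scheme check before building the call;
    server-side requirements (direct links, supported formats,
    permissions) are undocumented in the listing.
    """
    if urlparse(url).scheme not in ("http", "https"):
        raise ValueError(f"expected an http(s) URL, got: {url!r}")
    arguments = {"url": url, "title": title, "text": text}
    if comment is not None:
        arguments["comment"] = comment
    return {"name": "upload-file-from-url", "arguments": arguments}
```

This is the variant to prefer when no local filesystem is available, such as the hosted environment where upload-file is disabled.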
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations indicate destructiveHint: true, the description fails to disclose what gets destroyed (e.g., overwrites existing files with same title), side effects (creates file description pages), or return behavior. It adds no behavioral context beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficiently structured with the action verb front-loaded. However, it lacks any qualifying clauses that could provide critical safety context without sacrificing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive file operation with 4 parameters and no output schema, the description is inadequate. It omits overwrite behavior, the relationship between 'text' (wikitext) and the uploaded file, and error conditions (e.g., invalid URL, duplicate titles).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all four parameters (url, title, text, comment). The description mentions the URL source but adds no additional semantic context about parameter relationships or the wikitext formatting requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb (uploads), resource (file), and distinct mechanism (from a web URL) that clearly differentiates this tool from the sibling 'upload-file' tool. It precisely communicates the core functionality in a single clause.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus the sibling 'upload-file', no mention of prerequisites (e.g., authentication, file permissions), and no warnings about requirements for the URL (e.g., direct links, supported formats).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
