MCPfinder

Server Details

The MCP server that finds MCP servers. Aggregates Official Registry, Glama, and Smithery.

Status: Healthy
Transport: Streamable HTTP
Repository: lksrz/mcpfinder
GitHub Stars: 3
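
Since the transport is Streamable HTTP, a client can connect with the official MCP TypeScript SDK. A minimal sketch, assuming a hypothetical endpoint URL (the page does not show the actual one) and the @modelcontextprotocol/sdk package:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical endpoint; substitute the real MCPfinder URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));

const client = new Client({ name: "mcpfinder-demo", version: "1.0.0" });
await client.connect(transport);

// Should list the four tools described below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```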

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

(Connection path: MCP client → Glama → MCP server)

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no ambiguity: get_mcp_server_details retrieves details for a specific server, list_trending_servers shows trending servers, search_mcp_servers searches the registry, and test_echo is a distinct test utility. The purposes are well-defined and non-overlapping.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case: get_mcp_server_details, list_trending_servers, search_mcp_servers, and test_echo. The naming is predictable and uniform throughout the set.

Tool Count: 4/5

With 4 tools, the count is reasonable for a server registry domain, though slightly thin for comprehensive coverage. It includes core operations like search, list, and get, but could benefit from additional tools like update or delete for full lifecycle management.

Completeness: 3/5

The toolset covers basic discovery and retrieval operations (search, list, get) but lacks update, delete, or creation tools for managing the registry. This creates notable gaps in CRUD coverage, though agents can still perform core lookup tasks effectively.

Available Tools (4)
get_mcp_server_details: C

Get detailed information about a specific MCP server

Parameters (JSON Schema)
name (required): Exact name of the MCP server to look up
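
A sketch of calling this tool with the client from the connection example above; the argument value is illustrative, since the page does not show a known-good server name:

```typescript
// "mcpfinder" is a guessed name for illustration only.
const details = await client.callTool({
  name: "get_mcp_server_details",
  arguments: { name: "mcpfinder" },
});
console.log(details.content);
```
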
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves information (implying read-only), but doesn't clarify whether it requires authentication, has rate limits, what happens if the server doesn't exist, or the format of returned data. For a tool with zero annotation coverage, this is insufficient behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized for a simple lookup tool and front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete for a tool that presumably returns structured data. It doesn't explain what 'detailed information' includes, potential error conditions, or response format. For a lookup tool with these gaps, more context is needed to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'name' parameter clearly documented as 'Exact name of the MCP server to look up'. The description doesn't add any parameter details beyond what the schema provides, such as format examples or constraints. Baseline 3 is appropriate when the schema does all the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('detailed information about a specific MCP server'). It distinguishes from sibling tools like 'list_trending_servers' and 'search_mcp_servers' by focusing on a single server rather than multiple servers. However, it doesn't explicitly mention what 'detailed information' includes, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing to know the server name), contrast with siblings like 'search_mcp_servers' for unknown names, or specify use cases. This leaves the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
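
One way to close the Behavior and Usage Guidelines gaps flagged above is to pair a contrastive description with the standard MCP tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint). A sketch only; the wording and hint values are illustrative, not taken from the actual server:

```typescript
// Illustrative rewrite of the tool definition; not the server's actual code.
const getMcpServerDetails = {
  name: "get_mcp_server_details",
  description:
    "Get detailed information about one MCP server by exact name. Read-only. " +
    "Use search_mcp_servers first if you do not know the exact name.",
  inputSchema: {
    type: "object",
    properties: {
      name: {
        type: "string",
        description: "Exact name of the MCP server to look up",
      },
    },
    required: ["name"],
  },
  annotations: {
    readOnlyHint: true,     // retrieves data, modifies nothing
    destructiveHint: false, // no destructive side effects
    idempotentHint: true,   // repeated calls return the same result
    openWorldHint: true,    // queries external registries
  },
};
```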

search_mcp_servers: C

Search for MCP servers in the MCPfinder registry

Parameters (JSON Schema)
tag (optional): Filter results by a specific tag
limit (optional): Maximum number of results to return
query (optional): Search query to find servers by name, tags, or description
capability (optional): Filter by capability type
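
A sketch of a filtered search using the same client; the query and limit values are made up for the example:

```typescript
// Illustrative arguments; tag, limit, query, and capability may be combined.
const results = await client.callTool({
  name: "search_mcp_servers",
  arguments: { query: "github", limit: 5 },
});
console.log(results.content);
```
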
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but states only the basic action, without disclosing behavioral traits. It doesn't mention rate limits, authentication needs, pagination, or what happens when there are no results (e.g., an empty list versus an error). For a search tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and wastes no space, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a search tool with 4 parameters and no annotations or output schema, the description is incomplete. It lacks details on return values (e.g., structure of results), error handling, or behavioral context, which are crucial for effective use by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting all parameters (tag, limit, query, capability) with clear details. The description adds no additional meaning beyond the schema, such as examples or usage tips. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search for') and resource ('MCP servers in the MCPfinder registry'), providing a specific purpose. However, it doesn't explicitly differentiate from sibling tools like 'list_trending_servers' or 'get_mcp_server_details', which likely serve different functions (e.g., listing trending vs. detailed lookup).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives like 'list_trending_servers' or 'get_mcp_server_details'. It lacks context on scenarios where searching is preferred over listing or fetching details, leaving usage decisions ambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

test_echo: B

Test tool that echoes back the input

Parameters (JSON Schema)
message (required): Message to echo back
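
A sketch of a round-trip call, which can double as a connectivity smoke test before using the registry tools; the message text is arbitrary:

```typescript
const echoed = await client.callTool({
  name: "test_echo",
  arguments: { message: "hello from the sketch" },
});
console.log(echoed.content); // should echo the message back
```
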
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'echoes back' implies a read-only operation that returns the input unchanged, it doesn't specify whether there are any side effects, rate limits, authentication requirements, or error conditions. The description is too minimal for a tool that might have behavioral nuances.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise at just seven words. Every word earns its place: 'Test tool' establishes context, 'that echoes back' specifies the action, and 'the input' identifies the resource. There's zero waste or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple echo tool with one parameter and no output schema, the description is adequate but minimal. It covers the basic purpose but lacks details about behavioral characteristics, usage context, or output format. Given the tool's simplicity, this is acceptable but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions 'the input' but doesn't elaborate on parameter specifics beyond what the schema already provides. With 100% schema description coverage (the 'message' parameter is fully documented in the schema), the description adds minimal value. This meets the baseline expectation when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('echoes back') and resource ('the input'), making it immediately understandable. However, it doesn't distinguish this test tool from potential sibling tools that might also perform echo operations, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it's labeled as a 'Test tool', there's no explicit mention of when it should be used, what testing scenarios it supports, or how it differs from the three sibling tools listed (get_mcp_server_details, list_trending_servers, search_mcp_servers).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
