MCP Finder
Server Details
Find the right MCP server for your task. 4,500+ servers ranked by community trust.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools

find_capability (grade C)
Find the best tool for your task across MCP servers and OpenClaw skills. Optionally filter by ecosystem: 'mcp' or 'openclaw'.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| ecosystem | No | | |
| max_results | No | | |
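For concreteness, here is a minimal sketch of what a call to this tool might look like over the standard MCP JSON-RPC `tools/call` envelope. The argument values are purely illustrative, since the schema documents none of them:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "find_capability",
    "arguments": {
      "query": "convert a markdown file to PDF",
      "ecosystem": "mcp",
      "max_results": 5
    }
  }
}
```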
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions finding the 'best' tool without explaining the ranking logic, and omits the return format, pagination behavior, and safety characteristics (read-only vs destructive).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states purpose, second states optional filter. Every word earns its place and critical info is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a meta-tool (finding capabilities) with 3 parameters and no output schema, the description lacks critical context: return value structure, how results are ranked, and complete parameter documentation. Insufficient given the tool's complexity and zero annotation coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, description partially compensates: it implies 'query' is task-based ('for your task') and documents valid 'ecosystem' values ('mcp' or 'openclaw'). However, 'max_results' is completely undocumented in both schema and description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States a specific action ('Find') and resource ('tool') with clear scope ('across MCP servers and OpenClaw skills'). However, it doesn't explicitly differentiate itself from the sibling tools 'search' and 'find_server', though the scope hints at the distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions the optional ecosystem filter but provides no explicit guidance on when to use this versus siblings like 'search' or 'find_server', nor prerequisites for the query format.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_server (grade C)
Find MCP servers for a given task. Describe what you need in natural language.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| max_results | No | | |
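As with find_capability, a hypothetical `tools/call` sketch; the natural-language query and the max_results value are invented for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "find_server",
    "arguments": {
      "query": "I need to read and write rows in a Postgres database",
      "max_results": 3
    }
  }
}
```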
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, yet it reveals nothing about return format, error handling, external dependencies, or whether results are ranked by relevance. The agent cannot predict what the tool returns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient, front-loaded sentences with zero redundancy. Every word serves a purpose in defining the tool's function and input style.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero schema descriptions, no annotations, and no output schema, the description is insufficient for a search tool. It fails to document the 'max_results' parameter or describe what constitutes a successful response (e.g., list of server names, descriptions, connection details).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While it hints that 'query' accepts natural language ('Describe what you need'), it completely omits any explanation of 'max_results' or the expected format/constraints of the query parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose ('Find MCP servers for a given task'), providing a specific verb and resource. However, it doesn't explicitly differentiate the tool from siblings like 'find_capability' or 'search' to help the agent choose between them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence 'Describe what you need in natural language' provides input formatting guidance but fails to specify when to use this tool versus alternatives like 'find_capability' or 'search', and mentions no prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
outcome (grade A)
Report whether a search result helped complete your task. Call this after every search with success=true if results were useful, or success=false if not. This is what makes Meyhem rankings improve over time.
| Name | Required | Description | Default |
|---|---|---|---|
| success | Yes | | |
| metadata | No | | |
| search_id | Yes | | |
| signal_type | No | | explicit |
| selection_id | No | | |
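A sketch of the feedback call the description asks for after every search. The search_id value is invented (presumably it echoes an identifier from a prior search response), and signal_type is shown at its listed default; metadata and selection_id are omitted because their semantics are undocumented:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "outcome",
    "arguments": {
      "search_id": "srch_example_123",
      "success": true,
      "signal_type": "explicit"
    }
  }
}
```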
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses the key side effect ('This is what makes Meyhem rankings improve over time'). However, it omits details about idempotency, error conditions, return values, or what happens if called multiple times for the same search_id.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: purpose first ('Report whether...'), usage instructions second ('Call this after every search...'), and behavioral context third ('This is what makes Meyhem rankings...'). Zero redundancy; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero schema coverage, no annotations, and five parameters, the description covers the essential happy path (reporting success/failure) but leaves significant gaps. The undocumented optional parameters (metadata, signal_type, selection_id) and lack of output schema explanation make this minimally viable with clear omissions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While it explains the 'success' parameter well (mapping true/false to useful/not useful) and implies 'search_id' context, it fails to document three parameters entirely: 'metadata', 'signal_type', and 'selection_id', leaving the agent unaware of their purposes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Report') and clear resource ('search result'), explicitly distinguishing this feedback tool from its siblings (find_capability, find_server, search, select) which are retrieval-oriented. It clearly defines the scope as reporting task completion outcomes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit timing ('Call this after every search') and clear parameter guidance ('success=true if results were useful, or success=false if not'). However, it lacks explicit guidance on when NOT to use it (e.g., if no search was performed) or prerequisites for the search_id parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (grade A)
Search the web and return ranked results with feedback-driven scoring. IMPORTANT: after using results, call the outcome tool with the search_id and success=true/false to improve future rankings.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| agent_id | No | | |
| freshness | No | | |
| session_id | No | | |
| max_results | No | | |
| include_content | No | | |
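A hypothetical invocation sketch. All argument values are illustrative, and the meaning of include_content (presumably whether full page content is returned inline with each result) is an assumption, since neither schema nor description defines it:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {
      "query": "streamable HTTP transport in the MCP specification",
      "max_results": 5,
      "include_content": false
    }
  }
}
```

The response presumably carries a search_id, which the agent is expected to pass to the outcome tool once it knows whether the results helped.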
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses the feedback mechanism ('improve future rankings') but omits critical operational details such as rate limits, authentication requirements, and error handling. It mentions 'search_id', implying an output structure, but there is no output schema to confirm it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose, second delivers critical workflow instruction. The 'IMPORTANT:' flag appropriately prioritizes the outcome tool requirement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 undocumented parameters and no output schema, the description is incomplete. While it explains the unique feedback workflow, it provides no guidance on parameter semantics, return value structure, or error scenarios necessary for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fails to compensate for any of the 6 parameters. It confusingly references 'search_id' (likely an output field, not input) without explaining input parameters like 'freshness', 'include_content', or 'agent_id' which are non-obvious.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action ('Search the web') and distinctive feature ('feedback-driven scoring'). It implicitly distinguishes from sibling tools like find_capability/find_server by focusing on web search, though explicit differentiation is lacking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: it mandates calling the 'outcome' sibling tool after using results with specific parameters (search_id, success=true/false), establishing a clear workflow sequence and when to use the complementary tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
select (grade C)
Select a search result to get its full content
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
| position | Yes | | |
| provider | Yes | | |
| search_id | Yes | | |
| is_terminal | No | | |
| token_count | No | | |
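A sketch of selecting the first result of a hypothetical earlier search. The url, provider, and search_id values are invented, and the assumption that url, position, and provider simply echo fields from the search response is exactly the kind of relationship the description leaves unstated:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "select",
    "arguments": {
      "url": "https://example.com/docs/mcp-transport",
      "position": 1,
      "provider": "example-provider",
      "search_id": "srch_example_123"
    }
  }
}
```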
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions retrieving 'full content' versus snippets, but fails to disclose potential side effects, authentication requirements, rate limiting, or what happens if the search_id is stale or invalid. The 'is_terminal' and 'token_count' parameters suggest complex behavior that isn't explained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is front-loaded with the action and resource, containing no wasted words. However, for a tool with six parameters and zero schema documentation, this brevity borders on underspecification rather than optimal conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given six parameters (four required), zero schema description coverage, no annotations, and no output schema, the description is insufficient. It fails to explain the parameter model, return format, or workflow integration necessary for correct invocation without trial and error.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While 'select a search result' provides semantic context for the parameter group (search_id, url, position), it fails to explain individual parameter meanings, relationships (why both url and position?), or the purpose of optional flags like 'is_terminal' and 'token_count'. It adds minimal value beyond the schema's titles.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action (select) and resource (search result), and implies the outcome (get full content). It distinguishes from the 'search' sibling by specifying this retrieves full content rather than just finding results. However, it lacks explicit differentiation from other siblings like 'outcome' or 'find_capability'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a workflow by referencing 'search result,' suggesting use after a search operation, but provides no explicit when-to-use guidance, prerequisites (e.g., requiring a prior search_id), or when-not-to-use alternatives. The agent must infer the relationship to the 'search' tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes. Claiming the listing lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.