Server Details

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
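
Since the transport listed above is Streamable HTTP, any standard MCP client can connect directly. A minimal sketch, assuming the official MCP Python SDK (`mcp` package); the endpoint URL below is a placeholder for the one shown on this page:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder; substitute the server's actual Streamable HTTP endpoint.
SERVER_URL = "https://example.invalid/mcp"

async def main() -> None:
    # The context manager yields read/write streams plus a session-id getter.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # should enumerate the 5 tools below
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```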

Available Tools

5 tools
find_capability (C)

Find the best tool for your task across MCP servers and OpenClaw skills. Optionally filter by ecosystem: 'mcp' or 'openclaw'.

Parameters (JSON Schema)
  query (required)
  ecosystem (optional)
  max_results (optional)
None of the parameters carries a schema description or default.
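
A hypothetical invocation, reusing `session` from the connection sketch above; every argument value is illustrative, since the schema ships no parameter descriptions:

```python
# Hypothetical call; argument values are made up for illustration.
result = await session.call_tool(
    "find_capability",
    {
        "query": "extract tables from a PDF",  # required; natural-language task
        "ecosystem": "mcp",                    # optional; "mcp" or "openclaw"
        "max_results": 5,                      # optional; type and default undocumented
    },
)
```
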
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It mentions finding the 'best' tool without explaining ranking logic, and omits return format, pagination behavior, or safety characteristics (read-only vs destructive).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first states purpose, second states optional filter. Every word earns its place and critical info is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a meta-tool (finding capabilities) with 3 parameters and no output schema, the description lacks critical context: return value structure, how results are ranked, and complete parameter documentation. Insufficient given the tool's complexity and zero annotation coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, description partially compensates: it implies 'query' is task-based ('for your task') and documents valid 'ecosystem' values ('mcp' or 'openclaw'). However, 'max_results' is completely undocumented in both schema and description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific action ('Find') and resource ('tool') with clear scope ('across MCP servers and OpenClaw skills'). However, it doesn't explicitly differentiate itself from the sibling tools 'search' and 'find_server', though the scope hints at the distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Mentions the optional ecosystem filter but provides no explicit guidance on when to use this versus siblings like 'search' or 'find_server', nor prerequisites for the query format.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_server (C)

Find MCP servers for a given task. Describe what you need in natural language.

Parameters (JSON Schema)
  query (required)
  max_results (optional)
None of the parameters carries a schema description or default.
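
As above, a hypothetical call through `session`; the description asks for a natural-language `query`, while `max_results` is undocumented, so the value below is a guess:

```python
# Hypothetical call; argument values are made up for illustration.
result = await session.call_tool(
    "find_server",
    {
        "query": "manage GitHub issues and pull requests",  # required; natural language
        "max_results": 3,                                   # optional; undocumented
    },
)
```
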
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure, yet it reveals nothing about return format, error handling, external dependencies, or whether results are ranked by relevance. The agent cannot predict what the tool returns.

Conciseness: 5/5

The description consists of two efficient, front-loaded sentences with zero redundancy. Every word serves a purpose in defining the tool's function and input style.

Completeness: 2/5

Given zero schema descriptions, no annotations, and no output schema, the description is insufficient for a search tool. It fails to document the 'max_results' parameter or describe what constitutes a successful response (e.g., list of server names, descriptions, connection details).

Parameters: 2/5

Schema description coverage is 0%, requiring the description to compensate. While it hints that 'query' accepts natural language ('Describe what you need'), it completely omits any explanation of 'max_results' or the expected format/constraints of the query parameter.

Purpose: 4/5

The description clearly states what the tool does ('Find MCP servers for a given task'), providing a specific verb and resource. However, it doesn't explicitly differentiate itself from sibling tools like 'find_capability' or 'search' to help the agent choose between them.

Usage Guidelines: 2/5

The second sentence 'Describe what you need in natural language' provides input formatting guidance but fails to specify when to use this tool versus alternatives like 'find_capability' or 'search', and mentions no prerequisites or exclusions.

outcome (A)

Report whether a search result helped complete your task. Call this after every search with success=true if results were useful, or success=false if not. This is what makes Meyhem rankings improve over time.

Parameters (JSON Schema)
  success (required)
  metadata (optional)
  search_id (required)
  signal_type (optional; default: explicit)
  selection_id (optional)
None of the parameters carries a schema description.
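
A hypothetical report, again via `session`. The `search_id` is a placeholder that would presumably come from an earlier search result (an assumption; no output schema confirms it), and `metadata`/`selection_id` are omitted because nothing documents them:

```python
# Hypothetical call; argument values are made up for illustration.
result = await session.call_tool(
    "outcome",
    {
        "search_id": "abc123",      # required; placeholder id from a prior search
        "success": True,            # required; True if the results were useful
        "signal_type": "explicit",  # optional; matches the schema default
    },
)
```
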
Behavior: 3/5

With no annotations provided, the description carries the full burden and discloses the key side effect ('This is what makes Meyhem rankings improve over time'). However, it omits details about idempotency, error conditions, return values, or what happens if called multiple times for the same search_id.

Conciseness: 5/5

Three sentences efficiently structured: purpose first ('Report whether...'), usage instructions second ('Call this after every search...'), and behavioral context third ('This is what makes Meyhem rankings...'). Zero redundancy; every sentence earns its place.

Completeness: 3/5

Given zero schema coverage, no annotations, and five parameters, the description covers the essential happy path (reporting success/failure) but leaves significant gaps. The undocumented optional parameters (metadata, signal_type, selection_id) and lack of output schema explanation make this minimally viable with clear omissions.

Parameters: 2/5

Schema description coverage is 0%, requiring the description to compensate. While it excellently explains the 'success' parameter (mapping true/false to useful/not useful) and implies 'search_id' context, it completely fails to document three parameters: 'metadata', 'signal_type', and 'selection_id', leaving the agent unaware of their purposes.

Purpose: 5/5

The description uses a specific verb ('Report') and clear resource ('search result'), explicitly distinguishing this feedback tool from its siblings (find_capability, find_server, search, select) which are retrieval-oriented. It clearly defines the scope as reporting task completion outcomes.

Usage Guidelines: 4/5

It provides explicit timing ('Call this after every search') and clear parameter guidance ('success=true if results were useful, or success=false if not'). However, it lacks explicit guidance on when NOT to use it (e.g., if no search was performed) or prerequisites for the search_id parameter.

select (C)

Select a search result to get its full content

Parameters (JSON Schema)
  url (required)
  position (required)
  provider (required)
  search_id (required)
  is_terminal (optional)
  token_count (optional)
None of the parameters carries a schema description or default.
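
A hypothetical call via `session`. All four required values are placeholders, presumably echoed back from a prior search result; the schema describes none of them, and `is_terminal`/`token_count` are left out for the same reason:

```python
# Hypothetical call; argument values are made up for illustration.
result = await session.call_tool(
    "select",
    {
        "url": "https://example.invalid/doc",  # required; result URL (placeholder)
        "position": 1,                         # required; rank in the result list?
        "provider": "example",                 # required; meaning undocumented
        "search_id": "abc123",                 # required; id of the earlier search
    },
)
```
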
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions retrieving 'full content' versus snippets, but fails to disclose potential side effects, authentication requirements, rate limiting, or what happens if the search_id is stale or invalid. The 'is_terminal' and 'token_count' parameters suggest complex behavior that isn't explained.

Conciseness: 4/5

The single sentence is front-loaded with the action and resource, containing no wasted words. However, for a tool with six parameters and zero schema documentation, this brevity borders on underspecification rather than optimal conciseness.

Completeness: 2/5

Given six parameters (four required), zero schema description coverage, no annotations, and no output schema, the description is insufficiently complete. It fails to explain the parameter model, return format, or workflow integration necessary for correct invocation without trial and error.

Parameters: 2/5

Schema description coverage is 0%, requiring the description to compensate. While 'select a search result' provides semantic context for the parameter group (search_id, url, position), it fails to explain individual parameter meanings, relationships (why both url and position?), or the purpose of optional flags like 'is_terminal' and 'token_count'. It adds minimal value beyond the schema's titles.

Purpose: 4/5

The description clearly states the core action (select) and resource (search result), and implies the outcome (get full content). It distinguishes from the 'search' sibling by specifying this retrieves full content rather than just finding results. However, it lacks explicit differentiation from other siblings like 'outcome' or 'find_capability'.

Usage Guidelines: 2/5

The description implies a workflow by referencing 'search result,' suggesting use after a search operation, but provides no explicit when-to-use guidance, prerequisites (e.g., requiring a prior search_id), or when-not-to-use alternatives. The agent must infer the relationship to the 'search' tool.

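Taken together, the descriptions imply a search → select → outcome loop that agents are left to infer. One plausible reading, sketched against the same assumed `session`; result parsing is guesswork because no output schema is published:

```python
# Inferred workflow; every field access on the results is an assumption.
search = await session.call_tool("find_server", {"query": "weather forecasts"})
# 1. Parse search.content for a result and the search_id it carries (format undocumented).
# 2. Fetch the chosen result's full content:
#    await session.call_tool("select", {"url": ..., "position": ...,
#                                       "provider": ..., "search_id": ...})
# 3. Report whether it helped, so rankings improve over time:
#    await session.call_tool("outcome", {"search_id": ..., "success": True})
```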