MCP Verify
Server Details
Discovery, comparison, readiness scoring, and validation reports for remote MCP servers.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 9 of 9 tools scored.
Multiple tools deal with retrieving server information (fetch, get_server_report, search, search_servers, compare_servers, recommend_servers), leading to potential confusion. The potential confusion between 'search' and 'search_servers' is mitigated by their descriptions, but the overlap in purpose remains.
Most tools use the verb_noun pattern (e.g., compare_servers, export_policy), but 'fetch' and 'search' are single verbs without a noun, breaking the pattern. The presence of both 'search' and 'search_servers' adds inconsistency.
A set of 9 tools is well-scoped for the server's verification and policy domain. Each tool addresses a distinct aspect without being overwhelming.
The tool set covers search, retrieval, comparison, recommendation, policy export, and decision routing. Minor gaps exist (e.g., no tool to manage subscriptions or update policies), but core verification workflows are well-supported.
Available Tools (9 tools)

compare_servers (Grade A, Read-only)
Compare up to four MCP servers side by side across score, verdict, auth, tool count, prompts/resources, and freshness.
| Name | Required | Description | Default |
|---|---|---|---|
| identifiers | Yes | Canonical server identifiers in namespace/name format. | |
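As a sketch, here is how an agent might invoke this tool using MCP's standard JSON-RPC tools/call method; the server identifiers are hypothetical placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compare_servers",
    "arguments": {
      "identifiers": ["example-org/server-a", "example-org/server-b"]
    }
  }
}
```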
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations set readOnlyHint: true, consistent with a compare operation. The description lists the fields included in comparison, adding context beyond the annotation. No behavioral surprises mentioned, which is appropriate for a read-only tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the verb 'compare' and immediately specifies the resource and attributes. No wasted words, highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, no output schema), the description covers the core functionality and compared fields. It could mention the output format, but overall it is adequate for a compare tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% and the parameter 'identifiers' has a clear schema description. The tool description does not add further parameter details, but the schema already provides sufficient meaning. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares up to four MCP servers side by side across specific attributes (score, verdict, auth, tool count, prompts/resources, freshness). This explicitly distinguishes it from siblings like search_servers or get_server_report.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for comparison of multiple servers, but it does not explicitly exclude alternatives like using get_server_report for a single server or search_servers for filtering. However, the context from sibling names makes the intended use case clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
export_policy (Grade A, Read-only)
Export a JSON TrustOps policy for one MCP server with allow, blocked tools, required scopes, freshness, and approval gates.
| Name | Required | Description | Default |
|---|---|---|---|
| client | No | | |
| max_risk | No | | |
| identifier | Yes | Canonical server identifier in namespace/name format. | |
| requires_oauth | No | | |
| max_freshness_hours | No | | |
| no_write_without_approval | No | | |
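A hypothetical arguments payload (the surrounding JSON-RPC envelope would match the compare_servers sketch above); the identifier and constraint values are illustrative, and the undescribed client and max_risk parameters are left out:

```json
{
  "name": "export_policy",
  "arguments": {
    "identifier": "example-org/server-a",
    "max_freshness_hours": 24,
    "no_write_without_approval": true
  }
}
```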
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, and the description reinforces a read operation. It adds the detail that the policy includes specific components, but mentions no further behavioral traits such as authentication requirements or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single concise sentence covers the core purpose and components. No fluff, but could be structured for easier scanning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description does not specify the return format beyond 'JSON', and omits that 'identifier' is required. Given no output schema, more detail on the output structure would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only 17% schema description coverage, the description compensates by explaining what the policy includes (allow, blocked tools, etc.), which maps to parameters like no_write_without_approval and max_freshness_hours. However, 'client' and 'max_risk' are not explicitly covered.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Export', the resource 'JSON TrustOps policy', and the scope 'for one MCP server'. It lists key policy components, distinguishing it from sibling tools like 'fetch' and 'search'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It does not mention when not to use it or provide context for selecting among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch (Grade A, Read-only)
ChatGPT-compatible read-only fetch alias. Returns a full MCP server item with id, title, text, url, and metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Canonical MCP server identifier in namespace/name format. | |
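An illustrative call body, assuming the same tools/call framing; the id is a placeholder:

```json
{
  "name": "fetch",
  "arguments": { "id": "example-org/server-a" }
}
```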
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Consistent with the readOnlyHint annotation, the description adds that it returns a full MCP server item with specific fields and notes ChatGPT compatibility, providing useful behavioral context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with key purpose and return details, no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully sufficient for a simple read-only tool with one parameter and good annotations; states what it does and what it returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% and the description does not add extra meaning to the id parameter beyond the schema's note about namespace/name format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it is a read-only fetch alias that returns a full MCP server item with specific fields (id, title, text, url, metadata). Distinct from siblings by specifying single-item retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use when needing a full server item by ID and no filtering, but lacks explicit guidance on when not to use it or suggestions for alternatives like search_servers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_server_report (Grade A, Read-only)
Return the full machine-readable verify report for a specific MCP server identifier.
| Name | Required | Description | Default |
|---|---|---|---|
| identifier | Yes | Canonical server identifier in namespace/name format. | |
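A minimal illustrative call body; the identifier is a placeholder:

```json
{
  "name": "get_server_report",
  "arguments": { "identifier": "example-org/server-a" }
}
```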
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context beyond the readOnlyHint annotation by specifying the output is a 'full machine-readable verify report', clarifying the report's completeness and format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that conveys all essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema) and the readOnlyHint, the description adequately covers the action and resource, though it could mention the report's structure or potential limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description does not add new meaning beyond the schema's definition of 'identifier'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Return') and resource ('full machine-readable verify report for a specific MCP server identifier'), clearly distinguishing it from siblings like 'search_servers' or 'compare_servers'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_subscription_options (Grade A, Read-only)
Return alert types, subscription channels, watch scopes, and existing subscription endpoints.
No parameters.
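Since the tool takes no parameters, a call is just the tool name with an empty arguments object; a minimal sketch assuming the same tools/call framing:

```json
{
  "name": "get_subscription_options",
  "arguments": {}
}
```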
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The read-only behavior is declared via annotation, so the description adds value by enumerating the returned data types. However, it does not discuss any potential side effects or additional behaviors, which is acceptable for a simple read-only no-parameter tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence efficiently communicates the tool's purpose without wordiness. The verb 'Return' is front-loaded, making the action immediately clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no parameters and no output schema, the description adequately lists the four categories of data returned. It lacks details on structure or format, but this does not significantly hinder understanding given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description has no parameters to explain. Schema coverage is trivially 100%, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Return' and specifies four distinct resources: alert types, subscription channels, watch scopes, and existing subscription endpoints. It distinguishes itself from sibling tools like get_server_report or recommend_servers which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as fetch or search_servers. The description is purely declarative with no contextual usage advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recommend_servers (Grade A, Read-only)
Recommend MCP servers for a plain-language task and return ranked matches with install config.
| Name | Required | Description | Default |
|---|---|---|---|
| task | Yes | Plain-language task description such as 'I need an MCP for healthcare denial scoring in OpenAI connectors'. | |
| limit | No | Maximum number of recommendations to return. | |
| capabilities | No | Optional explicit capability constraints that override or extend the inferred task capabilities. | |
| client_target | No | Optional target client or integration surface. | |
| risk_tolerance | No | How much tool-surface risk is acceptable. | |
| auth_preference | No | Preferred auth mode. | |
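An illustrative payload that reuses the task phrasing from the parameter's own schema description; the limit is arbitrary, and the 'unauthenticated' value is assumed to carry over from the search_servers schema:

```json
{
  "name": "recommend_servers",
  "arguments": {
    "task": "I need an MCP for healthcare denial scoring in OpenAI connectors",
    "limit": 3,
    "auth_preference": "unauthenticated"
  }
}
```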
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, and the description adds context about returning ranked matches with install config, which is beneficial. No contradictions; the description supplements the annotation well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single short sentence, front-loaded with verb and resource, no fluff. Every word contributes to meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the basic purpose and output (ranked matches with install config) but does not explain ranking criteria or the effect of optional parameters. Given the complexity (6 parameters), more detail would improve completeness. Annotations help but don't fully compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all 6 parameters, so the baseline is 3. The description does not add significant meaning beyond 'plain-language task', but the schema already covers parameter semantics adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it recommends MCP servers for a plain-language task and returns ranked matches with install config. It uses a specific verb and resource, but does not explicitly differentiate from sibling tools like search_servers or compare_servers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as search_servers or compare_servers. The description implies it is for plain-language tasks but lacks explicit context or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
route_task (Grade B, Read-only)
Return a runtime MCP TrustOps decision for a task: recommended server, allowed tools, blocked tools, and approval requirement.
| Name | Required | Description | Default |
|---|---|---|---|
| task | Yes | Task the agent wants to perform. | |
| max_risk | No | Maximum allowed tool risk. | |
| candidate | No | Optional namespace/name server identifier to evaluate directly. | |
| capabilities | No | Optional capability constraints such as healthcare, search, database, read, or write. | |
| client_target | No | | |
| requires_oauth | No | | |
| max_freshness_hours | No | | |
| no_write_without_approval | No | | |
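A sketch payload; the task text is invented, and the capability terms follow the examples given in the schema:

```json
{
  "name": "route_task",
  "arguments": {
    "task": "read and summarize records from a healthcare database",
    "capabilities": ["healthcare", "read"],
    "no_write_without_approval": true
  }
}
```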
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description does not contradict. It adds value by detailing output components (recommended server, allowed tools, etc.). However, it does not disclose behavioral traits like rate limits, auth needs, or behavior under max_risk thresholds.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is front-loaded and to the point. Every word adds value; no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description gives a high-level overview of output, it does not explain how parameters influence the decision or provide details on the decision algorithm. Given 8 parameters and no output schema, the description is adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50%, meaning half the parameters lack descriptions. The tool description provides no additional parameter-level information, failing to compensate for undocumented parameters like 'client_target' or 'max_freshness_hours'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool returns a runtime MCP TrustOps decision, specifying components like the recommended server, allowed/blocked tools, and the approval requirement. It uses a specific verb and resource and distinguishes itself from sibling tools like 'recommend_servers' or 'compare_servers'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives such as 'recommend_servers' or 'search_servers'. The description does not mention prerequisites, contexts, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Grade B, Read-only)
ChatGPT-compatible read-only search alias. Returns MCP server results with id, title, and url.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of search results to return. | |
| query | Yes | Search query such as healthcare MCP, web search MCP, ChatGPT-compatible MCPs, or OAuth MCP servers. | |
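An illustrative call body using one of the query examples from the schema; the limit is arbitrary:

```json
{
  "name": "search",
  "arguments": {
    "query": "healthcare MCP",
    "limit": 5
  }
}
```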
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark read-only (readOnlyHint=true). The description adds return structure (id, title, url) but no further behavioral details like pagination or rate limits, which is acceptable for a minimal read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single front-loaded sentence that efficiently communicates core functionality. No wasted words, but could be slightly longer to include usage hints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with 2 well-documented parameters, the description suffices. However, it lacks guidance on query scope, result limits, or how it differs from sibling tools, which would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for both parameters. The description adds only the phrase 'ChatGPT-compatible', which is not parameter-specific. Baseline 3 is appropriate since schema handles semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is a 'read-only search alias' returning MCP server results with id, title, and url. However, it does not distinguish itself from sibling 'search_servers', which likely has overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'search_servers'. The description omits exclusions or context that would help an agent decide.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_servers (Grade A, Read-only)
Search and rank MCP servers by capability, auth preference, client target, and risk tolerance.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of ranked candidates to return. | |
| query | No | Optional free-text query to bias ranking toward a specific use case or domain. | |
| capabilities | No | Normalized capability taxonomy terms such as healthcare, search, read, exec, files, oauth, or prompts. | |
| client_target | No | Optional target client or integration surface. | |
| risk_tolerance | No | How much tool-surface risk is acceptable. | |
| auth_preference | No | Preferred auth mode. Use unauthenticated when the agent must avoid login flows. | |
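A sketch payload combining capability taxonomy terms from the schema with the documented 'unauthenticated' auth preference; all values are illustrative:

```json
{
  "name": "search_servers",
  "arguments": {
    "capabilities": ["search", "oauth"],
    "auth_preference": "unauthenticated",
    "limit": 5
  }
}
```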
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true; the description adds no behavioral details beyond that. No contradiction but minimal added transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that front-loads the purpose with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 6 parameters and no output schema, the description is adequate but does not explain return format or ranking behavior, leaving gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description lists some parameters but adds no new meaning beyond the schema's own descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches and ranks MCP servers by multiple criteria, distinguishing it from siblings like compare_servers or recommend_servers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for finding servers based on preferences but does not explicitly contrast with siblings or specify when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.