Glama

Server Details

Discovery, comparison, readiness scoring, and validation reports for remote MCP servers.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.7/5 across 9 of 9 tools scored.

Server Coherence: A

Disambiguation: 3/5

Multiple tools deal with retrieving server information (fetch, get_server_report, search, search_servers, compare_servers, recommend_servers), leading to potential confusion. The distinction between 'search' and 'search_servers' is mitigated by descriptions, but overlap in purpose remains.

Naming Consistency: 3/5

Most tools use a verb_noun pattern (e.g., compare_servers, export_policy), but 'fetch' and 'search' are single verbs without a noun, breaking the pattern. The presence of both 'search' and 'search_servers' adds inconsistency.

Tool Count: 5/5

9 tools is well-scoped for the server's verification and policy domain. Each tool addresses a distinct aspect without being overwhelming.

Completeness: 4/5

The tool set covers search, retrieval, comparison, recommendation, policy export, and decision routing. Minor gaps exist (e.g., no tool to manage subscriptions or update policies), but core verification workflows are well-supported.

Available Tools (9 tools)
compare_servers: A
Read-only

Compare up to four MCP servers side by side across score, verdict, auth, tool count, prompts/resources, and freshness.

Parameters (JSON Schema):
- identifiers (required): Canonical server identifiers in namespace/name format.
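For concreteness, here is a minimal sketch of how an MCP client might invoke this tool over the standard JSON-RPC `tools/call` method. The envelope follows the MCP convention; the two server identifiers are hypothetical examples, not servers listed in this report.

```python
import json

# Illustrative MCP "tools/call" request for compare_servers.
# The identifiers below are invented examples in namespace/name format.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare_servers",
        "arguments": {
            # Canonical namespace/name identifiers; the tool accepts up to four.
            "identifiers": ["acme/weather", "acme/search"],
        },
    },
}
body = json.dumps(request)  # wire payload sent to the server
```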
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations set readOnlyHint: true, consistent with a compare operation. The description lists the fields included in comparison, adding context beyond the annotation. No behavioral surprises mentioned, which is appropriate for a read-only tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is front-loaded with the verb 'compare' and immediately specifies the resource and attributes. No wasted words, highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (one parameter, no output schema), the description covers the core functionality and compared fields. It could mention the output format, but overall it is adequate for a compare tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% and the parameter 'identifiers' has a clear schema description. The tool description does not add further parameter details, but the schema already provides sufficient meaning. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares up to four MCP servers side by side across specific attributes (score, verdict, auth, tool count, prompts/resources, freshness). This explicitly distinguishes it from siblings like search_servers or get_server_report.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for comparison of multiple servers, but it does not explicitly exclude alternatives like using get_server_report for a single server or search_servers for filtering. However, the context from sibling names makes the intended use case clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

export_policy: A
Read-only

Export a JSON TrustOps policy for one MCP server with allow, blocked tools, required scopes, freshness, and approval gates.

Parameters (JSON Schema):
- client (optional)
- max_risk (optional)
- identifier (required): Canonical server identifier in namespace/name format.
- requires_oauth (optional)
- max_freshness_hours (optional)
- no_write_without_approval (optional)
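To make the parameter set concrete, here is a hedged sketch of a plausible call and of what the returned TrustOps policy might contain. Only the parameter names and the listed policy components (allow, blocked tools, required scopes, freshness, approval gates) come from this page; the output's actual field names are undocumented, so every key in `policy_sketch` is a guess.

```python
# Hypothetical export_policy arguments; only 'identifier' is required.
arguments = {
    "identifier": "acme/weather",       # namespace/name (invented example)
    "max_freshness_hours": 24,          # optional freshness gate
    "no_write_without_approval": True,  # optional approval gate
}

# Guessed shape of the exported JSON policy, based solely on the
# components the tool description names. Field names are assumptions.
policy_sketch = {
    "server": arguments["identifier"],
    "allow": ["get_forecast"],              # allowed tools (invented)
    "blocked_tools": ["purge_cache"],       # blocked tools (invented)
    "required_scopes": ["read:weather"],
    "max_freshness_hours": arguments["max_freshness_hours"],
    "approval": {"write_requires_approval": True},
}
```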
Behavior: 3/5

Annotations already indicate readOnlyHint=true, and the description reinforces a read operation. It adds the detail that the policy includes specific components, but no further behavioral traits like authentication or side effects.

Conciseness: 4/5

A single concise sentence covers the core purpose and components. No fluff, but it could be structured for easier scanning.

Completeness: 3/5

The description does not specify the return format beyond 'JSON', and omits that 'identifier' is required. Given no output schema, more detail on the output structure would improve completeness.

Parameters: 4/5

With only 17% schema description coverage, the description compensates by explaining what the policy includes (allow, blocked tools, etc.), which maps to parameters like no_write_without_approval and max_freshness_hours. However, 'client' and 'max_risk' are not explicitly covered.

Purpose: 5/5

The description clearly states the action 'Export', the resource 'JSON TrustOps policy', and the scope 'for one MCP server'. It lists key policy components, distinguishing it from sibling tools like 'fetch' and 'search'.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives. It does not mention when not to use it or provide context for selecting among siblings.

fetch: A
Read-only

ChatGPT-compatible read-only fetch alias. Returns a full MCP server item with id, title, text, url, and metadata.

Parameters (JSON Schema):
- id (required): Canonical MCP server identifier in namespace/name format.
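The fetch description promises an item with id, title, text, url, and metadata. The sketch below shows that shape so an agent knows what to expect back; all concrete values are invented, and the contents of `metadata` are an assumption since no output schema is published.

```python
# Invented example of the item shape advertised by the fetch description.
item = {
    "id": "acme/weather",                       # namespace/name identifier
    "title": "Acme Weather MCP",
    "text": "Readiness report summary...",
    "url": "https://example.com/mcp/acme/weather",
    "metadata": {"score": 87},                   # metadata keys are a guess
}

# The five fields the description names; a response missing any of
# them would not match the advertised contract.
required = {"id", "title", "text", "url", "metadata"}
missing = required - item.keys()
```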
Behavior: 4/5

Consistent with the readOnlyHint annotation; the description adds that it returns a full MCP server item with specific fields and notes ChatGPT compatibility, providing useful behavioral context beyond the annotations.

Conciseness: 5/5

Two concise sentences, front-loaded with key purpose and return details, with no unnecessary words.

Completeness: 5/5

Fully sufficient for a simple read-only tool with one parameter and good annotations; it states what the tool does and what it returns.

Parameters: 3/5

Schema description coverage is 100%, and the description does not add extra meaning to the id parameter beyond the schema's note about namespace/name format.

Purpose: 5/5

Clearly states it is a read-only fetch alias that returns a full MCP server item with specific fields (id, title, text, url, metadata). Distinct from siblings by specifying single-item retrieval.

Usage Guidelines: 3/5

Implies use when a full server item is needed by ID without filtering, but lacks explicit guidance on when not to use it or suggestions for alternatives like search_servers.

get_server_report: A
Read-only

Return the full machine-readable verify report for a specific MCP server identifier.

Parameters (JSON Schema):
- identifier (required): Canonical server identifier in namespace/name format.
Behavior: 4/5

The description adds context beyond the readOnlyHint annotation by specifying the output is a 'full machine-readable verify report', clarifying the report's completeness and format.

Conciseness: 5/5

The description is a single, front-loaded sentence that conveys all essential information without redundancy.

Completeness: 4/5

Given the tool's simplicity (1 parameter, no output schema) and the readOnlyHint, the description adequately covers the action and resource, though it could mention the report's structure or potential limitations.

Parameters: 3/5

With 100% schema description coverage, the baseline is 3. The description does not add new meaning beyond the schema's definition of 'identifier'.

Purpose: 5/5

The description uses a specific verb ('Return') and resource ('full machine-readable verify report for a specific MCP server identifier'), clearly distinguishing it from siblings like 'search_servers' or 'compare_servers'.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives, nor does it mention any exclusions or prerequisites.

get_subscription_options: A
Read-only

Return alert types, subscription channels, watch scopes, and existing subscription endpoints.

Parameters (JSON Schema): no parameters.

Behavior: 3/5

The read-only behavior is declared via annotation, so the description adds value by enumerating the returned data types. However, it does not discuss any potential side effects or additional behaviors, which is acceptable for a simple read-only no-parameter tool.

Conciseness: 5/5

A single sentence efficiently communicates the tool's purpose without wordiness. The verb 'Return' is front-loaded, making the action immediately clear.

Completeness: 4/5

For a tool with no parameters and no output schema, the description adequately lists the four categories of data returned. It lacks details on structure or format, but this does not significantly hinder understanding given the tool's simplicity.

Parameters: 3/5

The input schema has zero parameters, so the description has no parameters to explain. Schema coverage is trivially 100%, meeting the baseline expectation.

Purpose: 5/5

The description clearly states the verb 'Return' and specifies four distinct resources: alert types, subscription channels, watch scopes, and existing subscription endpoints. It distinguishes itself from sibling tools like get_server_report or recommend_servers, which serve different purposes.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives such as fetch or search_servers. The description is purely declarative with no contextual usage advice.

recommend_servers: A
Read-only

Recommend MCP servers for a plain-language task and return ranked matches with install config.

Parameters (JSON Schema):
- task (required): Plain-language task description such as 'I need an MCP for healthcare denial scoring in OpenAI connectors'.
- limit (optional): Maximum number of recommendations to return.
- capabilities (optional): Optional explicit capability constraints that override or extend the inferred task capabilities.
- client_target (optional): Optional target client or integration surface.
- risk_tolerance (optional): How much tool-surface risk is acceptable.
- auth_preference (optional): Preferred auth mode.
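A sketch of plausible arguments for this tool follows. The task string reuses the phrasing from the schema's own example; the `risk_tolerance` value is an assumption, since the schema's allowed enum values are not shown on this page.

```python
# Illustrative recommend_servers arguments.
arguments = {
    # Reuses the example task from the parameter's schema description.
    "task": "I need an MCP for healthcare denial scoring in OpenAI connectors",
    "limit": 3,                       # cap the number of ranked matches
    "capabilities": ["healthcare"],   # optional explicit constraint
    "risk_tolerance": "low",          # assumed enum value, not documented here
}
```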
Behavior: 4/5

Annotations provide readOnlyHint=true, and the description adds context about returning ranked matches with install config, which is beneficial. No contradictions; the description supplements the annotation well.

Conciseness: 5/5

A single short sentence, front-loaded with verb and resource, no fluff. Every word contributes to meaning.

Completeness: 3/5

The description covers the basic purpose and output (ranked matches with install config) but does not explain ranking criteria or the effect of optional parameters. Given the complexity (6 parameters), more detail would improve completeness. Annotations help but don't fully compensate.

Parameters: 3/5

Schema coverage is 100% with descriptions for all 6 parameters, so the baseline is 3. The description does not add significant meaning beyond 'plain-language task', but the schema already covers parameter semantics adequately.

Purpose: 4/5

The description clearly states it recommends MCP servers for a plain-language task and returns ranked matches with install config. It uses a specific verb and resource, but does not explicitly differentiate from sibling tools like search_servers or compare_servers.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives such as search_servers or compare_servers. The description implies it is for plain-language tasks but lacks explicit context or exclusion criteria.

route_task: B
Read-only

Return a runtime MCP TrustOps decision for a task: recommended server, allowed tools, blocked tools, and approval requirement.

Parameters (JSON Schema):
- task (required): Task the agent wants to perform.
- max_risk (optional): Maximum allowed tool risk.
- candidate (optional): Optional namespace/name server identifier to evaluate directly.
- capabilities (optional): Optional capability constraints such as healthcare, search, database, read, or write.
- client_target (optional)
- requires_oauth (optional)
- max_freshness_hours (optional)
- no_write_without_approval (optional)
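The sketch below pairs hypothetical route_task arguments with a guess at the decision object the description advertises (recommended server, allowed tools, blocked tools, approval requirement). All concrete values, and every field name in `decision_sketch`, are assumptions; no output schema is published for this tool.

```python
# Hypothetical route_task arguments; only 'task' is required.
arguments = {
    "task": "look up claim denial codes",      # invented example task
    "candidate": "acme/claims",                # optional direct evaluation
    "capabilities": ["healthcare", "read"],    # from the schema's examples
    "no_write_without_approval": True,
}

# Guessed shape of the returned TrustOps decision, mirroring the four
# components the description names. Field names are assumptions.
decision_sketch = {
    "recommended_server": "acme/claims",
    "allowed_tools": ["search_codes"],
    "blocked_tools": ["update_record"],
    "requires_approval": True,
}
```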
Behavior: 3/5

Annotations already declare readOnlyHint=true, so the description does not contradict. It adds value by detailing output components (recommended server, allowed tools, etc.). However, it does not disclose behavioral traits like rate limits, auth needs, or behavior under max_risk thresholds.

Conciseness: 5/5

Single sentence that is front-loaded and to the point. Every word adds value; no redundancy or fluff.

Completeness: 3/5

While the description gives a high-level overview of the output, it does not explain how parameters influence the decision or provide details on the decision algorithm. Given 8 parameters and no output schema, the description is adequate but not comprehensive.

Parameters: 2/5

Schema description coverage is 50%, meaning half the parameters lack descriptions. The tool description provides no additional parameter-level information, failing to compensate for undocumented parameters like 'client_target' or 'max_freshness_hours'.

Purpose: 5/5

The description clearly states the tool returns a runtime MCP TrustOps decision, specifying components like recommended server, allowed/blocked tools, and approval requirement. It uses a specific verb and resource and distinguishes the tool from siblings like 'recommend_servers' or 'compare_servers'.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus alternatives such as 'recommend_servers' or 'search_servers'. The description does not mention prerequisites, contexts, or exclusions.

search_servers: A
Read-only

Search and rank MCP servers by capability, auth preference, client target, and risk tolerance.

Parameters (JSON Schema):
- limit (optional): Maximum number of ranked candidates to return.
- query (optional): Optional free-text query to bias ranking toward a specific use case or domain.
- capabilities (optional): Normalized capability taxonomy terms such as healthcare, search, read, exec, files, oauth, or prompts.
- client_target (optional): Optional target client or integration surface.
- risk_tolerance (optional): How much tool-surface risk is acceptable.
- auth_preference (optional): Preferred auth mode. Use unauthenticated when the agent must avoid login flows.
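For illustration, a plausible search_servers call might look like the following. The 'unauthenticated' value comes from the auth_preference schema note and the capability terms from the schema's taxonomy examples; the query text and limit are invented.

```python
# Illustrative search_servers arguments; every parameter is optional.
arguments = {
    "query": "healthcare claims data",        # free-text ranking bias
    "capabilities": ["healthcare", "read"],   # normalized taxonomy terms
    "auth_preference": "unauthenticated",     # avoid login flows, per schema
    "limit": 5,                               # cap ranked candidates
}
```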
Behavior: 2/5

Annotations already declare readOnlyHint=true; the description adds no behavioral details beyond that. No contradiction but minimal added transparency.

Conciseness: 5/5

The description is a single concise sentence that front-loads the purpose with no wasted words.

Completeness: 3/5

With 6 parameters and no output schema, the description is adequate but does not explain return format or ranking behavior, leaving gaps.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description lists some parameters but adds no new meaning beyond the schema's own descriptions.

Purpose: 5/5

The description clearly states the tool searches and ranks MCP servers by multiple criteria, distinguishing it from siblings like compare_servers or recommend_servers.

Usage Guidelines: 3/5

The description implies the tool is for finding servers based on preferences but does not explicitly contrast with siblings or specify when not to use it.
