Glama

MCP Marketplace

Server Details

Search and install 4,000+ security-scanned MCP servers from inside any MCP-aware AI client.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.3/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_server retrieves detailed information for a single server, list_categories provides category metadata, and search_servers performs catalog searches. There is no overlap in functionality, making tool selection unambiguous for an agent.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (get_server, list_categories, search_servers) with clear, descriptive verbs and nouns. The naming is uniform and predictable across the set.

Tool Count: 4/5

Three tools are appropriate for a marketplace server, covering core operations like browsing, searching, and inspecting. It is slightly lean but reasonable, as it handles essential workflows without unnecessary complexity.

Completeness: 4/5

The tools provide a complete surface for marketplace interactions: browsing categories, searching servers, and fetching detailed server information. Minor gaps might include actions like user reviews or installation management, but core functionality is well-covered.

Available Tools

3 tools
get_server: A

Fetch full details for a single MCP server by its slug. Returns description, install commands, security score/risk/findings, ratings, creator info, setup requirements (API keys/credentials the user will need), and the list of MCP tools the server exposes. The security.critical_findings array lists every severity=critical|high issue — you MUST show these to the user before recommending they install. Use this as the last step before any install recommendation.

Parameters (JSON Schema)

- slug (required): Server slug (from search_servers results or the URL: mcp-marketplace.io/server/{slug}).
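An agent reaches this tool through a standard MCP `tools/call` request; a minimal sketch of the payload, where the slug `example-server` is illustrative rather than a real catalog entry:

```python
import json

def get_server_request(slug: str, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 tools/call request body for get_server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_server", "arguments": {"slug": slug}},
    })

# Hypothetical slug, e.g. copied from a search_servers result.
req = get_server_request("example-server")
```

In practice the slug comes from a search_servers result or the server URL, as the parameter description notes.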
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior by specifying the comprehensive data returned (description, install commands, security details, etc.) and the critical security workflow requirement, though it doesn't mention potential errors or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose, followed by specific details and usage instructions. While slightly dense, every sentence adds value without redundancy, making it efficient for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no annotations and no output schema, the description provides strong context by detailing the return data structure and critical usage requirements. It could slightly improve by explicitly mentioning the response format or error handling, but it's largely complete for its complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already fully documents the single 'slug' parameter. The description adds no additional parameter semantics beyond what's in the schema, maintaining the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Fetch full details for a single MCP server by its slug') and distinguishes it from sibling tools like 'list_categories' and 'search_servers' by focusing on detailed information retrieval for a specific server rather than listing or searching multiple servers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('as the last step before any install recommendation') and includes critical usage instructions ('you MUST show these to the user before recommending they install'), clearly differentiating it from other tools in the workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories: A

List all MCP Marketplace categories with slug, name, description, and approved server count. Use the returned slug as the category filter in search_servers.

Parameters (JSON Schema)

No parameters
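Because the returned slug feeds directly into the search_servers category filter, the two calls chain naturally. A sketch of the request payloads, assuming a hypothetical category slug "databases" appears in the list_categories response:

```python
def tools_call(name: str, arguments: dict, request_id: int) -> dict:
    """Assemble an MCP tools/call request body (JSON-RPC 2.0)."""
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/call",
            "params": {"name": name, "arguments": arguments}}

# Step 1: list_categories takes no arguments.
list_req = tools_call("list_categories", {}, 1)

# Step 2: suppose the response contained a category whose slug is
# "databases" (hypothetical) -- pass it as the category filter.
search_req = tools_call("search_servers",
                        {"query": "database for my agent",
                         "category": "databases"}, 2)
```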

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes what the tool returns (categories with specific fields) and how to use the output, but doesn't disclose behavioral traits like rate limits, authentication needs, or pagination. It's adequate but lacks rich behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded in a single sentence that clearly states the tool's purpose, output fields, and usage guidance. Every word earns its place with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no annotations, no output schema), the description is complete enough. It explains what the tool does, what it returns, and how to use the output. However, it could be slightly more complete by mentioning if there are any limitations (e.g., pagination, sorting).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and the schema description coverage is 100%. The description doesn't need to add parameter semantics, so a baseline of 4 is appropriate. It efficiently explains the tool's purpose without redundant parameter information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('List') and resource ('MCP Marketplace categories'), and distinguishes it from siblings by mentioning the returned slug is used as a filter in search_servers. It's not a tautology and provides concrete details about what fields are included.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool vs alternatives by specifying that the returned slug should be used as the category filter in search_servers. This provides clear guidance on the tool's role in the workflow and its relationship with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_servers: A

Search the MCP Marketplace catalog. With a free-text query and default sort, results are ranked by semantic similarity (gte-small embeddings + cosine similarity), so natural-language queries like 'manage my calendar', 'something to read PDFs', or 'database for my agent' work as well as keyword searches. Each result includes security_score (0-10), risk_level (low/moderate/high/critical), critical_findings (count of severity=critical|high findings), pricing, rating, install count, and a URL. ranking_mode in the response indicates whether semantic or keyword matching was used. Before recommending an install, call get_server for full details including every flagged finding — critical_findings > 0 means the server has known security issues you must surface to the user.
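The semantic ranking described above can be illustrated with a plain cosine-similarity sketch; the vectors here are tiny made-up examples, not real gte-small embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_vec, servers):
    """servers: (slug, embedding) pairs; returns slugs, most similar first."""
    return [slug for slug, vec in
            sorted(servers, key=lambda sv: cosine(query_vec, sv[1]),
                   reverse=True)]
```

With real embeddings the query and each server description would be encoded by the same model before comparison.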

Parameters (JSON Schema)

- page (optional, default 1): 1-indexed page number. Combine with `limit` to paginate past the first window.
- sort (optional): Ranking order. Defaults to 'relevance' when query is set, else 'installs'.
- limit (optional, default 10): Max results per page (1-25).
- query (optional): Free-text search across name, tagline, description, tags, and MCP tool names.
- category (optional): Category slug filter. Call list_categories to see valid slugs.
- free_only (optional): If true, exclude paid servers.
- transport (optional): Filter by transport. 'stdio' = local (npm/pip); 'streamable-http' = hosted remote. SSE is not exposed because the catalog doesn't distinguish SSE from streamable-HTTP and filtering on it would always return zero.
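The page/limit pair supports a simple pagination loop. A sketch assuming a hypothetical call_tool helper that sends the tools/call and returns one page of parsed results:

```python
def fetch_all(call_tool, query, limit=25):
    """Page through search_servers until a short page signals the end.
    `call_tool` is a hypothetical helper, not part of this server."""
    results, page = [], 1
    while True:
        batch = call_tool("search_servers",
                          {"query": query, "page": page, "limit": limit})
        results.extend(batch)
        if len(batch) < limit:  # a short (or empty) page is the last one
            break
        page += 1
    return results
```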
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does an excellent job disclosing behavioral traits. It explains the ranking algorithm (semantic similarity with embeddings), response format details (security_score, risk_level, critical_findings, pricing, etc.), pagination behavior (implied through the page parameter), and the critical security workflow (known issues must be surfaced when critical_findings > 0). The only minor gap is explicit rate limit disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core functionality. Every sentence adds value: the first explains the search mechanism, the second details response fields, the third explains ranking_mode, and the fourth provides critical usage guidance. It could be slightly more concise by combining some security-related information, but the overall structure is logical and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 7 parameters, no annotations, and no output schema, the description provides excellent context about behavior, response format, and security considerations. It explains what information is returned, how ranking works, and the critical workflow for security assessment. The main gap is lack of explicit output structure documentation, but the description compensates well by detailing key response fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 7 parameters thoroughly. The description adds some context about query behavior (natural-language queries work, searches across multiple fields) and transport filtering rationale, but doesn't provide significant additional parameter semantics beyond what the schema already covers. This meets the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches the MCP Marketplace catalog with free-text queries, explains the ranking mechanism (semantic similarity), and distinguishes it from sibling get_server by specifying that search_servers returns summarized results while get_server provides full details. It explicitly mentions the verb 'search' and resource 'MCP Marketplace catalog'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: it states to use search_servers for initial discovery and ranking, and to call get_server before recommending an install for full security details. It also mentions list_categories for obtaining valid category slugs. This covers both when-to-use and when-not-to-use scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
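The search-then-inspect workflow these descriptions prescribe can be sketched end to end; call_tool is again a hypothetical helper wrapping MCP tools/call, and the field names follow the shapes the descriptions document:

```python
def recommend(call_tool, query):
    """Search first, then fetch full details and surface critical
    findings before any install recommendation, as the tool
    descriptions require."""
    hits = call_tool("search_servers", {"query": query})
    if not hits:
        return None
    top = hits[0]
    details = call_tool("get_server", {"slug": top["slug"]})
    findings = details.get("security", {}).get("critical_findings", [])
    return {"slug": top["slug"], "critical_findings": findings}
```

Any non-empty critical_findings list would then be shown to the user before suggesting an install.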

