
MCP Marketplace

Server Details

Search and install 4,000+ security-scanned MCP servers from inside any MCP-aware AI client.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.3/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_server retrieves detailed information for a single server, list_categories provides category metadata, and search_servers performs catalog searches. There is no overlap in functionality, making tool selection unambiguous for an agent.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (get_server, list_categories, search_servers) with clear, descriptive verbs and nouns. The naming is uniform and predictable across the set.

Tool Count: 4/5

Three tools are appropriate for a marketplace server, covering core operations like browsing, searching, and inspecting. It is slightly lean but reasonable, as it handles essential workflows without unnecessary complexity.

Completeness: 4/5

The tools provide a complete surface for marketplace interactions: browsing categories, searching servers, and fetching detailed server information. Minor gaps might include actions like user reviews or installation management, but core functionality is well-covered.

Available Tools

7 tools
compare (A)

Compare 2-5 MCP servers side by side on the fields users actually decide on: security score, critical findings, pricing, transport mode, tool count, and install command availability. Use when a user is choosing between candidates from a search. Returns a structured comparison table plus a short per-field summary, so the agent can surface the important contrasts without a second pass over each server.

Parameters
- slugs (required): Array of 2-5 server slugs to compare.
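As a sketch, invoking this tool over MCP means sending a `tools/call` JSON-RPC request; the slugs below are placeholders, not real catalog entries:

```python
import json

# Hypothetical JSON-RPC 2.0 payload for the compare tool via MCP's
# tools/call method; the slugs are invented placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare",
        "arguments": {"slugs": ["example-server-a", "example-server-b"]},
    },
}

payload = json.dumps(request)
```

The arguments object must carry between 2 and 5 slugs, matching the documented input range.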
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it specifies the input range (2-5 servers), lists the exact comparison fields, describes the output format ('structured comparison table plus a short per-field summary'), and explains the agent's benefit ('surface the important contrasts without a second pass'). It doesn't mention rate limits or error conditions, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first establishes purpose and scope, the second provides usage context and output benefits. Every element serves a clear purpose with zero redundant information, making it easy to parse while being comprehensive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (comparison operation with 6 specific fields), no annotations, and no output schema, the description does well by explaining what fields are compared, the output format, and when to use it. It could potentially mention error handling for invalid slugs or what happens if servers lack some comparison fields, but it provides sufficient context for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the 'slugs' parameter (array of 2-5 server slugs). The description adds no additional parameter semantics beyond what's in the schema, maintaining the baseline score of 3 for adequate coverage when schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: 'Compare 2-5 MCP servers side by side on the fields users actually decide on' with specific fields listed (security score, critical findings, pricing, transport mode, tool count, and install command availability). It clearly distinguishes this from sibling tools like 'get_server' (single server) or 'search_servers' (finding candidates) by focusing on comparative analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use when a user is choosing between candidates from a search.' This clearly indicates when to invoke this tool versus alternatives like 'get_server' (for single server details) or 'search_servers' (for finding candidates), establishing a specific context for comparison after initial filtering.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

creator_profile (A)

List all MCP servers by a single creator, plus aggregate trust signals. Use to evaluate a publisher holistically: 'do they ship consistently?', 'what's their security track record?', 'are there other servers by the same author?'. Match is case-insensitive on display name. Returns aggregate stats (total servers, avg security score, grade distribution, critical-finding count) plus the per-server list.

Parameters
- creator (required): Creator display name (case-insensitive) OR GitHub username. From a search_servers result, use the `creator` field.
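The aggregate trust signals described above can be sketched from a per-server list. The records and field names below are illustrative assumptions, not the server's actual response shape:

```python
from collections import Counter

# Invented per-server records mirroring the aggregate stats the
# description names; not real API output.
servers = [
    {"slug": "a", "security_score": 9.0, "grade": "A", "critical_findings": 0},
    {"slug": "b", "security_score": 7.5, "grade": "B", "critical_findings": 1},
    {"slug": "c", "security_score": 8.5, "grade": "A", "critical_findings": 0},
]

total_servers = len(servers)
avg_security_score = sum(s["security_score"] for s in servers) / total_servers
grade_distribution = Counter(s["grade"] for s in servers)
critical_finding_count = sum(s["critical_findings"] for s in servers)
```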
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well. It discloses key behavioral traits: case-insensitive matching on display name, returns both aggregate stats and per-server list, and specifies what metrics are included (total servers, avg security score, etc.). It doesn't mention rate limits, authentication needs, or error conditions, but covers the core functionality thoroughly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero waste. The first sentence states the core functionality, the second provides usage context with concrete evaluation questions, and the third clarifies behavioral details. Every sentence earns its place and information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema, the description provides good completeness. It explains what the tool returns (aggregate stats plus per-server list with specific metrics) and the matching behavior. The main gap is lack of output format details, but given the tool's relatively simple purpose and good parameter coverage, this is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds some value by clarifying that 'creator' can be either display name OR GitHub username, and suggesting to use the 'creator' field from search_servers results. However, it doesn't provide additional syntax or format details beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List all MCP servers by a single creator, plus aggregate trust signals.' It specifies the verb ('List'), resource ('MCP servers by a single creator'), and additional functionality ('aggregate trust signals'). It distinguishes from siblings by focusing on creator-specific aggregation rather than general listing/searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'Use to evaluate a publisher holistically' with specific evaluation questions. It doesn't explicitly state when NOT to use it or name alternatives among siblings, but the context strongly implies this is for creator-focused analysis rather than general server discovery.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_server (A)

Fetch full details for a single MCP server by its slug. Returns description, install commands, security score/risk/findings, ratings, creator info, setup requirements (API keys/credentials the user will need), and the list of MCP tools the server exposes. The security.critical_findings array lists every severity=critical|high issue — you MUST show these to the user before recommending they install. Use this as the last step before any install recommendation.

Parameters
- slug (required): Server slug (from search_servers results or the URL: mcp-marketplace.io/server/{slug}).
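The "you MUST show these to the user" rule can be enforced with a small guard. The response shape here is an assumption based only on the fields the description lists:

```python
# Assumed shape of a get_server response; only the security fields the
# description names are modeled, with invented sample values.
server = {
    "slug": "example-server",
    "security": {
        "critical_findings": [
            {"severity": "critical", "title": "Hardcoded credentials"},
        ],
    },
}

def findings_to_surface(server: dict) -> list:
    """Return the critical/high findings an agent must show the user
    before recommending an install (empty list means none flagged)."""
    return server["security"]["critical_findings"]

must_warn = len(findings_to_surface(server)) > 0
```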
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior by specifying the comprehensive data returned (description, install commands, security details, etc.) and the critical security workflow requirement, though it doesn't mention potential errors or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose, followed by specific details and usage instructions. While slightly dense, every sentence adds value without redundancy, making it efficient for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no annotations and no output schema, the description provides strong context by detailing the return data structure and critical usage requirements. It could slightly improve by explicitly mentioning the response format or error handling, but it's largely complete for its complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already fully documents the single 'slug' parameter. The description adds no additional parameter semantics beyond what's in the schema, maintaining the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Fetch full details for a single MCP server by its slug') and distinguishes it from sibling tools like 'list_categories' and 'search_servers' by focusing on detailed information retrieval for a specific server rather than listing or searching multiple servers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('as the last step before any install recommendation') and includes critical usage instructions ('you MUST show these to the user before recommending they install'), clearly differentiating it from other tools in the workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories (A)

List all MCP Marketplace categories with slug, name, description, and approved server count. Use the returned slug as the category filter in search_servers.

Parameters

No parameters
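The chaining the description implies can be sketched as follows: take a slug from a list_categories result and pass it as the category filter to search_servers. The category row and argument names are illustrative assumptions:

```python
# Invented sample row from a list_categories result; the real response
# fields are slug, name, description, and approved server count.
categories = [
    {"slug": "productivity", "name": "Productivity", "approved_servers": 120},
]

# Build a search_servers argument set using the returned slug as the
# category filter, as the description instructs.
search_arguments = {
    "query": "calendar",
    "category": categories[0]["slug"],
    "limit": 10,
}
```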

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes what the tool returns (categories with specific fields) and how to use the output, but doesn't disclose behavioral traits like rate limits, authentication needs, or pagination. It's adequate but lacks rich behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded in a single sentence that clearly states the tool's purpose, output fields, and usage guidance. Every word earns its place with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no annotations, no output schema), the description is complete enough. It explains what the tool does, what it returns, and how to use the output. However, it could be slightly more complete by mentioning if there are any limitations (e.g., pagination, sorting).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and the schema description coverage is 100%. The description doesn't need to add parameter semantics, so a baseline of 4 is appropriate. It efficiently explains the tool's purpose without redundant parameter information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('List') and resource ('MCP Marketplace categories'), and distinguishes it from siblings by mentioning the returned slug is used as a filter in search_servers. It's not a tautology and provides concrete details about what fields are included.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool vs alternatives by specifying that the returned slug should be used as the category filter in search_servers. This provides clear guidance on the tool's role in the workflow and its relationship with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recently_added (A)

List the most recently added MCP servers. Use for discovery: 'what's new', 'latest servers', 'servers from this week'. Optionally constrain to the last N days. Ordered by creation date descending. Each result carries the same security/risk/pricing fields as search_servers.

Parameters
- days (optional): Only include servers created within the last N days (1-365). Omit for no date bound.
- limit (optional): Max servers to return (1-25). Default 10.
- free_only (optional): If true, exclude paid servers.
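A hedged sketch of client-side validation against the documented ranges (1-365 days, 1-25 results); the helper name is ours, not part of the API:

```python
def build_recently_added_args(days=None, limit=10, free_only=False):
    """Validate arguments against the documented ranges before calling
    the recently_added tool; a hypothetical convenience helper."""
    if days is not None and not 1 <= days <= 365:
        raise ValueError("days must be in 1-365")
    if not 1 <= limit <= 25:
        raise ValueError("limit must be in 1-25")
    args = {"limit": limit, "free_only": free_only}
    if days is not None:
        args["days"] = days  # omitted entirely when no date bound is wanted
    return args

# "Servers from this week", capped at 5 results.
args = build_recently_added_args(days=7, limit=5)
```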
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behaviors: it's a read operation (implied by 'list'), returns ordered results ('Ordered by creation date descending'), and references security/risk/pricing fields from another tool. However, it doesn't mention pagination, rate limits, or authentication requirements, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences that each serve distinct purposes: stating the core function, providing usage examples, and explaining ordering and field relationships. There's no wasted text, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only list tool with 3 parameters and 100% schema coverage but no output schema, the description is mostly complete. It explains the tool's purpose, usage context, ordering, and field relationships. The main gap is lack of output format details, but given the reference to 'search_servers' fields, it's reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds some value by explaining the optional 'days' parameter ('Optionally constrain to the last N days') and implying temporal filtering, but doesn't provide additional semantic context beyond what's already in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List the most recently added MCP servers' with specific verbs ('list', 'discovery') and resource ('MCP servers'). It distinguishes from siblings by mentioning 'search_servers' as having similar fields, implying this tool is specifically for temporal filtering rather than general search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance: 'Use for discovery: 'what's new', 'latest servers', 'servers from this week''. It also distinguishes from 'search_servers' by noting this tool is for recent additions while search_servers is for broader queries, providing clear alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_servers (A)

Search the MCP Marketplace catalog. With a free-text query and default sort, results are ranked by semantic similarity (gte-small embeddings + cosine similarity), so natural-language queries like 'manage my calendar', 'something to read PDFs', or 'database for my agent' work as well as keyword searches. Each result includes security_score (0-10), risk_level (low/moderate/high/critical), critical_findings (count of severity=critical|high findings), pricing, rating, install count, and a URL. ranking_mode in the response indicates whether semantic or keyword matching was used. Before recommending an install, call get_server for full details including every flagged finding — critical_findings > 0 means the server has known security issues you must surface to the user.

Parameters
- page (optional): 1-indexed page number. Combine with `limit` to paginate past the first window. Defaults to 1.
- sort (optional): Ranking order. Defaults to 'relevance' when query is set, else 'installs'.
- limit (optional): Max results per page (1-25). Default 10.
- query (optional): Free-text search across name, tagline, description, tags, and MCP tool names.
- category (optional): Category slug filter. Call list_categories to see valid slugs.
- free_only (optional): If true, exclude paid servers.
- transport (optional): Filter by transport. 'stdio' = local (npm/pip); 'streamable-http' = hosted remote. SSE is not exposed because the catalog doesn't distinguish SSE from streamable-HTTP and filtering on it would always return zero.
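The page/limit pair describes a standard offset window. A sketch of the implied math (1-indexed pages, offset = (page - 1) * limit), which is our reading of the schema rather than documented server behavior:

```python
def page_window(page: int = 1, limit: int = 10) -> tuple:
    """Return the (offset, end) half-open result window implied by a
    1-indexed page number and a per-page limit of 1-25."""
    if page < 1 or not 1 <= limit <= 25:
        raise ValueError("page must be >= 1 and limit in 1-25")
    offset = (page - 1) * limit
    return offset, offset + limit

# Page 3 with the default limit of 10 covers results 20 through 29.
offset, end = page_window(page=3, limit=10)
```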
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does an excellent job disclosing behavioral traits. It explains the ranking algorithm (semantic similarity with embeddings), response format details (security_score, risk_level, critical_findings, pricing, etc.), pagination behavior (implied through page parameter), and critical security workflow (must surface known issues when critical_findings > 0). The only minor gap is explicit rate limit disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core functionality. Every sentence adds value: first explains search mechanism, second details response fields, third explains ranking_mode, fourth provides critical usage guidance. It could be slightly more concise by combining some security-related information, but overall structure is logical and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 7 parameters, no annotations, and no output schema, the description provides excellent context about behavior, response format, and security considerations. It explains what information is returned, how ranking works, and the critical workflow for security assessment. The main gap is lack of explicit output structure documentation, but the description compensates well by detailing key response fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 7 parameters thoroughly. The description adds some context about query behavior (natural-language queries work, searches across multiple fields) and transport filtering rationale, but doesn't provide significant additional parameter semantics beyond what the schema already covers. This meets the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches the MCP Marketplace catalog with free-text queries, explains the ranking mechanism (semantic similarity), and distinguishes it from sibling get_server by specifying that search_servers returns summarized results while get_server provides full details. It explicitly mentions the verb 'search' and resource 'MCP Marketplace catalog'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: it states to use search_servers for initial discovery and ranking, and to call get_server before recommending an install for full security details. It also mentions list_categories for obtaining valid category slugs. This covers both when-to-use and when-not-to-use scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

similar_to (A)

Find MCP servers that are semantically similar to a reference server. Use when a user picked a candidate but wants alternatives — e.g. 'like this but safer', 'like this but free', 'what else does this'. Reuses the catalog's gte-small embeddings: the reference server's embedding is the query vector. Returns servers sorted by cosine similarity (highest first), excluding the reference itself. Each result carries the same security/risk/pricing fields as search_servers so callers can immediately compare on security_score, has_critical_findings, and pricing.

Parameters
- slug (required): Reference server slug (the one you want similar alternatives to).
- limit (optional): Max similar servers to return (1-10). Default 5.
- free_only (optional): If true, exclude paid servers from the comparison set.
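The ranking the description names (cosine similarity over embeddings, highest first, reference excluded) can be sketched in a few lines; the toy 3-d vectors are placeholders for real gte-small embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for gte-small vectors keyed by server slug.
catalog = {
    "ref": [1.0, 0.0, 0.0],
    "close": [0.9, 0.1, 0.0],
    "far": [0.0, 1.0, 0.0],
}

reference = catalog["ref"]
ranked = sorted(
    (slug for slug in catalog if slug != "ref"),  # exclude the reference itself
    key=lambda slug: cosine(reference, catalog[slug]),
    reverse=True,  # highest similarity first
)
```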
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behaviors: it uses gte-small embeddings for semantic similarity, returns results sorted by cosine similarity, excludes the reference server, and includes security/risk/pricing fields for comparison. However, it doesn't mention potential limitations like performance or accuracy constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage guidelines and behavioral details in a logical flow. Every sentence adds value without redundancy, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description provides strong context on behavior and usage. It explains the similarity mechanism, sorting, exclusions, and result fields, though it could briefly mention the output format or error handling for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema, maintaining the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Find MCP servers that are semantically similar to a reference server') and distinguishes it from siblings by specifying the semantic similarity approach using embeddings, unlike search_servers which likely uses keyword matching or other criteria.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool ('Use when a user picked a candidate but wants alternatives') with concrete examples ('like this but safer', 'like this but free', 'what else does this'), and distinguishes it from alternatives by noting it reuses catalog embeddings rather than performing a new search.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
