MCPAmpel
Server Quality Checklist
- Disambiguation (5/5): Each tool has a clearly distinct purpose: quota checking, local server scanning, URL scanning, and result retrieval. No functional overlap exists between the four tools.
- Naming Consistency (4/5): Tools follow a consistent snake_case pattern with leading action verbs (check, get, scan). Minor deviation: 'scan_my_servers' includes the possessive pronoun 'my' while the others use direct nouns (scan_url, get_scan_results).
- Tool Count (5/5): Four tools is an appropriate, focused count for a security scanning service covering status checks, two distinct scan targets (local servers vs. remote URLs), and result retrieval.
- Completeness (3/5): Core scanning workflows are present, but the absence of a list_scans tool creates a notable gap: agents cannot discover historical scan IDs without perfect memory of previous scan operations, limiting the utility of get_scan_results.
Average 3.8/5 across 4 of 4 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.2.1
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Add a glama.json file to provide metadata about your server.
- This server provides 4 tools.
No known security issues or vulnerabilities reported.
Tool Scores
get_scan_results
- Behavior (3/5)
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds context that results are 'full detailed' (suggesting comprehensive data retrieval), but fails to disclose whether this is a read-only operation, if there are rate limits, or what happens if the scan is incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness (5/5)
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient 10-word sentence that front-loads the action ('Get') and immediately qualifies the scope. There is no redundant or wasted language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness (3/5)
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter schema and lack of output schema, the description is minimally adequate. However, it could improve by clarifying that this retrieves results from previously initiated scans (complementing the sibling scan tools) or hinting at the output structure since no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters (3/5)
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('UUID of the scan'), establishing a baseline of 3. The description mentions 'by its ID' which aligns with the scan_id parameter, but adds no additional semantics regarding ID format, validation rules, or where to obtain the ID from sibling tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose (4/5)
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'full detailed results for a specific scan', distinguishing it from sibling tools like 'scan_my_servers' or 'scan_url' which likely initiate scans. However, it does not explicitly differentiate from 'check_status' (which might return metadata vs. full results) or explicitly state this is for retrieving completed scan data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines (2/5)
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings. It does not mention that this should be used after initiating a scan with 'scan_my_servers' or 'scan_url', nor when to prefer 'check_status' instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_url
- Behavior (3/5)
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context by specifying '16 engines,' indicating the scanning depth. However, it omits critical operational details: whether the scan is synchronous or asynchronous, authentication requirements, rate limits, or what data is retained.
- Conciseness (5/5)
The description is a single, dense sentence with zero waste. Every clause earns its place: the verb defines the action, 'single URL' scopes the operation, the parenthetical lists valid inputs, 'security issues' defines the target, and '16 engines' specifies methodology.
- Completeness (3/5)
Given the absence of an output schema and the existence of sibling 'get_scan_results,' the description should ideally indicate what this tool returns (e.g., a scan ID vs. full results). While adequate for basic invocation, it leaves ambiguity about the async workflow pattern suggested by the sibling tool names.
- Parameters (3/5)
Input schema has 100% description coverage, establishing a baseline of 3. The main description essentially mirrors the schema's enumeration of supported platforms (GitHub, GitLab, npm, PyPI) without adding syntax details, validation rules, or examples beyond what the schema already provides.
- Purpose (4/5)
The description provides a specific verb ('Scan'), resource ('URL'), and scope ('security issues with 16 engines'). It distinguishes from sibling 'scan_my_servers' via the explicit 'single URL' qualifier and lists supported platforms (GitHub, GitLab, npm, PyPI), though it could explicitly clarify that this initiates the scan versus 'get_scan_results'.
- Usage Guidelines (3/5)
The description implies usage constraints by listing the four supported platform types (GitHub, GitLab, npm, PyPI), guiding users toward valid inputs. However, it lacks explicit workflow guidance regarding the relationship to 'get_scan_results' (e.g., whether this returns immediately with a scan ID or blocks for results) and doesn't specify when to prefer this over 'scan_my_servers'.
check_status
- Behavior (3/5)
No annotations provided, so description carries full burden. Mentions 'daily quota' and 'remaining scans' revealing rate-limiting domain model, but omits whether this consumes quota itself, output format/structure, or read-only safety guarantees.
- Conciseness (5/5)
Single sentence, seven words. Information-dense with zero redundancy. Front-loaded with action verb and specific resource targets.
- Completeness (4/5)
Adequate for a zero-parameter status tool. Describes what information is returned (quota and remaining scans) despite lack of output schema. Could improve by indicating return structure or that results are real-time.
- Parameters (4/5)
Zero parameters present per schema. With no parameters to document, baseline score applies. Description appropriately focuses on behavior rather than inventing parameter semantics where none exist.
- Purpose (5/5)
Specific verb 'Show' paired with clear resources 'daily quota usage and remaining scans'. Distinctly differs from siblings (scan_url, scan_my_servers, get_scan_results) by focusing on account limits rather than performing or retrieving scans.
- Usage Guidelines (3/5)
Implies usage context (checking limits before scanning) through content but lacks explicit when-to-use guidance or warnings. Does not state 'use this before scan_my_servers to verify remaining quota' or clarify that it performs no scanning itself.
scan_my_servers
- Behavior (3/5)
With no annotations provided, the description carries the full burden. It discloses scan intensity ('16 engines') and output type ('trust scores'), but omits critical behavioral traits: safety profile (read-only vs. destructive), execution duration expectations, side effects (does it modify configs?), or authentication requirements. Adequate but has clear gaps for a security scanning operation.
- Conciseness (5/5)
Two sentences with zero waste. First sentence covers the full operation (discovery + scanning with 16 engines), second covers return values. Front-loaded with the most important information (what it does and where it looks). Every sentence earns its place.
- Completeness (4/5)
Given zero parameters and no output schema, the description adequately compensates by mentioning the return value ('trust scores'). For a scanning tool, it covers the essential invocation context. Could be improved by describing the output structure or format since no output schema exists, but sufficient for correct tool selection.
- Parameters (4/5)
Input schema has zero parameters. As per scoring rules, zero-parameter tools receive a baseline score of 4. The description appropriately requires no parameter explanation.
- Purpose (5/5)
Description uses specific verbs ('Discover', 'scan') and clearly identifies the resource ('MCP servers installed in your editor/agent config'). Lists specific editors (Claude Code, Cursor, Windsurf, Gemini CLI) to scope the operation. Explicitly states the return value ('trust scores'), clearly distinguishing it from sibling 'scan_url' (which targets URLs) and 'get_scan_results' (which retrieves existing results).
- Usage Guidelines (4/5)
Provides clear contextual boundaries ('installed in your editor/agent config') that implicitly define when to use this versus 'scan_url'. However, lacks explicit 'when-not' guidance or named alternatives (e.g., doesn't state 'use scan_url for remote servers instead'). The specific editor list strongly implies the local/config use case.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
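Before committing the file, it can help to sanity-check its shape locally. The sketch below validates only the `maintainers` field shown in the example above; any additional fields Glama accepts or requires are outside its scope.

```python
import json

def check_glama_json(text: str) -> list[str]:
    """Return a list of problems found in a glama.json document.

    Checks only the 'maintainers' structure from the example above;
    Glama's full server.json schema may impose further requirements.
    """
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = []
    maintainers = data.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("'maintainers' must be a non-empty list")
    elif not all(isinstance(m, str) and m for m in maintainers):
        problems.append("every maintainer must be a non-empty GitHub username string")
    return problems

example = (
    '{"$schema": "https://glama.ai/mcp/schemas/server.json",'
    ' "maintainers": ["your-github-username"]}'
)
print(check_glama_json(example))  # an empty list means the file looks OK
```

An empty result list means the file matches the example's structure; anything else describes what to fix before authenticating.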
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5, weighted across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
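As an illustration, the weighting above can be applied to the per-tool dimension scores in this report. This is a sketch of the published formula, not the official calculator, which may round or aggregate slightly differently.

```python
# Weights for the six Tool Definition Quality dimensions (from the text above).
TDQS_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

# Per-tool dimension scores as reported in the Tool Scores section.
tools = {
    "get_scan_results": {"purpose": 4, "usage": 2, "behavior": 3,
                         "parameters": 3, "conciseness": 5, "completeness": 3},
    "scan_url":         {"purpose": 4, "usage": 3, "behavior": 3,
                         "parameters": 3, "conciseness": 5, "completeness": 3},
    "check_status":     {"purpose": 5, "usage": 3, "behavior": 3,
                         "parameters": 4, "conciseness": 5, "completeness": 4},
    "scan_my_servers":  {"purpose": 5, "usage": 4, "behavior": 3,
                         "parameters": 4, "conciseness": 5, "completeness": 4},
}

def tdqs(scores: dict) -> float:
    """Weighted 1-5 score for a single tool."""
    return sum(TDQS_WEIGHTS[dim] * value for dim, value in scores.items())

per_tool = [tdqs(scores) for scores in tools.values()]

# Server-level definition quality: 60% mean TDQS + 40% minimum TDQS.
definition_quality = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)

# Server Coherence: Disambiguation, Naming, Tool Count, Completeness, equally weighted.
coherence = (5 + 4 + 5 + 3) / 4

# Overall score: 70% definition quality + 30% coherence, then mapped to a tier.
overall = 0.7 * definition_quality + 0.3 * coherence
tier = ("A" if overall >= 3.5 else "B" if overall >= 3.0
        else "C" if overall >= 2.0 else "D" if overall >= 1.0 else "F")
print(f"definition quality: {definition_quality:.2f}, "
      f"overall: {overall:.2f}, tier: {tier}")
```

With these inputs the overall score comes out around 3.7, which falls in the A tier (≥3.5).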
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/MCPAmpel/mcpampel'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.