AgentGrade
Server Details
Scan any website for AI agent readiness, payment protocols, and discovery endpoints
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.3/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose: get_history retrieves historical scan data, scan_compact provides a streamlined scan for agent decisions, scan_url conducts a comprehensive scan, and validate_x402_json performs schema validation. There is no overlap or ambiguity between these functions.
All tools follow a consistent verb_noun naming pattern (get_history, scan_compact, scan_url, validate_x402_json) using snake_case throughout. The naming is predictable and readable without any deviations.
With 4 tools, the server is well-scoped for its apparent domain of URL scanning and validation. Each tool earns its place by covering distinct aspects of the workflow, avoiding bloat or thinness.
The tool set covers core scanning and validation operations well, with scan_url for full analysis, scan_compact for agent-focused results, get_history for tracking, and validate_x402_json for schema checks. A minor gap is the lack of update or delete operations for scan history, but agents can work around this.
Available Tools
4 tools

get_history (Grade B)
Get scan history for a URL. Requires database to be configured.
| Name | Required | Description | Default |
|---|---|---|---|
| url | No | Filter by URL (optional) | |
| limit | No | Max results (max 100) | 20 |
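As an MCP tool, get_history is invoked through a JSON-RPC 2.0 `tools/call` request. The sketch below builds such a request; the argument values (the example URL and limit) are illustrative, not taken from this listing.

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 tools/call request in the shape MCP clients send."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Both parameters are optional; omit "url" to get history across all scanned URLs.
request = build_tool_call("get_history", {"url": "https://example.com", "limit": 50})
print(json.dumps(request, indent=2))
```

Note that because `url` and `limit` are both optional, `{"arguments": {}}` is also a valid call and falls back to the documented default limit of 20.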
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions a database requirement, which is useful context, but fails to describe key traits like whether this is a read-only operation, potential rate limits, error conditions, or what the output looks like (e.g., list format, timestamps). This leaves significant gaps for an agent to understand the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences with zero waste, front-loading the core purpose and a key prerequisite. It's appropriately sized for a simple tool, though it could be slightly more structured (e.g., bullet points) for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is incomplete. It lacks details on output format, error handling, and behavioral traits, which are crucial for an agent to use it correctly. The database requirement is helpful but insufficient for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with clear documentation for both parameters (url as optional filter, limit with defaults). The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'scan history for a URL', making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'scan_url' or 'scan_compact', which might also involve scanning operations, so it misses full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by stating 'Requires database to be configured', which implies a prerequisite. However, it doesn't explicitly guide when to use this tool versus alternatives like 'scan_url' or 'validate_x402_json', leaving usage decisions ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_compact (Grade B)
Compact scan returning only rails, capabilities, and a numeric score. Optimized for agent decision-making.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL to scan | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is 'optimized for agent decision-making' which hints at performance characteristics, but fails to disclose critical behavioral traits like whether it's read-only/destructive, authentication requirements, rate limits, error handling, or what 'rails' and 'capabilities' specifically refer to.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences that efficiently communicate the tool's purpose and optimization. Every word earns its place with no redundant information, making it front-loaded and easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's scanning function with no annotations and no output schema, the description is insufficiently complete. It doesn't explain what 'rails' and 'capabilities' mean in this context, doesn't describe the format or range of the 'numeric score', and provides minimal behavioral context for a tool that presumably interacts with external systems.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the single parameter 'url' well-documented in the schema. The description adds no additional parameter semantics beyond what the schema provides, maintaining the baseline score of 3 for adequate but not enhanced parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a 'compact scan' that returns specific data types (rails, capabilities, numeric score) and is optimized for agent decision-making. It distinguishes from generic scanning by specifying the limited output format, though it doesn't explicitly differentiate from sibling tools like 'scan_url'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('optimized for agent decision-making') suggesting this tool is for quick assessments rather than comprehensive scanning. However, it provides no explicit guidance on when to use this versus alternatives like 'scan_url' or 'validate_x402_json', leaving the agent to infer based on the 'compact' nature.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_url (Grade C)
Scan a URL for agent capabilities and payment protocols. Returns full results including x402, MPP, L402, discovery, bazaar, MCP, plugins, OpenAPI, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL to scan (http or https) | |
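The schema restricts `url` to http or https. A client can enforce that constraint locally before spending a call; this is a minimal pre-check sketch, not part of the server's API.

```python
from urllib.parse import urlparse

def is_scannable_url(url: str) -> bool:
    """Enforce the schema's scheme constraint (http or https) client-side,
    and require a host so bare schemes like "https://" are rejected."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

is_scannable_url("https://example.com")  # True
is_scannable_url("ftp://example.com")    # False: unsupported scheme
```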
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the scan returns 'full results including x402, MPP, L402, discovery, bazaar, MCP, plugins, OpenAPI, and more,' which gives some output context. However, it lacks critical details like whether this is a read-only operation, potential rate limits, authentication needs, or error handling. For a tool with no annotations, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in the first sentence and elaborating on results in the second. Both sentences earn their place by clarifying scope and output. It could be slightly more structured by explicitly separating purpose from output details, but it's efficient with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (scanning for multiple capabilities) and lack of annotations or output schema, the description is minimally adequate. It covers the purpose and output types but misses behavioral traits and usage context. With no output schema, it should ideally describe return values more thoroughly, but the listed result types provide some completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'url' parameter fully documented in the schema. The description adds no additional parameter semantics beyond implying the URL should be scannable for the listed capabilities. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Scan a URL for agent capabilities and payment protocols.' It specifies both the action (scan) and the target (URL), and mentions what it looks for (agent capabilities, payment protocols). However, it doesn't explicitly differentiate from sibling tools like scan_compact or validate_x402_json, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There are sibling tools like scan_compact and validate_x402_json, but no indication of when this comprehensive scan is preferred over more focused alternatives. No prerequisites, exclusions, or comparative context are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_x402_json (Grade A)
Validate x402.json content against expected schema. Returns errors, warnings, and suggestions. No network requests — pure validation.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | The x402.json content to validate | |
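The listing does not reproduce the x402.json schema, but the tool's reported output shape (errors, warnings, suggestions) can be mirrored in a local pre-check. The field names used below (`x402Version`, `accepts`) are assumptions for illustration; the authoritative validation happens server-side.

```python
def precheck_x402(content):
    """Hypothetical local pre-check mirroring the tool's output shape.
    Field names here are assumptions; the real schema is enforced by
    validate_x402_json itself."""
    errors, warnings, suggestions = [], [], []
    if not isinstance(content, dict):
        errors.append("content must be a JSON object")
    else:
        if "x402Version" not in content:
            errors.append("missing required field: x402Version")
        if not content.get("accepts"):
            warnings.append("no payment requirements listed under 'accepts'")
    return {"errors": errors, "warnings": warnings, "suggestions": suggestions}
```

Like the tool itself, the sketch makes no network requests: it only inspects the object it is given.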
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a validation tool that returns errors, warnings, and suggestions, and it performs no network requests. However, it lacks details on error handling, performance characteristics, or whether it modifies input data, leaving some gaps in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, with two sentences that efficiently convey purpose, output, and a key behavioral constraint ('No network requests'). Every sentence adds essential information without redundancy, making it easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (validation with one parameter but no output schema), the description is adequate but incomplete. It covers purpose and key behavior but lacks details on output structure (errors, warnings, suggestions format) and doesn't reference siblings for context, leaving room for improvement in guiding the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents the single parameter 'content' as an object for x402.json validation. The description adds minimal value beyond this, mentioning 'x402.json content' but not elaborating on format or constraints beyond what the schema implies, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('validate x402.json content against expected schema') and distinguishes it from siblings by mentioning 'No network requests — pure validation', which differentiates it from tools like scan_url that likely involve network operations. It uses precise verb+resource terminology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for validating x402.json content against a schema, with the explicit exclusion of network requests. However, it doesn't specify alternatives among siblings or when not to use it, such as compared to scan_compact which might handle different formats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
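The verification step boils down to a membership check: the parsed glama.json must list your account email among its maintainers. A minimal sketch of that check, using the structure shown above:

```python
def matches_maintainer(glama_json: dict, account_email: str) -> bool:
    """Return True if the parsed /.well-known/glama.json lists the given
    email among its maintainers, as claim verification requires."""
    maintainers = glama_json.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)

doc = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
matches_maintainer(doc, "your-email@example.com")  # True
```

How Glama fetches and schedules this check is not specified here; the sketch only captures the matching rule stated above.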
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!