aiprox-mcp
Server Quality Checklist
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.4.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
This server provides 5 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses return values (endpoint and pricing), which compensates for the missing output schema, but fails to mention safety characteristics (read-only vs. destructive), side effects, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. The first states purpose; the second explains inputs and outputs. Every clause earns its place and the description is appropriately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter tool without output schema or annotations, the description adequately covers the functional context by naming the service (AIProx) and specifying the return payload (endpoint, pricing). Minor gap: no error handling or edge case guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description loosely maps 'Describe what you need' to the task parameter but adds no specific syntax, format constraints, or semantic details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
States a specific verb (Find) and resource (agent) with clear scope ('for a specific task'). The task-based discovery purpose distinguishes it from siblings like get_agent (likely ID-based retrieval) and list_agents (enumeration).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance through 'Describe what you need,' suggesting natural language input for discovery. However, it lacks explicit when-to-use criteria or comparisons to alternatives (e.g., when to use get_agent instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 4/5
With no annotations, the description carries the full burden and adds valuable behavioral context: 'Free to register' (cost) and 'pending until verified' (a post-invocation state requiring manual team review). It does not mention auth requirements or error conditions.
Conciseness: 5/5
Three short sentences with zero redundancy. Front-loaded with core action, followed by cost and lifecycle state. Every sentence earns its place.
Completeness: 3/5
For a 9-parameter mutation tool with no output schema, the description adequately covers the business logic (verification workflow) but omits the return value structure and error scenarios that an output schema would otherwise have provided.
Parameters: 3/5
The schema has 100% description coverage, establishing a baseline of 3. The description does not add parameter-specific guidance (e.g., the rail/payment_address relationship or endpoint format requirements) beyond what the schema properties already document.
Purpose: 5/5
States specific action (Register) and resource (new agent in AIProx registry). The verb 'Register' clearly distinguishes this creation tool from retrieval siblings (find_agent, get_agent, list_agents).
Usage Guidelines: 3/5
Provides workflow context that registrations are 'pending until verified,' implying async usage expectations. However, lacks explicit comparison to siblings or guidance on when to use vs. alternatives.
- Behavior: 3/5
No annotations are provided, so the description carries the full disclosure burden. It explains what gets returned ('full spec including all required and optional fields'), which is valuable given the lack of an output schema. However, it omits operational details like idempotency, caching behavior, or whether this is a safe read-only operation.
Conciseness: 5/5
The description consists of two efficient sentences with no redundant words. The first sentence identifies the action and resource; the second explains the return value content and purpose. Every sentence earns its place.
Completeness: 4/5
Given the tool's low complexity (zero parameters, no nested objects) and absence of an output schema, the description adequately covers the essentials: what the tool retrieves and the scope of the returned data. It appropriately compensates for the missing output schema by describing the return content.
Parameters: 4/5
The input schema contains zero parameters. According to the scoring rubric, zero parameters establishes a baseline score of 4, as there are no parameter semantics to clarify beyond what the empty schema already conveys.
Purpose: 5/5
The description clearly states the tool retrieves the 'AIProx agent manifest specification' using the specific verb 'Get'. It effectively distinguishes itself from siblings (get_agent, list_agents) by targeting the specification/schema rather than agent instances, and links to register_agent by mentioning it's used 'for registering agents'.
Usage Guidelines: 3/5
The description provides implied usage context by stating the spec is for 'registering agents', hinting it should be used before registration. However, it lacks explicit guidance on when to use this versus alternatives like get_agent, or prerequisites for invoking the tool.
- Behavior: 4/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively compensates by detailing the return structure ('Returns agent names, capabilities, pricing, endpoints, and payment rails') and scale ('15 agents live'), though it omits rate limits, caching behavior, or authentication requirements.
Conciseness: 4/5
The description is front-loaded with the core purpose, followed by filters, return values, examples, and cardinality. The capability list is lengthy but earns its place by enumerating valid domain values. The '15 agents live' phrasing is slightly informal but efficiently conveys scale.
Completeness: 4/5
Given the lack of an output schema, the description appropriately details the return structure and fields. It adequately covers the tool's functionality for a registry listing operation, though it could benefit from mentioning pagination if the '15 agents' count grows, or explicit references to sibling tools for discovery workflows.
Parameters: 4/5
Although the schema has 100% coverage with examples, the description adds value by emphasizing the optional nature of filters ('Optionally filter by') and providing an extensive list of 12+ capability examples beyond the four mentioned in the schema, helping users understand the domain of valid values.
Purpose: 5/5
The description clearly states the tool 'List[s] all active agents in the AIProx registry,' specifying the verb (list), resource (agents), and scope (active, AIProx registry). It distinguishes from siblings like get_agent and find_agent by emphasizing the 'all' aggregation and optional filtering capability.
Usage Guidelines: 3/5
While the description implies usage through 'Optionally filter by,' it provides no explicit guidance on when to use list_agents versus find_agent (search) or get_agent (specific retrieval). It lacks explicit when-to-use or when-not-to-use guidance.
- Behavior: 4/5
No annotations are provided, so the description carries the full burden. It compensates well by disclosing the specific return payload fields ('endpoint, pricing, payment rail, capabilities, and models'), which is critical behavioral information given the lack of an output schema. It does not mention error states or caching behavior.
Conciseness: 5/5
Two sentences with zero waste. The first sentence front-loads the action and target; the second efficiently lists return fields. Every word earns its place.
Completeness: 5/5
For a simple single-parameter lookup tool without output schema, the description is complete. It covers the lookup mechanism, the resource scope, and compensates for missing output schema by listing return fields. No additional information is necessary for correct invocation.
Parameters: 3/5
With 100% schema description coverage, the baseline is 3. The description mentions 'by name' which aligns with the 'name' parameter, but does not add additional semantic detail (syntax rules, case sensitivity) beyond what the schema already provides with its examples.
Purpose: 5/5
The description uses a specific verb ('Get') with clear resource ('agent') and scope ('full details', 'AIProx registry'). The phrase 'by name' effectively distinguishes this from sibling tools like find_agent (likely search) and list_agents (likely returns collection).
Usage Guidelines: 4/5
The phrase 'by name' provides clear context that this tool is for exact-name lookups, implying when to use it versus find_agent. However, it does not explicitly name alternative tools or state exclusion criteria (e.g., 'do not use if you only have partial name').
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy the card badge snippet to your README.md.
Score Badge
Copy the score badge snippet to your README.md.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
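Before committing the file, it can help to sanity-check that it parses and actually lists your username. Below is a minimal Python sketch under those assumptions; the placeholder username mirrors the example above, and the check is illustrative rather than part of Glama's tooling.

import json

# Load the config from the repository root, where organization-owned servers must place it.
with open("glama.json") as f:
    config = json.load(f)  # raises json.JSONDecodeError if the file is malformed

# "maintainers" lists the GitHub usernames allowed to claim the server.
maintainers = config.get("maintainers", [])
assert "your-github-username" in maintainers, "add your GitHub username to the maintainers array"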
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
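To make the arithmetic concrete, here is a minimal Python sketch of the formula described above. The dimension weights, the 60/40 and 70/30 blends, and the tier cutoffs come from this page; the function names and input format are illustrative assumptions, not Glama's actual implementation.

DIMENSION_WEIGHTS = {
    "purpose": 0.25,       # Purpose Clarity
    "usage": 0.20,         # Usage Guidelines
    "behavior": 0.20,      # Behavioral Transparency
    "parameters": 0.15,    # Parameter Semantics
    "conciseness": 0.10,   # Conciseness & Structure
    "completeness": 0.10,  # Contextual Completeness
}

def tool_tdqs(scores):
    """Weighted 1-5 definition quality score for a single tool."""
    return sum(weight * scores[dim] for dim, weight in DIMENSION_WEIGHTS.items())

def definition_quality(tools):
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS."""
    tdqs = [tool_tdqs(scores) for scores in tools]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(tools, coherence):
    """Overall quality: 70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality(tools) + 0.3 * coherence

def tier(score):
    """Map an overall score to a letter tier; B and above is passing."""
    for grade, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return grade
    return "F"

For example, the first tool scored above (Purpose 5, Usage Guidelines 3, Behavior 3, Parameters 3, Conciseness 5, Completeness 4) works out to a TDQS of 3.8, and the 40% weight on the minimum TDQS is what lets a single weak tool pull the whole server down.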
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/unixlamadev-spec/aiprox-mcp'
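The same request from Python using only the standard library; the endpoint is copied from the curl command, and since the response schema is not documented on this page, the sketch simply pretty-prints whatever JSON comes back.

import json
import urllib.request

# Same endpoint as the curl command above.
url = "https://glama.ai/api/mcp/v1/servers/unixlamadev-spec/aiprox-mcp"

with urllib.request.urlopen(url) as response:
    server_info = json.load(response)

print(json.dumps(server_info, indent=2))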
If you have feedback or need assistance with the MCP directory API, please join our Discord server.