AXIS Toolbox — Agentic Commerce Codebase Intelligence
Server Details
Generate AGENTS.md, AP2 compliance docs, checkout rules, debug playbook & MCP configs from any repo.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: lastmanupinc-hub/AXIS-iliad
- GitHub Stars: 0
- Server Listing: AXIS iliad
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 12 of 12 tools scored. Lowest: 3.1/5.
The tool set has clear functional groupings (analysis, discovery, referral, artifact retrieval), but some tools overlap in purpose. For example, discover_agentic_commerce_tools, discover_agentic_purchasing_needs, list_programs, and search_and_discover_tools all serve discovery functions with varying scopes, which could confuse an agent about which to use. However, descriptions help differentiate them by specific use cases.
Most tools follow a consistent verb_noun or verb_my_noun pattern (e.g., analyze_files, get_artifact, improve_my_agent_with_axis), making them predictable and readable. Minor deviations include check_referral_credits (which uses 'check' instead of 'get') and prepare_for_agentic_purchasing (a longer phrase-based name), but overall the naming is coherent.
With 12 tools, the count is well-scoped for the server's purpose of providing codebase intelligence and commerce hardening. Each tool appears to serve a distinct role in the workflow, from analysis and discovery to artifact retrieval and referral management, without feeling overly sparse or bloated.
The tool surface covers the core workflows of analyzing codebases, retrieving artifacts, discovering tools, and managing referrals and commerce readiness. One minor gap is the lack of explicit update or delete operations for artifacts or snapshots, but given the domain's focus on analysis and generation these may not be necessary, and agents can work around the gap with the provided tools.
Available Tools
12 tools

analyze_files — A · Idempotent
Analyze source files directly and generate the full 99-artifact AXIS bundle without using GitHub. Returns snapshot_id plus artifact listing; use this for local, generated, or unsaved code. Requires Authorization: Bearer <api_key>. Use analyze_repo for GitHub URLs or improve_my_agent_with_axis for recommendation-first agent hardening.
| Name | Required | Description | Default |
|---|---|---|---|
| files | Yes | Source files to analyze | |
| goals | Yes | Analysis goals | |
| frameworks | Yes | Detected or known frameworks | |
| project_name | Yes | Name of the project | |
| project_type | Yes | Project type (web_application, api_service, cli_tool, library, monorepo) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
| artifacts | Yes | |
| project_id | Yes | |
| snapshot_id | Yes | |
| artifact_count | Yes | |
| programs_executed | No | |
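Putting the parameter table into practice, the sketch below assembles an analyze_files call. It is a minimal illustration, not the server's client library: the `build_analyze_files_call` helper, the placeholder API key, and the `headers`/`arguments` envelope are assumptions; only the parameter names, the `[{path, content}]` file format, the allowed `project_type` values, and the `Authorization: Bearer <api_key>` requirement come from the listing above.

```python
import json

# Placeholder key -- the listing names the transport (Streamable HTTP)
# but not the base URL, so no request is actually sent here.
API_KEY = "YOUR_AXIS_API_KEY"

# Allowed values quoted from the project_type row of the parameter table.
ALLOWED_PROJECT_TYPES = {
    "web_application", "api_service", "cli_tool", "library", "monorepo",
}


def build_analyze_files_call(project_name, project_type, files, goals, frameworks):
    """Assemble headers and arguments per the analyze_files parameter table."""
    if project_type not in ALLOWED_PROJECT_TYPES:
        raise ValueError(f"unsupported project_type: {project_type}")
    for f in files:
        # The tool description says to pass files as a [{path, content}] array.
        if not {"path", "content"} <= f.keys():
            raise ValueError("each file needs 'path' and 'content'")
    return {
        "headers": {"Authorization": f"Bearer {API_KEY}"},
        "arguments": {
            "files": files,
            "goals": goals,
            "frameworks": frameworks,
            "project_name": project_name,
            "project_type": project_type,
        },
    }


call = build_analyze_files_call(
    project_name="demo-shop",
    project_type="web_application",
    files=[{"path": "app.py", "content": "print('hello')"}],
    goals=["generate AGENTS.md"],
    frameworks=["flask"],
)
print(json.dumps(call["arguments"], indent=2))
```

All five parameters are required, so the helper takes them all; validating `project_type` locally avoids a round trip that the server would reject anyway.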
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: 'Deterministic: same input → byte-identical output' explains consistency, 'Requires API key' states authentication needs, and 'Returns snapshot_id' describes the output. It also mentions 'max varies by tier' for file limits. While comprehensive, it doesn't cover rate limits or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. Each subsequent sentence adds important information (artifacts, parameters, output, determinism, authentication). While slightly dense due to listing 86 artifacts, every sentence earns its place by providing necessary context for tool usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 parameters, no annotations, no output schema), the description does well by explaining the output ('Returns snapshot_id'), authentication ('Requires API key'), and behavioral consistency ('Deterministic'). It could be more complete by detailing error cases or the structure of returned artifacts, but it covers the essential context for a complex analysis tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds some value by explaining the 'files' parameter format ('Pass files as [{path, content}] array') and mentioning 'max varies by tier', but doesn't provide additional meaning for other parameters like 'goals' or 'frameworks' beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Analyze source files directly (no GitHub required) and receive all 86 AXIS artifacts'. It specifies the verb ('analyze'), resource ('source files'), and distinguishes from sibling 'analyze_repo' by emphasizing 'no GitHub required'. The detailed list of artifacts (AGENTS.md, .cursorrules, etc.) provides concrete examples of what the analysis produces.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'Analyze source files directly (no GitHub required)' implicitly suggests using 'analyze_repo' for GitHub-based analysis. It also mentions 'Use get_artifact to retrieve any specific file' as a follow-up action. However, it doesn't explicitly state when NOT to use this tool or provide detailed alternatives beyond the GitHub distinction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
analyze_repo — B · Idempotent
Analyze a GitHub repository and generate 99 structured AXIS artifacts across 18 programs. Returns snapshot_id plus an artifacts listing; use get_artifact to read files and get_snapshot to re-enumerate outputs without re-running analysis. Requires Authorization: Bearer <api_key>. Use this when the source of truth is a GitHub repo URL. Pricing: $0.50 standard, $0.15 lite budget mode per repo. This is the paid path for full repo analysis and can return authentication, quota, payment-required, invalid-URL, or GitHub-fetch errors. Private repos require a stored GitHub token. Use analyze_files instead for inline file payloads, or list_programs/search_and_discover_tools when you are still selecting a workflow.
| Name | Required | Description | Default |
|---|---|---|---|
| github_url | Yes | GitHub repository URL (https://github.com/owner/repo) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
| artifacts | Yes | |
| project_id | Yes | |
| snapshot_id | Yes | |
| artifact_count | Yes | |
| programs_executed | No | |
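Since analyze_repo is the paid path and can fail in several documented ways, it is worth validating the URL locally and checking the response shape before trusting snapshot_id downstream. The sketch below does both; the helper names and the regex are assumptions, while the expected URL format and the required output fields come straight from the schema tables above.

```python
import re


def is_valid_github_url(url: str) -> bool:
    """analyze_repo expects https://github.com/owner/repo, per its parameter table."""
    return re.fullmatch(r"https://github\.com/[\w.-]+/[\w.-]+", url) is not None


# Required fields copied from the analyze_repo output schema above.
REQUIRED_FIELDS = {"status", "artifacts", "project_id", "snapshot_id", "artifact_count"}


def check_analyze_repo_response(resp: dict) -> dict:
    """Fail fast if the response is missing any schema-required field."""
    missing = REQUIRED_FIELDS - resp.keys()
    if missing:
        raise ValueError(f"response missing fields: {sorted(missing)}")
    return resp


# Usage with a stand-in response (no real call is made here):
resp = check_analyze_repo_response({
    "status": "complete",
    "artifacts": ["AGENTS.md"],
    "project_id": "p-1",
    "snapshot_id": "abc-123",
    "artifact_count": 1,
})
print(resp["snapshot_id"])
```

Rejecting malformed URLs client-side avoids paying for a call the server would answer with an invalid-URL error.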
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a 'snapshot_id' for artifact retrieval and requires an API key, which are useful behavioral details. However, it lacks information on rate limits, error conditions, authentication specifics, or what happens if the repo is private, leaving gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core functionality but includes a lengthy list of artifact examples (e.g., 'AGENTS.md, .cursorrules...') that could be summarized. While informative, this reduces conciseness. The sentences are clear, but the list feels excessive for a tool description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description partially compensates by explaining the return value ('snapshot_id') and API key requirement. However, it lacks details on output structure, error handling, or operational constraints, making it incomplete for a tool that performs complex analysis with multiple artifacts.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single parameter 'github_url'. The description adds no additional parameter semantics beyond what the schema provides, such as format constraints or examples. Baseline score of 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool analyzes public GitHub repositories and produces structured AI-context artifacts, with specific examples like AGENTS.md and .cursorrules. It distinguishes from sibling 'analyze_files' by focusing on entire repositories rather than individual files, though it doesn't explicitly name this distinction. The verb 'analyze' and resource 'public GitHub repo' are specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Requires API key' as a prerequisite but provides no guidance on when to use this tool versus alternatives like 'analyze_files' or 'get_snapshot'. It doesn't specify scenarios where this tool is preferred over siblings or any exclusions, leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_agentic_purchasing_needs — A · Read-only · Idempotent
Discover the best AXIS workflow for a purchasing or compliance task. Free, no auth, and logs lightweight task metadata for intent analytics. Example: task_description='prepare for autonomous Visa checkout'. Use this when you need commerce-specific triage and next-step guidance. Use search_and_discover_tools instead for non-commerce keyword routing across all programs.
| Name | Required | Description | Default |
|---|---|---|---|
| focus_areas | No | Optional: specific areas to focus on | |
| task_description | No | What the agent is trying to accomplish | |
| current_readiness | No | Optional: current Purchasing Readiness Score (0-100) if known | |
Output Schema
| Name | Required | Description |
|---|---|---|
| readiness | Yes | |
| task_description | Yes | |
| matched_capabilities | Yes | |
| recommended_next_step | Yes |
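Because all three parameters are optional, a caller should only include what it actually knows. The sketch below builds the arguments object that way; the `build_discovery_args` helper is an assumption, while the parameter names and the 0-100 range for `current_readiness` are taken from the table above.

```python
def build_discovery_args(task_description=None, focus_areas=None, current_readiness=None):
    """Build discover_agentic_purchasing_needs arguments, omitting unknowns."""
    args = {}
    if task_description is not None:
        args["task_description"] = task_description
    if focus_areas is not None:
        args["focus_areas"] = focus_areas
    if current_readiness is not None:
        # The schema documents the score as 0-100.
        if not 0 <= current_readiness <= 100:
            raise ValueError("current_readiness must be between 0 and 100")
        args["current_readiness"] = current_readiness
    return args


# Example task taken from the tool description above:
args = build_discovery_args(
    task_description="prepare for autonomous Visa checkout",
    current_readiness=40,
)
print(args)
```

Sending an empty arguments object is also valid here, since nothing is required.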
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden and does well: it discloses that no authentication is required, describes the return content (tools, readiness score methodology, etc.), includes a call-to-action for next steps, and mentions searchable terms for context. It doesn't cover rate limits or error behaviors, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with core purpose, followed by key details (returns, auth status, usage context). The searchable terms list is slightly verbose but serves as useful context. Overall, most sentences earn their place, though it could be tighter by integrating the searchable terms more seamlessly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides strong context: purpose, usage guidelines, behavioral traits (no auth, returns specific outputs), and parameter hints. It doesn't detail output structure or error handling, but given the discovery nature and lack of structured fields, it's largely complete for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by contextualizing parameters: it implies 'task_description' should detail a commerce challenge, and the searchable terms (e.g., 'AP2 compliance', 'negotiation playbook') help clarify what 'focus_areas' might include, enhancing understanding beyond the schema's technical descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to analyze a user's purchasing/commerce task and return specific AXIS tools, readiness methodology, compliance generators, and self-onboarding steps. It distinguishes from siblings by focusing on discovery/assessment rather than execution (e.g., 'prepare_for_agentic_purchasing' is mentioned as a next step).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('to understand what AXIS can do for your specific commerce challenge before committing to an authenticated call') and when not to use (no authentication required, implying it's for preliminary assessment). Mentions 'prepare_for_agentic_purchasing' as an alternative for authenticated calls, and the searchable terms help differentiate from other discovery tools like 'discover_agentic_commerce_tools'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_commerce_tools — A · Read-only · Idempotent
Discover AXIS install metadata, pricing, and shareable manifests for commerce-capable agents. Free, no auth, and no mutation beyond read access. Example: call before wiring AXIS into Claude Desktop, Cursor, or VS Code. Use this when you need onboarding and ecosystem setup details. Use search_and_discover_tools instead for keyword routing or discover_agentic_purchasing_needs for purchasing-task triage.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters
Output Schema
| Name | Required | Description |
|---|---|---|
| tools | Yes | |
| install | Yes | |
| axis_iliad | Yes | |
| free_tools | Yes | |
| shareable_manifest | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explicitly states 'Free, no auth, and no mutation beyond read access', which clarifies cost, authentication, and safety aspects. While annotations already indicate readOnlyHint=true and destructiveHint=false, the description reinforces this with plain language and adds practical constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: first sentence states purpose, second adds behavioral traits, third gives concrete example, fourth provides usage guidelines, and fifth distinguishes from alternatives. Every sentence earns its place with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters, comprehensive annotations, and an output schema exists, the description provides complete context. It covers purpose, behavioral traits, usage scenarios, and sibling differentiation, making it fully adequate for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline would be 4. The description doesn't need to explain parameters, but it provides context about what the tool discovers (metadata, pricing, manifests) which helps understand the output semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('discover AXIS install metadata, pricing, and shareable manifests') and resources ('commerce-capable agents'). It explicitly distinguishes from siblings by naming alternatives ('search_and_discover_tools' and 'discover_agentic_purchasing_needs') for different use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('call before wiring AXIS into Claude Desktop, Cursor, or VS Code', 'when you need onboarding and ecosystem setup details') and when to use alternatives ('Use search_and_discover_tools instead for keyword routing or discover_agentic_purchasing_needs for purchasing-task triage').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_artifact — A · Read-only · Idempotent
Read one generated artifact by snapshot_id and path. Requires access to the snapshot and may return snapshot-not-found, invalid-path, or artifact-not-found errors. Example: snapshot_id=abc-123, path=AGENTS.md. Use this when you need the full text of one artifact. Use get_snapshot instead when you first need the artifact list.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Artifact file path as returned in the artifacts list | |
| snapshot_id | Yes | Snapshot ID | |
Output Schema
| Name | Required | Description |
|---|---|---|
| content | Yes | UTF-8 artifact content |
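The analyze → get_artifact chain the description walks through can be sketched as follows. The `fetch_artifact` helper and the stub client are assumptions standing in for whatever MCP client is actually in use; the tool name, the `snapshot_id`/`path` arguments, the `abc-123`/`AGENTS.md` example values, and the `content` output field come from the listing above.

```python
def fetch_artifact(call_tool, snapshot_id, path):
    """Retrieve one artifact's text via a get_artifact call.

    call_tool is any callable with the shape (tool_name, arguments) -> dict,
    e.g. a wrapper around an MCP client session.
    """
    result = call_tool("get_artifact", {"snapshot_id": snapshot_id, "path": path})
    # Per the output schema, the response carries UTF-8 'content'.
    if "content" not in result:
        raise KeyError("get_artifact response missing 'content'")
    return result["content"]


# Stub client standing in for a real session, so this runs offline:
def fake_call_tool(name, arguments):
    assert name == "get_artifact"
    return {"content": f"# AGENTS.md for snapshot {arguments['snapshot_id']}"}


text = fetch_artifact(fake_call_tool, "abc-123", "AGENTS.md")
print(text)
```

Injecting the client as a callable keeps the helper testable and independent of any one transport.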
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly indicates this is a read operation ('Read the full UTF-8 content'), specifies the encoding (UTF-8), and explains the dependency on prior operations (snapshot_id requirement). However, it doesn't mention potential errors (e.g., invalid paths or snapshot IDs) or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose with helpful examples, the second explains prerequisites and related tools. Every sentence adds value with zero wasted words, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with 2 parameters and 100% schema coverage, the description provides good context about workflow dependencies and path examples. However, without an output schema, it doesn't describe the return format (e.g., string content structure) or potential error cases, leaving some gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters. The description adds some context about path format ('as returned in the artifacts list') and provides concrete examples of valid paths, but doesn't significantly expand beyond what the schema provides. Baseline 3 is appropriate when the schema does most of the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Read the full UTF-8 content') and resource ('any generated artifact by path'), with concrete examples of artifact paths. It distinguishes this tool from sibling tools like get_snapshot (which enumerates artifacts) and analyze_repo/analyze_files (which create snapshots).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Requires snapshot_id from a prior analyze_repo or analyze_files call') and provides an alternative for discovering available paths ('Use the artifacts list from get_snapshot to enumerate all available paths'). It clearly defines prerequisites and related workflows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_referral_code — A · Idempotent
Get or create the caller's AXIS referral token. Requires Authorization: Bearer <api_key>, has no usage charge, and may persist a new referral code if one does not exist yet. Example: call before sharing AXIS with another agent or workspace. Use this when you need the shareable token itself. Use get_referral_credits instead when you need balances, milestones, and discount status.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters
Output Schema
| Name | Required | Description |
|---|---|---|
| cost | Yes | |
| next_milestone | Yes | |
| referral_token | Yes | |
| current_earnings | Yes | |
| share_instruction | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses key behavioral traits: it explains the referral program's financial incentives ($0.001 per conversion, caps, discounts), promotional details (5th paid call free), and prerequisites ('Requires API key'), though it lacks information on response format or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence, followed by relevant details. While efficient, it could be slightly more structured by separating program rules from tool usage, but all sentences add value without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 params, no annotations, no output schema), the description is quite complete, covering purpose, program rules, and prerequisites. It could improve by specifying the return value format, but for a straightforward retrieval tool, it provides sufficient context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so no parameter details are needed. The description appropriately focuses on context and usage without redundant parameter information, meeting the baseline for zero-param tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Get') and resource ('your agent's unique referral code for the Share-to-Earn program'), distinguishing it from siblings like 'check_referral_credits' or 'list_programs' by focusing on code retrieval rather than credit checking or program listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Share this code with other agents') and mentions the program's purpose, but does not explicitly state when not to use it or name specific alternatives among sibling tools, though it implicitly differentiates from 'check_referral_credits'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_referral_credits — A · Read-only · Idempotent
Get the caller's referral earnings, milestones, and free-call status. Requires Authorization: Bearer <api_key>, has no usage charge, and returns the current discount ledger without creating a new analysis. Example: call after a referral campaign to inspect earned credits. Use this when you need balances and milestones. Use get_referral_code instead when you only need the shareable token.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters
Output Schema
| Name | Required | Description |
|---|---|---|
| cost | Yes | |
| tier | Yes | |
| next_milestone | Yes | |
| referral_token | Yes | |
| discount_active | Yes | |
| earned_discount | Yes | |
| paid_call_count | Yes | |
| lifetime_referrals | Yes | |
| free_calls_remaining | Yes | |
| earned_credits_millicents | Yes | |
| persistence_credits_remaining | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies authorization requirements ('Requires Authorization: Bearer <api_key>'), indicates no usage charge, clarifies that it returns a 'current discount ledger without creating a new analysis', and provides an example use case. While annotations cover read-only and idempotent aspects, the description enhances understanding with practical details, though it doesn't mention rate limits or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with key information: purpose, authorization, cost, and return value. Each sentence adds value, such as the example and sibling differentiation, with no wasted words. It efficiently communicates necessary details in a compact form.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, rich annotations (readOnlyHint, idempotentHint), and an output schema, the description is complete. It covers purpose, usage guidelines, authorization, cost, behavioral traits, and sibling differentiation, providing all needed context without needing to explain return values due to the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately does not discuss parameters, as none exist, and instead focuses on the tool's purpose and usage, which is efficient and avoids redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and the resource ('caller's referral earnings, milestones, and free-call status'), distinguishing it from the sibling tool get_referral_code which is for shareable tokens. It provides a concrete example of when to use it, making the purpose unambiguous and well-differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Use this when you need balances and milestones') and when to use an alternative ('Use get_referral_code instead when you only need the shareable token'). It also provides a contextual example ('call after a referral campaign to inspect earned credits'), offering clear guidance on usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_snapshot (Read-only, Idempotent)
Retrieve status and the full artifact listing for a prior analysis by snapshot_id. Use this to re-enumerate artifact paths without re-running analysis. Snapshots persist and can be shared between agents to avoid duplicate analysis costs.
| Name | Required | Description | Default |
|---|---|---|---|
| snapshot_id | Yes | Snapshot ID returned by analyze_repo or analyze_files | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
| artifacts | Yes | |
| project_id | Yes | |
| snapshot_id | Yes | |
| artifact_count | Yes | |
| programs_executed | No | |
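For illustration, a hypothetical MCP `tools/call` payload for get_snapshot. Only the tool name and the `snapshot_id` argument come from the tables above; the JSON-RPC envelope and the placeholder ID are assumptions, and the exact wrapper depends on your MCP client.

```python
import json

# Hypothetical tools/call payload for get_snapshot.
# "snap_abc123" is a placeholder; pass the snapshot_id returned
# by a prior analyze_repo or analyze_files call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_snapshot",
        "arguments": {"snapshot_id": "snap_abc123"},
    },
}
print(json.dumps(request, indent=2))
```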
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the tool retrieves status and listings (read-only implied), snapshots persist (durability), and sharing snapshot_id avoids duplicate costs (performance/efficiency consideration). It doesn't cover rate limits or error conditions, but provides substantial context beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with three sentences that each earn their place: first states core functionality, second explains usage context, third adds important behavioral details about persistence and cost avoidance. No wasted words, front-loaded with the main purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (retrieving persisted analysis results), no annotations, and no output schema, the description does well by explaining what the tool returns (status and artifact listing), persistence characteristics, and cost implications. It could mention the return format or error cases, but covers the essential context for agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter snapshot_id, which is already documented in the schema. The description adds minimal value by noting that snapshot_id comes from analyze_repo or analyze_files, but doesn't provide additional syntax or format details beyond what the schema states. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve status and full artifact listing') and resource ('a prior analysis by snapshot_id'), distinguishing it from siblings like analyze_files (which creates snapshots) and get_artifact (which retrieves individual artifacts). It explicitly mentions the purpose of re-enumerating artifact paths without re-running analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('to re-enumerate artifact paths without re-running analysis') and when not to use it (avoid duplicate analysis costs). It also distinguishes from siblings by noting that snapshot_id comes from analyze_repo or analyze_files, making the workflow clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
improve_my_agent_with_axis
Analyze an agent codebase and return a prioritized AXIS hardening plan. Requires Authorization: Bearer <api_key>; this creates a snapshot and may return auth, quota, file-limit, or validation errors. Example: pass your agent source files to see missing AGENTS.md, CLAUDE.md, and MCP config gaps. Use this when you want recommendations and missing-context detection. Use analyze_files instead when you want the full artifact bundle directly.
| Name | Required | Description | Default |
|---|---|---|---|
| files | Yes | Source files of the agent to analyze | |
| project_name | Yes | Name of the agent/project to improve | |
Output Schema
| Name | Required | Description |
|---|---|---|
| analysis | Yes | |
| call_again | Yes | |
| mcp_config | Yes | |
| snapshot_id | Yes | |
| project_name | Yes | |
| improvement_plan | Yes | |
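A sketch of the arguments object for improve_my_agent_with_axis. The schema does not state the shape of `files`, so the `{path, content}` form is an assumption borrowed from the sibling tool prepare_agentic_purchasing; file paths and contents are placeholders.

```python
# Hypothetical arguments for improve_my_agent_with_axis.
# The {path, content} file shape is assumed from the sibling tool
# prepare_agentic_purchasing; values below are placeholders.
arguments = {
    "files": [
        {"path": "agent/main.py", "content": "# agent entrypoint ..."},
        {"path": "agent/mcp_config.json", "content": "{}"},
    ],
    "project_name": "my-agent",
}
# Per the description, the call also requires an
# Authorization: Bearer <api_key> header.
```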
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns a 'hardening report' with specific artifacts, mentions it requires an API key, and implies it performs analysis. However, it lacks details on rate limits, error handling, authentication specifics beyond the API key, or whether the operation is idempotent. For a meta-tool with no annotation coverage, this leaves behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by details on returns and usage. It avoids redundancy, but could be slightly more concise by integrating the 'Essentially' clause more smoothly. Overall, it's efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (meta-analysis with 2 parameters) and lack of annotations and output schema, the description does a fair job by outlining purpose, returns, and prerequisites. However, it doesn't fully compensate for the missing output schema (no details on report structure or error formats) and sparse behavioral transparency, leaving room for improvement in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('files' and 'project_name'). The description adds context by explaining that 'files' are 'source files of the agent to analyze' and implies 'project_name' is used for the 'agent/project to improve', but doesn't provide additional syntax, format, or constraints beyond what the schema offers. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'analyze your own agent's codebase and get back a hardening report with specific AXIS artifacts that will improve your agent's capabilities.' It uses specific verbs ('analyze', 'get back') and resources ('codebase', 'hardening report', 'AXIS artifacts'), and clearly distinguishes from siblings like 'analyze_files' or 'analyze_repo' by focusing on agent improvement with AXIS-specific outputs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Pass your source files and get back a prioritized improvement plan.' It distinguishes from alternatives by specifying the AXIS context and output types (e.g., 'recommended programs', 'missing context files'), and mentions prerequisites: 'Requires API key.' This clearly defines the tool's niche among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_programs (Read-only, Idempotent)
Inventory mode. List all 18 AXIS programs, their generators, pricing tier, and artifact paths. Free, no auth, and no side effects. Use search_and_discover_tools instead when you only have a keyword, or discover_commerce_tools when you need install and onboarding metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| programs | Yes | |
| pro_programs | Yes | |
| free_programs | Yes | |
| total_programs | Yes | |
| total_generators | Yes | |
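Since list_programs takes no parameters, a call reduces to the tool name with an empty arguments object. The envelope below is a hypothetical sketch; the exact wrapper depends on your MCP client.

```python
# Hypothetical tools/call payload for list_programs,
# which takes no arguments (free, no auth required).
request = {
    "method": "tools/call",
    "params": {"name": "list_programs", "arguments": {}},
}
```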
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates key behavioral traits: the tool requires no authentication, enumerates a fixed set of 18 programs with 86 generators, and returns specific data fields (tier and artifact paths). However, it doesn't mention potential limitations like rate limits or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with only two sentences that each serve a distinct purpose: the first defines the tool's function and output, the second provides usage guidelines. There is zero wasted language and it's perfectly front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description provides excellent coverage of what the tool does, when to use it, and key behavioral aspects. The only minor gap is the lack of output format details, but for a simple enumeration tool with no output schema, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so the baseline would be 3. However, the description adds value by explicitly stating 'No authentication required' and clarifying that this is a parameterless enumeration tool, which compensates for the lack of parameter documentation and elevates the score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all 18 AXIS programs') and resources involved (programs, generators, tier, artifact paths). It explicitly distinguishes this tool from its sibling 'search_and_discover_tools' by contrasting 'complete enumeration' versus 'keyword-based discovery', providing clear differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('for complete enumeration') and when to use an alternative ('Use search_and_discover_tools for keyword-based discovery'). It clearly defines the appropriate context and excludes the alternative use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prepare_agentic_purchasing
Prepare a codebase for agentic purchasing and return a readiness score plus commerce artifacts. Requires Authorization: Bearer <api_key>; paid analysis records a new snapshot and may return auth, quota, payment, file-limit, or validation errors. Example: submit checkout files with focus_areas=["sca","dispute"]. Use this when you need AP2/UCP/Visa, CE 3.0 dispute evidence, checkout, dispute, and negotiation hardening. Use discover_agentic_purchasing_needs instead when you only need workflow triage.
| Name | Required | Description | Default |
|---|---|---|---|
| files | Yes | Array of {path, content} objects representing source files | |
| focus | No | Analysis focus (default: purchasing) | |
| goals | Yes | Project goals | |
| agent_type | No | Consuming agent type hint | |
| frameworks | Yes | Detected or known frameworks | |
| focus_areas | No | Compliance focus areas | |
| project_name | Yes | Name of the project | |
| project_type | Yes | Project type (web_application, api_service, cli_tool, library, monorepo) | |
| referral_token | No | Optional referral token from another agent | |
| spending_window | No | Agent spending window | |
| budget_per_run_cents | No | Agent budget for this call in cents | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
| summary | Yes | |
| project_id | Yes | |
| snapshot_id | Yes | |
| artifact_count | Yes | |
| programs_executed | Yes | |
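A hedged sketch of the arguments for prepare_agentic_purchasing, following the tool's own example of `focus_areas=["sca","dispute"]`. File contents, framework names, and the project name are placeholders; only the keys and the required/optional split come from the parameter table above.

```python
# Hypothetical arguments for prepare_agentic_purchasing.
# focus_areas mirrors the example in the tool description;
# other values are illustrative placeholders.
arguments = {
    "files": [{"path": "src/checkout.ts", "content": "// checkout flow ..."}],
    "goals": "Harden checkout and dispute handling for agentic purchasing",
    "frameworks": ["express"],
    "project_name": "storefront",
    "project_type": "web_application",
    "focus_areas": ["sca", "dispute"],
}
# Sanity-check that all required parameters are present.
required = {"files", "goals", "frameworks", "project_name", "project_type"}
assert required <= set(arguments)
```

Note this is a paid call: it requires an Authorization: Bearer header, records a new snapshot, and may fail with auth, quota, payment, file-limit, or validation errors.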
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it mentions authorization requirements ('Requires Authorization: Bearer <api_key>'), potential errors ('may return auth, quota, payment, file-limit, or validation errors'), and that paid analysis records a new snapshot. Annotations cover basic safety (readOnlyHint=false, destructiveHint=false), but the description provides operational details that help the agent understand execution implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. Each sentence adds value: purpose, requirements/errors, example, usage guidelines. While slightly dense, there's minimal waste - every clause serves to clarify tool behavior or usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (11 parameters, mutation capability), rich annotations, and the presence of an output schema, the description provides excellent contextual completeness. It covers purpose, authorization, error conditions, usage guidelines, and distinguishes from alternatives - addressing what the structured fields don't explicitly convey about operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 11 parameters thoroughly. The description adds minimal parameter semantics beyond the schema - it only mentions 'focus_areas' in the example and implies 'files' usage. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('prepare a codebase for agentic purchasing') and resources ('return a readiness score plus commerce artifacts'). It explicitly distinguishes from sibling 'discover_agentic_purchasing_needs' by stating when to use each tool, making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when you need AP2/UCP/Visa, CE 3.0 dispute evidence, checkout, dispute, and negotiation hardening') and when to use an alternative ('Use discover_agentic_purchasing_needs instead when you only need workflow triage'). This gives clear context for selection among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_and_discover_tools (Read-only, Idempotent)
Search AXIS programs by keyword and return ranked matches with artifact paths. Free, no auth, and no stateful side effects. Example: q=checkout returns commerce-relevant programs first. Use this when you know the outcome you want but not the right program. Use list_programs instead for the full catalog, discover_commerce_tools for install metadata, or discover_agentic_purchasing_needs for purchasing-specific triage.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Search query — keyword or phrase | |
| program | No | Optional: filter results to a specific program name | |
Output Schema
| Name | Required | Description |
|---|---|---|
| query | Yes | |
| results | Yes | |
| total_matches | Yes | |
| program_filter | Yes | |
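Two hypothetical call shapes for search_and_discover_tools: a bare keyword search (the `q=checkout` example from the description) and the same search narrowed with the optional `program` filter. The program name in the second payload is illustrative, not a confirmed AXIS program identifier.

```python
# Hypothetical search_and_discover_tools payloads.
# Keyword search from the tool's own example:
search = {
    "name": "search_and_discover_tools",
    "arguments": {"q": "checkout"},
}
# Same query narrowed to one program; the program name
# "ap2-compliance" is a placeholder, not a confirmed identifier.
narrowed = {
    "name": "search_and_discover_tools",
    "arguments": {"q": "checkout", "program": "ap2-compliance"},
}
```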
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the tool is 'context-efficient' for token-saving, 'No authentication required' clarifies access needs, and it specifies that results include 'ranked matches with capability tags, artifact paths, and example API calls'. However, it doesn't mention rate limits or error handling, leaving some behavioral aspects uncovered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. Every sentence adds value: the first defines the tool, the second provides usage guidance, the third gives examples, and the fourth covers authentication. While slightly dense due to example lists, it avoids redundancy and efficiently communicates essential information without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is largely complete. It covers purpose, usage, parameters, authentication, and examples. However, without an output schema, it could benefit from more detail on result structure (e.g., format of 'ranked matches'), and it doesn't address potential limitations like search scope or performance. Still, it provides sufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds significant value beyond the schema by providing semantic context: it explains that 'q' accepts keywords or phrases with concrete examples ('checkout payment', 'debug logs'), clarifies that omitting 'q' lists all programs, and lists searchable terms like 'AP2 compliance' and 'Visa Intelligent Commerce'. This enhances understanding of parameter usage beyond basic schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Keyword search across all 18 AXIS programs and 86 generators. Returns ranked matches with capability tags, artifact paths, and example API calls.' This is a specific verb ('search') with clear resources ('AXIS programs and generators') and output details. It distinguishes itself from siblings like 'list_programs' by emphasizing search functionality rather than simple listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'call this before loading full tool schemas to find the right program without wasting tokens.' It also distinguishes usage from alternatives by noting that omitting 'q' lists all programs alphabetically, which contrasts with more targeted sibling tools like 'discover_agentic_commerce_tools'. Examples further clarify appropriate contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.