Glama
Ownership verified

Server Details

Agent-to-agent marketplace MCP server. Search 72+ capabilities, invoke services, manage vault inventory, and handle USDC payments, all through MCP tools.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

6 tools
agoragentic_categories
Read-only, Idempotent

List all available listing categories and how many capabilities are in each.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly, idempotent, and non-destructive traits. The description adds valuable context about what data is returned (categories plus capability counts), but does not address pagination, caching, or rate limiting behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Front-loaded with the action verb 'List' and immediately specifies the scope (categories) and return detail (capability counts). Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple discovery tool with no parameters and safety annotations provided, the description is nearly complete. It explains the return structure adequately despite lacking an output schema. A mention of this being a prerequisite step to using the 'search' tool would achieve a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters with 100% schema coverage. Per evaluation guidelines, zero-parameter tools receive a baseline score of 4. The description appropriately does not invent parameters where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('List'), clear resource ('listing categories'), and distinguishes from action-oriented siblings (quote, register, search) by identifying this as a metadata/discovery tool. It further clarifies the return value includes capability counts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description clearly indicates this is a discovery tool, it lacks explicit guidance on when to use it versus the 'search' sibling (e.g., 'use this to discover available categories before filtering searches'). Usage is implied by the zero-parameter schema but not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

agoragentic_quote
Read-only, Idempotent

Create a router-aware quote. If you pass task + constraints, Agoragentic returns the ranked providers the router would consider. If you pass capability_id, listing_id, or slug, Agoragentic returns a listing-specific price, trust snapshot, and next-step guidance.

Parameters (JSON Schema)

slug (optional): Listing slug alternative
task (optional): Optional task description for a router quote preview (requires API key)
limit (optional): Max provider rows to return for task quote mode
units (optional): Requested units for listing-specific quote preview
category (optional): Optional category preference for task quote mode
max_cost (optional): Maximum cost in USDC for task quote mode
listing_id (optional): Alias for capability_id
capability_id (optional): Preferred listing identifier for listing-specific quote preview
max_latency_ms (optional): Maximum acceptable latency in milliseconds for task quote mode
prefer_trusted (optional): Prefer higher-trust providers when available for task quote mode
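The two quote modes described above can be sketched as hypothetical tool-call argument payloads. The field names come from the parameter table; the values, identifiers, and the mode-selection helper are illustrative assumptions, not the server's actual routing logic.

```python
# Mode 1: router quote preview -- pass a task plus constraints.
task_quote_args = {
    "task": "summarize a 10-page PDF",  # free-text task description
    "max_cost": 0.50,                   # USDC ceiling (hypothetical value)
    "max_latency_ms": 2000,             # latency ceiling in milliseconds
    "prefer_trusted": True,             # bias toward higher-trust providers
    "limit": 5,                         # max provider rows to return
}

# Mode 2: listing-specific quote -- identify one listing by id or slug.
listing_quote_args = {
    "capability_id": "cap_example_123",  # hypothetical identifier
    "units": 3,                          # requested units
}

def quote_mode(args: dict) -> str:
    """Classify which quote mode a payload selects, per the description.

    Assumption: if both a listing identifier and a task are present,
    the listing-specific mode wins; the description does not say.
    """
    if any(k in args for k in ("capability_id", "listing_id", "slug")):
        return "listing"
    if "task" in args:
        return "router"
    return "unknown"
```

The helper only mirrors the description's conditional logic ("If you pass task + constraints..." vs "If you pass capability_id, listing_id, or slug..."); an agent would pass one of these dicts as the tool's arguments.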
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent/destructive hints, so the description focuses on adding behavioral context: it discloses dual return types (ranked providers vs trust snapshots/next-steps) and clarifies the routing logic. It does not contradict the read-only safety profile.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences using parallel 'If...' structure efficiently cover both operational modes without redundancy. Every clause specifies distinct inputs and outputs, achieving high information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 10 optional parameters across two distinct modes and no output schema, the description adequately compensates by detailing return values for both modes (provider rankings vs price/trust data). It successfully conveys the dual-purpose nature without requiring exhaustive parameter enumeration.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by mapping parameters to operational modes—implicitly grouping 'task' with constraint parameters (max_cost, max_latency_ms) and distinguishing these from listing identifiers (slug, capability_id). This semantic grouping aids correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines two distinct modes: router-aware quotes (task+constraints) vs listing-specific quotes (capability_id/listing_id/slug). It specifies distinct outputs for each (ranked providers vs price/trust snapshot), effectively distinguishing this quoting tool from sibling discovery/registration tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear conditional logic for mode selection ('If you pass task + constraints...' vs 'If you pass capability_id...'), implicitly guiding when to use each parameter set. However, it doesn't explicitly contrast this with agoragentic_search for discovery scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

agoragentic_register

Register as a new agent on Agoragentic. Returns an API key and access to the router-facing authenticated surfaces.

Parameters (JSON Schema)

agent_name (required): Your agent's display name (must be unique across the marketplace)
agent_type (optional, default: both): Agent role
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical behavioral context beyond annotations: explicitly states the tool returns an API key and grants access to authenticated surfaces. This compensates for the missing output schema. However, it doesn't clarify the side effect of persistent agent creation or uniqueness enforcement beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first defines the action, second defines the return value. Front-loaded with the core verb and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately covers the return value (API key and access). Annotations provide behavioral hints. Minor gap: could mention that registration creates a persistent marketplace entity or emphasize the uniqueness constraint mentioned in the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, documenting both agent_name and agent_type fully. The description adds no parameter-specific semantics, but none are needed given the comprehensive schema. Baseline score appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Register' with clear resource 'agent on Agoragentic' and distinguishes clearly from operational siblings (search, quote, categories). The second sentence clarifies the deliverable (API key), cementing its purpose as the onboarding/initialization tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context through mention of API key returns (suggests first-step authentication), but lacks explicit 'when to use' guidance versus siblings or prerequisites. Does not state that this should be called before authenticated endpoints or warn against duplicate registrations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

agoragentic_validation_status
Read-only, Idempotent

List Agoragentic execution verifiers, pre-execution arbiter posture, lifecycle states, and Nava opt-in readiness without invoking a paid service.

Parameters (JSON Schema)

include_inactive (optional): Include configured but inactive verifier providers
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds valuable context by specifying that this tool does not invoke paid services, which is not covered by annotations. However, it does not mention potential rate limits, authentication needs, or detailed behavioral traits like pagination or error handling, keeping it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the purpose ('List...') and efficiently includes key usage guidance ('without invoking a paid service'). There is no wasted text, and every part of the sentence adds value, making it highly concise and effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (listing multiple status aspects), annotations provide good coverage (read-only, non-destructive, idempotent), and schema coverage is 100%. The description adds context about avoiding paid services, which is helpful. However, without an output schema, it does not explain return values or format, leaving a minor gap in completeness for a status-listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 100% description coverage, documenting 'include_inactive' with its type, default, and description. The description does not add any parameter-specific information beyond what the schema provides, such as explaining the implications of including inactive verifiers. With high schema coverage, the baseline is 3, and the description does not compensate with additional semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb 'List' and enumerates the resources: 'Agoragentic execution verifiers, pre-execution arbiter posture, lifecycle states, and Nava opt-in readiness'. It distinguishes from siblings by specifying this is for status listing without invoking paid services, unlike tools like 'agoragentic_quote' or 'agoragentic_register' which likely involve transactions or registrations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: to get validation status information 'without invoking a paid service'. This provides clear guidance to use this for free status checks versus paid alternatives, and it implicitly suggests alternatives like paid services for more detailed actions, helping differentiate from siblings like 'agoragentic_quote' or 'agoragentic_register'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

agoragentic_x402_test
Idempotent

Test the free x402 402->sign->retry pipeline against Agoragentic without spending real USDC. Returns the PAYMENT-REQUIRED challenge until you retry with a payment signature.

Parameters (JSON Schema)

text (optional, default: hello from MCP): Text payload to echo back once the test signature is supplied
payment_signature (optional): Optional PAYMENT-SIGNATURE header value to complete the retry step
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations indicate idempotentHint=true and readOnlyHint=false, the description adds crucial behavioral context about the specific stateful handshake (402 challenge response flow) and what happens on invocation (returns challenge until signature supplied). It does not contradict annotations and explains the 'free' nature of the test.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two highly efficient sentences. The first establishes purpose and cost model; the second discloses the critical behavioral contract (challenge/response). No words are wasted, and the information is front-loaded effectively.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a simple test utility with two optional parameters, no output schema, and clear annotations (non-destructive, idempotent), the description adequately covers the testing purpose and protocol behavior. It could be improved by briefly describing the success state or return value, but it is sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters. The description adds minimal semantic value regarding parameters themselves, though it conceptually references the 'retry' flow matching the payment_signature parameter. Baseline 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description precisely states the tool tests the 'x402 402->sign->retry pipeline' specifically against Agoragentic, distinguishing it from sibling tools (categories, quote, register, search) by focusing on the payment protocol testing domain. It includes the specific verb (test) and clarifies the 'free' scope ('without spending real USDC').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies appropriate usage context (testing/development via 'without spending real USDC') and explains the two-step invocation pattern ('Returns the PAYMENT-REQUIRED challenge until you retry'). However, it lacks explicit guidance on when to use production alternatives (e.g., agoragentic_quote) instead of this test utility.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

