Server Details

Multi-party agent barter — non-cash capability exchange with cryptographic receipts

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: srotzin/hive-mcp-barter
GitHub Stars: 0

Tool Descriptions: B

Average 3.7/5 across 3 of 3 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct purpose: barter_arbitrage_book reports P&L, barter_discover lists endpoints, and barter_quote_curve estimates prices. No functional overlap.

Naming Consistency: 4/5

All tools use the 'barter_' prefix, but the second part mixes verbs (discover) and nouns (arbitrage_book, quote_curve), creating slight inconsistency.

Tool Count: 4/5

With 3 read-only tools, the count is at the lower end of the typical range but still acceptable for a focused discovery and analysis server.

Completeness: 2/5

The server lacks any write or trading tools (e.g., execute_swap, create_offer), which are natural for a barter platform. Read-only analysis only.

Available Tools

3 tools
barter_arbitrage_book: A

Get today's P&L: probes sent, counters accepted, USDC realized spread. Tier 0, free, read-only.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It states 'read-only' indicating no side effects, and 'free' for cost. No contradictions or missing behavioral traits for a simple retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence front-loaded with the key action and result, followed by succinct details. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool, the description fully explains what is returned (three specific metrics) and provides cost and safety info. No output schema needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist; input schema is empty with 100% coverage. Description adds no parameter info, but none is needed. Baseline for 0 parameters is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns today's P&L with specific metrics (probes sent, counters accepted, USDC realized spread). It is distinct from sibling tools barter_discover and barter_quote_curve, which are about discovery and quotes respectively.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Tier 0, free, read-only', providing clear context on when to use (any time, safe, no cost). No explicit exclusions or alternatives, but usage is well implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
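
The three metrics the description promises (probes sent, counters accepted, USDC realized spread) imply a simple result shape. As a hedged sketch, a client might decode and summarize the tool's output like this; the JSON field names are assumptions, not the server's documented schema:

```python
import json

# Hypothetical P&L payload; field names are assumptions based on the
# three metrics named in the tool description.
payload = json.loads(
    '{"probes_sent": 14, "counters_accepted": 5, "usdc_realized_spread": 1.73}'
)

def summarize(pnl: dict) -> str:
    """One-line daily P&L summary; guards against division by zero probes."""
    rate = pnl["counters_accepted"] / pnl["probes_sent"] if pnl["probes_sent"] else 0.0
    return (f'{pnl["probes_sent"]} probes, {pnl["counters_accepted"]} accepted '
            f'({rate:.0%}), {pnl["usdc_realized_spread"]:.2f} USDC realized')

print(summarize(payload))  # → 14 probes, 5 accepted (36%), 1.73 USDC realized
```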

barter_discover: A

List 402-enabled MCP endpoints from a public registry (glama, smithery, mcp.so) with last-seen asking prices. Tier 0, free, read-only.

Parameters (JSON Schema)

- limit (optional): Max endpoints to return (default 50).
- registry (optional): Source registry to scrape.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It explicitly states 'read-only' and 'free', which are behavioral traits. The term '402-enabled' hints at payment-related endpoints, but the exact side effects (if any) are not clarified. Overall, good disclosure for a simple read tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence plus a brief tagline, making it very concise. However, the term '402-enabled' (i.e., HTTP 402 Payment Required) may be opaque to some users, slightly reducing clarity. Overall, every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema is provided, and the description does not explain the return format or what fields are included beyond 'last-seen asking prices'. For a list tool, this omission reduces completeness. Input parameters are well-covered, but output expectations are missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so both parameters are already documented adequately there. The description's only addition is naming the specific registries (glama, smithery, mcp.so); beyond that, no parameter-specific guidance is given.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool lists 402-enabled MCP endpoints from three registries with last-seen asking prices. The verb 'list' and resource 'MCP endpoints' are specific, and the description distinguishes from sibling tools like barter_arbitrage_book and barter_quote_curve.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Indicates the tool is free and read-only ('Tier 0, free, read-only'), but does not explicitly state when to use it over alternatives or provide exclusions. The sibling names are different enough to imply separate use cases, but no direct guidance is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
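
Because the tool documents its `limit` and `registry` parameters but not its return format, a client has to assume a result shape. A minimal sketch of client-side handling, where the record fields (`url`, `registry`, `last_seen_ask_usdc`) are invented for illustration:

```python
# Hypothetical discovery records; the field names are assumptions since the
# tool does not document its output schema.
ENDPOINTS = [
    {"url": "https://a.example/mcp", "registry": "glama",    "last_seen_ask_usdc": 0.12},
    {"url": "https://b.example/mcp", "registry": "smithery", "last_seen_ask_usdc": 0.05},
    {"url": "https://c.example/mcp", "registry": "mcp.so",   "last_seen_ask_usdc": 0.30},
]

def discover(endpoints, registry=None, limit=50):
    """Mirror the tool's parameters: optional registry filter plus a result cap."""
    rows = [e for e in endpoints if registry is None or e["registry"] == registry]
    rows.sort(key=lambda e: e["last_seen_ask_usdc"])  # cheapest asking price first
    return rows[:limit]

print([e["url"] for e in discover(ENDPOINTS, limit=2)])
```

Sorting by last-seen asking price is a design choice here, not documented server behavior; it simply makes the cheapest candidates surface first under a small `limit`.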

barter_quote_curve: C

Build a demand curve for a target by sending up to 5 probes at descending floor pcts. Returns the reservation-price estimate. Tier 0, free, read-only.

Parameters (JSON Schema)

- n_probes (optional): Number of probes (default 3).
- target_url (required): HTTPS endpoint that speaks 402 quote envelopes.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full burden. It claims 'read-only' but sending probes to an external endpoint may have side effects (e.g., network calls, possible impact on target). It does not disclose idempotency, rate limits, failure behavior, or data persistence. 'Tier 0, free' adds cost info but not behavioral detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at three sentences, front-loading the purpose and key details. No redundant phrases. However, it could be more structured (e.g., bullet points) without increasing length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers basic purpose, process, and constraints (max 5 probes, free, read-only). But lacks return format, error handling, and potential side effects of probes. For a tool hitting an external endpoint, more completeness on network behavior and response structure is needed for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, so baseline is 3. The description adds 'descending floor pcts' which is not in schema, but otherwise does not significantly enhance understanding of parameters beyond schema defaults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool builds a demand curve by sending probes and returns a reservation-price estimate. It uses specific verbs and resources, distinguishing it from siblings like barter_arbitrage_book and barter_discover, though it does not explicitly differentiate them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance. The phrase 'Tier 0, free, read-only' hints at conditions but does not explain alternatives or context for selecting this tool over siblings. Usage is merely implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
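
The description implies a simple search strategy: offer up to 5 prices at descending percentages of a starting price and take the lowest accepted offer as the reservation-price estimate. The sketch below is an assumption about that strategy, with a mock target standing in for a real 402 endpoint; the floor schedule (100% down to 50% of list price) and the acceptance rule are invented for illustration:

```python
def make_target(reservation_price):
    """Mock 402 endpoint: accepts any offer at or above a hidden reservation price."""
    return lambda offer: offer >= reservation_price

def estimate_reservation(target_accepts, list_price, n_probes=3):
    """Probe at descending floor percentages; return the lowest accepted offer."""
    n_probes = min(n_probes, 5)  # the description caps probing at 5
    # Evenly spaced percentages from 100% of list price down to a 50% floor.
    pcts = [1.0 - i * (0.5 / max(n_probes - 1, 1)) for i in range(n_probes)]
    lowest_accepted = None
    for pct in pcts:
        offer = round(list_price * pct, 2)
        if target_accepts(offer):
            lowest_accepted = offer  # accepted: keep probing lower
        else:
            break  # first rejection brackets the reservation price
    return lowest_accepted

accepts = make_target(reservation_price=6.0)
print(estimate_reservation(accepts, list_price=10.0, n_probes=5))  # → 6.25
```

Note how this sketch also illustrates the Behavior concern above: even if the tool writes nothing locally, every probe is an outbound call to the target endpoint.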

