paycrow
Server Details
Escrow protection for agent payments on Base — USDC held in smart contract until job completion.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 10 of 10 tools scored.
Most tools have distinct purposes, but there is some overlap between trust_gate, trust_onchain_quick, and trust_score_query, which all relate to checking agent trustworthiness. While each has a slightly different focus (quick check vs. full breakdown vs. decision recommendation), an agent might struggle to choose the right one in some scenarios. The escrow and payment tools are clearly differentiated.
The naming is mostly consistent with a verb_noun pattern (e.g., escrow_create, escrow_release, rate_service), but there are deviations like safe_pay (adjective_verb) and x402_protected_call (alphanumeric prefix). These outliers break the pattern slightly, though the majority of tools follow a clear convention.
With 10 tools, the count is well-scoped for the server's purpose of handling escrow, payments, and trust management. Each tool appears to serve a specific function in the workflow, from creating escrows to checking trust scores, without feeling excessive or insufficient for the domain.
The tool set provides comprehensive coverage for the escrow and payment domain, including creation, dispute, release, status checks, rating, and trust assessment. It supports both automated (safe_pay) and manual (x402_protected_call) workflows, with no obvious gaps in the lifecycle or functionality needed for secure transactions.
Available Tools
10 tools

escrow_create
Create a USDC escrow with built-in dispute resolution. Funds are locked on-chain until delivery is confirmed (release) or a problem is flagged (dispute). If disputed, an arbiter reviews and rules — the only escrow service with real dispute resolution on Base.
| Name | Required | Description | Default |
|---|---|---|---|
| seller | Yes | Ethereum address of the seller/service provider | |
| amount_usdc | Yes | Amount in USDC (e.g., 5.00 for $5) | |
| service_url | Yes | URL or identifier of the service being purchased (used for tracking) | |
| timelock_minutes | No | Minutes until escrow expires and auto-refunds | 30 |
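As a sketch of how an agent might invoke this tool, the following builds a JSON-RPC 2.0 `tools/call` envelope (the shape MCP uses for tool calls) with the four documented parameters. The seller address, amount, and service URL are illustrative values, not real ones.

```python
import json

# Hypothetical MCP tools/call request for escrow_create.
# All argument values are illustrative, not real addresses or prices.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "escrow_create",
        "arguments": {
            "seller": "0x1111111111111111111111111111111111111111",
            "amount_usdc": 5.00,
            "service_url": "https://example.com/api/translate",
            "timelock_minutes": 30,  # optional; defaults to 30
        },
    },
}

# Serialize for transport over Streamable HTTP.
payload = json.dumps(request)
```

The escrow ID returned by this call (the description does not document the exact response shape) is what the sibling tools `escrow_release`, `escrow_dispute`, and `escrow_status` take as their `escrow_id` argument.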
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden and successfully explains the core behavioral flow: funds locked on-chain, release trigger (delivery confirmation), dispute trigger (problem flagged), and arbitration outcome. Mentions network context (Base). Missing technical details like gas fees, authentication requirements, and expiration behavior (though timelock parameter is documented in schema).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with logical progression: definition, mechanism, and dispute resolution details. Slight inefficiency in the marketing claim ('the only escrow service with real dispute resolution on Base') which could be more concise, but overall every sentence contributes material information about the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a financial tool with 100% schema coverage but no output schema or annotations. Describes the escrow lifecycle and dispute resolution well, but omits what the tool returns (likely an escrow ID/address) and lacks warnings about irreversible blockchain transactions or prerequisites (e.g., sufficient USDC balance for amount + gas).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds valuable semantic context by connecting parameters to the business logic: 'seller' linked to 'delivery confirmation', 'amount_usdc' described as 'locked' funds, and the overall escrow mechanism contextualizes the purpose of all four parameters beyond their individual schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb (Create) + resource (USDC escrow) + distinguishing feature (built-in dispute resolution). Clearly distinguishes from siblings like safe_pay or trust_gate by emphasizing on-chain fund locking and arbitration, while explicitly referencing the escrow_release and escrow_dispute workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through the state machine description (funds locked until delivery confirmed or disputed), but lacks explicit guidance on when to choose this over direct payment alternatives like safe_pay. References sibling tools via parenthetical '(release)' and '(dispute)' but doesn't explicitly state selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
escrow_dispute
Flag a problem with delivery — PayCrow's key differentiator. Locks escrowed funds and triggers arbiter review. Unlike other escrow services that say 'no disputes, no chargebacks', PayCrow has real on-chain dispute resolution. Use when service was not delivered or quality was unacceptable.
| Name | Required | Description | Default |
|---|---|---|---|
| reason | Yes | Brief description of the problem for the arbiter | |
| escrow_id | Yes | The escrow ID to dispute | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden and succeeds: discloses that funds become 'locked' (critical state change) and that 'arbiter review' is triggered. Mentions 'on-chain dispute resolution', signaling an irreversible or serious process. Could improve by stating whether disputes are reversible and what the review timeline is.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured four-sentence flow: action → mechanics → differentiation → usage. Marketing claim about 'key differentiator' earns its place by explaining why this dispute mechanism exists vs competitors. Front-loaded with core action 'Flag a problem.'
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 2-parameter financial tool with no output schema. Covers locking mechanics and arbiter involvement. Missing only post-submission expectations (timeline, evidence requirements), but core dispute initiation is fully documented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('The escrow ID to dispute', 'Brief description of the problem for the arbiter'). Description text adds no parameter-specific semantics beyond what schema provides, earning baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with 'Flag a problem with delivery' and 'Locks escrowed funds and triggers arbiter review.' Clearly distinguishes from sibling tools like escrow_release (positive completion) and escrow_status (read-only check) by emphasizing the dispute/arbitration flow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use when service was not delivered or quality was unacceptable.' Clear negative-condition trigger. Minor gap: doesn't explicitly name escrow_release as the alternative for successful transactions, though this is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
escrow_release
Confirm delivery and release escrowed USDC to the seller. Only call this when you've verified the service/product was delivered correctly.
| Name | Required | Description | Default |
|---|---|---|---|
| escrow_id | Yes | The escrow ID to release | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the verification requirement (critical safety check) and financial nature (USDC transfer), but lacks details on irreversibility, failure modes, authorization requirements, or return values for this destructive financial operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences with zero waste: first establishes the action, second provides the critical usage constraint. Information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool, covering the core action and critical verification prerequisite. However, for a financial mutation tool without annotations or output schema, it lacks details on error conditions, return values, or side effects beyond the fund release.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (escrow_id fully documented), so the description meets the baseline expectation. The description does not add additional parameter semantics, but none are needed given complete schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb (release), resource (escrowed USDC), and recipient (seller), clearly distinguishing it from sibling tools like escrow_create (creation) and escrow_dispute (dispute resolution).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the critical precondition ('Only call this when you've verified the service/product was delivered correctly'), establishing when to use the tool. However, it does not explicitly name alternatives like escrow_dispute for cases where delivery failed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
escrow_status
Check the current state of an escrow (funded, released, disputed, expired, etc.)
| Name | Required | Description | Default |
|---|---|---|---|
| escrow_id | Yes | The escrow ID to check | |
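Since the description enumerates the possible states, an agent could poll this tool until the escrow settles. The sketch below assumes a generic `call_tool` client function (a stand-in, not part of the server's API) and uses the state names from the description; the actual response format is undocumented.

```python
# Sketch of polling escrow_status until a terminal state is reached.
# `call_tool` is a stand-in for an MCP client call; the state names come
# from the tool description (funded, released, disputed, expired).
TERMINAL_STATES = {"released", "disputed", "expired"}

def wait_for_settlement(call_tool, escrow_id, max_polls=10):
    """Return the first terminal state observed, or None if still pending."""
    for _ in range(max_polls):
        state = call_tool("escrow_status", {"escrow_id": escrow_id})
        if state in TERMINAL_STATES:
            return state
    return None

# Illustration with a fake client: "funded" twice, then "released".
responses = iter(["funded", "funded", "released"])
fake_call = lambda name, args: next(responses)
```

In practice an agent would sleep between polls; the loop is kept synchronous here for brevity.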
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context by enumerating possible states (funded, released, disputed, expired), hinting at the return values. However, it lacks information on error handling (e.g., invalid ID), authentication requirements, or whether the check is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence of 11 words. Every element serves a purpose: 'Check' defines the action, 'current state' defines the target, and the parenthetical list provides concrete examples without verbosity. No redundancy or filler present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 required parameter, simple query concept), 100% schema coverage, and the absence of annotations or output schema, the description is sufficiently complete. The enumerated state examples partially compensate for the missing output schema by indicating what information the tool returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'escrow_id' parameter ('The escrow ID to check'). The description mentions 'escrow' but does not add semantic details about ID format, valid ranges, or where to obtain the ID beyond what the schema already provides. Per the rubric, 100% schema coverage establishes a baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Check' with the clear resource 'current state of an escrow'. The parenthetical examples (funded, released, disputed, expired) effectively distinguish this query tool from its siblings (escrow_create, escrow_dispute, escrow_release) by indicating it returns status information rather than modifying state.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the verb 'Check' clearly signals this is a read-only query operation distinct from sibling mutation tools (create, dispute, release), there is no explicit guidance on when to use this versus alternatives, nor any mention of prerequisites like requiring a valid escrow_id from a prior create operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rate_service
Rate a completed escrow. After escrow_release, rate the seller's service quality (1-5 stars).
This builds the reputation data that makes PayCrow's trust scores meaningful over time. Both sides can rate: buyer rates seller's service quality, seller rates buyer's conduct.
Ratings are on-chain and permanent — they feed directly into trust scoring.
| Name | Required | Description | Default |
|---|---|---|---|
| stars | Yes | Rating 1-5 stars (1=terrible, 5=excellent) | |
| escrow_id | Yes | The escrow ID to rate (must be in Released state) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral burden excellently. Discloses critical traits: 'on-chain and permanent' (immutability warning), 'feed directly into trust scoring' (side effects), and bidirectional rating capability (buyer/seller both eligible).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste. Front-loaded with action ('Rate a completed escrow'), followed by business value, bidirectional usage rules, and critical permanence warning. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a 2-parameter mutation tool without an output schema. Covers workflow timing, actor eligibility, permanence implications, and trust system integration. Could briefly mention success/failure behavior, but it is adequately complete given the constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete descriptions for 'escrow_id' (including state requirement) and 'stars' (including scale semantics). The description reinforces the 1-5 scale and completed escrow context but does not add syntax or format details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Rate' targeting 'completed escrow' and 'seller's service quality'. The phrase 'After escrow_release' distinguishes this from sibling tools like escrow_create or escrow_dispute by establishing clear workflow positioning.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear temporal context ('After escrow_release') and eligibility ('Both sides can rate'), establishing when to use the tool in the lifecycle. Lacks explicit 'when not to use' warnings or named alternatives, though the sequencing is strongly implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
safe_pay
The smart way to pay an agent. Checks their trust score first, then auto-configures escrow protection based on risk.
Flow: Check trust → Set protection level → Create escrow → Call API → Verify → Auto-release or auto-dispute.
Protection levels (automatic):
- High trust agent → 15min timelock, proceed normally
- Moderate trust → 60min timelock, payment capped at $25
- Low trust → 4hr timelock, payment capped at $5
- Unknown/caution → BLOCKED — will not send funds
This is the recommended tool for paying any agent. If you need manual control, use x402_protected_call instead.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The API endpoint URL to call | |
| body | No | Request body (for POST/PUT) | |
| method | No | HTTP method (default: GET) | GET |
| headers | No | HTTP headers to include | |
| amount_usdc | Yes | Amount to pay in USDC | |
| seller_address | Yes | Ethereum address of the agent you're paying | |
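The protection-level table above can be read as a simple mapping from trust tier to escrow parameters. The sketch below implements that mapping as described; the function name and tier labels are assumptions, since the server does not document how it categorizes trust scores internally.

```python
# Sketch of safe_pay's described protection-level logic. Timelocks and
# caps come from the description; the tier labels are assumed.
def protection_for(trust_tier, amount_usdc):
    """Return (timelock_minutes, capped_amount), or None if payment is blocked."""
    if trust_tier == "high":
        return 15, amount_usdc              # proceed normally
    if trust_tier == "moderate":
        return 60, min(amount_usdc, 25.0)   # capped at $25
    if trust_tier == "low":
        return 240, min(amount_usdc, 5.0)   # 4hr timelock, capped at $5
    return None                             # unknown/caution: blocked
```

Note how the caps interact with `amount_usdc`: a $100 payment to a moderate-trust agent would be capped at $25 rather than rejected, while an unknown agent gets no payment at all.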
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Excellently discloses internal decision logic (trust score checks, automatic protection levels with specific timelocks/caps, blocking conditions for unknown agents) and operational flow (escrow creation → API call → auto-release/dispute). Minor gap: lacks description of failure modes or return values given no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with summary sentence, logical flow diagram, bulleted protection levels, and alternative recommendation. Every sentence conveys essential behavioral or contextual information; no redundancy despite covering complex multi-step orchestration logic.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage of risk-based automation logic and sibling differentiation appropriate for a complex 6-parameter orchestration tool. With no output schema, description should ideally specify what the tool returns (e.g., transaction receipt, success status) to be complete, but the operational documentation is thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description contextualizes parameters within the payment flow ('Call API' explains url/body/method/headers usage) but does not add syntax, format constraints, or semantic details beyond what the schema already provides for individual fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('pay an agent') and resource identified, with clear auto-configuration value proposition. Explicitly distinguishes from sibling x402_protected_call ('If you need manual control, use x402_protected_call instead'), clarifying its position as the automated orchestration layer over raw escrow/trust tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('This is the recommended tool for paying any agent') and provides clear alternative for different needs (manual control → x402_protected_call). The protection level breakdown implicitly guides usage by showing automatic constraints that require no manual configuration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trust_gate
Should you pay this agent? Check before sending money. Returns a go/no-go decision with recommended escrow protection parameters.
Unlike other trust services, PayCrow ties trust directly to escrow protection:
- High trust → shorter timelock, proceed with confidence
- Low trust → longer timelock, smaller amounts recommended
- Caution → don't proceed, or use maximum protection
This is the tool to call BEFORE escrow_create or safe_pay.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Ethereum address of the agent you're about to pay | |
| intended_amount_usdc | No | How much you plan to pay (helps calibrate the recommendation) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains the return value semantics ('go/no-go decision') and the behavioral logic mapping trust levels to specific escrow parameters. It implies this is a read-only check operation ('Check before sending money'), though it doesn't explicitly state side effects or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the core question. Uses a bullet-point structure to efficiently convey the three trust-level behaviors without verbosity. The final sentence explicitly anchors it in the tool sequence. No filler text; every sentence clarifies purpose, behavior, or workflow position.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description compensates by detailing the three possible decision outcomes and their escrow implications. It adequately covers the 2-parameter input space and contextualizes the tool among its 8 siblings. Minor gap: could hint at the data structure of the 'go/no-go' response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with 'address' and 'intended_amount_usdc' fully documented in the schema. The description implies parameter usage through context ('pay this agent,' 'sending money') but does not add syntactic details or semantic constraints beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific actionable question ('Should you pay this agent?') and clearly states it returns a 'go/no-go decision with recommended escrow protection parameters.' It explicitly distinguishes itself from sibling trust tools (trust_onchain_quick, trust_score_query) by stating 'Unlike other trust services, PayCrow ties trust directly to escrow protection,' and clarifies its unique role in the workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit temporal sequencing: 'This is the tool to call BEFORE escrow_create or safe_pay.' It also details decision outcomes (High trust → shorter timelock, Low trust → longer timelock, Caution → don't proceed), giving clear guidance on how to interpret and act on results relative to the payment workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trust_onchain_quick
Quick on-chain reputation check using only the PayCrow Reputation contract. Free, no API keys needed. Use trust_score_query for the full composite score.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Ethereum address of the agent to look up | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully adds critical behavioral context: it discloses cost ('Free'), authentication requirements ('no API keys needed'), and data source limitations ('using only the PayCrow Reputation contract'). However, it does not clarify the return format or error handling behavior when an address is not found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three efficient sentences that front-load the purpose, followed by behavioral constraints and sibling differentiation without redundant wording. Every sentence earns its place by conveying distinct operational or comparative information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, no nested objects) and lack of annotations, the description provides sufficient context by naming the specific contract used and contrasting with the full alternative. While it omits return value specifics, the functional scope is adequately defined for an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'address' parameter ('Ethereum address of the agent to look up'), so the description appropriately relies on the schema rather than repeating parameter details. The baseline score of 3 reflects adequate coverage where the schema handles the documentation burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('check') and resource ('on-chain reputation') while explicitly limiting scope to the 'PayCrow Reputation contract.' It clearly distinguishes itself from sibling tool `trust_score_query` by positioning this as the 'quick' option versus the 'full composite score' alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly directs users to the alternative tool `trust_score_query` when they need the 'full composite score,' establishing clear selection criteria between siblings. It also provides practical usage context by stating 'Free, no API keys needed,' which helps determine when this tool is appropriate based on authentication constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trust_score_query
Full trust score breakdown for an agent address. Aggregates 4 on-chain sources: PayCrow escrow history, ERC-8004 agent identity, Moltbook social karma, and Base chain activity. Returns 0-100 score with per-source details. For a quick go/no-go decision, use trust_gate instead.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Ethereum address of the agent to look up | |
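Since the tool exposes only the single required `address` argument, a call is simple to construct. Below is a minimal sketch of an MCP `tools/call` request for trust_score_query; the address is a placeholder, and the result type is an assumption inferred from the description ("0-100 score with per-source details"), not a documented schema.

```typescript
// Hypothetical JSON-RPC payload for calling trust_score_query.
// The address is a placeholder, not a real agent.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "trust_score_query",
    arguments: {
      address: "0x0000000000000000000000000000000000000001",
    },
  },
};

// Assumed response shape for illustration only: the description
// promises a 0-100 composite plus per-source details, but the
// server publishes no output schema.
type TrustScoreResult = {
  score: number; // 0-100 composite
  sources: {
    paycrow_escrow: number;
    erc8004_identity: number;
    moltbook_karma: number;
    base_activity: number;
  };
};

console.log(JSON.stringify(request));
```

Because there is no output schema, an agent consuming this tool should treat the result shape defensively rather than relying on the fields sketched above.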
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully discloses the 4 specific on-chain data sources (PayCrow, ERC-8004, Moltbook, Base) and the return format (0-100 score with per-source details). However, it lacks explicit mention of rate limits, caching behavior, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 states purpose, sentence 2 details behavioral traits (sources and return format), and sentence 3 provides usage guidelines. Information is front-loaded with the core purpose in the first four words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates by specifying the return structure (0-100 score with per-source details). It adequately explains the aggregation complexity across 4 sources. A minor gap remains regarding error handling or edge cases (e.g., invalid addresses), but it is sufficiently complete for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('Ethereum address of the agent to look up'). The description references 'agent address,' which aligns with but does not substantially augment the schema's semantics. With complete schema coverage, the baseline score of 3 is appropriate as the description does not need to compensate for missing schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Full trust score breakdown for an agent address,' providing a specific action ('breakdown'), a resource (trust score), and a scope (agent address). It clearly distinguishes itself from sibling trust_gate by contrasting the detailed breakdown with a quick go/no-go decision.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the alternative: 'For a quick go/no-go decision, use trust_gate instead.' This creates clear differentiation between this detailed aggregation tool and its sibling, guiding the agent to select the appropriate tool based on decision speed requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_protected_call
Make an HTTP API call with manual escrow protection. Full control over verification and timelock parameters.
For most payments, use safe_pay instead — it auto-configures protection based on seller trust.
Use x402_protected_call when you need:
- Custom JSON Schema verification (not just "valid JSON + 2xx")
- Hash-lock verification (exact response match)
- Specific timelock durations
- To override safe_pay's trust-based amount limits
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The API endpoint URL to call | |
| body | No | Request body (for POST/PUT) | |
| method | No | HTTP method | GET |
| headers | No | HTTP headers to include | |
| amount_usdc | Yes | Amount to pay in USDC | |
| seller_address | Yes | Ethereum address of the API provider (seller) who will receive payment | |
| timelock_minutes | No | Minutes until escrow expires | |
| verification_data | Yes | Verification data: JSON Schema string (for schema strategy) or expected hash (for hash-lock) | |
| verification_strategy | No | How to verify the response: 'schema' (JSON Schema) or 'hash-lock' (exact hash match) | schema |
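The schema rows above interact: `verification_data` must hold a JSON Schema string when `verification_strategy` is 'schema', or an expected hash when it is 'hash-lock'. Below is a hedged sketch of an arguments object for the schema strategy; the endpoint, amount, and seller address are invented placeholders, and the field semantics follow the table rather than any documented example.

```typescript
// Illustrative arguments for x402_protected_call using the
// 'schema' verification strategy. All concrete values are
// placeholders, not real endpoints or addresses.
const args = {
  url: "https://api.example.com/v1/report", // hypothetical seller API
  method: "POST",
  body: JSON.stringify({ query: "quarterly revenue" }),
  amount_usdc: 2.5,
  seller_address: "0x0000000000000000000000000000000000000002",
  timelock_minutes: 60, // escrow expires after one hour
  verification_strategy: "schema",
  // JSON Schema string: per the description, escrow should only
  // release if the response is 2xx JSON matching this shape.
  verification_data: JSON.stringify({
    type: "object",
    required: ["report"],
    properties: { report: { type: "string" } },
  }),
};

console.log(args.verification_strategy);
```

For the hash-lock strategy, `verification_data` would instead carry the expected response hash, and the schema string above would be omitted.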
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the escrow protection model, verification strategies (schema vs hash-lock), and timelock mechanics. However, it does not explicitly state failure modes (e.g., what happens if verification fails or the timelock expires) or confirm that funds are actually locked/spent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with a one-sentence purpose statement, a one-sentence alternative recommendation, and a high-density bullet list for specific use cases. Every sentence earns its place with zero redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 9-parameter financial tool with nested objects, the description adequately covers the escrow mechanism, verification types, and sibling differentiation. It lacks only explicit documentation of return values or error states (though no output schema exists to require this) and could mention the related escrow lifecycle tools (escrow_dispute, etc.) for full context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% description coverage (baseline 3), the description adds crucial semantic context by explaining that verification_data accepts either a 'JSON Schema string' or 'expected hash' depending on the strategy, and clarifies the relationship between verification_strategy, timelock_minutes, and amount_usdc via the usage scenarios.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb and resource ('Make an HTTP API call with manual escrow protection') and immediately distinguishes itself from sibling tool safe_pay by contrasting 'manual' control versus 'auto-configures' protection. It clearly defines the tool's scope as escrow-protected API calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-not-to-use guidance ('For most payments, use safe_pay instead') and lists four specific scenarios for using this tool instead (custom JSON Schema, hash-lock, specific timelocks, overriding amount limits). This creates clear decision boundaries for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.