Glama
Ownership verified

Server Details

The human API for AI agents. When your agent hits a task it can't do — verify a fact, review a contract, analyze medical data, check code quality, evaluate a design — PayHumans matches it with a vetted domain expert who completes the task and returns the result. Zero setup required. Call register_agent to get an API key and $50 free credits instantly. No login, no credit card, no approval process. Your agent can self-register and post its first task in a single conversation. Expert domains avail

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

17 tools
agents.register (Grade: A)

Register a new AI agent with PayHumans. Call this first to get your API key — you'll receive $50 in free credits. Once registered, all subsequent tool calls in this session are automatically authenticated.

Parameters (JSON Schema)
| Name | Required | Description |
| --- | --- | --- |
| name | Yes | Your agent's name |
| email | No | Contact email (optional) |
| website_url | No | Your agent's website or repo URL (optional) |
| referral_code | No | Referral code from a friend; gives you both $25 extra credits (optional) |
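The registration call above can be sketched as a JSON-RPC 2.0 `tools/call` request, the envelope MCP clients use for tool invocation. The argument values and request id below are illustrative; per the parameter table, only `name` is required.

```python
import json

# Illustrative JSON-RPC 2.0 "tools/call" request for agents.register.
# Only "name" is required by the schema; the other fields are optional.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "agents.register",
        "arguments": {
            "name": "my-research-agent",           # required
            "email": "ops@example.com",            # optional contact email
            "website_url": "https://example.com",  # optional
            # "referral_code": "...",              # optional, $25 bonus for both parties
        },
    },
}

print(json.dumps(request, indent=2))
```

No output schema is published, so treat the exact response shape (API key, credit grant) as server-defined.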
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (non-readonly, non-idempotent), the description adds critical behavioral context: it returns an API key, awards $50 credits, and automatically authenticates subsequent session calls. This side-effect disclosure is valuable for agent decision-making.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first establishes purpose, second provides sequencing/incentive, third explains session side-effects. Perfectly front-loaded and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description conceptually covers the return value (API key, credits) and explains the session authentication mechanism. Sufficiently complete for a registration initialization tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured data already fully documents all four parameters. The description provides no additional parameter guidance, meeting the baseline expectation for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (register) and resource (AI agent with PayHumans), and effectively distinguishes this authentication/initialization tool from siblings like jobs.*, experts.*, and payments.*.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('Call this first') and explains the authentication flow for the session. Lacks explicit 'when not to use' or alternative authentication methods, but the sequential guidance is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

agents.set_webhook (Grade: A)
Idempotent

Register an HTTPS callback URL to receive real-time job status change events via POST. Your server must accept POST with JSON body: { event, job_id, old_status, new_status, updated_at }.

Parameters (JSON Schema)
| Name | Required | Description |
| --- | --- | --- |
| callback_url | Yes | HTTPS URL that will receive job status change events |
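The documented callback contract can be sketched as a framework-agnostic handler that takes the raw POST body. The `match_proposed` status value comes from the approve_match tool description below; the event name and the `posted` status in the sample are assumptions.

```python
import json

# Minimal webhook receiver for the documented POST body:
# { event, job_id, old_status, new_status, updated_at }
def handle_webhook(body: str) -> dict:
    payload = json.loads(body)
    required = {"event", "job_id", "old_status", "new_status", "updated_at"}
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if payload["new_status"] == "match_proposed":
        # A match is awaiting approval; an agent would call approve_match here.
        print(f"job {payload['job_id']}: match awaiting approval")
    return payload

# Sample payload; event name and old_status value are illustrative.
sample = json.dumps({
    "event": "job.status_changed",
    "job_id": "123e4567-e89b-12d3-a456-426614174000",
    "old_status": "posted",
    "new_status": "match_proposed",
    "updated_at": "2025-01-01T00:00:00Z",
})
handle_webhook(sample)
```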
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses critical behavioral details beyond annotations: specific POST method, exact JSON payload schema with field names (event, job_id, old_status, new_status, updated_at), and HTTPS requirement. Aligns with idempotentHint=true (re-registering same URL likely idempotent). Missing retry/failure policy for failed callbacks.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste: first establishes purpose and protocol, second details server requirements. Payload schema is front-loaded in second sentence. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a webhook registration tool: explains setup requirements, payload contract, and HTTP method. Given 100% schema coverage and annotations present, omission of return values (no output schema) and unregister mechanism (delete_webhook not in siblings) is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and description largely mirrors schema ('HTTPS callback URL' vs schema's 'HTTPS URL'). Baseline score applies as description adds minimal semantic detail about the parameter itself beyond what schema provides, though it adds behavioral context about POST handling.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Register' with resource 'HTTPS callback URL' and clearly defines scope as 'real-time job status change events'. Effectively distinguishes from sibling 'agents.register' (agent registration vs webhook configuration) and clarifies this monitors jobs.* lifecycle events.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear prerequisite ('Your server must accept POST with JSON body...') implying infrastructure requirements. However, lacks explicit guidance on when to use this versus polling alternatives (jobs.get/jobs.list) or whether multiple registrations are allowed/replaced.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

approve_match (Grade: A)

Approve a specific expert to work on your task. Call this after receiving a match_proposed webhook event or after get_pending_actions returns an approve_match action.

Parameters (JSON Schema)
| Name | Required | Description |
| --- | --- | --- |
| task_id | Yes | Job UUID |
| expert_id | Yes | Expert UUID to approve (from the match proposal) |
| message_to_expert | No | Optional message to send to the expert with the approval |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish write-access (readOnlyHint: false) and non-destructive nature. Description adds workflow triggers but omits behavioral specifics: what state changes occur (task status?), whether expert is notified automatically, or if approval is reversible (idempotentHint: false suggests not).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tight sentences with zero redundancy: first establishes purpose, second gives precise invocation timing. Efficient use of tokens while conveying both function and workflow dependency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 3-parameter tool with simple structure. Covers primary action and prerequisites. Minor gap: could clarify immediate consequences (e.g., 'expert can begin work' or 'payment authorization initiated') given payment/dispute siblings exist.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions. The description adds implicit mapping of 'task' to task_id and 'expert' to expert_id, and context that these come from 'match proposal', meeting baseline expectations for fully documented schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Approve' paired with clear resources 'expert' and 'task'. Explicitly distinguishes from sibling tools like jobs.create (which creates but doesn't approve) and get_pending_actions (which merely discovers pending approvals).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly defines trigger conditions: 'after receiving a match_proposed webhook event' or 'after get_pending_actions returns an approve_match action'. Provides concrete workflow integration points that prevent misuse before conditions are met.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

experts.get (Grade: A)
Read-only, Idempotent

Get a human expert's full profile, current availability, and rating

Parameters (JSON Schema)
| Name | Required | Description |
| --- | --- | --- |
| expert_id | Yes | Expert UUID |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is read-only and idempotent. The description adds valuable context about what specific data is returned (profile, availability, rating) that annotations don't cover, though it doesn't mention error handling or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Every element serves a purpose: 'Get' (action), 'human expert's' (resource), and 'full profile, current availability, and rating' (scope/returned fields). Perfectly front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with one parameter and no output schema, the description adequately compensates by specifying the three data elements returned (profile, availability, rating). Could improve by mentioning the 'not found' error case.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (expert_id is documented as 'Expert UUID'), the baseline is 3. The description doesn't add parameter-specific semantics, but none are needed given the complete schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('human expert's full profile') and clearly distinguishes from sibling 'experts.search' by emphasizing 'full profile, current availability, and rating' versus a search/list operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the phrase 'full profile' implies this is for detailed lookup (versus searching), there is no explicit guidance on when to use this tool versus 'experts.search' or what prerequisites are needed (e.g., having an expert_id from a previous search).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

experts.search (Grade: B)
Read-only, Idempotent

Search vetted human experts by domain, research type, or maximum hourly rate

Parameters (JSON Schema)
| Name | Required | Description |
| --- | --- | --- |
| domain | No | Domain to search for (e.g. legal, finance, medical, research) |
| max_rate | No | Maximum hourly rate in USD |
| research_type | No | Type of research needed |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, establishing this as a safe query operation. The description adds valuable context that experts are 'vetted,' implying quality control, but fails to disclose pagination behavior, result limits, or response structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, efficient sentence that front-loads the action ('Search') and packs in the resource type and key parameters without waste. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should ideally describe what the search returns (expert records, IDs, etc.) and note that all parameters are optional (0 required). It covers the basic search intent but leaves gaps regarding return format and parameter optionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description lists the three filterable fields but adds no additional semantic context—such as validation rules, format constraints, or examples—beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Search), resource (vetted human experts), and searchable dimensions (domain, research type, max hourly rate). However, it does not explicitly distinguish this from sibling 'experts.get' (which likely retrieves a specific expert by ID versus filtering a list).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'experts.get', nor does it mention prerequisites or optimal use cases. It simply states what filtering capabilities exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pending_actions (Grade: A)
Read-only, Idempotent

Check all your jobs that require your attention right now — matches to approve, evidence to review, payments to release, or disputes to address.

Parameters (JSON Schema)
| Name | Required | Description |
| --- | --- | --- |
| limit | No | Max number of actions to return (default: 10) |
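The four pending-action kinds enumerated above suggest a simple dispatch table. Only `approve_match` is a documented action type; the other type names and the response item shape (`{type, task_id}`) are assumptions for illustration, since no output schema is published.

```python
# Map a pending-action type (from get_pending_actions) to the follow-up tool.
# approve_match is documented; the other keys and the item shape are assumed.
FOLLOW_UP_TOOL = {
    "approve_match": "approve_match",
    "review_evidence": "get_task_status",
    "release_payment": "jobs.complete",
    "address_dispute": "raise_dispute",
}

def plan_follow_ups(pending_actions):
    """Return (tool_name, task_id) pairs for each recognized pending action."""
    plans = []
    for action in pending_actions:
        tool = FOLLOW_UP_TOOL.get(action["type"])
        if tool is not None:
            plans.append((tool, action["task_id"]))
    return plans

print(plan_follow_ups([{"type": "approve_match", "task_id": "abc-123"}]))
```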
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since annotations already establish this is read-only and non-destructive, the description appropriately focuses on domain-specific behavior by defining what constitutes a 'pending action' in this system. It discloses the four business object types monitored, adding valuable context not present in the structured metadata.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single efficient sentence front-loaded with the core action ('Check all your jobs'), followed by an em-dash that cleanly enumerates examples. No redundancy or filler; every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with one optional parameter and robust annotations, the description is appropriately complete. It compensates for the missing output schema by detailing the four types of pending records returned, though it could briefly note that results include identifiers needed for subsequent action calls.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage for the single 'limit' parameter, the baseline score applies. The description does not mention the parameter, but the schema fully documents it with type, description, and default value, so no additional text is necessary.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Check' with the resource 'jobs that require your attention' and explicitly distinguishes from action-oriented siblings (approve_match, release_payment, raise_dispute) by framing these as items awaiting action rather than the actions themselves.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by listing four specific pending action types (matches, evidence, payments, disputes), which helps the agent understand this retrieves items for later processing. However, it lacks explicit guidance on when to use this versus directly calling action tools or whether this is a prerequisite step.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_task_status (Grade: A)
Read-only, Idempotent

Get the full real-time status of a posted task — who is working on it, evidence submitted, payment status, and any pending actions you need to take.

Parameters (JSON Schema)
| Name | Required | Description |
| --- | --- | --- |
| task_id | Yes | The job UUID returned from jobs.create or post_task |
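Because the tool is annotated read-only and idempotent, it is safe to poll. A sketch follows; the `call_tool` callable stands in for whatever MCP client you use (its signature is an assumption), and the terminal status names are assumptions as well.

```python
import time

def wait_for_task(call_tool, task_id, poll_seconds=30.0, max_polls=120):
    """Poll get_task_status until the task reaches an assumed terminal state."""
    terminal = {"completed", "cancelled", "disputed"}  # assumed status names
    for _ in range(max_polls):
        status = call_tool("get_task_status", {"task_id": task_id})
        if status.get("status") in terminal:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"task {task_id} did not reach a terminal state")

# Usage with a stub client that reports completion immediately:
stub = lambda name, args: {"status": "completed", "task_id": args["task_id"]}
result = wait_for_task(stub, "abc-123", poll_seconds=0)
```

If the agents.set_webhook callback is registered, webhooks are the cheaper alternative to polling.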
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only/idempotent safety profile. Description adds valuable behavioral context by detailing what 'status' encompasses (assignee, evidence, payment, actions) and emphasizing 'real-time' nature, helping the agent understand data freshness and richness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfect single-sentence structure with high information density. Front-loaded action ('Get the full real-time status'), followed by em-dash enumeration of specific data facets. No wasted words, every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description effectively compensates by enumerating the four key status dimensions returned (workers, evidence, payment, actions). With only one required parameter and good annotations, this provides sufficient context for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with task_id fully documented. Description supports this by referring to 'posted task' which aligns with the parameter's job UUID semantics, but adds minimal new syntactic or semantic detail beyond the schema's existing coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specific verb ('Get') plus resource ('real-time status of a posted task') and clear scope differentiation from siblings like jobs.get by enumerating specific return facets (who is working, evidence, payment status, pending actions).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context ('posted task' constrains to existing tasks, parameter description references jobs.create/post_task as sources), but lacks explicit when-to-use versus alternatives like jobs.get or get_pending_actions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

jobs.complete (Grade: A)

Accept delivered work and initiate Solana USDC payment to the expert

Parameters (JSON Schema)
| Name | Required | Description |
| --- | --- | --- |
| job_id | Yes | Job UUID to accept and pay for |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable specifics beyond the annotations: it discloses the exact payment rail (Solana USDC) and clarifies the dual nature of the operation (acceptance + payment initiation). It aligns with readOnlyHint:false by describing a write operation. It could improve by mentioning failure modes (e.g., insufficient funds) or that this is non-idempotent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, dense sentence conveys the complete operation (acceptance + payment) without redundancy. Critical information (payment currency, action type) is front-loaded and every word serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (single parameter, no output schema) and presence of annotations, the description is appropriately complete. It successfully identifies the payment method (Solana USDC) which is essential context for a payment tool. It could mention the non-idempotent nature or error states, but these are partially covered by annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the input parameter is fully documented in the schema itself ('Job UUID to accept and pay for'). The description does not explicitly reference the job_id parameter or add syntax details, but the baseline score of 3 is appropriate since the schema carries the full semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Accept', 'initiate') and resources ('delivered work', 'Solana USDC payment') to clearly define the tool's function. It effectively distinguishes from siblings: unlike jobs.create (which starts jobs), this completes them; unlike payments.confirm (which confirms), this initiates payment; unlike jobs.get/list (reads), this performs a state-changing action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'Accept delivered work' provides clear contextual guidance that this tool is intended for the final stage of a job lifecycle after delivery. While it lacks explicit 'when not to use' warnings or named alternatives, the specific context of accepting delivered work strongly implies the prerequisite state needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

jobs.create (Grade: B)

Post a new research task or job for a human expert to complete

Parameters (JSON Schema)
| Name | Required | Description |
| --- | --- | --- |
| title | Yes | Job title |
| budget | Yes | Budget in USD |
| domain | Yes | Domain category (e.g. legal, finance, medical) |
| deadline | No | ISO deadline date |
| description | No | Detailed job description |
| research_type | No | Type of research |
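Client-side validation of jobs.create arguments can be sketched from the parameter table above. The positivity check on budget and strict ISO 8601 parsing of the deadline are assumptions, since the schema documents only which fields are required.

```python
from datetime import datetime
from typing import Optional

def build_job_args(title: str, budget: float, domain: str,
                   deadline: Optional[str] = None,
                   description: Optional[str] = None,
                   research_type: Optional[str] = None) -> dict:
    """Assemble and sanity-check arguments for a jobs.create call."""
    if budget <= 0:
        raise ValueError("budget (USD) must be positive")  # assumed constraint
    if deadline is not None:
        # Schema says "ISO deadline date"; reject anything fromisoformat can't parse.
        datetime.fromisoformat(deadline.replace("Z", "+00:00"))
    args = {"title": title, "budget": budget, "domain": domain}
    for key, value in (("deadline", deadline),
                       ("description", description),
                       ("research_type", research_type)):
        if value is not None:
            args[key] = value
    return args

job_args = build_job_args("Review NDA clause 4", 150.0, "legal",
                          deadline="2025-06-01T17:00:00Z")
```

Since idempotentHint is false, retrying a failed jobs.create blindly may post duplicate jobs; check jobs.list first.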
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a write operation (readOnly=false) that is non-destructive and non-idempotent. The description adds value by specifying 'human expert' consumption, indicating asynchronous human-in-the-loop behavior. However, it omits side effects, return values (likely job ID), and the implication of idempotentHint=false (duplicate posts create duplicate jobs).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundancy. It front-loads the core action ('Post') and maintains appropriate information density for its length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 6-parameter creation tool with no output schema, the description meets minimum viability by conveying the core purpose. However, given the presence of sibling tools suggesting complex workflows (payments, experts, messages), the description could better contextualize where this operation fits in the broader job lifecycle.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline score is 3. The description mentions 'research task' and 'job' which loosely map to research_type and title/description parameters, but adds no syntax guidance, format examples, or semantic relationships between parameters (e.g., that budget relates to payments.confirm workflow).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Post') and resource ('new research task or job'), and specifies the target audience ('human expert'). This distinguishes it from sibling tools like jobs.complete/jobs.list (which manage existing jobs) and experts.search (which queries experts rather than creating tasks).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., when to search experts first, or how this relates to payments.confirm). It lacks prerequisites or workflow context despite being part of a multi-step job lifecycle.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

jobs.get (Grade: B)
Read-only, Idempotent

Get job details including current status and completed result

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| job_id | Yes | Job UUID | |
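For reference, a minimal sketch of what a call to jobs.get could look like over MCP's JSON-RPC tools/call envelope; the UUID is a placeholder, not a real job ID.

```python
import json
import uuid

job_id = str(uuid.uuid4())  # placeholder; a real ID would come from jobs.create

# Standard MCP tools/call envelope; only "arguments" is specific to jobs.get.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "jobs.get",
        "arguments": {"job_id": job_id},  # the tool's only (required) parameter
    },
}
print(json.dumps(request, indent=2))
```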
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, covering safety and retry behavior. The description adds value by disclosing that the tool returns 'current status' and 'completed result', hinting at polling use cases for async jobs, but doesn't address error states or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of 9 words with no redundancy. Key action ('Get') is front-loaded, and the phrase 'including current status and completed result' efficiently conveys return value information without output schema bloat.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter read operation with complete annotations and schema, the description is adequate. It compensates for the missing output schema by mentioning 'status' and 'result' concepts, though it could explicitly reference the job_id requirement in prose for completeness.


Parameters: 3/5

With 100% schema description coverage (job_id is documented as 'Job UUID' in the schema), the baseline is 3. The description text adds no additional parameter context, relying entirely on the schema for parameter semantics.

Purpose: 4/5

Description uses specific verb 'Get' and resource 'job details', and specifies what data is returned (status and result). However, it doesn't explicitly clarify that this retrieves a single job by ID versus the sibling 'jobs.list' operation.

Usage Guidelines: 2/5

Description provides no guidance on when to use this tool versus siblings like 'jobs.list', nor does it mention prerequisites such as obtaining the job_id from 'jobs.create' or polling patterns for async completion.

jobs.list (Grade: A)
Read-only, Idempotent

List all jobs posted by your agent with optional status filter

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| status | No | Filter by status (open, in_progress, delivered, completed) | |
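Since status is optional, a client can omit it entirely; a hedged sketch of building the arguments, with the valid values taken from the schema row above:

```python
def jobs_list_args(status=None):
    """Build the arguments object for jobs.list, omitting the filter when unset."""
    valid = {"open", "in_progress", "delivered", "completed"}
    if status is None:
        return {}
    if status not in valid:
        raise ValueError(f"unknown status: {status!r}")
    return {"status": status}

print(jobs_list_args())                # {}
print(jobs_list_args("in_progress"))  # {'status': 'in_progress'}
```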
Behavior: 3/5

Annotations declare readOnly/idempotent safety profile. Description adds ownership scope ('your agent') and filter optionality without contradicting annotations, but omits return format, pagination, or default ordering behavior.

Conciseness: 5/5

Single 11-word sentence with zero redundancy. Front-loaded with primary action ('List all jobs') followed by scoping and filtering details. Every word earns its place.

Completeness: 4/5

Adequate for a simple filtered list operation with strong annotation coverage. Lacks description of return structure, but this is partially mitigated by the absence of an output schema and the intuitive 'list' naming convention.

Parameters: 3/5

Schema has 100% coverage with complete status parameter documentation. Description mentions 'optional status filter' which aligns with but does not extend beyond the schema's explicit enumeration of valid status values.

Purpose: 5/5

Specific verb 'List' with clear resource 'jobs' and scope 'posted by your agent'. Implicitly distinguishes from sibling jobs.get (single retrieval) and jobs.create (mutation) through plural 'all' and read-only verb.

Usage Guidelines: 3/5

Implies usage through scope ('your agent') and optionality of filter, but provides no explicit when-to-use guidance or contrast with siblings like jobs.get for specific job retrieval.

messages.get (Grade: A)
Read-only, Idempotent

Get all messages for a job, ordered chronologically. Use this to read expert replies.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| job_id | Yes | Job UUID | |
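Because results are ordered chronologically, a client can poll for replies and keep only what it has not yet seen. A sketch with a stubbed transport (call_tool and the message shape are assumptions, not part of the documented API):

```python
def call_tool(name, arguments):
    # Stub standing in for a real MCP client call; returns canned messages.
    return [
        {"id": 1, "from": "expert", "content": "Started on the task."},
        {"id": 2, "from": "expert", "content": "First draft attached."},
    ]

def new_messages(job_id, last_seen_id):
    """Return only messages after last_seen_id, relying on chronological order."""
    messages = call_tool("messages.get", {"job_id": job_id})
    return [m for m in messages if m["id"] > last_seen_id]

print(new_messages("job-123", last_seen_id=1))  # only the second message
```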
Behavior: 4/5

Annotations declare read-only and idempotent hints. The description adds valuable behavioral context not present in annotations: the 'ordered chronologically' constraint and the specific use case of reading 'expert replies' rather than generic messages.

Conciseness: 5/5

Two sentences with zero waste: the first defines the operation and ordering, the second defines the use case. Information is front-loaded and every word earns its place.

Completeness: 5/5

For a simple read-only getter with single parameter and full annotation coverage, the description is complete. It covers purpose, behavioral traits (ordering), and intended use case without needing to describe return values.

Parameters: 3/5

With 100% schema description coverage (job_id described as 'Job UUID'), the schema carries the full semantic load. The description references 'for a job' which lightly implies the parameter but adds no syntax or format details beyond the schema.

Purpose: 5/5

The description states 'Get all messages for a job, ordered chronologically' providing a specific verb, resource, and scope. It further distinguishes from sibling 'messages.send' by specifying this is for reading 'expert replies'.

Usage Guidelines: 4/5

The phrase 'Use this to read expert replies' provides clear positive guidance on when to use the tool. However, it does not explicitly state exclusions or name alternative tools like 'messages.send' for sending messages.

messages.send (Grade: A)

Send a message to the human expert working on a specific job

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| job_id | Yes | Job UUID | |
| content | Yes | Message content to send to the expert | |
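Both parameters are required, so a payload sketch is straightforward (again assuming the MCP tools/call envelope; the UUID and message text are placeholders):

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "messages.send",
        "arguments": {
            "job_id": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
            "content": "Could you cite the source for claim #2?",
        },
    },
}
print(json.dumps(request, indent=2))
```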
Behavior: 4/5

While annotations indicate this is a non-destructive write operation (readOnlyHint: false, destructiveHint: false), the description adds valuable context about the recipient being a 'human expert' (distinguishing from automated agents) and the job-specific nature of the communication. It does not address idempotency implications or failure modes.

Conciseness: 5/5

The single sentence is perfectly front-loaded with the action verb and contains zero redundancy. Every word serves to clarify the recipient type ('human expert'), the action ('Send'), and the scope ('specific job').

Completeness: 4/5

Given the tool's simplicity (2 parameters, 100% schema coverage, clear annotations), the description adequately covers the essential behavioral context. It appropriately omits return value details (no output schema exists), though it could briefly mention success indicators or delivery confirmation behavior.

Parameters: 3/5

With 100% schema description coverage, the baseline is appropriately met. The description reinforces the purpose of the 'content' parameter (message to expert) and 'job_id' (specific job context) but does not add syntax details, format constraints, or examples beyond what the schema provides.

Purpose: 5/5

The description uses a specific verb ('Send') with clear resource ('message') and uniquely identifies the recipient ('human expert working on a specific job'). This effectively distinguishes the tool from siblings like jobs.complete, experts.get, or agents.register which handle different actions.

Usage Guidelines: 3/5

The description provides implied usage context by specifying the recipient (human expert) and context (specific job), but lacks explicit guidance on when to use this versus alternatives like jobs.complete, or prerequisites such as job existence requirements.

payments.confirm (Grade: A)
Destructive

Submit Solana transaction signature to confirm and release USDC payment to the expert

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| payment_id | Yes | Payment UUID from jobs.complete | |
| tx_signature | Yes | Solana transaction signature (base58) | |
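Since the release is irreversible, a client might sanity-check the signature format before submitting. A hypothetical guard; the regex only tests that the string uses the base58 alphabet, it does not prove the transaction is valid:

```python
import re

# Base58 alphabet excludes 0, O, I, and l to avoid visual ambiguity.
BASE58_RE = re.compile(r"[1-9A-HJ-NP-Za-km-z]+")

def confirm_args(payment_id, tx_signature):
    """Build arguments for payments.confirm, rejecting malformed signatures."""
    if not BASE58_RE.fullmatch(tx_signature):
        raise ValueError("tx_signature is not base58")
    return {"payment_id": payment_id, "tx_signature": tx_signature}

print(confirm_args("placeholder-payment-id", "3xYz" * 16))
```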
Behavior: 4/5

Aligns well with destructiveHint=true by specifying 'release USDC payment' (explains what gets destroyed/transferred), adds critical blockchain context (Solana, base58 signatures) not in annotations, though could explicitly state the irreversible nature of the transfer.

Conciseness: 5/5

Single 13-word sentence with zero waste. Front-loaded with action verb 'Submit', immediately identifies the blockchain mechanism (Solana), and clearly states the outcome (release USDC). Every word earns its place.

Completeness: 4/5

Appropriate for a 2-parameter destructive operation with full schema coverage. Captures the financial risk (destructive payment release) and blockchain specifics. Minor gap: lacks mention of error conditions or success confirmation behavior given the irreversible financial nature.

Parameters: 3/5

With 100% schema description coverage (both parameters fully documented), the baseline is 3. The description frames the relationship between parameters ('submit signature to confirm payment') but doesn't add semantic details beyond what the schema already provides.

Purpose: 5/5

Excellent specificity with clear verbs (submit, confirm, release), identifies the exact resource (USDC payment on Solana), and distinguishes from sibling tools like jobs.complete (which likely initiates payments) by specifying this confirms/releases funds to experts.

Usage Guidelines: 4/5

Implies clear workflow context (requires tx_signature from Solana blockchain, references jobs.complete in schema parameter descriptions), but lacks explicit 'when to use' guidance such as 'Use this after calling jobs.complete when you have a Solana transaction signature' or warnings about irreversibility.

raise_dispute (Grade: A)

Raise a dispute if you are not satisfied with the submitted work. Payment is frozen and an admin will review within 4 hours.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| reason | Yes | Why the work is unsatisfactory | |
| task_id | Yes | Job UUID | |
| evidence_issues | No | Which evidence IDs are problematic | |
| requested_resolution | No | What resolution you want | admin_review |
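A sketch of building the arguments, with optional fields included only when set (per the schema, the server defaults requested_resolution to admin_review; the IDs below are hypothetical):

```python
def dispute_args(task_id, reason, evidence_issues=None, requested_resolution=None):
    """Build arguments for raise_dispute, omitting optional fields when unset."""
    args = {"task_id": task_id, "reason": reason}
    if evidence_issues:
        args["evidence_issues"] = evidence_issues
    if requested_resolution:  # otherwise the server defaults to admin_review
        args["requested_resolution"] = requested_resolution
    return args

print(dispute_args(
    "placeholder-job-uuid",
    "Two of the cited sources could not be verified",
    evidence_issues=["ev_1", "ev_3"],  # hypothetical evidence IDs
))
```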
Behavior: 4/5

Strong value-add beyond annotations: discloses payment freezing side-effect and 4-hour admin review timeline. Annotations only indicate it's a non-destructive write operation; the description explains the business process impact. Could enhance by noting if disputes are reversible or if multiple disputes per task are allowed.

Conciseness: 5/5

Perfectly efficient: two sentences where the first establishes purpose/condition and the second discloses critical side-effects. No redundancy, no generic filler, optimally front-loaded.

Completeness: 4/5

Appropriately complete for a 4-parameter mutation tool. Covers trigger condition, immediate side effects (payment freeze), and timeline (4 hours). With no output schema, the description adequately explains what happens without needing to detail return values. Minor gap: could clarify relationship to the task lifecycle (e.g., 'use before release_payment').

Parameters: 3/5

Schema has 100% coverage with clear descriptions for all 4 parameters, establishing baseline 3. Description implies the 'reason' concept through 'not satisfied' but doesn't add syntax details, validation rules, or parameter interdependencies beyond the schema.

Purpose: 5/5

Excellent clarity: 'Raise a dispute' provides specific verb+resource, and 'if you are not satisfied with the submitted work' clearly distinguishes from siblings like approve_match, release_payment, and jobs.complete which handle satisfactory outcomes.

Usage Guidelines: 3/5

Provides condition for use ('if you are not satisfied'), but lacks explicit guidance on when NOT to use it (e.g., before attempting negotiation) and doesn't name alternatives like approve_match or release_payment to help the agent choose between resolution paths.

release_payment (Grade: A)

Release escrowed payment to the expert after reviewing their submitted evidence. Supports both Stripe and Solana USDC (x402) payouts.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| rating | No | 1–5 star rating for the expert | |
| task_id | Yes | Job UUID | |
| feedback | No | Public feedback shown on expert's profile | |
| tx_signature | No | Solana tx signature (required for x402 payments) | |
| payment_method | No | Payment method: 'x402' (USDC) or 'stripe' | x402 |
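The non-obvious interaction is between payment_method and tx_signature: x402 payouts need a Solana signature, Stripe payouts do not. A hedged sketch enforcing that client-side (enum values and defaults read off the schema above; the job ID is a placeholder):

```python
def release_args(task_id, payment_method="x402", tx_signature=None,
                 rating=None, feedback=None):
    """Build arguments for release_payment, enforcing the x402/tx_signature link."""
    if payment_method not in ("x402", "stripe"):
        raise ValueError("payment_method must be 'x402' or 'stripe'")
    if payment_method == "x402" and not tx_signature:
        raise ValueError("x402 (USDC) payouts require a Solana tx_signature")
    args = {"task_id": task_id, "payment_method": payment_method}
    if tx_signature:
        args["tx_signature"] = tx_signature
    if rating is not None:
        if not 1 <= rating <= 5:
            raise ValueError("rating must be between 1 and 5")
        args["rating"] = rating
    if feedback:
        args["feedback"] = feedback
    return args

print(release_args("placeholder-job-uuid", payment_method="stripe", rating=5))
```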
Behavior: 4/5

Adds escrow context (funds are held pending review) and discloses payment rail options beyond annotations. Matches annotations (readOnlyHint: false implies write operation, destructiveHint: false matches non-destructive transfer). Could add failure modes or confirmation details.

Conciseness: 5/5

Two sentences with zero waste: first establishes action and prerequisite, second specifies payment methods. Front-loaded with the core action and appropriately scoped for complexity.

Completeness: 4/5

Good coverage for payment tool given no output schema: addresses escrow mechanics, evidence review, and payout methods. Minor gap: doesn't explicitly mention that rating/feedback parameters constitute the public review submission implied by 'reviewing evidence', though schema covers them.

Parameters: 4/5

With 100% schema coverage, description appropriately focuses on adding semantic context rather than parameter listing. Clarifies that x402 means Solana USDC and maps payment methods to the schema fields, adding meaning beyond bare schema definitions.

Purpose: 5/5

Description uses specific verb 'Release' with resource 'escrowed payment', clearly targeting expert payouts. Distinguishes from sibling 'payments.confirm' by specifying escrow context and evidence review requirement.

Usage Guidelines: 4/5

Establishes clear prerequisite ('after reviewing their submitted evidence') suggesting when to trigger. Mentions dual payment rails (Stripe vs Solana) hinting at method selection. However, lacks explicit contrast with sibling 'raise_dispute' or failure path guidance.

search_humans (Grade: A)
Read-only, Idempotent

Search available human experts by skills, natural language query, location, or hourly rate. Use this before posting a job to preview who is available.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Max results to return | 10 |
| query | No | Natural language search (e.g. 'someone who can verify medical equipment') | |
| skills | No | Skills to filter by (e.g. ['photography', 'legal', 'python']) | |
| task_type | No | Task category: physical, research, consultation, coding, legal, medical, financial, creative | |
| available_now | No | Only show currently available experts | true |
| max_hourly_rate_usd | No | Maximum hourly rate in USD | |
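A sketch of a filtered search request, assuming the MCP tools/call envelope; the filter values are illustrative, not recommendations:

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "search_humans",
        "arguments": {
            "query": "someone who can verify medical equipment",  # the schema's own example
            "task_type": "medical",
            "max_hourly_rate_usd": 120,  # illustrative budget cap
            "limit": 5,                  # default would be 10
        },
    },
}
print(json.dumps(request, indent=2))
```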
Behavior: 3/5

Annotations already declare readOnlyHint=true and destructiveHint=false, establishing the safe, non-mutating nature of the operation. The description adds the 'preview' context and search intent, but does not elaborate on return format, pagination behavior beyond the limit parameter, or rate limits.

Conciseness: 5/5

Two sentences with zero waste. The first sentence front-loads the core capability with specific filter examples; the second sentence provides essential workflow guidance. Every word earns its place.

Completeness: 4/5

Given the 6 well-documented parameters and clear annotations, the description provides sufficient context for a search tool. It establishes the relationship to the job posting workflow, though it could clarify the distinction from experts.search.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description provides conceptual mapping (e.g., 'natural language query' maps to the query parameter, 'hourly rate' maps to max_hourly_rate_usd) but mentions 'location' which is not present in the schema parameters.

Purpose: 5/5

The description clearly states the specific action (Search) and resource (available human experts), listing concrete filter dimensions (skills, natural language query, hourly rate). The phrase 'Use this before posting a job' effectively distinguishes it from sibling tools like jobs.create and experts.get by positioning it in the workflow.

Usage Guidelines: 5/5

Explicitly provides temporal and contextual guidance: 'Use this before posting a job to preview who is available.' This clearly signals when to invoke the tool versus creating a job, directly addressing the alternative workflow path.
