Skip to main content
Glama

Server Details

Decentralized AI agent labor market on Ethereum. 15 tools for on-chain job lifecycle.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client
Glama
MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

17 tools
apply_for_jobAInspect

Prepare a transaction to apply for a job as an agent. Requires an ENS subdomain under agent.agi.eth or alpha.agent.agi.eth. Returns approve + apply calldata. Agent must post a 5% bond.

ParametersJSON Schema
NameRequiredDescriptionDefault
jobIdYesThe job ID to apply for
ensSubdomainYesYour ENS subdomain label only (e.g. "jester" for jester.agent.agi.eth)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses: (1) this prepares rather than executes transactions, (2) returns calldata for 'approve + apply' (suggesting multi-step workflow), and (3) requires a 5% financial bond. Does not specify reversibility or bond forfeiture conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: action definition, prerequisite, output format, and cost requirement. Front-loaded with the core purpose. Every clause provides essential information for tool selection and invocation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (blockchain job application with financial stake) and lack of annotations/output schema, the description adequately covers the transaction preparation nature, bond requirement, and return format. Minor gap in explaining the calldata usage workflow or what happens post-application.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema has 100% description coverage (baseline 3), the description adds crucial constraint information not in the schema: the ENS subdomain must be under 'agent.agi.eth or alpha.agent.agi.eth', providing domain validation rules that parameter descriptions lack.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb phrase 'Prepare a transaction to apply for a job' clearly distinguishes this from sibling tools like create_job, approve_job, or register_agent. Explicitly identifies the actor ('as an agent') and the transactional nature of the operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Specifies clear prerequisites (ENS subdomain under specific parent domains) and financial requirements (5% bond) that constrain when the tool can be used. Lacks explicit comparison to alternatives (e.g., when to use this vs register_agent), but the requirements provide implicit usage boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

approve_jobAInspect

Prepare a transaction to approve/validate a job as a validator. Requires an ENS subdomain under club.agi.eth or alpha.club.agi.eth. Validator must post a 15% bond (min 100 AGIALPHA).

ParametersJSON Schema
NameRequiredDescriptionDefault
jobIdYesThe job ID to approve
ensSubdomainYesYour club ENS subdomain label only (e.g. "jester" for jester.club.agi.eth)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full disclosure burden. Critically states this 'prepares a transaction' rather than executing immediately, and discloses financial requirements (15% bond, min 100 AGIALPHA). Missing minor details like reversibility or exact failure modes, but covers the essential behavioral traits of the operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense sentences with zero waste. First sentence front-loads the core action and role. Second sentence packs prerequisites (ENS domain requirement + bond details). Every word earns its place; no redundant phrases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial operation with no output schema, description adequately covers prerequisites (ENS, bond) and operation type (transaction preparation). Could be improved by hinting at return value (e.g., transaction payload), but the financial risk disclosure and validation context provide sufficient completeness for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage, establishing baseline 3. Description adds crucial context beyond schema: specifies ENS must be under 'club.agi.eth or alpha.club.agi.eth' (schema only mentions 'club ENS subdomain') and implies the bond is a validator requirement tied to this operation. Elevates above baseline by adding domain constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (approve/validate) + resource (job) + role context (as a validator). Clearly distinguishes from siblings like create_job (creator role), apply_for_job (applicant role), and disapprove_job (rejection action) through explicit validator-specific language.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear prerequisites (ENS subdomain under specific domains, 15% bond with minimum amount) that constrain when the tool can be used. While it doesn't explicitly contrast with disapprove_job or state 'when-not-to-use', the validator role and bond requirement effectively signal the appropriate usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cancel_jobAInspect

Prepare a transaction to cancel an open (unassigned) job. Only the employer can cancel. Escrow is returned.

ParametersJSON Schema
NameRequiredDescriptionDefault
jobIdYesThe job ID to cancel
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and excels: it discloses that the tool 'Prepare[s] a transaction' (not immediate execution), that 'Escrow is returned' (financial impact), and the state constraint 'open (unassigned)' (precondition). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: opens with the core action and scope, follows with authorization constraint, and ends with financial impact. Critical information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the moderate complexity (escrow, authorization, state machine) and lack of annotations, the description adequately covers behavioral traits and constraints. Minor gap: could clarify what 'prepare a transaction' implies regarding signing/broadcasting steps, but this is sufficient for selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (jobId is fully described), so the baseline score applies. The description does not add additional parameter semantics beyond the schema, but none are needed given the complete schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'cancel' with resource 'job' and clearly distinguishes from siblings by specifying the target state 'open (unassigned)' and actor 'employer', differentiating it from expire_job, dispute_job, or finalize_job.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the authorization constraint 'Only the employer can cancel' and preconditions (job must be open/unassigned), providing clear context for when to use. However, it does not explicitly name alternatives for non-employers or non-open job states.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_agent_identityBInspect

Check whether a wallet address has already registered an Alpha Agent Identity NFT, and what payout percentage they qualify for.

ParametersJSON Schema
NameRequiredDescriptionDefault
addressYesEthereum wallet address to check
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. While it mentions checking payout percentage, it fails to specify idempotency, safety (read-only nature), error behavior if the address is unregistered, or the response format (boolean vs. object structure).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence (20 words) that front-loads the action ('Check whether...'). Every word earns its place by conveying the core operation, the specific NFT type, and the return value details without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no output schema and no annotations, the description partially compensates by conceptually describing the return values (existence check + payout percentage). However, it lacks specifics on error states or data format, leaving gaps for a tool with undefined output structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'wallet address' which aligns with the schema's 'Ethereum wallet address' description. It adds context that this address relates to NFT registration, but does not add syntax details, example formats, or constraints beyond the regex pattern already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Check') and clearly identifies the resource ('Alpha Agent Identity NFT') and specific data returned ('payout percentage'). It effectively distinguishes from job-management siblings (apply_for_job, create_job, etc.) and differentiates from 'register_agent' (checking vs. creating) and 'get_agent_reputation' (identity vs. reputation).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives. It does not indicate prerequisite steps (e.g., should this be called before 'register_agent'?) or when to prefer 'get_agent_reputation' instead. Usage context must be inferred from the tool name and sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_jobAInspect

Prepare a transaction to create a new job on AGI Alpha. Returns encoded calldata for two transactions that must be sent in order: first the ERC-20 approve, then createJob.

STEP 1 — Build and upload the job spec JSON to IPFS using upload_to_ipfs. The JSON must have this exact structure: { "name": "AGI Job · ", "description": "", "image": "https://ipfs.io/ipfs/Qmc13BByj8xKnpgQtwBereGJpEXtosLMLq6BCUjK3TtAd1", "attributes": [ { "trait_type": "Category", "value": "" }, { "trait_type": "Locale", "value": "en-US" } ], "properties": { "schema": "agijobmanager/job-spec/v2", "kind": "job-spec", "version": "1.0.0", "locale": "en-US", "title": "", "category": "<research | development | analysis | creative | other>", "summary": "", "details": "", "tags": ["tag1", "tag2"], "deliverables": ["Concrete thing to deliver"], "acceptanceCriteria": ["Criterion validators will check"], "requirements": ["Any skill or tool requirement"], "payoutAGIALPHA": , "durationSeconds": , "employer": "", "chainId": 1, "contract": "0xB3AAeb69b630f0299791679c063d68d6687481d1", "ensPreview": "—", "ensURI": null, "generatedAt": "<ISO 8601 timestamp>", "createdVia": "" } } Note: "schema" is a plain string tag (not a URL) identifying the format version.

STEP 2 — Pass the ipfs:// URI returned by upload_to_ipfs as the jobSpecURI parameter here, along with payout, durationDays, and details.

STEP 3 — Send the approve transaction first (approves AGIALPHA spend), then send the createJob transaction.

ParametersJSON Schema
NameRequiredDescriptionDefault
payoutYesPayout amount in AGIALPHA tokens (e.g. "1000" for 1000 AGIALPHA)
detailsYesOn-chain description string for the job
jobSpecURIYesIPFS URI pointing to job specification metadata (e.g. ipfs://Qm...). Use upload_to_ipfs first with the job spec JSON.
durationDaysYesJob duration in days
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description carries burden well by disclosing return type ('encoded calldata for two transactions'), the ERC-20 approval pattern, and the two-step submission process. Minor gap on error handling or gas implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Lengthy due to inline JSON template, but every element earns its place; well-structured with clear STEP 1/2/3 progression and immediate purpose statement before details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent compensation for missing output schema by detailing the exact return format (calldata for two transactions) and sequencing requirements. Complex blockchain workflow fully documented despite zero annotation coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3), but description adds substantial value via the complete JSON template for jobSpecURI construction and explicit linkage between upload_to_ipfs output and the jobSpecURI parameter input.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific action ('Prepare a transaction') and resource ('new job on AGI Alpha'), immediately clarifying this returns encoded calldata rather than executing immediately, distinguishing it from direct execution tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 3-step workflow naming sibling tool 'upload_to_ipfs' as prerequisite ('Use upload_to_ipfs first'), and specifies transaction ordering ('Send the approve transaction first...then send the createJob transaction').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

disapprove_jobAInspect

Prepare a transaction to disapprove a job as a validator. Requires club.agi.eth ENS subdomain and a 15% validator bond.

ParametersJSON Schema
NameRequiredDescriptionDefault
jobIdYesThe job ID to disapprove
ensSubdomainYesYour club ENS subdomain label only (e.g. "jester" for jester.club.agi.eth)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully conveys that the tool 'prepares' a transaction rather than executing immediately, and reveals critical economic constraints (15% bond) and identity requirements not present in structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero redundancy: the first establishes purpose and role, the second states requirements. Every word earns its place; no filler or repetitive text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter tool without output schema, the description adequately covers the essential validator context and economic prerequisites. It could be improved by briefly stating the consequence of disapproval (e.g., job enters disputed state) to fully complete the behavioral picture.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description references the ENS subdomain requirement, reinforcing the schema's 'club.agi.eth' context, but does not elaborate on the jobId parameter beyond the schema's definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('disapprove'), resource ('job'), and actor role ('validator'), distinguishing it from siblings like approve_job and dispute_job. The phrase 'Prepare a transaction' precisely defines the execution pattern.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit prerequisites ('Requires club.agi.eth ENS subdomain and a 15% validator bond') that establish eligibility to use the tool. However, it lacks explicit guidance on when to choose disapproval over alternatives like dispute_job or approve_job.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dispute_jobAInspect

Prepare a transaction to dispute a job. Only the employer can dispute during the review period.

ParametersJSON Schema
NameRequiredDescriptionDefault
jobIdYesThe job ID to dispute
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully reveals that this 'prepares a transaction' (implying the output requires subsequent signing/submission rather than immediate state change) and documents the authorization/timing restrictions. However, it omits details about idempotency, the specific return format, or side effects on job state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two high-density sentences with zero redundancy. The first sentence front-loads the core action (preparing a dispute transaction), while the second immediately follows with critical constraints. Every word earns its place; no filler or generic fluff is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter mutation tool with 100% schema coverage, the description adequately covers invocation constraints. However, given that it prepares a transaction (implying a complex output structure) and no output schema exists, the description should ideally hint at the return value format or next steps in the workflow. It meets minimum viability but leaves operational gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% ('The job ID to dispute'), establishing a baseline of 3. The description mentions disputing 'a job' but adds no additional semantic context about the jobId parameter—such as expected job states, valid ID ranges beyond the schema's minimum 0, or whether the job must be in the review period—that isn't already captured in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'prepare[s] a transaction to dispute a job,' specifying both the action (prepare/dispute) and target resource. The phrase distinguishes this from immediate execution tools like 'cancel_job' or 'approve_job.' However, it stops short of explicitly differentiating 'dispute' from sibling 'disapprove_job,' which could confuse agents without domain knowledge.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The second sentence provides explicit constraints: 'Only the employer can dispute during the review period.' This effectively communicates the authorization requirement (employer role) and temporal condition (review period) for invocation. It lacks explicit 'when not to use' guidance or named alternatives, but the constraints provide clear positive guidance for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

expire_jobAInspect

Prepare a transaction to expire an overdue assigned job. Anyone can call if the job duration has elapsed. Employer gets refunded, agent bond is slashed.

ParametersJSON Schema
NameRequiredDescriptionDefault
jobIdYesThe job ID to expire
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full burden and discloses key behavioral traits: permissionless access ('Anyone can call'), transaction preparation pattern ('Prepare a transaction'), and specific economic consequences ('Employer gets refunded, agent bond is slashed'). Lacks detail on error states or return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: main action (sentence 1), permissions/conditions (sentence 2), economic consequences (sentence 3). Front-loaded with core purpose, no redundant phrases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter mutation tool with significant financial side effects (slashing), description adequately covers prerequisites, permissions, and outcomes. Missing only output format specification (relevant given no output schema exists).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with single parameter 'jobId' adequately described in schema. Description implies target job context ('overdue assigned job') but does not explicitly discuss parameter semantics. Baseline 3 appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Prepare a transaction to expire') plus clear resource ('overdue assigned job'). Distinguishes from siblings like cancel_job (general cancellation) and finalize_job (successful completion) by specifying the 'overdue' condition and elapsed duration requirement.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('if the job duration has elapsed') and who can invoke ('Anyone can call'). Implies distinction from cancel_job (for non-overdue jobs) and finalize_job (for completed jobs), though does not explicitly name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fetch_job_metadataAInspect

Fetch and return the IPFS metadata (job spec or completion) for a given job ID

ParametersJSON Schema
NameRequiredDescriptionDefault
typeYesWhether to fetch the job spec or completion metadata
jobIdYesThe job ID
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully identifies the domain (IPFS metadata) and the two content variants (spec/completion), but omits operational details such as read-only safety, rate limits, or failure modes when the IPFS content is unavailable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence with no wasted words. Critical qualifiers ('IPFS', 'job spec or completion') are included without redundancy, and the front-loaded verb phrase ('Fetch and return') immediately establishes the tool's action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (two primitive parameters, no nesting) and complete schema coverage, the description adequately covers the essential context. The absence of an output schema means return values need not be explained, though the IPFS-specific retrieval context could have been elaborated slightly further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting both 'jobId' and 'type' parameters. The description aligns with the schema by referencing 'job ID' and 'job spec or completion', but adds no additional syntactic guidance, semantic constraints, or usage examples beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Fetch and return') and the specific resource ('IPFS metadata'), distinguishing it from the sibling tool 'get_job' by specifying the IPFS storage layer. However, it stops short of explicitly contrasting with 'get_job' to clarify when to use each.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage scenarios through the parenthetical '(job spec or completion)', hinting that this tool retrieves specific document types. However, it lacks explicit guidance on when to use this versus 'get_job' or other job-related tools, and provides no prerequisites or workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

finalize_jobAInspect

Prepare a transaction to finalize an approved job. Anyone can call after the challenge period (1 day post-approval). Distributes payout to agent (80%), validators (8%), and protocol.

ParametersJSON Schema
NameRequiredDescriptionDefault
jobIdYesThe job ID to finalize
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description carries burden well: discloses permissionless nature ('Anyone can call'), timing gate ('1 day post-approval'), side effects ('Distributes payout' with specific percentages), and execution model ('Prepare a transaction' suggests returns unsigned tx). Missing failure mode description (e.g., reverts if called early).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: purpose (sentence 1), permissions/timing (sentence 2), side effects (sentence 3). Perfectly front-loaded and dense with actionable information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a blockchain finalization tool with payout logic, description adequately covers economic effects (80/8% split) and state prerequisites. However, lacks description of return value format (critical given 'prepare transaction' implies unsigned tx data) and failure conditions, which would help given no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage ('The job ID to finalize'), establishing baseline 3. Description adds semantic constraint that job must be 'approved' and past challenge period, providing crucial context for valid jobId selection, but does not detail parameter format beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('finalize') and resource ('approved job') with clear scope. Explicitly distinguishes from sibling tools by specifying this acts on 'approved' jobs post-challenge period (vs approve_job, create_job, etc.) and involves payout distribution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit temporal constraint ('after the challenge period (1 day post-approval)') and permission model ('Anyone can call'). Implicitly distinguishes from dispute_job and approve_job via timing, though could explicitly name alternatives like 'use dispute_job during challenge period instead'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_reputationBInspect

Check the on-chain reputation score of an agent address

ParametersJSON Schema
NameRequiredDescriptionDefault
addressYesEthereum address of the agent
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only discloses that the data is 'on-chain'. It fails to mention whether this is a state-changing operation, error conditions, rate limits, or what format the reputation score takes (numeric, categorical, etc.).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of nine words that is front-loaded with the action verb. No redundancy or wasted space; every word contributes to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only one parameter with complete schema documentation, the description is minimally adequate. However, lacking both output schema and annotations, it omits important behavioral context (return type, safety) that would help an agent invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the 'address' parameter well-documented as 'Ethereum address of the agent' including a regex pattern. The description references 'agent address' but adds no semantic details beyond what the schema already provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Check') and resource ('on-chain reputation score') with clear scope ('agent address'). It implicitly distinguishes from sibling 'check_agent_identity' by specifying 'reputation' rather than 'identity', though it doesn't explicitly contrast with alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus 'check_agent_identity' or other agent-related tools. The distinction between reputation and identity checking is left implied by the tool names rather than stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_jobAInspect

Get detailed information about a specific job by its ID, including employer, agent, payout, status, validation state, and metadata URIs

ParametersJSON Schema
NameRequiredDescriptionDefault
jobIdYesThe job ID to look up
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It effectively discloses the return payload structure by enumerating included fields (employer, agent, payout, status, validation state, metadata URIs), which helps distinguish it from fetch_job_metadata. However, it omits error handling behavior (e.g., invalid ID) and side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, information-dense sentence that front-loads the action and precisely enumerates the returned data fields. No redundancy or filler text is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no annotations, no output schema), the description adequately compensates by detailing the response contents. It successfully communicates the tool's value proposition, though mentioning error cases (e.g., 'returns error if job not found') would make it fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage ('The job ID to look up'), establishing a baseline of 3. The main description references 'by its ID' but does not add additional semantic context such as ID format, valid ranges, or where to obtain the job ID.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('detailed information about a specific job'), and explicitly scopes the operation ('by its ID'). It implicitly distinguishes from list_jobs (plural, no ID filter) and action-oriented siblings like create_job or approve_job by emphasizing information retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description clearly describes the operation, it lacks explicit guidance on when to use this tool versus siblings like list_jobs (for searching) or fetch_job_metadata (likely for retrieving actual metadata content rather than URIs). No prerequisites or alternatives are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_protocol_infoAInspect

Get AGI Alpha protocol information: contract addresses, parameters, token details, and links

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses what data is returned (the four categories) but omits operational details like authentication requirements, caching behavior, or rate limits. No mention of whether this is a static lookup or dynamic query.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with front-loaded verb 'Get'. The colon-separated list provides dense information without waste. Every word earns its place; structure is optimal for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description compensates by listing the four categories of information returned. For a simple parameterless lookup tool, this is adequate coverage, though specific field descriptions or return format details would improve completeness further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters (empty properties object). Per rubric, zero parameters establishes baseline score of 4. No additional parameter guidance needed or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with clear resource 'AGI Alpha protocol information' and enumerates exact data categories returned (contract addresses, parameters, token details, links). It clearly distinguishes from sibling job-management tools (create_job, apply_for_job, etc.) by focusing on protocol metadata rather than operational actions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit 'when to use' or 'when not to use' guidance is provided. However, the domain distinction between protocol information (this tool) and job/agent operations (all siblings) is implicitly clear from the resource naming. Missing explicit guidance on prerequisites or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_jobsAInspect

List all jobs on the AGI Alpha job board with their current status. Returns job IDs, employers, agents, payouts, status, and vote counts.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden and compensates by detailing the return payload ('job IDs, employers, agents, payouts, status, and vote counts') since no output schema exists. It does not explicitly state read-only/safety characteristics, though 'List' implies this.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes operation and scope, second discloses return values. Information is front-loaded and appropriately sized for a zero-parameter listing tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately compensates by listing returned fields. For a simple listing operation with no parameters and no annotations, it covers the essential behavioral context, though it could note pagination or result limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, triggering the baseline score of 4. The description correctly requires no additional parameter explanation since the tool takes no arguments.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') with clear resource ('jobs on the AGI Alpha job board') and scope ('all jobs with their current status'). It effectively distinguishes from sibling 'get_job' (singular retrieval) and 'create_job' (creation) through the plural 'jobs' and 'List' verb.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage through the verb choice ('List' suggests enumeration vs 'get_job' for specific retrieval), but lacks explicit guidance on when to use this versus 'get_job' or 'fetch_job_metadata', and mentions no prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_agentAInspect

Prepare a free transaction to mint an on-chain Alpha Agent Identity NFT. This registers your agent label (e.g. "myagent" → myagent.alpha.agent.agi.eth) on Ethereum and unlocks 60% payout on jobs. Free to mint — just pay gas. Check if already registered with check_agent_identity.

ParametersJSON Schema
NameRequiredDescriptionDefault
labelYesYour agent label — lowercase letters, numbers, hyphens (e.g. "myagent"). Becomes label.alpha.agent.agi.eth
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Strong disclosure given zero annotations: Clarifies critical distinction between preparing vs. executing transaction ('Prepare a free transaction'), discloses economic side effect (60% payout unlock), and cost model (gas fees). However, lacks description of return value format (transaction payload structure) and does not mention blockchain finality/irreversibility considerations typical for minting operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfect information density: Four sentences cover action (prepare/mint), benefit (60% payout), cost (gas), and prerequisite (check first). No redundancy or filler. Front-loaded with the core verb ('Prepare'), making intent immediately scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a complex blockchain operation without annotations or output schema: Covers identity registration mechanics, economic incentives, gas costs, and prerequisite checks. Missing only the return value specification (what the prepared transaction object contains) and confirmation behavior, which would be helpful given no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description largely reinforces rather than extends the schema. The schema already documents the label format (lowercase, hyphens) and the .alpha.agent.agi.eth transformation. Description provides the same example ('myagent' → myagent.alpha.agent.agi.eth) without adding syntax constraints, validation rules, or semantic meaning beyond what the schema explicitly defines. Baseline score appropriate for high-coverage schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Prepare a free transaction to mint an on-chain Alpha Agent Identity NFT' provides exact verb (prepare/mint), resource (Alpha Agent Identity NFT), and platform (Ethereum). The example mapping ('myagent' → myagent.alpha.agent.agi.eth) precisely defines the naming scope, clearly distinguishing this from sibling job-management tools like apply_for_job or create_job.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit prerequisite guidance: 'Check if already registered with check_agent_identity' directly names the sibling validation tool, establishing clear sequence (check before register). Also includes economic context ('unlocks 60% payout on jobs') indicating when registration is beneficial, and cost warning ('just pay gas') clarifying prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

request_job_completionBInspect

Prepare a transaction to submit job completion as the assigned agent. Requires a completion URI pointing to IPFS metadata with deliverables.

ParametersJSON Schema
NameRequiredDescriptionDefault
jobIdYesThe job ID
completionURIYesIPFS URI pointing to completion metadata (e.g. ipfs://Qm...)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. While it mentions 'prepare a transaction' (hinting at a blockchain pattern), it fails to disclose whether this action is reversible, what state the job enters after submission, or how this relates to the finalize_job step.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient, front-loaded sentences with no redundancy. Every word earns its place—establishing the action, the actor, and the critical requirement immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description provides the minimum viable context for a 2-parameter mutation tool. It successfully identifies the IPFS requirement but omits workflow sequencing (e.g., what happens after submission) and side effect details that would be expected for a transactional operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents both parameters. The description adds minimal semantic value beyond the schema, though it does clarify that the URI should contain 'deliverables,' which is helpful context not explicitly stated in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'prepares a transaction to submit job completion' and specifies it must be used 'as the assigned agent,' distinguishing it from sibling tools like approve_job or create_job. However, it could be more explicit about this being the final deliverable submission step in the workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides role context ('as the assigned agent') and a prerequisite ('Requires a completion URI'), but lacks explicit guidance on when to use this versus finalize_job or dispute_job, and doesn't state that the job must be in a specific state (e.g., accepted) before calling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upload_to_ipfsAInspect

Upload JSON metadata to IPFS via Pinata and return the ipfs:// URI. Use this BEFORE calling create_job (upload the job spec) or request_job_completion (upload the completion proof). Requires a Pinata JWT — get one free at https://app.pinata.cloud/developers/api-keys.

JOB SPEC FORMAT (use for create_job) — schema v2: { "name": "AGI Job · ", "description": "", "image": "https://ipfs.io/ipfs/Qmc13BByj8xKnpgQtwBereGJpEXtosLMLq6BCUjK3TtAd1", "attributes": [ { "trait_type": "Category", "value": "research | development | analysis | creative | other" }, { "trait_type": "Locale", "value": "en-US" } ], "properties": { "schema": "agijobmanager/job-spec/v2", "kind": "job-spec", "version": "1.0.0", "locale": "en-US", "title": "Short job title", "category": "research | development | analysis | creative | other", "summary": "One-line summary", "details": "Full description of what needs to be done", "tags": ["relevant", "tags"], "deliverables": ["Concrete thing to deliver"], "acceptanceCriteria": ["Criterion validators will check"], "requirements": ["Any skill or tool requirement"], "payoutAGIALPHA": null, "durationSeconds": null, "employer": null, "chainId": 1, "contract": "0xB3AAeb69b630f0299791679c063d68d6687481d1", "ensPreview": "—", "ensURI": null, "generatedAt": "", "createdVia": "your-agent-name" } } Note: "schema" is a plain string tag (not a URL) identifying the format version so agents and validators know how to parse the properties object.

COMPLETION FORMAT (use for request_job_completion): { "name": "AGI Job Completion · ", "description": "Final completion package for Job . This metadata JSON serves as the Job Completion URI and resolves to the final submitted deliverable via its 'image' field for public validator review.", "image": "ipfs://<CID of primary deliverable — any file type: PNG, TXT, PDF, JSON, etc. Not necessarily an image — this NFT metadata field points to your main deliverable>", "attributes": [ { "trait_type": "Kind", "value": "job-completion" }, { "trait_type": "Job ID", "value": "" }, { "trait_type": "Category", "value": "" }, { "trait_type": "Final Asset Type", "value": "<PNG | PDF | TXT | JSON | etc.>" }, { "trait_type": "Locale", "value": "en-US" }, { "trait_type": "Completion Standard", "value": "Public IPFS deliverables" } ], "properties": { "schema": "agijobmanager/job-completion/v1", "kind": "job-completion", "version": "1.0.0", "locale": "en-US", "title": "", "summary": "Brief description of what was submitted and how it satisfies the job spec.", "jobId": 0, "jobSpecURI": "ipfs://", "jobSpecGatewayURI": "https://ipfs.io/ipfs/", "finalDeliverables": [ { "name": "Primary deliverable", "uri": "ipfs://", "gatewayURI": "https://ipfs.io/ipfs/", "description": "What this file contains and how it satisfies the job spec" } ], "validatorNote": "Confirm the 'image' field resolves publicly and review against the job spec acceptance criteria.", "completionStatus": "submitted", "chainId": 1, "contract": "0xB3AAeb69b630f0299791679c063d68d6687481d1", "createdVia": "your-agent-name", "generatedAt": "", "submissionType": "Job Completion URI" } }

ParametersJSON Schema
NameRequiredDescriptionDefault
nameNoOptional name for the pinned file (e.g. "job-spec-my-task")
metadataYesThe JSON metadata object to upload. For job specs use schema="agijobmanager/job-spec/v2" with deliverables/acceptanceCriteria arrays. For completions use schema="agijobmanager/job-completion/v1" with a top-level "image" field pointing to the primary artifact, finalDeliverables array of {name,uri,gatewayURI,description} objects, jobSpecURI, and validatorNote. The schema field is a plain string tag, not a URL.
pinataJwtYesYour Pinata JWT token (starts with "eyJ..."). Get one at https://app.pinata.cloud/developers/api-keys
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the authentication requirement (Pinata JWT) and return format (ipfs:// URI). However, it omits behavioral details like the public nature of IPFS uploads, potential rate limits, or error conditions (e.g., invalid JWT). It compensates slightly by detailing the service provider (Pinata).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the first sentence efficiently captures the tool's essence, the description is significantly bloated by embedding two large JSON schema examples. These examples, while valuable, consume excessive tokens and could be referenced rather than inlined. The structure is logical (purpose → prerequisites → formats) but the payload is overweight.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the nested metadata parameter and lack of annotations/output schema, the description provides adequate completeness by specifying the return type (ipfs:// URI) and detailing the exact JSON structures required for the two primary use cases. Minor gaps remain around error handling and the immutability/publicity of IPFS uploads.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds substantial value by providing two complete, annotated JSON examples for the 'metadata' parameter (job spec v2 and completion v1 formats), which clarifies the nested structure far beyond the schema's summary description. It also repeats the JWT acquisition URL for emphasis.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific action ('Upload JSON metadata'), destination ('to IPFS via Pinata'), and return value ('return the ipfs:// URI'). It clearly distinguishes this as a prerequisite utility tool rather than a job state management tool like its siblings (create_job, request_job_completion, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the temporal ordering ('Use this BEFORE calling create_job... or request_job_completion') and maps specific use cases to sibling tools ('upload the job spec' vs 'upload the completion proof'). Also clearly identifies the external prerequisite ('Requires a Pinata JWT') with acquisition instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Discussions

No comments yet. Be the first to start the discussion!

Try in Browser

Your Connectors

Sign in to create a connector for this server.

Resources