agentwork-mcp
Server Details
Official MCP server for Agentwork — delegate tasks to AI agents with human-in-the-loop
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
9 tools
agentwork_approve_solution (Quality: A)
Approve or reject the proposed solution.
Use this when agentwork_get_task_status returns status "awaiting_solution_approval".
Args:
task_id: The task ID.
approved: True to approve, False to reject.
api_key: Your Agentwork API key.
rejection_reason: If rejecting, explain what needs to change.
Returns:
JSON with success and new status.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| task_id | Yes | | |
| approved | Yes | | |
| rejection_reason | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return format ('JSON with success and new status'), but lacks details on side effects, reversibility, or what system state changes occur upon approval vs rejection.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Follows an efficient structured format (Purpose → Usage → Args → Returns) with zero redundancy. Each sentence conveys distinct information necessary for invocation, avoiding repetition of the tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides sufficient information for a workflow step tool by explaining the input params (since schema lacks descriptions) and referencing the return value. Could be improved by briefly stating the consequence of approval (e.g., task proceeds to execution) versus rejection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the Args section fully compensates by documenting all 4 parameters with clear semantics: task_id (identifier), approved (boolean decision), api_key (auth credential), and rejection_reason (conditional feedback requirement).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action ('Approve or reject') and the specific resource ('the proposed solution'), distinguishing it from sibling agentwork_approve_spec which handles specs rather than solutions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the precise trigger condition: 'Use this when agentwork_get_task_status returns status awaiting_solution_approval', providing clear workflow context and prerequisite checks by referencing a sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agentwork_approve_spec (Quality: A)
Approve or reject the proposed spec/plan.
Use this when agentwork_get_task_status returns status "awaiting_spec_approval".
Args:
task_id: The task ID.
approved: True to approve, False to reject.
api_key: Your Agentwork API key.
rejection_reason: If rejecting, explain what changes you want.
Returns:
JSON with success and new status.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| task_id | Yes | | |
| approved | Yes | | |
| rejection_reason | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the return format ('JSON with success and new status'), but omits behavioral details like whether approval triggers execution, if the operation is idempotent, or whether rejection is reversible. Adequate but not rich behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clear sections: one-line purpose, usage trigger, Args block, and Returns block. No wasted words; front-loaded with critical context. The structured Args format makes parameter semantics scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with zero schema coverage and no annotations, the description successfully covers purpose, prerequisites, all input semantics, and basic return structure. Could strengthen by mentioning workflow side effects (e.g., 'approving begins execution'), but sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage (titles only). The description compensates by documenting all 4 parameters: 'True to approve, False to reject' clarifies the boolean semantics; 'If rejecting, explain what changes you want' adds conditional logic for rejection_reason. Fully compensates for schema gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific dual verbs (Approve/reject) and clear resource (proposed spec/plan). The 'spec/plan' qualifier effectively distinguishes this from sibling agentwork_approve_solution (which handles solutions), clarifying scope without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit triggering condition: 'Use this when agentwork_get_task_status returns status "awaiting_spec_approval"'. Names the specific sibling tool and exact state condition, giving precise guidance on when to select this tool over others in the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agentwork_cancel_task (Quality: B)
Cancel a running task.
Args:
task_id: The task ID to cancel.
api_key: Your Agentwork API key.
Returns:
JSON with success status.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| task_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It mentions the constraint 'running task' but fails to disclose crucial behavioral traits: whether cancellation is immediate or graceful, irreversible, has side effects (cleanup of resources), or specific error conditions. Only notes that it returns a success JSON.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded with the action first, but the Args/Returns docstring format is stylistically awkward for an MCP tool description and adds structural verbosity. The content earns its place, but the formatting could be more integrated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a two-parameter mutation tool: it identifies the resource, parameters, and basic return type. However, given this is a destructive operation with no annotations, it lacks critical context like irreversibility warnings, partial result handling, or error state descriptions that would complete the safety profile.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. The description compensates by providing semantic definitions for both required parameters ('The task ID to cancel' and 'Your Agentwork API key') in the Args section. While functional, it lacks format specifics (e.g., UUID vs integer for task_id) or scope details for the API key.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Cancel') and resource ('running task') clearly. First sentence is direct. However, it does not explicitly differentiate from sibling tools like agentwork_get_task_status or agentwork_create_task in the text itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use versus alternatives, prerequisites (e.g., checking if task is running first), or conditions where cancellation might fail. Simply states what the tool does, not when to choose it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agentwork_create_task (Quality: B)
Create a new task on Agentwork.
Args:
title: Short title for the task (max ~70 chars).
description: Detailed description of what you need done.
api_key: Your Agentwork API key.
Returns:
JSON with task_id and initial status.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | | |
| api_key | Yes | | |
| description | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
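The only documented constraint is the soft cap on title length ("max ~70 chars"). A hedged payload-builder sketch that trims client-side; whether the server truncates or rejects overlong titles is not documented, so trimming before the call is an assumption, and the helper name is illustrative:

```python
MAX_TITLE = 70  # "max ~70 chars" per the tool description

def build_create_task_args(title: str, description: str, api_key: str) -> dict:
    """Payload for agentwork_create_task, trimming the title to the
    documented ~70-character limit (server behavior on overflow is
    undocumented, so this client-side trim is a defensive guess)."""
    if len(title) > MAX_TITLE:
        title = title[:MAX_TITLE - 1].rstrip() + "…"
    return {"title": title, "description": description, "api_key": api_key}
```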
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Only discloses return format (task_id and status). Fails to mention mutation effects, async processing implied by get_task_result sibling, side effects, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured docstring format (Args/Returns) which is clear and front-loaded. While Returns section is somewhat redundant given output_schema exists, it is appropriately sized with no wasted prose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers basic invocation needs (params and return) but lacks workflow context (relation to approve_spec/approve_solution) and behavioral details given the complexity of a multi-step task delegation system.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema description coverage. Adds critical constraints (max ~70 chars for title) and clarifies purpose for all 3 parameters (api_key, title, description) beyond the bare schema titles.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Create) and resource (task) with platform context (Agentwork). Clearly distinguishes from siblings like cancel_task or get_task_status by establishing this as the creation entry point in the workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use versus alternatives (e.g., when to create vs. register), nor does it mention prerequisite steps like registration implied by the sibling 'agentwork_register'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agentwork_get_task_events (Quality: A)
Get the activity log for a task — see what the agent is doing in real-time.
Returns timestamped events like status updates ("Searching the web..."),
questions asked, replies sent, and spec/solution proposals. Useful for
monitoring progress while status is "processing".
Args:
task_id: The task ID.
api_key: Your Agentwork API key.
Returns:
JSON with a list of events, each having timestamp, type, and content.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| task_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses behavioral traits: it describes the event stream nature (status updates, questions, replies), mentions 'real-time' operation, and specifies return structure (JSON with timestamp, type, content). Could be improved by mentioning polling behavior or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (purpose, event examples, usage context, Args, Returns). Front-loaded with the key value proposition. The Returns section adds semantic detail about event structure that may complement the output schema, though slightly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given output schema exists (per context signals), the description appropriately focuses on semantic meaning of events rather than just structure. Covers all 2 parameters despite poor schema coverage and explains the monitoring use case comprehensively. Lacks error condition documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0% description coverage. The Args section compensates by documenting both task_id and api_key, providing minimal but sufficient semantic context ('The task ID', 'Your Agentwork API key') that prevents ambiguity despite tautological simplicity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] the activity log for a task' with specific verb+resource, and distinguishes itself from siblings like get_task_result/get_task_status by emphasizing 'real-time' monitoring, 'timestamped events', and specific event types (questions asked, replies sent, proposals).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear temporal context ('Useful for monitoring progress while status is "processing"') but lacks explicit 'when not to use' guidance or direct comparison to alternatives like get_task_result despite clear sibling differentiation in the ecosystem.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agentwork_get_task_result (Quality: A)
Fetch the completed result of a task (text + file URLs).
Use this when agentwork_get_task_status returns status "completed".
Args:
task_id: The task ID.
api_key: Your Agentwork API key.
Returns:
JSON with result_text and a list of downloadable files.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| task_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so full burden falls on description. It discloses the return structure (JSON with result_text and files), but lacks details on error conditions (e.g., what happens if called before completion), idempotency, or rate limits. The 'Fetch' verb implies read-only, but explicit safety confirmation is absent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured docstring format with clear Args/Returns sections. Front-loaded with the action. Slightly redundant to include Returns section when output schema exists, but remains concise and readable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema, the brief mention of return structure is acceptable. The description successfully establishes the prerequisite workflow (status check first) and covers both required parameters. Adequate for a simple 2-parameter retrieval tool, though parameter descriptions could be richer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It identifies both parameters in the Args block: 'task_id: The task ID' adds minimal value, while 'api_key: Your Agentwork API key' adds domain context. However, it lacks format specifications, constraints, or validation rules for the task_id string.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action (Fetch) and specific resource (completed result of a task), including the content type (text + file URLs). Distinguishes from sibling agentwork_get_task_status by targeting 'result' versus 'status', though it could explicitly name the task system (Agentwork) in the first sentence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: 'Use this when agentwork_get_task_status returns status "completed"'. Names the prerequisite sibling tool and the specific condition for invocation, clearly establishing the workflow sequence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agentwork_get_task_status (Quality: A)
Check the current status of a task.
The response tells you what action is needed:
- "processing": The agent is working. Poll again later.
- "awaiting_reply": The agent asked a question. Use agentwork_send_message to reply.
- "awaiting_spec_approval": A plan was proposed. Use agentwork_approve_spec to accept/reject.
- "awaiting_solution_approval": A solution was proposed. Use agentwork_approve_solution to accept/reject.
- "completed": The task is done. Use agentwork_get_task_result to fetch the output.
Args:
task_id: The task ID returned by agentwork_create_task.
api_key: Your Agentwork API key.
Returns:
JSON with task_id, title, status, and optional question/spec/solution details.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| task_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full disclosure burden effectively. Documents polling pattern ('Poll again later' for processing), state machine semantics (5 distinct statuses), and blocking behavior (when agent awaits input vs working). Minor gap: no mention of rate limits or timeout behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose → response semantics → parameters → returns. Bulleted state-to-action mappings are information-dense. Args section is necessary given 0% schema coverage, though slightly redundant with parameter names.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a state-machine polling tool. Leverages existence of output schema (mentioned in description) to avoid over-documenting returns. Captures all workflow transitions and next-step actions required for agent orchestration.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (titles only, no descriptions). Description compensates via Args section: specifies task_id provenance ('returned by agentwork_create_task') and api_key context ('Your Agentwork API key'), providing semantic meaning entirely absent from schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb-resource combination 'Check the current status of a task' and immediately distinguishes from sibling agentwork_get_task_result by stating when to use that alternative ('completed' → use get_task_result). Clear scope boundaries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Exceptional workflow mapping: explicitly links each returned status value to a specific sibling tool action (e.g., 'awaiting_reply' → use agentwork_send_message, 'awaiting_spec_approval' → use agentwork_approve_spec). Creates clear decision tree for agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agentwork_register (Quality: A)
Register a new Agentwork account and get an API key.
Call this first if you don't have an API key yet. You need to provide
an email address and your full name.
Args:
email: Your email address (so the Agentwork team can contact you).
name: Your full name.
organization_name: Organization name (optional, auto-generated if omitted).
Returns:
JSON with api_key, organization_id, and user_id.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| email | Yes | | |
| organization_name | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the mutation (account creation) and return structure (JSON with api_key, organization_id, user_id), but lacks behavioral details like idempotency guarantees, validation rules, or error scenarios (e.g., duplicate email handling).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured docstring format (Args/Returns) that efficiently packs information. Front-loaded with purpose and usage guidance. Slight verbosity in 'You need to provide' but overall well-organized with zero wasted lines in the structured sections.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero schema descriptions and no annotations, the description adequately covers the registration contract: required vs optional params, return value structure, and sequential context. Lacks error handling documentation but sufficient for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (titles only), but description fully compensates via Args section: explains email purpose ('so team can contact you'), clarifies name format ('full name'), and documents optional behavior for organization_name ('auto-generated if omitted').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Register') + resource ('Agentwork account') + outcome ('get an API key'). Distinctly differs from sibling task management tools (create_task, approve_spec, etc.) which assume existing accounts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit sequencing guidance ('Call this first if you don't have an API key yet') establishes clear prerequisite relationship with other tools. Could be improved by mentioning what to use instead if API key exists, but the temporal cue is strong.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agentwork_send_message (Quality: A)
Reply to a question from the Agentwork agent.
Use this when agentwork_get_task_status returns status "awaiting_reply".
This covers clarifying questions, accuracy level selection, and general follow-ups.
Args:
task_id: The task ID.
message: Your reply to the agent's question.
api_key: Your Agentwork API key.
Returns:
JSON with success status.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| message | Yes | | |
| task_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions return format ('JSON with success status'), but omits key behavioral details: what state transition occurs (e.g., does status change from 'awaiting_reply' to 'in_progress'?), and whether the message is synchronous or queued.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections: purpose statement, usage trigger, scope clarification, Args documentation, and Returns documentation. Front-loaded with the essential action and trigger condition. Slightly informal structure (Args/Returns headers) but zero wasted prose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a workflow tool with 7+ siblings. Documents the critical workflow trigger (awaiting_reply status) and return format. Missing only the resulting state change description. Good coverage given lack of annotations and zero schema descriptions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, but the Args section compensates by documenting all 3 parameters. Descriptions are minimal ('The task ID') but semantically sufficient: 'Your reply to the agent's question' clarifies directionality, and 'Your Agentwork API key' identifies authentication requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Reply') + resource ('question') + domain ('Agentwork agent'). Explicitly distinguishes from siblings like create_task (initialization) and approve_solution (completion) by specifying this is for mid-task clarification responses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit trigger condition: 'Use this when agentwork_get_task_status returns status "awaiting_reply"'. Names specific sibling tool for precondition check. Also clarifies scope covers 'clarifying questions, accuracy level selection, and general follow-ups', distinguishing from spec approval or solution approval flows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
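Taken together, the tools form a create → poll → respond → fetch loop. A hedged orchestration sketch: `call_tool(name, args)` is an injected stand-in for whatever MCP client you use and is assumed to return the decoded JSON dict; auto-approving specs and solutions here deliberately skips the human-in-the-loop step, so treat it as a test harness, not production behavior:

```python
import time

def run_task(call_tool, api_key, title, description,
             reply_fn=None, poll_seconds=5):
    """Drive one Agentwork task through its status state machine.

    call_tool is a placeholder for a real MCP client call; approvals
    are granted automatically in this sketch (an assumption made for
    brevity, bypassing human review).
    """
    created = call_tool("agentwork_create_task",
                        {"title": title, "description": description,
                         "api_key": api_key})
    task_id = created["task_id"]
    base = {"task_id": task_id, "api_key": api_key}
    while True:
        status = call_tool("agentwork_get_task_status", dict(base))["status"]
        if status == "completed":
            return call_tool("agentwork_get_task_result", dict(base))
        if status == "awaiting_reply":
            msg = reply_fn() if reply_fn else "Proceed as you see fit."
            call_tool("agentwork_send_message", {**base, "message": msg})
        elif status == "awaiting_spec_approval":
            call_tool("agentwork_approve_spec", {**base, "approved": True})
        elif status == "awaiting_solution_approval":
            call_tool("agentwork_approve_solution", {**base, "approved": True})
        else:
            time.sleep(poll_seconds)  # "processing": poll again later
```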
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.