getTask
Retrieve specific task details by ID from the Follow Up Boss CRM to manage workflows and track progress.
Instructions
Get a task by ID
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Task ID | (none) |
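For context, here is a minimal sketch of how an agent client might invoke this tool over MCP using the TypeScript SDK; the launch command and task ID are placeholders, not values documented by this server:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Placeholder launch command; substitute however this server is actually run.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["followupboss-mcp-server"],
});

const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(transport);

// Per the input schema, the only required argument is the task ID.
const result = await client.callTool({
  name: "getTask",
  arguments: { id: "12345" }, // placeholder ID
});
console.log(result.content);
```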
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states that this is a read operation ('get'), which is helpful, but it doesn't disclose behavioral traits such as error handling (what happens if the ID doesn't exist), authentication needs, rate limits, or response format. For a tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
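Because the failure mode for an unknown ID is undocumented, a calling agent has to treat the result defensively. A sketch of that check, assuming the standard MCP result shape with its optional isError flag:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch a task defensively: getTask's behavior on an unknown ID is
// undocumented, so inspect the MCP-level error flag before using the payload.
async function fetchTask(client: Client, id: string) {
  const result = await client.callTool({ name: "getTask", arguments: { id } });
  if (result.isError) {
    // Tool-level failure (e.g., task not found, auth problem, rate limit).
    throw new Error(`getTask failed: ${JSON.stringify(result.content)}`);
  }
  return result.content;
}
```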
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a simple lookup tool and front-loads the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what data is returned, error conditions, or behavioral context. For a tool that presumably returns task details, more information about the response would be helpful for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'id' documented as 'Task ID' in the schema. The tool description adds no parameter semantics beyond what the schema provides, so it meets the baseline score of 3, the case where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
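To illustrate, here is a hypothetical way the server could push more semantics into the schema itself, using the TypeScript SDK with zod; the numeric-ID constraint and the wording are illustrative assumptions, not documented behavior of this server:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "followupboss-mcp-server", version: "1.0.0" });

// Hypothetical richer registration; constraints and wording are illustrative.
server.tool(
  "getTask",
  "Get a single Follow Up Boss task by its ID. Read-only: returns the task " +
    "record, or an error result if no task exists with that ID.",
  {
    id: z
      .string()
      .regex(/^\d+$/, "Task IDs are numeric strings") // assumed format
      .describe("Task ID, e.g. '12345'; obtain IDs from a task-listing tool"),
  },
  async ({ id }) => {
    // ...fetch the task from the Follow Up Boss API here...
    return { content: [{ type: "text", text: `Task ${id}` }] };
  }
);
```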
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get a task by ID' clearly states the action (get) and the resource (task), but it's vague about scope and doesn't distinguish this tool from siblings such as 'listTasks' or other lookups like 'getPerson'. It does specify the lookup method (by ID), which adds some specificity beyond the name alone.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as 'listTasks' for retrieving multiple tasks or other 'getX' tools for different resources. The description implies the tool is for when you already have a specific task ID, but it doesn't state this explicitly or mention prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
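A sketch of what that routing guidance could look like if folded directly into the description string; 'listTasks' here is an assumed sibling tool, not one confirmed by this listing:

```typescript
// Hypothetical description that tells agents when to pick this tool.
const getTaskDescription =
  "Get one task by ID from Follow Up Boss. Use this only when you already " +
  "have a task ID; to find tasks by person, date, or status, first call a " +
  "listing tool such as listTasks and take the ID from its results.";
```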
We provide all the information about MCP servers via our MCP API.
```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/mindwear-capitian/followupboss-mcp-server'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.