
get_task

Retrieve detailed task information from Productive.io using the internal task ID, including status, dates, time tracking data, and todo counts.

Instructions

Get detailed task information by its internal ID.

Use this when you have the internal task ID (e.g., 14677418). For looking up tasks by their project-specific number (e.g., #960), use get_project_task instead.

Returns task details including:

  • Task title, description, and status (open/closed)

  • Due date, start date, and creation/update timestamps

  • Time tracking: initial estimate, remaining time, billable time, and worked time (in minutes)

  • Todo counts: total and open
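For orientation, a returned task's attributes might look like the Python dict below. This is an illustrative sketch only: the four time-tracking keys match the handler code shown later on this page, but every other field name and all sample values are hypothetical placeholders, not confirmed Productive API names.

```python
# Illustrative shape of the task attributes returned by get_task.
# Only the time-tracking keys are confirmed by the handler code below;
# all other keys and all values here are hypothetical samples.
example_attributes = {
    "title": "Fix login redirect",                    # hypothetical
    "description": "Users land on a 404 after SSO.",  # hypothetical
    "closed": False,                                  # status: open/closed
    "due_date": "2024-06-30",
    "start_date": "2024-06-01",
    "created_at": "2024-05-28T09:14:00Z",
    "updated_at": "2024-06-02T16:40:00Z",
    # Time tracking fields, all in minutes (defaulted to 0 when missing)
    "initial_estimate": 240,
    "remaining_time": 90,
    "billable_time": 120,
    "worked_time": 150,
    # Todo counts
    "todo_count": 5,       # hypothetical key name
    "open_todo_count": 2,  # hypothetical key name
}
```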

Input Schema

  • task_id — required. The unique Productive task identifier (internal ID). No default.

Output Schema

  • No output fields documented

Implementation Reference

  • tools.py:98-136 (handler)
    The core handler function that executes the get_task tool logic: fetches task data from the Productive API client, sanitizes the response with filter_response, ensures time tracking fields have default values, and handles API and unexpected errors.
    async def get_task(ctx: Context, task_id: int) -> ToolResult:
        """Fetch a single task by internal ID.
    
        Developer notes:
        - Wraps client.get_task(task_id).
        - Applies utils.filter_response to sanitize output.
        - Ensures time tracking fields are always present (initial_estimate, worked_time, billable_time, remaining_time).
        - Raises ProductiveAPIError on failure.
        """
        try:
            await ctx.info(f"Fetching task with ID: {task_id}")
            result = await client.get_task(task_id)
            await ctx.info("Successfully retrieved task")
            
            filtered = filter_response(result)
            
            # Ensure time tracking fields are always present at the top level
            if "data" in filtered and "attributes" in filtered["data"]:
                attributes = filtered["data"]["attributes"]
                
                # Set default values for time tracking fields if missing
                time_fields = {
                    "initial_estimate": 0,
                    "worked_time": 0,
                    "billable_time": 0,
                    "remaining_time": 0
                }
                
                for field, default_value in time_fields.items():
                    if field not in attributes or attributes[field] is None:
                        attributes[field] = default_value
            
            return filtered
            
        except ProductiveAPIError as e:
            await _handle_productive_api_error(ctx, e, f"task {task_id}")
        except Exception as e:
            await ctx.error(f"Unexpected error fetching task: {str(e)}")
            raise e
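The time-field normalization inside the handler can be exercised on its own. The sketch below extracts that step as a pure function so it runs without the MCP `Context` or API client; `ensure_time_fields` and `TIME_FIELDS` are names introduced here for illustration, not part of the real module.

```python
# Standalone sketch of the time-field normalization performed by get_task.
# TIME_FIELDS mirrors the defaults in the handler above; the function name
# is illustrative and does not exist in tools.py.
TIME_FIELDS = {
    "initial_estimate": 0,
    "worked_time": 0,
    "billable_time": 0,
    "remaining_time": 0,
}

def ensure_time_fields(filtered: dict) -> dict:
    """Fill missing or null time-tracking attributes with 0, in place."""
    attributes = filtered.get("data", {}).get("attributes")
    if attributes is not None:
        for field, default in TIME_FIELDS.items():
            # .get() returns None for both missing and explicitly-null keys
            if attributes.get(field) is None:
                attributes[field] = default
    return filtered

# Example: a response missing worked_time and carrying a null estimate
task = {"data": {"attributes": {"initial_estimate": None, "billable_time": 30}}}
normalized = ensure_time_fields(task)
```

Responses without a `data.attributes` section pass through untouched, matching the guard in the handler.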
  • server.py:250-269 (registration)
    MCP tool registration using @mcp.tool decorator. Delegates execution to tools.get_task and provides input schema via Annotated Field for task_id, along with comprehensive docstring describing usage and output.
    @mcp.tool
    async def get_task(
        ctx: Context,
        task_id: Annotated[
            int, Field(description="The unique Productive task identifier (internal ID)")
        ],
    ) -> Dict[str, Any]:
        """Get detailed task information by its internal ID.
    
        Use this when you have the internal task ID (e.g., 14677418).
        For looking up tasks by their project-specific number (e.g., #960), use get_project_task instead.
    
        Returns task details including:
        - Task title, description, and status (open/closed)
        - Due date, start date, and creation/update timestamps
        - Time tracking: initial estimate, remaining time, billable time, and worked time (in minutes)
        - Todo counts: total and open
        """
        return await tools.get_task(ctx=ctx, task_id=task_id)
  • Input schema definition for the tool using Pydantic Annotated and Field, specifying task_id as required integer with description.
        task_id: Annotated[
            int, Field(description="The unique Productive task identifier (internal ID)")
        ],
    ) -> Dict[str, Any]:
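The `Annotated` pattern above works because Python exposes annotation metadata at runtime, which is what lets a framework like FastMCP turn the `Field` description into a JSON schema. The sketch below demonstrates the underlying mechanism with a plain string standing in for pydantic's `Field`, so it needs only the standard library; the function here is a stub, not the real tool.

```python
# Sketch of the introspection mechanism behind Annotated-based schemas:
# metadata attached via typing.Annotated can be recovered at runtime.
# A plain string stands in for pydantic's Field object here.
from typing import Annotated, get_args, get_type_hints

DESCRIPTION = "The unique Productive task identifier (internal ID)"

async def get_task(task_id: Annotated[int, DESCRIPTION]) -> dict:
    ...  # stub; the real tool delegates to tools.get_task

# include_extras=True preserves the Annotated wrapper
hints = get_type_hints(get_task, include_extras=True)
base_type, metadata = get_args(hints["task_id"])
# base_type is the declared int; metadata is the attached description
```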
  • Helper method in ProductiveClient that performs the actual HTTP GET request to the Productive API /tasks/{task_id} endpoint, including workflow_status in the response.
    async def get_task(self, task_id: int) -> Dict[str, Any]:
        """Get task by ID with workflow_status always included"""
        return await self._request("GET", f"/tasks/{str(task_id)}", params={"include": "workflow_status"})
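How `_request` builds the final URL is not shown on this page; the sketch below is a hypothetical reconstruction of just the URL composition for the call above. `build_url` and `API_BASE` are illustrative names, and the base URL is an assumption, not confirmed by the source.

```python
# Hypothetical sketch of the URL composed by the GET above:
# /tasks/{task_id} with include=workflow_status as a query parameter.
# build_url and API_BASE are illustrative, not part of the real client.
from urllib.parse import urlencode

API_BASE = "https://api.productive.io/api/v2"  # assumed base URL

def build_url(path, params=None):
    """Join the API base, a path, and optional query parameters."""
    url = f"{API_BASE}{path}"
    if params:
        url += "?" + urlencode(params)
    return url

url = build_url(f"/tasks/{14677418}", {"include": "workflow_status"})
# → "https://api.productive.io/api/v2/tasks/14677418?include=workflow_status"
```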

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by specifying it's a read operation ('Get detailed task information'), listing comprehensive return data, and clarifying the ID format requirement. It doesn't mention error cases, permissions, or rate limits, but provides substantial behavioral context for a read-only tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with zero waste: first sentence states purpose, second provides usage guidance with alternative, third introduces return values, and bullet points efficiently list data categories. Every sentence earns its place and information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which covers return values), 1 parameter with 100% schema coverage, and no annotations, the description is complete: it explains purpose, usage guidelines, parameter context, and provides a helpful overview of return data categories. No significant gaps remain for this simple read operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by clarifying the parameter semantics: it explains what 'task_id' represents ('internal task ID'), provides an example format (14677418), and distinguishes it from project-specific numbers. This enhances understanding beyond the schema's basic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detailed task information') and resource ('by its internal ID'), distinguishing it from sibling tools like get_project_task which uses project-specific numbers. It provides a concrete example of the ID format (14677418), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool ('when you have the internal task ID') and when to use an alternative ('For looking up tasks by their project-specific number, use get_project_task instead'). This provides clear guidance on tool selection based on available identifiers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/druellan/Productive-GET-MCP'
