get_task_result

Retrieve task results from A2A agents, including status, messages, and artifacts, by providing the task ID and optional history length.

Instructions

Retrieve the result of a task from an A2A agent.

Args:
    task_id: ID of the task to retrieve
    history_length: Optional number of history items to include (null for all)

Returns:
    Task result including status, message, and artifacts, if available
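
As a hedged illustration of the shapes involved, the arguments and a possible successful response might look like the following. The sample values (task IDs, session IDs, artifact names) are hypothetical, not from a real server:

```python
# Hypothetical example: arguments for get_task_result and the shape of a
# successful response. Values are illustrative only.
request_args = {
    "task_id": "task-1234",   # required: ID returned by a prior task submission
    "history_length": 5,      # optional: None means "include all history"
}

# A successful response is a plain dict; keys beyond "status" and "task_id"
# appear only when the agent supplies the corresponding data.
sample_response = {
    "status": "success",
    "task_id": "task-1234",
    "session_id": "sess-abcd",
    "state": "completed",
    "message": "Summary of results...",
    "artifacts": [
        {"name": "report", "contents": [{"type": "text", "text": "..."}]}
    ],
}
```

On failure (unknown task ID or a transport error), the handler instead returns a dict with `"status": "error"` and a `"message"` field, as shown in the implementation below.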

Input Schema

Name            Required  Description                                         Default
task_id         Yes       ID of the task to retrieve                          (none)
history_length  No        Number of history items to include (null for all)   null

Output Schema

No output fields are declared; results are returned as a free-form object.

Implementation Reference

  • This is the main handler function for the 'get_task_result' MCP tool. It is decorated with @mcp.tool(), making it the registered tool implementation. The function looks up the agent URL for the given task_id from a global task_agent_mapping, creates an A2AClient, calls client.get_task() to fetch the result from the A2A agent, and parses the response to extract status, message, artifacts, and history into a dictionary response.
    @mcp.tool()
    async def get_task_result(
        task_id: str,
        history_length: Optional[int] = None,
        ctx: Context = None,
    ) -> Dict[str, Any]:
        """
        Retrieve the result of a task from an A2A agent.
        
        Args:
            task_id: ID of the task to retrieve
            history_length: Optional number of history items to include (null for all)
            
        Returns:
            Task result including status, message, and artifacts if available
        """
        if task_id not in task_agent_mapping:
            return {
                "status": "error",
                "message": f"Task ID not found: {task_id}",
            }
        
        agent_url = task_agent_mapping[task_id]
        
        # Create a client for the agent
        client = A2AClient(url=agent_url)
        
        try:
            # Create the request payload
            payload = {
                "id": task_id,
                "historyLength": history_length
            }
            
            if ctx:
                await ctx.info(f"Retrieving task result for task_id: {task_id}")
            
            # Send the get task request
            result = await client.get_task(payload)
            
            # Debug: Print the raw response for analysis
            if ctx:
                await ctx.info(f"Raw task result: {result}")
                
            # Create a response dictionary with as much info as we can extract
            response = {
                "status": "success",
                "task_id": task_id,
            }
            
            # Try to extract task data
            try:
                if hasattr(result, "result"):
                    task = result.result
                    
                    # Add basic task info
                    if hasattr(task, "sessionId"):
                        response["session_id"] = task.sessionId
                    else:
                        response["session_id"] = None
                    
                    # Add task status
                    if hasattr(task, "status"):
                        status = task.status
                        if hasattr(status, "state"):
                            response["state"] = status.state
                        
                        # Extract message from status
                        if hasattr(status, "message") and status.message:
                            response_text = ""
                            for part in status.message.parts:
                                if part.type == "text":
                                    response_text += part.text
                            if response_text:
                                response["message"] = response_text
                    
                    # Extract artifacts
                    if hasattr(task, "artifacts") and task.artifacts:
                        artifacts_data = []
                        for artifact in task.artifacts:
                            artifact_data = {
                                "name": artifact.name if hasattr(artifact, "name") else "unnamed_artifact",
                                "contents": [],
                            }
                            
                            for part in artifact.parts:
                                if part.type == "text":
                                    artifact_data["contents"].append({
                                        "type": "text",
                                        "text": part.text,
                                    })
                                elif part.type == "data":
                                    artifact_data["contents"].append({
                                        "type": "data",
                                        "data": part.data,
                                    })
                            
                            artifacts_data.append(artifact_data)
                        
                        response["artifacts"] = artifacts_data
                    
                    # Extract message history if available
                    if hasattr(task, "history") and task.history:
                        history_data = []
                        for message in task.history:
                            message_data = {
                                "role": message.role,
                                "parts": [],
                            }
                            
                            for part in message.parts:
                                if part.type == "text":
                                    message_data["parts"].append({
                                        "type": "text",
                                        "text": part.text,
                                    })
                                elif part.type == "data":
                                    message_data["parts"].append({
                                        "type": "data",
                                        "data": part.data,
                                    })
                            
                            history_data.append(message_data)
                        
                        response["history"] = history_data
                else:
                    response["error"] = "No result in response"
                    
            except Exception as e:
                response["parsing_error"] = f"Error parsing task result: {str(e)}"
                
            return response
        except Exception as e:
            return {
                "status": "error",
                "message": f"Error retrieving task result: {str(e)}",
            }
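
The status-extraction branch above can be sketched as a standalone helper. This is a simplified reconstruction for illustration, using `types.SimpleNamespace` to stand in for the A2A response objects; it is not the library's own API:

```python
from types import SimpleNamespace
from typing import Any, Dict


def parse_task_status(task: Any) -> Dict[str, Any]:
    """Extract the state and concatenated text message from a task-like
    object, mirroring the status-handling branch of get_task_result."""
    out: Dict[str, Any] = {}
    status = getattr(task, "status", None)
    if status is not None:
        if hasattr(status, "state"):
            out["state"] = status.state
        message = getattr(status, "message", None)
        if message:
            # Concatenate only the text parts, as the handler does.
            text = "".join(p.text for p in message.parts if p.type == "text")
            if text:
                out["message"] = text
    return out


# Usage with stand-in objects:
part = SimpleNamespace(type="text", text="done")
task = SimpleNamespace(
    status=SimpleNamespace(state="completed",
                           message=SimpleNamespace(parts=[part]))
)
print(parse_task_status(task))  # {'state': 'completed', 'message': 'done'}
```

The defensive `hasattr`/`getattr` style means a task object missing any of these attributes simply yields an empty or partial dict rather than raising.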
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions what the tool returns (status, message, artifacts) but doesn't cover important aspects like error handling (what happens with invalid task_id), authentication requirements, rate limits, or whether this is a read-only operation. For a tool that retrieves potentially sensitive task results, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise. It opens with a clear purpose statement, then provides organized sections for Args and Returns with bullet-like formatting. Every sentence earns its place by adding essential information without redundancy or fluff. The information is front-loaded with the core purpose first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but has output schema), the description does well. The output schema existence means it doesn't need to detail return values, and the description covers parameter meanings adequately. However, for a task result retrieval tool in an A2A agent context, it could benefit from mentioning typical use cases or integration patterns with sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description provides meaningful context for both parameters beyond the schema, which supplies no descriptions of its own. It explains that 'task_id' identifies which task to retrieve and that 'history_length' controls how many history items to include (with null meaning all). This adds crucial semantic understanding, though it doesn't specify format constraints for task_id or valid ranges for history_length.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
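
To make the null-means-all convention concrete, here is a hedged sketch of how the handler constructs the A2A request payload. The `historyLength` field name is taken from the implementation above; the helper function itself is illustrative, not part of the real code:

```python
from typing import Any, Dict, Optional


def build_get_task_payload(task_id: str,
                           history_length: Optional[int] = None) -> Dict[str, Any]:
    """Build the payload passed to client.get_task(), as in the handler above.
    history_length=None is forwarded as null, which the agent treats as
    "include all history items"."""
    return {"id": task_id, "historyLength": history_length}


print(build_get_task_payload("task-1234"))     # {'id': 'task-1234', 'historyLength': None}
print(build_get_task_payload("task-1234", 3))  # {'id': 'task-1234', 'historyLength': 3}
```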

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Retrieve the result of a task') and resource ('from an A2A agent'), making the purpose immediately understandable. It distinguishes this tool from siblings like 'cancel_task' or 'send_message' by focusing on result retrieval rather than task management or communication. However, it doesn't explicitly contrast with potential alternatives for getting task information, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you need to check the outcome of a previously submitted task, but provides no explicit guidance on when to use this versus alternatives. There's no mention of prerequisites (e.g., needing a valid task_id from a prior operation) or when not to use it. The context is clear but lacks specific usage rules or comparisons with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

