
QPanda3 Runtime MCP Server

by OriginQ

get_task_results_tool

Retrieve quantum computing task results from the QPanda3 Runtime MCP Server. Use this tool to access measurement outcomes or expectation values after task completion.

Instructions

Get the computation results of a completed task.

Retrieve the results from a completed quantum computing task. The task must be in DONE status to retrieve results.

Args:
    task_id: The ID of the completed task.

Returns:

For sampling tasks:
- status: "success", "pending", or "error"
- task_id: The task ID
- task_status: Current status
- results: List of measurement outcomes, each containing:
  - key: List of measurement outcomes (hex format, e.g., "0x0", "0x3")
  - value: Corresponding probabilities or counts
- message: Status message

For estimation tasks:
- status: "success", "pending", or "error"
- task_id: The task ID
- task_status: Current status
- results: Expectation value(s)
- message: Status message
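For concreteness, results of the two shapes described above might look like the following. The values, hex keys, and message text here are illustrative only and will depend on the submitted circuit.

    # Illustrative sampling-task payload (values are made up)
    sampling_result = {
        "status": "success",
        "task_id": "task_12345",
        "task_status": "DONE",
        "results": [{"key": ["0x0", "0x3"], "value": [0.52, 0.48]}],
        "message": "Task completed",
    }

    # Illustrative estimation-task payload
    estimation_result = {
        "status": "success",
        "task_id": "task_12345",
        "task_status": "DONE",
        "results": [-1.023],
        "message": "Task completed",
    }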

Example:

    # First check status
    status = get_task_status_tool("task_12345")
    if status["task_status"] == "DONE":
        # Then get results
        results = get_task_results_tool("task_12345")
        print(f"Measurement outcomes: {results['results']}")
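For longer-running tasks, the status check and the result fetch can be wrapped in a small polling helper. This is a sketch only: it assumes the two MCP tool calls are available as ordinary Python callables returning the dictionaries described above, and the wait_for_results name, poll interval, and timeout are illustrative rather than part of the server.

    import time

    def wait_for_results(task_id, poll_interval=2.0, timeout=300.0):
        # Poll the task status until it reaches DONE, then fetch the results.
        # Assumes get_task_status_tool / get_task_results_tool return
        # dictionaries shaped as documented above.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            status = get_task_status_tool(task_id)
            if status["task_status"] == "DONE":
                return get_task_results_tool(task_id)
            time.sleep(poll_interval)
        raise TimeoutError(f"Task {task_id} did not finish within {timeout} seconds")

    results = wait_for_results("task_12345")
    if results["status"] == "success":
        print(results["results"])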

Input Schema

Name     Required  Description  Default
task_id  Yes       -            -
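Expressed as a JSON-Schema-style structure (a sketch inferred from the table above, not the schema actually published by the server), the input might be described as:

    # Assumed shape of the input schema; inferred for illustration only
    input_schema = {
        "type": "object",
        "properties": {"task_id": {"type": "string"}},
        "required": ["task_id"],
    }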

Output Schema

No output fields documented.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool only works for completed tasks (DONE status), which is a key behavioral constraint. It also describes return formats for different task types, though it doesn't explicitly mention error handling, rate limits, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with the core purpose, but includes extensive return value documentation that duplicates what an output schema would provide. The example is helpful but lengthy. Some sentences could be more concise while maintaining clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving quantum computation results) and the presence of an output schema (which handles return values), the description is reasonably complete. It covers purpose, prerequisites, and usage context well, though it could benefit from more behavioral details like error cases or performance characteristics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains that 'task_id' refers to 'The ID of the completed task,' adding semantic context beyond the schema's type information. However, it doesn't specify format constraints (e.g., length, pattern) or provide examples of valid task IDs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the computation results') and resource ('of a completed task'), distinguishing it from sibling tools like get_task_status_tool (which checks status) and cancel_task_tool (which cancels tasks). The verb 'retrieve' reinforces the purpose without being tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('The task must be in DONE status to retrieve results') and provides an example showing usage after checking status with get_task_status_tool. This clearly differentiates it from alternatives and establishes prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
