Code Ocean MCP Server

Official
by codeocean

wait_until_completed

Monitor Code Ocean computations until they finish or fail, with adjustable polling intervals and timeout controls for reliable workflow management.

Instructions

Poll a computation until it reaches 'Completed' or 'Failed' state with configurable timing.

Args:
- computation: The computation object to monitor
- polling_interval: Time between status checks in seconds (minimum 5 seconds)
- timeout: Maximum time to wait in seconds, or None for no timeout

Returns: Updated computation object once completed or failed

Raises:
- ValueError: If polling_interval < 5 or timeout constraints are violated
- TimeoutError: If the computation doesn't complete within the timeout period
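To make the documented behavior concrete, here is a minimal sketch of the polling loop the description implies. This is an assumption, not the actual Code Ocean MCP implementation: the `get_computation` callable, the state strings, and the function signature are all hypothetical stand-ins for whatever the server uses internally.

```python
import time

# States at which polling stops, per the description above.
TERMINAL_STATES = {"completed", "failed"}

def wait_until_completed(get_computation, computation_id,
                         polling_interval=30, timeout=None):
    """Poll a computation until it reaches a terminal state or the timeout expires.

    Hypothetical sketch: `get_computation` stands in for whatever client call
    fetches the current computation object by id.
    """
    if polling_interval < 5:
        raise ValueError("polling_interval must be at least 5 seconds")
    if timeout is not None and timeout < polling_interval:
        raise ValueError("timeout must be at least one polling interval")

    deadline = None if timeout is None else time.monotonic() + timeout
    while True:
        computation = get_computation(computation_id)
        if computation["state"].lower() in TERMINAL_STATES:
            return computation  # updated object, whether Completed or Failed
        if deadline is not None and time.monotonic() >= deadline:
            raise TimeoutError(
                f"computation {computation_id} did not finish within {timeout}s"
            )
        time.sleep(polling_interval)
```

Note that the loop checks the state before checking the deadline, so a computation that finishes exactly at the timeout is still returned rather than raising.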

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| computation_id | Yes | | |
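A tool call matching this input schema needs only the single required field. The id value below is a placeholder, not a real computation:

```python
# Hypothetical MCP tool-call arguments for wait_until_completed.
# "comp-123" is a placeholder; a real call would pass an actual computation id.
tool_call = {
    "name": "wait_until_completed",
    "arguments": {"computation_id": "comp-123"},
}
```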

Output Schema

| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | | |
| name | Yes | | |
| state | Yes | | |
| created | Yes | | |
| run_time | Yes | | |
| exit_code | No | | |
| processes | No | | |
| end_status | No | | |
| parameters | No | | |
| data_assets | No | | |
| has_results | No | | |
| nextflow_profile | No | | |
| cloud_workstation | No | | |
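For illustration, a returned computation object conforming to the schema above might look like the following. Every value is hypothetical; only the field names and their required/optional status come from the schema, and the timestamp and run-time representations are assumptions:

```python
# Hypothetical example of a computation object returned once polling ends.
# Values are illustrative only; field names follow the output schema above.
completed_computation = {
    "id": "comp-123",          # placeholder id
    "name": "Run 42",          # placeholder name
    "state": "completed",
    "created": 1717000000,     # assumed to be a Unix timestamp
    "run_time": 342,           # assumed to be seconds
    "exit_code": 0,
    "has_results": True,
}

# The five required fields must always be present.
REQUIRED_FIELDS = {"id", "name", "state", "created", "run_time"}
assert REQUIRED_FIELDS <= completed_computation.keys()
```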
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and comprehensively discloses behavioral traits. It describes polling behavior with configurable timing, terminal states ('Completed' or 'Failed'), error conditions (ValueError, TimeoutError), and return values (updated computation object). This covers mutation implications, timing constraints, and error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose in the first sentence. Each subsequent section (Args, Returns, Raises) is concise and adds necessary detail without redundancy. Every sentence earns its place by clarifying behavior, parameters, or outcomes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (polling with timing constraints), no annotations, and an output schema (implied by 'Returns' section), the description is complete. It covers purpose, parameters with semantics, behavioral details, return values, and error conditions, providing all needed context for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage with only one parameter ('computation_id'), but the description adds significant semantic value. It explains that 'computation' is the object to monitor, details 'polling_interval' with minimum constraints, and clarifies 'timeout' behavior (including 'None' for no timeout). This fully compensates for the schema gap and provides essential usage context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ("Poll a computation until it reaches 'Completed' or 'Failed' state") and distinguishes it from siblings like 'wait_until_ready' by specifying the terminal states. It uses precise verbs ('poll', 'monitor') and identifies the resource ('computation object').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating that the tool monitors computations until completion or failure, but it does not explicitly say when to prefer alternatives such as 'wait_until_ready' or other sibling tools. It provides clear functional intent without explicit exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/codeocean/codeocean-mcp-server'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.