
Code Ocean MCP Server

Official
by codeocean

wait_until_ready

Monitor a data asset's status until it becomes ready or fails, with configurable polling intervals and optional timeout settings.

Instructions

Poll a data asset until it reaches 'Ready' or 'Failed' state with configurable timing.

Args:
- data_asset: The data asset object to monitor.
- polling_interval: Time between status checks in seconds (minimum 5 seconds).
- timeout: Maximum time to wait in seconds, or None for no timeout.

Returns: The updated data asset object once it is ready or failed.

Raises:
- ValueError: If polling_interval < 5 or timeout constraints are violated.
- TimeoutError: If the data asset doesn't become ready within the timeout period.

Poll until the specified data asset becomes ready before performing further operations (e.g., downloading files). You can set polling_interval and an optional timeout.
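The polling contract described above can be sketched as a plain loop. This is a minimal illustration of the documented semantics, not the server's actual implementation; `get_state` is a hypothetical callable standing in for a status check on the data asset:

```python
import time


def wait_until_ready(get_state, polling_interval=5, timeout=None):
    """Poll get_state() until it returns 'Ready' or 'Failed'.

    Mirrors the documented behavior: a minimum 5-second interval,
    an optional timeout, and termination on either terminal state.
    """
    if polling_interval < 5:
        raise ValueError("polling_interval must be at least 5 seconds")
    if timeout is not None and timeout <= 0:
        raise ValueError("timeout must be positive")

    deadline = None if timeout is None else time.monotonic() + timeout
    while True:
        state = get_state()
        if state in ("Ready", "Failed"):
            return state
        if deadline is not None and time.monotonic() >= deadline:
            raise TimeoutError("data asset did not reach a terminal state in time")
        time.sleep(polling_interval)
```

In practice the real tool returns the full updated data asset object rather than a bare state string; the loop structure and error conditions are the point here.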

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| data_asset | Yes | | |
| polling_interval | No | | |
| timeout | No | | |
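A call to this tool might supply arguments like the following. This is a sketch only; the field values, including the shape of `data_asset`, are hypothetical placeholders, since the schema above does not document them:

```python
import json

# Hypothetical tool-call arguments; only data_asset is required.
arguments = {
    "data_asset": {"id": "d-1234", "name": "my-dataset"},  # object to monitor
    "polling_interval": 10,  # seconds between checks (minimum 5)
    "timeout": 600,          # give up after 10 minutes; omit for no timeout
}

payload = json.dumps(arguments)
```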

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| id | Yes | | |
| name | Yes | | |
| size | No | | |
| tags | No | | |
| type | Yes | | |
| files | No | | |
| mount | Yes | | |
| state | Yes | | |
| created | Yes | | |
| last_used | Yes | | |
| provenance | No | | |
| description | No | | |
| source_bucket | No | | |
| app_parameters | No | | |
| failure_reason | No | | |
| transfer_error | No | | |
| custom_metadata | No | | |
| last_transferred | No | | |
| nextflow_profile | No | | |
| contained_data_assets | No | | |
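For illustration, a returned data asset satisfying the required fields above might look like the following. Only the field names mirror the output schema; every value is invented for the example:

```python
# Hypothetical data asset object; values are placeholders,
# not real Code Ocean identifiers or timestamps.
example = {
    "id": "d-1234",
    "name": "my-dataset",
    "mount": "my-dataset",
    "type": "dataset",
    "state": "ready",
    "created": 1700000000,
    "last_used": 1700000500,
}

# The fields the output schema marks as required.
required = {"id", "name", "mount", "type", "state", "created", "last_used"}
```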
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's polling behavior, configurable timing, termination conditions (ready/failed states, timeout), and error cases (ValueError, TimeoutError). However, it does not mention side effects, rate limits, or authentication requirements, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with a clear summary, followed by structured sections (Args, Returns, Raises) and a usage note. While efficient, the final sentence ('Poll until...') slightly repeats information from the opening, making it marginally less concise than ideal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (polling with configurable timing), no annotations, and an output schema present, the description is complete. It covers the tool's purpose, parameters, return values, error conditions, and usage context, providing all necessary information for an agent to understand and invoke the tool correctly without needing to infer missing details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% schema description coverage. It explains the purpose of each parameter ('data_asset: The data asset object to monitor'), provides constraints ('polling_interval: minimum 5 seconds'), clarifies optionality ('timeout: or None for no timeout'), and contextualizes their roles in the polling process, fully compensating for the schema's lack of descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('poll', 'monitor') and resource ('data asset'), and distinguishes it from siblings by focusing on waiting for state transitions rather than creation, retrieval, or modification. It explicitly mentions the target states ('Ready' or 'Failed'), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('before performing further operations (e.g., downloading files)'), but does not explicitly mention when not to use it or name specific alternatives. It implies usage for asynchronous operations but lacks explicit exclusions or comparisons to sibling tools like 'wait_until_completed'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
