Glama

run_grid

Execute BitScale workflows by adding data rows and triggering automated column enrichments to process and return structured outputs.

Instructions

Run a BitScale Grid by appending a new row with the given inputs and triggering all column enrichments.

This is the primary tool for executing BitScale workflows. It adds a row to the grid, runs all enrichment/formula/merge columns, and returns the enriched outputs.

IMPORTANT — inputs vs output_columns use DIFFERENT key formats:

  • 'inputs' uses human-readable LABELS (e.g. "company_name", "website") — these are NOT UUIDs. The labels are derived from the source columns configured on the grid's BitScale API data source. You can find the exact labels in the BitScale app by clicking the Data Source column, selecting the BitScale API source, and looking at the input fields.

  • 'output_columns' uses column UUIDs from get_grid_details to filter which output columns to return.

Before calling this, use get_grid_details to understand the grid schema. To discover the exact input labels, check the grid's API data source panel in the BitScale app, or look at the source column configuration.
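Because confusing the two key formats is the most likely first-call mistake, a caller can sanity-check its inputs before invoking the tool. A minimal sketch (the helper name `check_input_keys` is hypothetical, not part of the server):

```python
import re

# UUIDs follow the 8-4-4-4-12 hex pattern; input labels should not.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I
)

def check_input_keys(inputs: dict[str, str]) -> list[str]:
    """Return any keys that look like column UUIDs — a sign the caller
    confused 'inputs' (labels) with 'output_columns' (UUIDs)."""
    return [k for k in inputs if UUID_RE.match(k)]

# Label-style keys pass; a UUID key would be flagged.
suspect = check_input_keys({"company_name": "Acme Corp", "website": "acme.com"})
```

An empty `suspect` list means the keys look like labels, as this tool expects.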

Args:

  • grid_id: UUID of the grid to run. Found in the grid URL or via list_grids.

  • inputs: Key-value map of input LABELS to their values. These are human-readable keys like "company_name", "website", "email" — NOT column UUIDs. Example: {"company_name": "Acme Corp", "website": "acme.com"}

  • mode: Execution mode — "sync" (default) or "async". sync waits up to 120 seconds for completion and returns outputs directly; if still processing, it returns a request_id to poll with get_run_status. async returns a request_id immediately; poll get_run_status for results.

  • output_columns: Optional list of column UUIDs to include in the response. Use the column 'id' values from get_grid_details. If omitted, all enriched columns are returned.

  • source_id: Optional UUID of a specific BitScale API data source on the grid. If omitted, the first available source is used.

Returns:

  • sync completed: {mode, status: "completed", outputs: {column_uuid: {value, name}}}

  • sync timeout or async: {mode, status: "running", request_id, poll_url}

The outputs object keys are column UUIDs, each containing {value, name} where 'name' is the human-readable column display name.
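Since outputs are keyed by opaque column UUIDs, a client typically re-keys them by display name before use. A small sketch against the documented {value, name} shape (the sample UUID key is made up):

```python
def outputs_by_name(outputs: dict) -> dict:
    """Re-key a {column_uuid: {value, name}} map by the
    human-readable column display name."""
    return {col["name"]: col["value"] for col in outputs.values()}

# Hypothetical response fragment matching the documented shape.
sample = {
    "6f1c9b2e-0000-0000-0000-000000000000": {"value": "acme.com", "name": "Website"},
}
by_name = outputs_by_name(sample)
```

Here `by_name["Website"]` yields the enriched value without the caller tracking UUIDs.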

If status is "running", use get_run_status with the returned request_id to poll for completion (every 2-5 seconds).

Input Schema

Name            Required  Default
grid_id         Yes
inputs          Yes
mode            No        sync
output_columns  No
source_id       No

Output Schema

Name    Required
result  Yes

Implementation Reference

  • main.py:159-234 (handler)
    The 'run_grid' tool implementation, which includes the @mcp.tool decorator and the function logic to run a BitScale grid.
    @mcp.tool()
    def run_grid(
        grid_id: str,
        inputs: dict[str, str],
        mode: str = "sync",
        output_columns: list[str] | None = None,
        source_id: str | None = None,
    ) -> str:
        """
        Run a BitScale Grid by appending a new row with the given inputs and
        triggering all column enrichments.
    
        This is the primary tool for executing BitScale workflows. It adds a row
        to the grid, runs all enrichment/formula/merge columns, and returns the
        enriched outputs.
    
        IMPORTANT — inputs vs output_columns use DIFFERENT key formats:
        - 'inputs' uses human-readable LABELS (e.g. "company_name", "website")
          — these are NOT UUIDs. The labels are derived from the source columns
          configured on the grid's BitScale API data source. You can find the
          exact labels in the BitScale app by clicking the Data Source column,
          selecting the BitScale API source, and looking at the input fields.
        - 'output_columns' uses column UUIDs from get_grid_details to filter
          which output columns to return.
    
        Before calling this, use get_grid_details to understand the grid schema.
        To discover the exact input labels, check the grid's API data source
        panel in the BitScale app, or look at the source column configuration.
    
        Args:
            grid_id: UUID of the grid to run. Found in grid URL or list_grids.
            inputs:  Key-value map of input LABELS to their values. These are
                     human-readable keys like "company_name", "website", "email"
                     — NOT column UUIDs.
                     Example: {"company_name": "Acme Corp", "website": "acme.com"}
            mode:    Execution mode — "sync" (default) or "async".
                     - sync: waits up to 120 seconds for completion, returns
                       outputs directly. If still processing, returns a
                       request_id to poll with get_run_status.
                     - async: returns a request_id immediately. Poll
                       get_run_status for results.
            output_columns: Optional list of column UUIDs to include in the
                            response. Use the column 'id' values from
                            get_grid_details. If omitted, all enriched columns
                            are returned.
            source_id: Optional UUID of a specific BitScale API data source on
                       the grid. If omitted, the first available source is used.
    
        Returns:
        - sync completed: {mode, status: "completed", outputs: {column_uuid: {value, name}}}
        - sync timeout or async: {mode, status: "running", request_id, poll_url}
    
        The outputs object keys are column UUIDs, each containing {value, name}
        where 'name' is the human-readable column display name.
    
        If status is "running", use get_run_status with the returned request_id
        to poll for completion (every 2-5 seconds).
        """
        if not grid_id:
            raise ValueError("grid_id must not be empty")
        if not inputs:
            raise ValueError("inputs must not be empty — provide at least one input column key-value pair")
    
        body: dict = {
            "mode": mode,
            "inputs": inputs,
        }
        if output_columns:
            body["output_columns"] = output_columns
        if source_id:
            body["source_id"] = source_id
    
        # Sync mode can take up to 120s; use a 135s timeout to avoid premature client timeout
        timeout = 135 if mode == "sync" else 30
        data = _post(f"/grids/{grid_id}/run", body, timeout=timeout)
        return json.dumps(data, indent=2)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It successfully explains the mutation behavior (appends row), the 120-second sync timeout threshold, the label vs UUID key format distinction, and polling mechanics. Minor gap: no mention of error handling or rate limiting behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Structured effectively with explicit 'Args' and 'Returns' sections and an 'IMPORTANT' header for the key format distinction. Length is justified by zero schema coverage and operational complexity, though the Returns section could be slightly condensed given the existence of an output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex workflow execution tool with 5 parameters (including nested objects), sync/async modes, and prerequisite discovery steps, the description provides comprehensive coverage including polling instructions and output format details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description fully compensates by documenting all 5 parameters with semantic meaning—most notably the critical distinction that 'inputs' uses human-readable labels while 'output_columns' uses UUIDs, with examples provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb+resource pattern ('Run a BitScale Grid') and explicitly positions this as 'the primary tool for executing BitScale workflows'—clearly distinguishing it from sibling introspection tools like get_grid_details and get_run_status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit prerequisite guidance ('Before calling this, use get_grid_details') and directs users to specific sibling tools for schema discovery and status polling. The sync vs async mode explanation provides clear when-to-use guidance for different execution patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
