Glama

get_grid_details

Retrieve comprehensive metadata for a specific grid, including column definitions, settings, and data sources. Use this to understand grid schemas before execution, identifying input labels and output column UUIDs required for running grids.

Instructions

Get full metadata for a specific Grid, including all column definitions, grid settings, and attached data sources.

Use this to inspect a grid's schema before running it — especially to understand the grid's input labels and output column UUIDs needed for the run_grid tool.

Args: grid_id: UUID of the grid. Found in the grid URL at app.bitscale.ai/grid/{gridId}, or from list_grids results.

Returns: grid id, name, description, row_count, created_at, updated_at, settings (auto_run, auto_dedupe, visibility, dedupe_column_id), columns (all columns including text, enrichment, formula, merge types with their id/key and name), and sources (data sources with schedule info).

NOTE on columns vs run_grid inputs:

  • The column 'id' values here are UUIDs — use these for the 'output_columns' parameter of run_grid to filter which outputs you want.

  • The 'inputs' parameter of run_grid uses human-readable LABELS (e.g. "company_name", "website"), NOT column UUIDs. These labels are derived from the API data source columns configured on the grid. You can find the exact input labels in the BitScale app under the grid's Data Source → BitScale API panel, or by inspecting the source column names.
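To make the UUID-vs-label distinction concrete, here is a sketch of how an agent might prepare a run_grid call from this tool's output. The response shape follows the Returns description above; the column IDs, names, and input labels are made up for illustration:

```python
# Hypothetical get_grid_details response (only the fields used here are shown;
# IDs and names are invented for this example).
details = {
    "id": "11111111-1111-1111-1111-111111111111",
    "columns": [
        {"id": "22222222-2222-2222-2222-222222222222", "name": "Company Name"},
        {"id": "33333333-3333-3333-3333-333333333333", "name": "Enriched Summary"},
    ],
}

# Output filtering uses the column UUIDs found in columns[].id...
output_columns = [
    c["id"] for c in details["columns"] if c["name"] == "Enriched Summary"
]

# ...while inputs use the human-readable labels configured on the grid's
# API data source (these labels are NOT in `columns`; check the BitScale app).
inputs = {"company_name": "Acme Corp", "website": "acme.example"}

run_grid_args = {
    "grid_id": details["id"],
    "inputs": inputs,
    "output_columns": output_columns,
}
```

Note that `output_columns` ends up holding UUIDs while `inputs` is keyed by labels — mixing the two up is the most likely first-call failure mode.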

Input Schema

Name     Required  Description  Default
grid_id  Yes

Output Schema

Name    Required  Description  Default
result  Yes

Implementation Reference

  • main.py:124-156 (handler)
    The get_grid_details function is the handler that fetches grid metadata from the /grids/{grid_id} endpoint. Its MCP tool decorator is not visible in the snippet at lines 124-156, but is implied by the surrounding code pattern; the lines shown are the core implementation.
    # `json` and the `_get` request helper are imported/defined earlier in main.py.
    def get_grid_details(grid_id: str) -> str:
        """
        Get full metadata for a specific Grid, including all column definitions,
        grid settings, and attached data sources.
    
        Use this to inspect a grid's schema before running it — especially to
        understand the grid's input labels and output column UUIDs needed for
        the run_grid tool.
    
        Args:
            grid_id: UUID of the grid. Found in the grid URL at
                     app.bitscale.ai/grid/{gridId}, or from list_grids results.
    
        Returns: grid id, name, description, row_count, created_at, updated_at,
        settings (auto_run, auto_dedupe, visibility, dedupe_column_id),
        columns (all columns including text, enrichment, formula, merge types
        with their id/key and name), and sources (data sources with schedule info).
    
        NOTE on columns vs run_grid inputs:
        - The column 'id' values here are UUIDs — use these for the
          'output_columns' parameter of run_grid to filter which outputs
          you want.
        - The 'inputs' parameter of run_grid uses human-readable LABELS
          (e.g. "company_name", "website"), NOT column UUIDs. These labels
          are derived from the API data source columns configured on the
          grid. You can find the exact input labels in the BitScale app
          under the grid's Data Source → BitScale API panel, or by
          inspecting the source column names.
        """
        if not grid_id:
            raise ValueError("grid_id must not be empty")
        data = _get(f"/grids/{grid_id}")
        return json.dumps(data, indent=2)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Details return structure including nested objects (settings, columns, sources) and explains critical semantic distinction between column UUIDs (for output_columns) vs input labels (for inputs parameter). Could improve by explicitly stating read-only nature or caching behavior, but explains data semantics thoroughly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose statement, usage context, Args documentation, Returns documentation, and NOTE clarification. Every section adds distinct value. Front-loaded with actionable purpose. NOTE section is essential domain knowledge, not verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite output schema existing, description provides necessary semantic interpretation (UUID vs label mapping) that structured schema cannot convey. Explains complete workflow from grid_id discovery (URL/list_grids) through inspection to run_grid execution. Addresses single parameter with sourcing instructions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (grid_id has no description). Description fully compensates by clarifying grid_id is a 'UUID', providing extraction pattern 'app.bitscale.ai/grid/{gridId}', and referencing sibling list_grids as discovery alternative.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb 'Get' and resource 'Grid', clearly stating scope includes 'column definitions, grid settings, and attached data sources'. Distinguishes from sibling list_grids by specifying 'specific Grid' (single item vs list) and explicitly references run_grid workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'inspect a grid's schema before running it'. Names sibling tool run_grid as the follow-up action and explains prerequisite relationship. Provides specific use case: 'especially to understand the grid's input labels and output column UUIDs needed for the run_grid tool'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
