Glama
marekrost

mcp-server-spreadsheet

read_cell

Extract a single cell's value from spreadsheet files. Returns numbers as integers/floats, text as strings, and empty cells as null.

Instructions

Read the value of a single cell.

Returns the cell's value: numbers as int/float, text as string, and empty cells as null.

Input Schema

| Name  | Required | Description                                         | Default |
| ----- | -------- | --------------------------------------------------- | ------- |
| file  | Yes      | Path to the spreadsheet file                        |         |
| cell  | Yes      | Cell reference in A1 notation, e.g. 'B3' or '$B$3'  |         |
| sheet | No       | Sheet name. Defaults to the first sheet if omitted. |         |
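As a concrete illustration of the schema above, a well-formed arguments object for this tool might look like the following (the file path and sheet name here are hypothetical, not taken from the page):

```python
import json

# Hypothetical arguments an agent might send to 'read_cell'.
args = {
    "file": "reports/q3.xlsx",  # required: path to the spreadsheet
    "cell": "$B$3",             # required: A1 notation, absolute refs allowed
    "sheet": "Summary",         # optional: defaults to the first sheet
}
print(json.dumps(args, indent=2))
```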

Implementation Reference

The implementation of the 'read_cell' tool, which uses 'load_workbook' and 'parse_cell' to retrieve a cell's value:

```python
@mcp.tool()
def read_cell(
    file: Annotated[str, Field(description="Path to the spreadsheet file")],
    cell: Annotated[str, Field(description="Cell reference in A1 notation, e.g. 'B3' or '$B$3'")],
    sheet: Annotated[str | None, Field(description="Sheet name. Defaults to the first sheet if omitted.")] = None,
):
    """Read the value of a single cell.

    Returns the cell's value: numbers as int/float, text as string,
    and empty cells as null.
    """
    wb = load_workbook(file)
    ws = _resolve_sheet(wb, sheet)
    row, col = parse_cell(cell)
    return ws.cell_value(row, col)
```
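The 'parse_cell' helper is referenced but not reproduced on this page. A minimal sketch of A1-notation parsing, assuming zero-based (row, col) output and tolerance for absolute references like '$B$3' (this is a hypothetical implementation, not the server's actual code):

```python
import re

def parse_cell(ref: str) -> tuple[int, int]:
    """Parse an A1-style reference like 'B3' or '$B$3' into
    zero-based (row, col) indices. Raises ValueError on bad input."""
    m = re.fullmatch(r"\$?([A-Za-z]+)\$?(\d+)", ref.strip())
    if m is None:
        raise ValueError(f"invalid cell reference: {ref!r}")
    letters, digits = m.group(1).upper(), m.group(2)
    col = 0
    for ch in letters:  # column letters are bijective base-26: A=1 ... Z=26, AA=27
        col = col * 26 + (ord(ch) - ord("A") + 1)
    return int(digits) - 1, col - 1

print(parse_cell("B3"))     # → (2, 1)
print(parse_cell("$AA$1"))  # → (0, 26)
```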
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains the return value behavior (numbers as int/float, text as string, empty cells as null), which is crucial for understanding output. However, it lacks details on error handling (e.g., invalid file paths or cell references), performance characteristics, or any side effects like file locking.
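To make the undocumented failure modes concrete, here is a purely illustrative pre-flight check an agent-side harness could run before calling the tool. The function name and checks are assumptions for illustration; the real tool's error behavior is not shown on this page:

```python
import re
from pathlib import Path

def validate_read_cell_args(file: str, cell: str) -> list[str]:
    """Collect the failure modes the tool description leaves undocumented:
    a missing file and a malformed cell reference. Illustrative only."""
    problems = []
    if not Path(file).is_file():
        problems.append(f"file not found: {file}")
    if re.fullmatch(r"\$?[A-Za-z]+\$?\d+", cell) is None:
        problems.append(f"not A1 notation: {cell}")
    return problems

print(validate_read_cell_args("no_such_file_zq.xlsx", "3B"))
```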

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured, with two sentences that directly address the tool's purpose and return behavior. Every word earns its place, and it's front-loaded with the core action, making it easy to parse quickly without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (reading a single cell), lack of annotations, and no output schema, the description is partially complete. It covers the return value semantics well but misses contextual details like error conditions, performance limits, or comparisons to sibling tools. For a read operation with no annotations, it should ideally include more behavioral context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters (file, cell, sheet). The description adds no parameter semantics beyond what's in the schema — for example, sample values for the 'sheet' parameter or constraints on 'file' paths — which only meets the baseline expected when schema coverage is this high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Read the value of a single cell') and resource ('cell'), distinguishing it from sibling tools like read_range or read_sheet that handle multiple cells or entire sheets. It provides a precise verb+resource combination that leaves no ambiguity about the tool's function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives like read_range or read_sheet. It doesn't mention prerequisites, such as requiring an existing spreadsheet file, or compare it to sibling tools that might be more appropriate for different scenarios (e.g., reading multiple cells).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
