
update_row

Modify specific columns in a CSV row, tracking changes and returning the previous and new values for each updated column.

Instructions

Update specific columns in row with selective updates.

Supports partial column updates with change tracking. Returns old/new values for updated columns.

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| row_index | Yes | Row index (0-based) to update | |
| data | Yes | Column updates as dict mapping column names to values, or JSON string | |
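The two accepted forms of the `data` parameter carry the same updates; the JSON-string form exists for clients (such as Claude Code, per the handler comment below) that serialize arguments before sending them. A standard-library sketch of the equivalence:

```python
import json

# The dict form and the JSON-string form of `data` describe the same update.
data_as_dict = {"name": "Alice", "age": 31}
data_as_json = '{"name": "Alice", "age": 31}'

# A client sending the string form relies on the server parsing it back
# into a dict before applying the update.
parsed = json.loads(data_as_json)
print(parsed == data_as_dict)  # True
```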

Output Schema

| Name | Required | Description | Default |
|---|---|---|---|
| success | No | Whether operation completed successfully | |
| operation | No | Operation type identifier | update_row |
| row_index | Yes | Index of updated row | |
| new_values | Yes | New values for updated columns | |
| old_values | Yes | Previous values for updated columns | |
| changes_made | Yes | Number of columns that were changed | |
| columns_updated | Yes | Names of columns that were updated | |
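For illustration, a successful response matching this schema might look like the following (all values invented):

```python
# Illustrative result for an update that changed two columns in row 3.
result = {
    "success": True,
    "operation": "update_row",
    "row_index": 3,
    "columns_updated": ["name", "age"],
    "old_values": {"name": "Bob", "age": 30},
    "new_values": {"name": "Alice", "age": 31},
    "changes_made": 2,
}
print(result["changes_made"])  # 2
```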

Implementation Reference

  • The main handler function that implements the 'update_row' tool logic. It updates specified columns in a given row of the DataFrame, tracks old and new values, handles JSON string input parsing, validates inputs, and returns an UpdateRowResult.
    def update_row(
        ctx: Annotated[Context, Field(description="FastMCP context for session access")],
        row_index: Annotated[int, Field(description="Row index (0-based) to update")],
        data: Annotated[
            dict[str, CellValue] | str,
            Field(description="Column updates as dict mapping column names to values, or JSON string"),
        ],
    ) -> UpdateRowResult:
        """Update specific columns in row with selective updates.
    
        Supports partial column updates with change tracking. Returns old/new values for updated
        columns.
        """
        # Handle Claude Code's JSON string serialization
        if isinstance(data, str):
            try:
                data = parse_json_string_to_dict(data)
            except ValueError as e:
                msg = f"Invalid JSON string in data parameter: {e}"
                raise ToolError(msg) from e
    
        if not isinstance(data, dict):
            msg = "Update data must be a dictionary or JSON string"
            raise ToolError(msg)
    
        session_id = ctx.session_id
        _session, df = get_session_data(session_id)
    
        # Validate row index
        if row_index < 0 or row_index >= len(df):
            msg = f"Row index {row_index} out of range (0-{len(df) - 1})"
            raise ToolError(msg)
    
        # Validate all columns exist
        missing_columns = [col for col in data if col not in df.columns]
        if missing_columns:
            raise ColumnNotFoundError(missing_columns[0], list(df.columns))
    
        # Track changes
        columns_updated = []
        old_values = {}
        new_values = {}
    
        # Update each column
        for column, new_value in data.items():
            # Get old value
            old_value = df.iloc[row_index, df.columns.get_loc(column)]  # type: ignore[index]
            if pd.isna(old_value):
                old_value = None
            elif hasattr(old_value, "item"):  # numpy scalar
                old_value = old_value.item()  # type: ignore[assignment]
    
            # Set new value
            df.iloc[row_index, df.columns.get_loc(column)] = new_value  # type: ignore[index]
    
            # Get new value (after pandas type conversion)
            updated_value = df.iloc[row_index, df.columns.get_loc(column)]  # type: ignore[index]
            if pd.isna(updated_value):
                updated_value = None
            elif hasattr(updated_value, "item"):  # numpy scalar
                updated_value = updated_value.item()  # type: ignore[assignment]
    
            # Track the change
            columns_updated.append(column)
            old_values[column] = old_value
            new_values[column] = updated_value
    
        # No longer recording operations (simplified MCP architecture)
    
        return UpdateRowResult(
            row_index=row_index,
            columns_updated=columns_updated,
            old_values=old_values,
            new_values=new_values,
            changes_made=len(columns_updated),
        )
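Outside the server, the core of the change-tracking loop above can be sketched over a plain dict standing in for a DataFrame row (illustrative only; the real handler uses pandas `.iloc` indexing and NaN/numpy-scalar normalization as shown above):

```python
# Minimal sketch of the change-tracking loop, using a plain dict as the "row".
row = {"name": "Bob", "age": 30, "city": "Paris"}
data = {"name": "Alice", "age": 31}

columns_updated, old_values, new_values = [], {}, {}
for column, new_value in data.items():
    old_values[column] = row[column]   # capture value before the write
    row[column] = new_value            # apply the update
    new_values[column] = row[column]   # re-read after the write
    columns_updated.append(column)

print(columns_updated)  # ['name', 'age']
print(old_values)       # {'name': 'Bob', 'age': 30}
print(row["city"])      # Paris  (untouched columns are preserved)
```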
  • Registers the 'update_row' function as an MCP tool named 'update_row' on the row_operations_server FastMCP instance.
    row_operations_server.tool(name="update_row")(update_row)
  • Pydantic model defining the output schema for the 'update_row' tool, including operation identifier, updated row details, old/new values, and change count.
    class UpdateRowResult(BaseToolResponse):
        """Response model for row update operations."""
    
        operation: str = Field(default="update_row", description="Operation type identifier")
        row_index: int = Field(description="Index of updated row")
        columns_updated: list[str] = Field(description="Names of columns that were updated")
        old_values: dict[str, str | int | float | bool | None] = Field(
            description="Previous values for updated columns",
        )
        new_values: dict[str, str | int | float | bool | None] = Field(
            description="New values for updated columns",
        )
        changes_made: int = Field(description="Number of columns that were changed")
  • Pydantic model defining the input schema parameters for row updates, matching the tool's function signature, with validation for row_index and data parsing.
    class RowUpdateRequest(BaseModel):
        """Request parameters for row update operations."""
    
        model_config = ConfigDict(extra="forbid")
    
        row_index: int = Field(ge=0, description="Row index to update (0-based)")
        data: dict[str, CellValue] | str = Field(description="Column updates as dict or JSON string")
    
        @field_validator("row_index")
        @classmethod
        def validate_row_index(cls, v: int) -> int:
            """Validate row index is non-negative."""
            if v < 0:
                msg = "Row index must be non-negative"
                raise ValueError(msg)
            return v
    
        @field_validator("data")
        @classmethod
        def parse_json_data(cls, v: dict[str, CellValue] | str) -> dict[str, CellValue]:
            """Parse JSON string data for Claude Code compatibility."""
            return parse_json_string_to_dict(v)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses behavioral traits: 'Supports partial column updates with change tracking' and 'Returns old/new values for updated columns.' This adds context about what the tool does (partial updates, change tracking) and its output behavior. However, it omits details such as error handling, permissions, and side effects. The description does not contradict the annotations (none are provided).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, and subsequent sentences add key behavioral details. Every sentence earns its place with no waste. It is concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a mutation tool with two parameters), the absence of annotations, and the presence of an output schema, the description is fairly complete. It covers purpose, behavior (partial updates, change tracking), and output (returns old/new values). Since the output schema details the return values, the description need not explain them. It could still be improved by addressing error cases and prerequisites.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('row_index' and 'data'). The description adds marginal value by implying that 'data' carries column updates, but it provides no semantics beyond what the schema states (e.g., format details or examples). The baseline score is 3 when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Update specific columns in row with selective updates.' This specifies the verb ('update'), resource ('columns in row'), and scope ('selective updates'). It distinguishes from siblings like 'set_cell_value' (single cell) and 'update_column' (entire column), though not explicitly. The purpose is specific but could be more explicit about sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for partial column updates with change tracking, but does not explicitly state when to use this tool versus alternatives like 'set_cell_value' (for single cells) or 'update_column' (for entire columns). No exclusions or prerequisites are mentioned. Usage is implied from the context of selective updates, but lacks explicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jonpspri/databeak'

If you have feedback or need assistance with the MCP directory API, please join our Discord server