update_row
Modify specific columns in a CSV row, tracking changes and returning the previous and new values for each updated column.
Instructions
Update specific columns in a row.
Supports partial column updates with change tracking and returns the old/new values for each updated column.
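For illustration, a call that updates only two columns of row 2 could pass arguments like the sketch below; the column names are hypothetical and not part of the tool's schema.

```python
# Hypothetical update_row arguments: only the listed columns are touched.
arguments = {
    "row_index": 2,                                # 0-based row to update
    "data": {"status": "shipped", "quantity": 5},  # partial column update
}

# Per the behavior described above, the response reports columns_updated ==
# ["status", "quantity"], old_values/new_values keyed by those columns, and
# changes_made == 2; all other columns in the row are left unchanged.
```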
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| row_index | Yes | Row index (0-based) to update | |
| data | Yes | Column updates as dict mapping column names to values, or JSON string | |
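Because some clients (e.g. Claude Code) serialize structured arguments to strings, `data` may be supplied either as a mapping or as a JSON string encoding that mapping; the handler parses the string form back into a dict. A small sketch with hypothetical column names:

```python
import json

# "data" as a native dict...
data_as_dict = {"status": "shipped", "quantity": 5}

# ...or as the equivalent JSON string, which is parsed back to a dict
# before the update is applied.
data_as_string = json.dumps(data_as_dict)

assert json.loads(data_as_string) == data_as_dict
```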
Implementation Reference
- The main handler function implementing the 'update_row' tool logic: it updates the specified columns in a given row of the DataFrame, tracks old and new values, parses JSON-string input, validates inputs, and returns an `UpdateRowResult`. A standalone sketch of the change-tracking loop appears after this reference list.

```python
def update_row(
    ctx: Annotated[Context, Field(description="FastMCP context for session access")],
    row_index: Annotated[int, Field(description="Row index (0-based) to update")],
    data: Annotated[
        dict[str, CellValue] | str,
        Field(description="Column updates as dict mapping column names to values, or JSON string"),
    ],
) -> UpdateRowResult:
    """Update specific columns in row with selective updates.

    Supports partial column updates with change tracking. Returns old/new values
    for updated columns.
    """
    # Handle Claude Code's JSON string serialization
    if isinstance(data, str):
        try:
            data = parse_json_string_to_dict(data)
        except ValueError as e:
            msg = f"Invalid JSON string in data parameter: {e}"
            raise ToolError(msg) from e

    if not isinstance(data, dict):
        msg = "Update data must be a dictionary or JSON string"
        raise ToolError(msg)

    session_id = ctx.session_id
    _session, df = get_session_data(session_id)

    # Validate row index
    if row_index < 0 or row_index >= len(df):
        msg = f"Row index {row_index} out of range (0-{len(df) - 1})"
        raise ToolError(msg)

    # Validate all columns exist
    missing_columns = [col for col in data if col not in df.columns]
    if missing_columns:
        raise ColumnNotFoundError(missing_columns[0], list(df.columns))

    # Track changes
    columns_updated = []
    old_values = {}
    new_values = {}

    # Update each column
    for column, new_value in data.items():
        # Get old value
        old_value = df.iloc[row_index, df.columns.get_loc(column)]  # type: ignore[index]
        if pd.isna(old_value):
            old_value = None
        elif hasattr(old_value, "item"):  # numpy scalar
            old_value = old_value.item()  # type: ignore[assignment]

        # Set new value
        df.iloc[row_index, df.columns.get_loc(column)] = new_value  # type: ignore[index]

        # Get new value (after pandas type conversion)
        updated_value = df.iloc[row_index, df.columns.get_loc(column)]  # type: ignore[index]
        if pd.isna(updated_value):
            updated_value = None
        elif hasattr(updated_value, "item"):  # numpy scalar
            updated_value = updated_value.item()  # type: ignore[assignment]

        # Track the change
        columns_updated.append(column)
        old_values[column] = old_value
        new_values[column] = updated_value

    # No longer recording operations (simplified MCP architecture)

    return UpdateRowResult(
        row_index=row_index,
        columns_updated=columns_updated,
        old_values=old_values,
        new_values=new_values,
        changes_made=len(columns_updated),
    )
```
- src/databeak/servers/row_operations_server.py:651-651 (registration): registers the `update_row` function as an MCP tool named "update_row" on the `row_operations_server` FastMCP instance.

```python
row_operations_server.tool(name="update_row")(update_row)
```
- Pydantic model defining the output schema for the 'update_row' tool, including the operation identifier, updated row details, old/new values, and change count.

```python
class UpdateRowResult(BaseToolResponse):
    """Response model for row update operations."""

    operation: str = Field(default="update_row", description="Operation type identifier")
    row_index: int = Field(description="Index of updated row")
    columns_updated: list[str] = Field(description="Names of columns that were updated")
    old_values: dict[str, str | int | float | bool | None] = Field(
        description="Previous values for updated columns",
    )
    new_values: dict[str, str | int | float | bool | None] = Field(
        description="New values for updated columns",
    )
    changes_made: int = Field(description="Number of columns that were changed")
```
- Pydantic model defining the input schema parameters for row updates, matching the tool's function signature, with validation of `row_index` and JSON-string parsing of `data`; a minimal standalone sketch of this validation follows below.

```python
class RowUpdateRequest(BaseModel):
    """Request parameters for row update operations."""

    model_config = ConfigDict(extra="forbid")

    row_index: int = Field(ge=0, description="Row index to update (0-based)")
    data: dict[str, CellValue] | str = Field(description="Column updates as dict or JSON string")

    @field_validator("row_index")
    @classmethod
    def validate_row_index(cls, v: int) -> int:
        """Validate row index is non-negative."""
        if v < 0:
            msg = "Row index must be non-negative"
            raise ValueError(msg)
        return v

    @field_validator("data")
    @classmethod
    def parse_json_data(cls, v: dict[str, CellValue] | str) -> dict[str, CellValue]:
        """Parse JSON string data for Claude Code compatibility."""
        return parse_json_string_to_dict(v)
```
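As a clarifying aside, here is a minimal standalone sketch of the change-tracking loop from the handler above, using plain pandas with a hypothetical DataFrame and update; the final dict mirrors the `UpdateRowResult` fields.

```python
import pandas as pd

# Standalone sketch (plain pandas, no databeak imports) of the change-tracking
# loop shown in the handler above; DataFrame and update values are hypothetical.
df = pd.DataFrame({"name": ["Ann", "Bob"], "score": [1.5, float("nan")]})
row_index, data = 1, {"name": "Bo", "score": 2.0}

columns_updated, old_values, new_values = [], {}, {}
for column, new_value in data.items():
    loc = df.columns.get_loc(column)

    old = df.iloc[row_index, loc]
    old = None if pd.isna(old) else (old.item() if hasattr(old, "item") else old)

    df.iloc[row_index, loc] = new_value  # apply the update in place

    new = df.iloc[row_index, loc]  # re-read after pandas type conversion
    new = None if pd.isna(new) else (new.item() if hasattr(new, "item") else new)

    columns_updated.append(column)
    old_values[column] = old
    new_values[column] = new

# The collected values map directly onto the UpdateRowResult fields shown above:
result = {
    "operation": "update_row",
    "row_index": row_index,
    "columns_updated": columns_updated,    # ["name", "score"]
    "old_values": old_values,              # {"name": "Bob", "score": None}
    "new_values": new_values,              # {"name": "Bo", "score": 2.0}
    "changes_made": len(columns_updated),  # 2
}
```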
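And a minimal standalone sketch of the request-side validation, using plain pydantic v2 with no databeak imports; the inline `json.loads` stands in for the project's `parse_json_string_to_dict` helper and is an assumption here, not its actual implementation.

```python
import json

from pydantic import BaseModel, ConfigDict, Field, ValidationError, field_validator


class RowUpdateRequestSketch(BaseModel):
    model_config = ConfigDict(extra="forbid")

    row_index: int = Field(ge=0, description="Row index to update (0-based)")
    data: dict[str, object] | str = Field(description="Column updates as dict or JSON string")

    @field_validator("data")
    @classmethod
    def parse_json_data(cls, v: dict[str, object] | str) -> dict[str, object]:
        # Normalize a JSON-string payload to a dict, mirroring the validator above.
        if isinstance(v, str):
            parsed = json.loads(v)
            if not isinstance(parsed, dict):
                msg = "data must decode to a JSON object"
                raise ValueError(msg)
            return parsed
        return v


# Dict and JSON-string payloads normalize to the same model.
assert RowUpdateRequestSketch(row_index=0, data='{"qty": 3}').data == {"qty": 3}

# ge=0 and extra="forbid" reject negative indices and unknown fields.
for bad in ({"row_index": -1, "data": {}}, {"row_index": 0, "data": {}, "oops": 1}):
    try:
        RowUpdateRequestSketch(**bad)
    except ValidationError:
        pass  # expected
```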