# insert_row

Add a new row at a specific position in CSV data, supporting dictionary, list, or JSON string formats with null value handling.
## Instructions
Insert new row at specified index with multiple data formats.
Supports dict, list, and JSON string input with null value handling. Returns insertion result with before/after statistics.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| row_index | Yes | Index to insert row at (0-based, -1 to append at end) | |
| data | Yes | Row data as dict, list, or JSON string | |
## Implementation Reference
- The core handler function for the `insert_row` tool. It processes input data (dict, list, or JSON string), inserts a new row into the pandas DataFrame at the specified index (`-1` for append), handles type conversions and null values, and returns an `InsertRowResult`.

  ```python
  def insert_row(
      ctx: Annotated[Context, Field(description="FastMCP context for session access")],
      row_index: Annotated[
          int,
          Field(description="Index to insert row at (0-based, -1 to append at end)"),
      ],
      data: Annotated[
          RowData | str,
          Field(description="Row data as dict, list, or JSON string"),
      ],  # Accept string for Claude Code compatibility
  ) -> InsertRowResult:
      """Insert new row at specified index with multiple data formats.

      Supports dict, list, and JSON string input with null value handling.
      Returns insertion result with before/after statistics.
      """
      # Handle Claude Code's JSON string serialization
      if isinstance(data, str):
          try:
              data = parse_json_string_to_dict(data)
          except ValueError as e:
              msg = f"Invalid JSON string in data parameter: {e}"
              raise ToolError(msg) from e

      session_id = ctx.session_id
      session, df = get_session_data(session_id)
      rows_before = len(df)

      # Handle special case: append at end
      if row_index == -1:
          row_index = len(df)

      # Validate row index for insertion (0 to N is valid for insertion)
      if row_index < 0 or row_index > len(df):
          msg = f"Row index {row_index} out of range for insertion (0-{len(df)})"
          raise ToolError(msg)

      # Process data based on type
      if isinstance(data, dict):
          # Dictionary format - fill missing columns with None
          row_data = {}
          for col in df.columns:
              row_data[col] = data.get(col, None)
      elif isinstance(data, list):
          # List format - must match column count
          try:
              row_data = dict(zip(df.columns, data, strict=True))
          except ValueError as e:
              msg = f"List data length ({len(data)}) must match column count ({len(df.columns)})"
              raise ToolError(
                  msg,
              ) from e
      else:
          msg = f"Unsupported data type: {type(data)}. Use dict, list, or JSON string"
          raise ToolError(msg)

      # Create new row as DataFrame
      new_row = pd.DataFrame([row_data])

      # Insert the row
      if row_index == 0:
          # Insert at beginning
          df_new = pd.concat([new_row, df], ignore_index=True)
      elif row_index >= len(df):
          # Append at end
          df_new = pd.concat([df, new_row], ignore_index=True)
      else:
          # Insert in middle
          df_before = df.iloc[:row_index]
          df_after = df.iloc[row_index:]
          df_new = pd.concat([df_before, new_row, df_after], ignore_index=True)

      # Update session data
      session.df = df_new

      # Prepare inserted data for response (handle pandas types)
      data_inserted: dict[str, CellValue] = {}
      for key, value in row_data.items():
          if pd.isna(value):
              data_inserted[key] = None
          elif hasattr(value, "item"):  # numpy scalar
              data_inserted[key] = value.item()
          else:
              data_inserted[key] = value

      # No longer recording operations (simplified MCP architecture)
      return InsertRowResult(
          row_index=row_index,
          rows_before=rows_before,
          rows_after=len(df_new),
          data_inserted=data_inserted,
          columns=list(df_new.columns),
      )
  ```
- `src/databeak/servers/row_operations_server.py:649` (registration) — registers the `insert_row` handler as an MCP tool named `insert_row` on the FastMCP `row_operations_server`.

  ```python
  row_operations_server.tool(name="insert_row")(insert_row)
  ```
- Pydantic model defining the output schema for the `insert_row` tool response, including operation details, statistics, inserted data, and columns.

  ```python
  class InsertRowResult(BaseToolResponse):
      """Response model for row insertion operations."""

      operation: str = Field(default="insert_row", description="Operation type identifier")
      row_index: int = Field(description="Index where row was inserted")
      rows_before: int = Field(description="Row count before insertion")
      rows_after: int = Field(description="Row count after insertion")
      data_inserted: dict[str, str | int | float | bool | None] = Field(
          description="Actual data that was inserted",
      )
      columns: list[str] = Field(description="Current column names")
  ```
- Pydantic model for the input parameters of row insertion, including a validator that parses JSON string data (though not directly used in the handler's parameters).

  ```python
  class RowInsertRequest(BaseModel):
      """Request parameters for row insertion operations."""

      model_config = ConfigDict(extra="forbid")

      row_index: int = Field(description="Index where to insert row (-1 to append at end)")
      data: dict[str, CellValue] | list[CellValue] | str = Field(
          description="Row data as dict, list, or JSON string",
      )

      @field_validator("data")
      @classmethod
      def parse_json_data(
          cls,
          v: dict[str, CellValue] | list[CellValue] | str,
      ) -> dict[str, CellValue] | list[CellValue]:
          """Parse JSON string data for Claude Code compatibility."""
          return parse_json_string_to_dict_or_list(v)
  ```
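Taken together, the handler's index logic and response normalization can be sketched standalone. This is a minimal approximation of the behavior described above, not the DataBeak code itself; `insert_at` and `normalize` are hypothetical helper names.

```python
import numpy as np
import pandas as pd


def insert_at(df: pd.DataFrame, row: dict, row_index: int) -> pd.DataFrame:
    """Insert one row at row_index; -1 appends (mirrors the handler's branches)."""
    if row_index == -1:
        row_index = len(df)
    if row_index < 0 or row_index > len(df):
        msg = f"Row index {row_index} out of range for insertion (0-{len(df)})"
        raise IndexError(msg)
    new_row = pd.DataFrame([row])
    if row_index == 0:
        return pd.concat([new_row, df], ignore_index=True)
    if row_index >= len(df):
        return pd.concat([df, new_row], ignore_index=True)
    return pd.concat([df.iloc[:row_index], new_row, df.iloc[row_index:]], ignore_index=True)


def normalize(row: dict) -> dict:
    """Convert pandas/numpy values to JSON-safe Python types, as the response does."""
    out = {}
    for key, value in row.items():
        if pd.isna(value):
            out[key] = None  # NaN/NA -> JSON null
        elif hasattr(value, "item"):  # numpy scalar -> native Python type
            out[key] = value.item()
        else:
            out[key] = value
    return out


df = pd.DataFrame({"name": ["Ada", "Lin"], "age": [36, 28]})
df2 = insert_at(df, {"name": "Grace", "age": 45}, 1)
print(df2["name"].tolist())  # ['Ada', 'Grace', 'Lin']
print(normalize({"age": np.int64(45), "note": float("nan")}))  # {'age': 45, 'note': None}
```

Using `pd.concat` with `ignore_index=True` for all three branches keeps the resulting index contiguous, which is why `rows_before`/`rows_after` in the response are simple length comparisons.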