
insert_row

Add a new row at any position in CSV data using dictionary, list, or JSON string formats with proper null value handling.

Instructions

Insert a new row at the specified index, with support for multiple data formats.

Accepts dict, list, and JSON string input with null value handling, and returns an insertion result with before/after row statistics.

Input Schema

Name       Required  Description                                             Default
row_index  Yes       Index to insert row at (0-based, -1 to append at end)   (none)
data       Yes       Row data as dict, list, or JSON string                  (none)
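
For illustration, here are hypothetical argument payloads for each supported data format; the column names (name, age) are invented for the example and are not part of the tool:

    Dict (missing columns are filled with null):
      {"row_index": -1, "data": {"name": "Ada", "age": 36}}

    List (length must match the column count):
      {"row_index": 0, "data": ["Ada", 36]}

    JSON string (Claude Code compatibility):
      {"row_index": 2, "data": "{\"name\": \"Ada\", \"age\": 36}"}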

Implementation Reference

  • The handler function that implements the core logic for inserting a row into the session DataFrame. It supports insertion at a specific index or appending (row_index = -1), accepts dict, list, and JSON string data formats, validates inputs, updates the session DataFrame, and returns a structured result. (A standalone sketch of the positional-insert pattern appears after this reference list.)
    def insert_row(
        ctx: Annotated[Context, Field(description="FastMCP context for session access")],
        row_index: Annotated[
            int,
            Field(description="Index to insert row at (0-based, -1 to append at end)"),
        ],
        data: Annotated[
            RowData | str,
            Field(description="Row data as dict, list, or JSON string"),
        ],  # Accept string for Claude Code compatibility
    ) -> InsertRowResult:
        """Insert new row at specified index with multiple data formats.

        Supports dict, list, and JSON string input with null value handling.
        Returns insertion result with before/after statistics.
        """
        # Handle Claude Code's JSON string serialization
        if isinstance(data, str):
            try:
                data = parse_json_string_to_dict(data)
            except ValueError as e:
                msg = f"Invalid JSON string in data parameter: {e}"
                raise ToolError(msg) from e

        session_id = ctx.session_id
        session, df = get_session_data(session_id)
        rows_before = len(df)

        # Handle special case: append at end
        if row_index == -1:
            row_index = len(df)

        # Validate row index for insertion (0 to N is valid for insertion)
        if row_index < 0 or row_index > len(df):
            msg = f"Row index {row_index} out of range for insertion (0-{len(df)})"
            raise ToolError(msg)

        # Process data based on type
        if isinstance(data, dict):
            # Dictionary format - fill missing columns with None
            row_data = {}
            for col in df.columns:
                row_data[col] = data.get(col, None)
        elif isinstance(data, list):
            # List format - must match column count
            try:
                row_data = dict(zip(df.columns, data, strict=True))
            except ValueError as e:
                msg = f"List data length ({len(data)}) must match column count ({len(df.columns)})"
                raise ToolError(
                    msg,
                ) from e
        else:
            msg = f"Unsupported data type: {type(data)}. Use dict, list, or JSON string"
            raise ToolError(msg)

        # Create new row as DataFrame
        new_row = pd.DataFrame([row_data])

        # Insert the row
        if row_index == 0:
            # Insert at beginning
            df_new = pd.concat([new_row, df], ignore_index=True)
        elif row_index >= len(df):
            # Append at end
            df_new = pd.concat([df, new_row], ignore_index=True)
        else:
            # Insert in middle
            df_before = df.iloc[:row_index]
            df_after = df.iloc[row_index:]
            df_new = pd.concat([df_before, new_row, df_after], ignore_index=True)

        # Update session data
        session.df = df_new

        # Prepare inserted data for response (handle pandas types)
        data_inserted: dict[str, CellValue] = {}
        for key, value in row_data.items():
            if pd.isna(value):
                data_inserted[key] = None
            elif hasattr(value, "item"):  # numpy scalar
                data_inserted[key] = value.item()
            else:
                data_inserted[key] = value

        # No longer recording operations (simplified MCP architecture)
        return InsertRowResult(
            row_index=row_index,
            rows_before=rows_before,
            rows_after=len(df_new),
            data_inserted=data_inserted,
            columns=list(df_new.columns),
        )
  • Registers the insert_row function as an MCP tool named 'insert_row' on the row_operations_server FastMCP instance. (The equivalent decorator form is sketched after this list.)
    row_operations_server.tool(name="insert_row")(insert_row)
  • The Pydantic response model defining the structure returned by the insert_row tool: the operation identifier, the row index, before/after row counts, the inserted data, and the current column names. (An illustrative serialized result follows this list.)
    class InsertRowResult(BaseToolResponse):
        """Response model for row insertion operations."""

        operation: str = Field(default="insert_row", description="Operation type identifier")
        row_index: int = Field(description="Index where row was inserted")
        rows_before: int = Field(description="Row count before insertion")
        rows_after: int = Field(description="Row count after insertion")
        data_inserted: dict[str, str | int | float | bool | None] = Field(
            description="Actual data that was inserted",
        )
        columns: list[str] = Field(description="Current column names")
  • The Pydantic request model for row-insertion parameters. A field validator on data parses JSON string input into a dict or list for Claude Code compatibility. (A sketch of what that parsing helper plausibly does follows this list.)
    class RowInsertRequest(BaseModel):
        """Request parameters for row insertion operations."""

        model_config = ConfigDict(extra="forbid")

        row_index: int = Field(description="Index where to insert row (-1 to append at end)")
        data: dict[str, CellValue] | list[CellValue] | str = Field(
            description="Row data as dict, list, or JSON string",
        )

        @field_validator("data")
        @classmethod
        def parse_json_data(
            cls,
            v: dict[str, CellValue] | list[CellValue] | str,
        ) -> dict[str, CellValue] | list[CellValue]:
            """Parse JSON string data for Claude Code compatibility."""
            return parse_json_string_to_dict_or_list(v)
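
The positional insert in the handler above reduces to a small pandas pattern. A minimal, self-contained sketch (the sample frame and values are made up; no session machinery):

    import pandas as pd

    df = pd.DataFrame({"name": ["Ada", "Grace"], "age": [36, 45]})
    new_row = pd.DataFrame([{"name": "Alan", "age": 41}])

    row_index = 1  # insert between the two existing rows
    df_new = pd.concat(
        [df.iloc[:row_index], new_row, df.iloc[row_index:]],
        ignore_index=True,  # renumber 0..N-1, as the handler does
    )
    print(df_new)
    #     name  age
    # 0    Ada   36
    # 1   Alan   41
    # 2  Grace   45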
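The registration call above is the imperative form of FastMCP's decorator API; since tool(name=...) returns a decorator, the same registration could be written at the definition site. A sketch, not the project's actual layout (the import path and server name are assumptions):

    from fastmcp import FastMCP

    row_operations_server = FastMCP("row-operations")  # assumed name

    @row_operations_server.tool(name="insert_row")
    def insert_row(ctx, row_index, data):
        """Sketch only; the real handler is shown above."""
        ...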
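An illustrative serialized InsertRowResult for inserting one row into a three-row frame (all values invented; BaseToolResponse may contribute additional fields not shown here):

    {
      "operation": "insert_row",
      "row_index": 1,
      "rows_before": 3,
      "rows_after": 4,
      "data_inserted": {"name": "Alan", "age": 41},
      "columns": ["name", "age"]
    }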
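The parse_json_string_to_dict_or_list helper the validator calls is not shown on this page. A plausible sketch of its contract, assuming it defers to json.loads and rejects anything that is not an object or array (the real databeak implementation may differ):

    import json
    from typing import Any

    def parse_json_string_to_dict_or_list(v: Any) -> Any:
        # Hypothetical sketch: the real databeak helper is not shown on this page.
        if not isinstance(v, str):
            return v  # already a dict or list; pass through unchanged
        try:
            parsed = json.loads(v)
        except json.JSONDecodeError as e:
            raise ValueError(f"Invalid JSON string: {e}") from e
        if not isinstance(parsed, (dict, list)):
            raise ValueError("JSON string must decode to an object or array")
        return parsed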
