
load_csv_from_content

Parse CSV data from string content into a DataBeak session for analysis and transformation. Configure delimiter and header detection to prepare data for processing.

Instructions

Load CSV data from string content into DataBeak session.

Parses CSV data directly from string with validation. Returns session ID and data preview for further operations.

Input Schema

JSON Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| content | Yes | CSV data as string content | |
| delimiter | No | Column delimiter character (comma, tab, semicolon, pipe) | `,` |
| header_config | No | Header detection configuration | |
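Under the hood these options are forwarded to `pandas.read_csv`. A minimal sketch of that mapping, assuming only pandas and no DataBeak session:

```python
from io import StringIO

import pandas as pd

# Semicolon-delimited content with a header row
content = "name;age\nada;36\ngrace;45"

# delimiter selects the column separator; header="infer" mirrors
# the auto-detect header mode (the tool's default)
df = pd.read_csv(StringIO(content), delimiter=";", header="infer")

print(list(df.columns))  # ['name', 'age']
print(len(df))           # 2
```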

Implementation Reference

  • Main handler function for the 'load_csv_from_content' tool. Parses CSV string content into a pandas DataFrame using read_csv, validates size limits, loads data into the session, and returns a LoadResult with preview data and metadata.
```python
async def load_csv_from_content(
    ctx: Annotated[Context, Field(description="FastMCP context for session access")],
    content: Annotated[str, Field(description="CSV data as string content")],
    delimiter: Annotated[
        str, Field(description="Column delimiter character (comma, tab, semicolon, pipe)")
    ] = ",",
    *,
    header_config: Annotated[
        HeaderConfigUnion | None,
        Field(default=None, description="Header detection configuration"),
    ] = None,
) -> LoadResult:
    """Load CSV data from string content into DataBeak session.

    Parses CSV data directly from string with validation. Returns session ID
    and data preview for further operations.
    """
    # Get session_id from FastMCP context
    session_id = ctx.session_id
    await ctx.info("Loading CSV from content string")

    # Handle default header configuration
    if header_config is None:
        header_config = AutoDetectHeader()

    if not content or not content.strip():
        msg = "Content cannot be empty"
        raise ToolError(msg)

    # Parse CSV from string using StringIO
    try:
        df = pd.read_csv(
            StringIO(content),
            delimiter=delimiter,
            header=resolve_header_param(header_config),
        )
    except pd.errors.EmptyDataError as e:
        msg = "CSV content is empty or contains no data"
        raise ToolError(msg) from e
    except pd.errors.ParserError as e:
        msg = f"CSV parsing error: {e}"
        raise ToolError(msg) from e

    if df.empty:
        msg = "Parsed CSV contains no data rows"
        raise ToolError(msg)

    # Validate DataFrame size against limits
    validate_dataframe_size(df)

    # Get or create session
    session_manager = get_session_manager()
    session = session_manager.get_or_create_session(session_id)
    session.load_data(df, None)

    await ctx.info(f"Loaded {len(df)} rows and {len(df.columns)} columns from content")

    return create_load_result(df)
```
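The validation sequence above (empty-string check, parse, empty-frame check) can be exercised outside the session machinery. The sketch below uses only pandas; `parse_csv_content` is a hypothetical stand-in for the handler, with `ValueError` in place of `ToolError`:

```python
from io import StringIO

import pandas as pd


def parse_csv_content(content: str, delimiter: str = ",") -> pd.DataFrame:
    """Mirror the parsing/validation steps of load_csv_from_content
    without the session machinery (illustrative only)."""
    if not content or not content.strip():
        raise ValueError("Content cannot be empty")
    try:
        df = pd.read_csv(StringIO(content), delimiter=delimiter)
    except pd.errors.EmptyDataError as e:
        raise ValueError("CSV content is empty or contains no data") from e
    except pd.errors.ParserError as e:
        raise ValueError(f"CSV parsing error: {e}") from e
    if df.empty:
        raise ValueError("Parsed CSV contains no data rows")
    return df


df = parse_csv_content("a,b\n1,2\n3,4")
print(df.shape)  # (2, 2)
```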
  • Registration of the 'load_csv_from_content' tool (and related tools) on the io_server FastMCP instance.
```python
# Register the logic functions directly as MCP tools (no wrapper functions needed)
io_server.tool(name="load_csv_from_url")(load_csv_from_url)
io_server.tool(name="load_csv_from_content")(load_csv_from_content)
io_server.tool(name="get_session_info")(get_session_info)
```
  • Pydantic model for the LoadResult returned by the tool.
```python
class LoadResult(BaseToolResponse):
    """Response model for data loading operations."""

    rows_affected: int = Field(description="Number of rows loaded")
    columns_affected: list[str] = Field(description="List of column names detected")
    data: DataPreview | None = Field(None, description="Sample of loaded data")
    memory_usage_mb: float | None = Field(None, description="Memory usage in megabytes")
```
  • Input schema components: HeaderConfig discriminated union for configuring CSV header detection, used in the tool's header_config parameter.
```python
class HeaderConfig(BaseModel, ABC):
    """Abstract base class for header configuration."""

    mode: str = Field(description="Header detection mode")

    @abstractmethod
    def get_pandas_param(self) -> int | None | Literal["infer"]:
        """Convert to pandas read_csv header parameter."""
        ...


class AutoDetectHeader(HeaderConfig):
    """Auto-detect whether file has headers using pandas inference."""

    mode: Literal["auto"] = "auto"

    def get_pandas_param(self) -> Literal["infer"]:
        """Return pandas parameter for auto-detection."""
        return "infer"


class NoHeader(HeaderConfig):
    """File has no headers - generate default column names (Column_0, Column_1, etc.)."""

    mode: Literal["none"] = "none"

    def get_pandas_param(self) -> None:
        """Return pandas parameter for no headers."""
        return None


class ExplicitHeaderRow(HeaderConfig):
    """Use specific row number as header."""

    mode: Literal["row"] = "row"
    row_number: NonNegativeInt = Field(description="Row number to use as header (0-based)")

    def get_pandas_param(self) -> int:
        """Return pandas parameter for explicit header row."""
        return self.row_number


# Discriminated union type
HeaderConfigUnion = Annotated[
    AutoDetectHeader | NoHeader | ExplicitHeaderRow,
    Discriminator("mode"),
]
```
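Each configuration class resolves to a different `header` argument for `pandas.read_csv`. The effect of the three values can be seen with pandas alone:

```python
from io import StringIO

import pandas as pd

content = "x,y\n1,2\n3,4"

# AutoDetectHeader -> header="infer": the first row becomes the header
auto = pd.read_csv(StringIO(content), header="infer")
print(list(auto.columns))  # ['x', 'y']

# NoHeader -> header=None: pandas generates positional labels 0, 1, ...
# (per the NoHeader docstring, DataBeak presumably renames these to
# Column_0, Column_1, etc. elsewhere)
no_header = pd.read_csv(StringIO(content), header=None)
print(list(no_header.columns))  # [0, 1]

# ExplicitHeaderRow(row_number=1) -> header=1: row 1 becomes the header
explicit = pd.read_csv(StringIO(content), header=1)
print(list(explicit.columns))  # ['1', '2']
```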
  • Helper function to create the LoadResult object from the loaded DataFrame, including data preview and memory usage calculation.
```python
def create_load_result(df: pd.DataFrame) -> LoadResult:
    """Create LoadResult from a DataFrame.

    Args:
        df: Loaded DataFrame

    Returns:
        LoadResult with data preview and metadata
    """
    # Create data preview with indices
    preview_data = create_data_preview_with_indices(df, 5)
    data_preview = DataPreview(
        rows=preview_data["records"],
        row_count=preview_data["total_rows"],
        column_count=preview_data["total_columns"],
        truncated=preview_data["preview_rows"] < preview_data["total_rows"],
    )

    return LoadResult(
        rows_affected=len(df),
        columns_affected=[str(col) for col in df.columns],
        data=data_preview,
        memory_usage_mb=df.memory_usage(deep=True).sum() / (1024 * 1024),
    )
```
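The truncation flag and memory figure follow directly from pandas. A short sketch assuming the same 5-row preview as above:

```python
import pandas as pd

df = pd.DataFrame({"a": range(100), "b": range(100)})

# truncated is set when the preview shows fewer rows than the full frame
preview_rows = min(5, len(df))
truncated = preview_rows < len(df)

# memory_usage_mb: deep byte count across all columns, converted to MB
memory_mb = df.memory_usage(deep=True).sum() / (1024 * 1024)

print(truncated)       # True
print(memory_mb > 0)   # True
```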

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/jonpspri/databeak'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.