
MCP Tabular Data Analysis Server

by K02D

export_data

Export filtered, sorted, and transformed tabular data from CSV or SQLite files to new CSV files for analysis and sharing.

Instructions

Export filtered/transformed data to a new CSV file.

Args:
  file_path: Path to the source CSV or SQLite file
  output_name: Name for the output file (without extension; saved to the data/ folder)
  filter_column: Optional column to filter on
  filter_operator: Filter operator: 'eq', 'ne', 'gt', 'gte', 'lt', 'lte', or 'contains'
  filter_value: Value to filter by
  columns: List of columns to include (default: all)
  sort_by: Column to sort by
  sort_ascending: Sort direction (default: ascending)
  limit: Maximum number of rows to export

Returns:
  Dictionary containing export details and the output file path
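As a sketch of how a client might fill in these parameters, here is a hypothetical argument payload (the file, column names, and values are illustrative, not from the server's test data). It asks for the top 100 rows of a sales CSV where region equals "West", keeping three columns and sorting by revenue descending:

```python
# Hypothetical arguments for an export_data tool call.
# File path, column names, and values are made-up placeholders.
args = {
    "file_path": "data/sales.csv",   # source CSV or SQLite file
    "output_name": "west_sales",     # saved as data/west_sales_<timestamp>.csv
    "filter_column": "region",
    "filter_operator": "eq",         # one of: eq, ne, gt, gte, lt, lte, contains
    "filter_value": "West",
    "columns": ["region", "revenue", "date"],
    "sort_by": "revenue",
    "sort_ascending": False,
    "limit": 100,
}
```

Only `file_path` and `output_name` are required; every other key can be omitted to skip that step of the pipeline.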

Input Schema

Name             Required  Description                                                   Default
file_path        Yes       Path to the source CSV or SQLite file                         -
output_name      Yes       Name for the output file (without extension; saved to data/)  -
filter_column    No        Column to filter on                                           none
filter_operator  No        One of 'eq', 'ne', 'gt', 'gte', 'lt', 'lte', 'contains'       none
filter_value     No        Value to filter by                                            none
columns          No        Columns to include in the export                              all
sort_by          No        Column to sort by                                             none
sort_ascending   No        Sort direction                                                ascending
limit            No        Maximum number of rows to export                              no limit

Implementation Reference

  • The main handler function for the 'export_data' MCP tool. It loads data from a file, optionally filters, selects columns, sorts, limits rows, and exports to a new timestamped CSV file in the data/ directory. Registered via @mcp.tool() decorator.
    def export_data(
        file_path: str,
        output_name: str,
        filter_column: str | None = None,
        filter_operator: str | None = None,
        filter_value: str | float | None = None,
        columns: list[str] | None = None,
        sort_by: str | None = None,
        sort_ascending: bool = True,
        limit: int | None = None,
    ) -> dict[str, Any]:
        """Export filtered/transformed data to a new CSV file.

        Args:
            file_path: Path to source CSV or SQLite file
            output_name: Name for output file (without extension, saved to data/ folder)
            filter_column: Optional column to filter on
            filter_operator: Filter operator - 'eq', 'ne', 'gt', 'gte', 'lt', 'lte', 'contains'
            filter_value: Value to filter by
            columns: List of columns to include (default: all)
            sort_by: Column to sort by
            sort_ascending: Sort direction (default: ascending)
            limit: Maximum rows to export

        Returns:
            Dictionary containing export details and file path
        """
        df = _load_data(file_path)
        original_count = len(df)

        # Apply filter if specified
        if filter_column and filter_operator and filter_value is not None:
            if filter_column not in df.columns:
                raise ValueError(f"Filter column '{filter_column}' not found")
            if filter_operator == "eq":
                df = df[df[filter_column] == filter_value]
            elif filter_operator == "ne":
                df = df[df[filter_column] != filter_value]
            elif filter_operator == "gt":
                df = df[df[filter_column] > float(filter_value)]
            elif filter_operator == "gte":
                df = df[df[filter_column] >= float(filter_value)]
            elif filter_operator == "lt":
                df = df[df[filter_column] < float(filter_value)]
            elif filter_operator == "lte":
                df = df[df[filter_column] <= float(filter_value)]
            elif filter_operator == "contains":
                df = df[df[filter_column].astype(str).str.contains(str(filter_value), case=False, na=False)]
            else:
                raise ValueError(f"Unknown operator: {filter_operator}")

        # Select columns
        if columns:
            invalid = [c for c in columns if c not in df.columns]
            if invalid:
                raise ValueError(f"Columns not found: {invalid}")
            df = df[columns]

        # Sort
        if sort_by:
            if sort_by not in df.columns:
                raise ValueError(f"Sort column '{sort_by}' not found")
            df = df.sort_values(sort_by, ascending=sort_ascending)

        # Limit rows
        if limit:
            df = df.head(limit)

        # Save file
        output_dir = _PROJECT_ROOT / "data"
        output_dir.mkdir(parents=True, exist_ok=True)

        # Clean output name and add timestamp
        clean_name = "".join(c for c in output_name if c.isalnum() or c in ("-", "_"))
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        output_file = output_dir / f"{clean_name}_{timestamp}.csv"
        df.to_csv(output_file, index=False)

        return {
            "success": True,
            "source_file": file_path,
            "output_file": str(output_file.relative_to(_PROJECT_ROOT)),
            "absolute_path": str(output_file),
            "original_rows": original_count,
            "exported_rows": len(df),
            "exported_columns": df.columns.tolist(),
            "filter_applied": f"{filter_column} {filter_operator} {filter_value}" if filter_column else None,
        }
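To see the filter-select-sort-limit pipeline in isolation, here is a minimal sketch that applies the same pandas operations the handler uses, but against an in-memory DataFrame instead of a file (the data and column names are made up for illustration):

```python
import pandas as pd

# Illustrative in-memory data standing in for a loaded CSV.
df = pd.DataFrame({
    "region": ["West", "East", "West", "North"],
    "revenue": [120.0, 80.0, 200.0, 50.0],
})

# filter_operator == "gt": keep rows where revenue > 60
df = df[df["revenue"] > float("60")]
# sort_by="revenue", sort_ascending=False
df = df.sort_values("revenue", ascending=False)
# limit=2
df = df.head(2)

print(df["revenue"].tolist())  # -> [200.0, 120.0]
```

Note that, as in the handler, the numeric operators coerce `filter_value` with `float(...)`, so passing a non-numeric value to 'gt'/'lt' and friends raises a `ValueError`; only 'eq', 'ne', and 'contains' accept arbitrary strings.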

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/K02D/mcp-tabular'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.