The CSV Editor server provides AI-powered tools for comprehensive CSV data manipulation, analysis, and validation with robust session management and history tracking.
Core Capabilities:
Data Loading & Export: Load CSV data from files, URLs, or string content; export to CSV, JSON, Excel, Parquet, HTML, and Markdown formats
Data Manipulation: Filter, sort, select, rename, add, remove, update, and transform columns; change data types and handle missing values
Data Analysis: Generate statistics, calculate correlations, group by with aggregation, value counts, outlier detection, and comprehensive data profiling
Data Quality: Remove duplicates, validate against schemas, quality assurance checks, and anomaly detection
Session Management: Multi-user support with isolated sessions, creation, listing, and cleanup capabilities
History & Auto-Save: Full undo/redo functionality, operation tracking, restore points, and configurable auto-save strategies (overwrite, backup, versioned, custom)
Server Utilities: Health status monitoring and capability information
CSV Editor uses pandas as its core data processing engine for filtering, cleaning, statistical analysis, and transformation, with NumPy powering numerical computing and statistical operations.
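As an illustration of the kind of operations these tools wrap, here is a plain-pandas sketch of filtering and group-by aggregation (the column names and data are invented for the example):

```python
import io
import pandas as pd

# Illustrative only: filter rows, then aggregate per group, the way the
# server's filter_rows and group_by_aggregate tools operate conceptually.
csv_text = """region,amount,status
north,1200,active
south,800,inactive
north,1500,active
east,300,active
"""
df = pd.read_csv(io.StringIO(csv_text))

# Filter: keep active rows over a threshold
active = df[(df["status"] == "active") & (df["amount"] > 500)]

# Group-by aggregation: total and mean amount per region
summary = active.groupby("region")["amount"].agg(["sum", "mean"])
print(summary)
```

The server exposes the same pattern declaratively, so the AI assistant can request it without writing pandas code itself.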
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@CSV Editor load the sales data and remove duplicate rows".
That's it! The server will respond to your query, and you can continue using it as needed.
CSV Editor - AI-Powered CSV Processing via MCP
Transform how AI assistants work with CSV data. CSV Editor is a high-performance MCP server that gives Claude, ChatGPT, and other AI assistants powerful data manipulation capabilities through simple commands.
Why CSV Editor?
The Problem
AI assistants struggle with complex data operations: they can read files, but they lack tools for filtering, transforming, analyzing, and validating CSV data efficiently.
The Solution
CSV Editor bridges this gap by providing AI assistants with 40+ specialized tools for CSV operations, turning them into powerful data analysts that can:
Clean messy datasets in seconds
Perform complex statistical analysis
Validate data quality automatically
Transform data with natural language commands
Track all changes with undo/redo capabilities
Key Differentiators
| Feature | CSV Editor | Traditional Tools |
| --- | --- | --- |
| AI Integration | Native MCP protocol | Manual operations |
| Auto-Save | Automatic with strategies | Manual save required |
| History Tracking | Full undo/redo with snapshots | Limited or none |
| Session Management | Multi-user isolated sessions | Single user |
| Data Validation | Built-in quality scoring | Separate tools needed |
| Performance | Handles GB+ files with chunking | Memory limitations |
Quick Demo
# Your AI assistant can now do this:
"Load the sales data and remove duplicates"
"Filter for Q4 2024 transactions over $10,000"
"Calculate correlation between price and quantity"
"Fill missing values with the median"
"Export as Excel with the analysis"
# All with automatic history tracking and undo capability!

Quick Start (2 minutes)
Installing via Smithery
To install csv-editor for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @santoshray02/csv-editor --client claude

Fastest Installation (Recommended)
# Install uv if needed (one-time setup)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone and run
git clone https://github.com/santoshray02/csv-editor.git
cd csv-editor
uv sync
uv run csv-editor

Configure Your AI Assistant
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):
{
"mcpServers": {
"csv-editor": {
"command": "uv",
"args": ["tool", "run", "csv-editor"],
"env": {
"CSV_MAX_FILE_SIZE": "1073741824"
}
}
}
}

See MCP_CONFIG.md for detailed configuration.
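A quick way to catch typos after editing the config is to check that the file still parses as JSON and contains the server entry. This is a generic sanity check, not part of CSV Editor itself; the JSON below mirrors the example above:

```python
import json

# Sanity-check a Claude Desktop config snippet after editing.
# In practice you would read the real claude_desktop_config.json file.
config_text = """
{
  "mcpServers": {
    "csv-editor": {
      "command": "uv",
      "args": ["tool", "run", "csv-editor"],
      "env": {"CSV_MAX_FILE_SIZE": "1073741824"}
    }
  }
}
"""
config = json.loads(config_text)  # raises ValueError on malformed JSON
assert "csv-editor" in config["mcpServers"]
print("config OK")
```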
Real-World Use Cases
Data Analyst Workflow
# Morning: Load yesterday's data
session = load_csv("daily_sales.csv")
# Clean: Remove duplicates and fix types
remove_duplicates(session_id)
change_column_type("date", "datetime")
fill_missing_values(strategy="median", columns=["revenue"])
# Analyze: Get insights
get_statistics(columns=["revenue", "quantity"])
detect_outliers(method="iqr", threshold=1.5)
get_correlation_matrix(min_correlation=0.5)
# Report: Export cleaned data
export_csv(format="excel", file_path="clean_sales.xlsx")

ETL Pipeline
# Extract from multiple sources
load_csv_from_url("https://api.example.com/data.csv")
# Transform with complex operations
filter_rows(conditions=[
{"column": "status", "operator": "==", "value": "active"},
{"column": "amount", "operator": ">", "value": 1000}
])
add_column(name="quarter", formula="Q{(month-1)//3 + 1}")
group_by_aggregate(group_by=["quarter"], aggregations={
"amount": ["sum", "mean"],
"customer_id": "count"
})
# Load to different formats
export_csv(format="parquet") # For data warehouse
export_csv(format="json")     # For API

Data Quality Assurance
# Validate incoming data
validate_schema(schema={
"customer_id": {"type": "integer", "required": True},
"email": {"type": "string", "pattern": r"^[^@]+@[^@]+\.[^@]+$"},
"age": {"type": "integer", "min": 0, "max": 120}
})
# Quality scoring
quality_report = check_data_quality()
# Returns: overall_score, missing_data%, duplicates, outliers
# Anomaly detection
anomalies = find_anomalies(methods=["statistical", "pattern"])

Core Features
Data Operations
Load & Export: CSV, JSON, Excel, Parquet, HTML, Markdown
Transform: Filter, sort, group, pivot, join
Clean: Remove duplicates, handle missing values, fix types
Calculate: Add computed columns, aggregations
Analysis Tools
Statistics: Descriptive stats, correlations, distributions
Outliers: IQR, Z-score, custom thresholds
Profiling: Complete data quality reports
Validation: Schema checking, quality scoring
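The IQR method listed above is simple enough to sketch in a few lines; this is a generic illustration of the technique, not the server's actual implementation:

```python
import statistics

# IQR outlier detection, the idea behind
# detect_outliers(method="iqr", threshold=1.5):
# flag values farther than threshold * IQR outside the quartiles.
def iqr_outliers(values, threshold=1.5):
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles
    iqr = q3 - q1
    lo, hi = q1 - threshold * iqr, q3 + threshold * iqr
    return [v for v in values if v < lo or v > hi]

data = [10, 12, 11, 13, 12, 11, 95]  # 95 is an obvious outlier
print(iqr_outliers(data))  # → [95]
```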
Productivity Features
Auto-Save: Never lose work with configurable strategies
History: Full undo/redo with operation tracking
Sessions: Multi-user support with isolation
Performance: Stream processing for large files
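Stream processing for large files typically means reading the CSV in fixed-size chunks rather than loading it whole. In pandas, the engine this server builds on, that looks roughly like this (toy data for illustration):

```python
import io
import pandas as pd

# Chunked streaming: pandas yields DataFrames of at most `chunksize` rows,
# so memory use stays bounded regardless of file size.
csv_text = "x\n" + "\n".join(str(i) for i in range(10))
total = 0
for chunk in pd.read_csv(io.StringIO(csv_text), chunksize=4):
    total += len(chunk)  # process each chunk, e.g. filter or aggregate

print(total)  # → 10
```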
Available Tools
I/O Operations
load_csv - Load from file
load_csv_from_url - Load from URL
load_csv_from_content - Load from string
export_csv - Export to various formats
get_session_info - Session details
list_sessions - Active sessions
close_session - Cleanup
Data Manipulation
filter_rows - Complex filtering
sort_data - Multi-column sort
select_columns - Column selection
rename_columns - Rename columns
add_column - Add computed columns
remove_columns - Remove columns
update_column - Update values
change_column_type - Type conversion
fill_missing_values - Handle nulls
remove_duplicates - Deduplicate
Analysis
get_statistics - Statistical summary
get_column_statistics - Column stats
get_correlation_matrix - Correlations
group_by_aggregate - Group operations
get_value_counts - Frequency counts
detect_outliers - Find outliers
profile_data - Data profiling
Validation
validate_schema - Schema validation
check_data_quality - Quality metrics
find_anomalies - Anomaly detection
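Conceptually, schema validation like the validate_schema example earlier checks each row against per-column rules. A minimal stdlib sketch (the rule names mirror the earlier example; this is not the server's code):

```python
import re

# Per-column rules: type, required, regex pattern, numeric min/max.
schema = {
    "customer_id": {"type": int, "required": True},
    "email": {"pattern": r"^[^@]+@[^@]+\.[^@]+$"},
    "age": {"type": int, "min": 0, "max": 120},
}

def validate_row(row, schema):
    errors = []
    for col, rules in schema.items():
        value = row.get(col)
        if value is None:
            if rules.get("required"):
                errors.append(f"{col}: missing")
            continue
        if "type" in rules and not isinstance(value, rules["type"]):
            errors.append(f"{col}: wrong type")
        if "pattern" in rules and not re.match(rules["pattern"], value):
            errors.append(f"{col}: pattern mismatch")
        if "min" in rules and value < rules["min"]:
            errors.append(f"{col}: below min")
        if "max" in rules and value > rules["max"]:
            errors.append(f"{col}: above max")
    return errors

print(validate_row({"customer_id": 7, "email": "a@b.co", "age": 34}, schema))  # → []
print(validate_row({"email": "not-an-email", "age": 200}, schema))  # 3 errors
```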
Auto-Save & History
configure_auto_save - Setup auto-save
get_auto_save_status - Check status
undo/redo - Navigate history
get_history - View operations
restore_to_operation - Time travel
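Snapshot-based undo/redo, as described in the feature table, can be sketched as a list of snapshots plus a cursor. This toy class illustrates the mechanism only; the server's real history tracking is more elaborate:

```python
import copy

# Each operation pushes a deep-copied snapshot; undo/redo move a cursor.
class History:
    def __init__(self, initial):
        self.snapshots = [copy.deepcopy(initial)]
        self.cursor = 0

    def record(self, state):
        # Recording after an undo discards the redo branch
        del self.snapshots[self.cursor + 1:]
        self.snapshots.append(copy.deepcopy(state))
        self.cursor += 1

    def undo(self):
        self.cursor = max(0, self.cursor - 1)
        return self.snapshots[self.cursor]

    def redo(self):
        self.cursor = min(len(self.snapshots) - 1, self.cursor + 1)
        return self.snapshots[self.cursor]

h = History({"rows": 100})
h.record({"rows": 95})        # e.g. after remove_duplicates
print(h.undo())   # → {'rows': 100}
print(h.redo())   # → {'rows': 95}
```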
Configuration
Environment Variables
| Variable | Default | Description |
| --- | --- | --- |
| CSV_MAX_FILE_SIZE | 1GB | Maximum file size |
| | 3600s | Session timeout |
| | 10000 | Processing chunk size |
| | true | Enable auto-save |
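A server typically reads such settings from the environment with a fallback default. CSV_MAX_FILE_SIZE is the one variable confirmed by the config example above; the 1 GiB fallback here is an assumption for illustration:

```python
import os

# Read the size limit from the environment, falling back to 1 GiB (bytes).
max_file_size = int(os.environ.get("CSV_MAX_FILE_SIZE", str(1024 ** 3)))
print(max_file_size > 0)  # → True
```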
Auto-Save Strategies
CSV Editor automatically saves your work with configurable strategies:
Overwrite (default) - Update original file
Backup - Create timestamped backups
Versioned - Maintain version history
Custom - Save to specified location
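The Backup strategy amounts to timestamped copies with pruning. A stdlib sketch of that idea, using the same parameter names as configure_auto_save below (this is an illustration, not the server's implementation):

```python
import shutil
import tempfile
import time
from pathlib import Path

# Copy the working file to a timestamped backup, then prune old copies.
def backup(src, backup_dir, max_backups=10):
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    src = Path(src)
    dest = backup_dir / f"{src.stem}.{stamp}{src.suffix}"
    shutil.copy2(src, dest)
    # Keep only the newest max_backups copies (names sort chronologically)
    for old in sorted(backup_dir.glob(f"{src.stem}.*"))[:-max_backups]:
        old.unlink()
    return dest

# Example in a temp directory so it is safe to run:
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "data.csv"
    src.write_text("a,b\n1,2\n")
    made = backup(src, Path(tmp) / "backups", max_backups=3)
    print(made.name)  # e.g. data.20250101-120000.csv
```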
# Configure auto-save
configure_auto_save(
strategy="backup",
backup_dir="/backups",
max_backups=10
)

Advanced Installation Options
Using pip
git clone https://github.com/santoshray02/csv-editor.git
cd csv-editor
pip install -e .

Using pipx (Global)
pipx install git+https://github.com/santoshray02/csv-editor.git

From GitHub (Recommended)
# Install latest version
pip install git+https://github.com/santoshray02/csv-editor.git
# Or using uv
uv pip install git+https://github.com/santoshray02/csv-editor.git
# Install specific version
pip install git+https://github.com/santoshray02/csv-editor.git@v1.0.1

Development
Running Tests
uv run test # Run tests
uv run test-cov # With coverage
uv run all-checks    # Format, lint, type-check, test

Project Structure
csv-editor/
├── src/csv_editor/    # Core implementation
│   ├── tools/         # MCP tool implementations
│   ├── models/        # Data models
│   └── server.py      # MCP server
├── tests/             # Test suite
├── examples/          # Usage examples
└── docs/              # Documentation

Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Quick Contribution Guide
Fork the repository
Create a feature branch
Make your changes with tests
Run
uv run all-checks
Submit a pull request
Roadmap
SQL query interface
Real-time collaboration
Advanced visualizations
Machine learning integrations
Cloud storage support
Performance optimizations for 10GB+ files
Support
Issues: GitHub Issues
Discussions: GitHub Discussions
Documentation: Wiki
License
MIT License - see LICENSE file
Acknowledgments
Built with pandas and NumPy.
Ready to supercharge your AI's data capabilities? Get started in 2 minutes!