CSV Editor - AI-Powered CSV Processing via MCP
Stateful CSV editing for AI assistants. CSV Editor is an MCP server that gives Claude, ChatGPT, Cursor, Windsurf, and other MCP clients a full suite of CSV operations — with sessions, undo/redo, and auto-save built in. Most data MCPs are analyze-only; this one lets the AI edit.
🆕 What's new in v2.0.0 (April 2026)
FastMCP 3.x — migrated from FastMCP 2 to 3.2, aligning with MCP spec 2025-11-25.
Python 3.11+ required (was 3.10+). Tested against 3.11 / 3.12 / 3.13 / 3.14.
--transport sse removed. Use --transport http (Streamable HTTP) for remote deployments; SSE was deprecated by FastMCP 3.
Dependency refresh: pydantic 2.13, pyarrow 23, httpx 0.28.
New
CSV_EDITOR_CSV_HISTORY_DIR env var for configuring the history directory.
First-class CI test matrix on GitHub Actions.
Users who pinned csv-editor>=1,<2 are unaffected and will continue to receive 1.x patches if needed. See CHANGELOG.md for the full list of breaking changes.
🎯 Why CSV Editor?
The Problem
AI assistants struggle with complex data operations: they can read files, but they lack tools for filtering, transforming, analyzing, and validating CSV data efficiently.
The Solution
CSV Editor bridges this gap by providing AI assistants with 39 specialized tools for CSV operations, turning them into powerful data analysts that can:
Clean messy datasets in seconds
Perform complex statistical analysis
Validate data quality automatically
Transform data with natural language commands
Track all changes with undo/redo capabilities
Key differentiators vs. other CSV / tabular MCPs
| Capability | CSV Editor | DuckDB / Polars MCPs | Most pandas-based MCPs |
| --- | --- | --- | --- |
| Stateful editing (load → mutate → save) | ✅ | Read-only or single-shot | Partial |
| Undo / redo with snapshots | ✅ | ❌ | ❌ |
| Multi-session isolation | ✅ | Limited | Limited |
| Auto-save with strategies | ✅ (overwrite / backup / versioned / custom) | ❌ | ❌ |
| Quality scoring & validation | ✅ | SQL-only | Via separate tools |
| File-size sweet spot | <1 GB (pandas) | 50 GB+ (streaming SQL) | Small–medium |
| Best for | Edit-and-review workflows | Large-file analytics | Quick analysis |
When to pick CSV Editor: you want the AI to make changes to a CSV and iterate, not just answer questions about it. If your workload is read-only analytics on multi-GB files, a DuckDB-based MCP is likely a better fit; CSV Editor's DuckDB/Polars engine support is tracked on the roadmap.
⚡ Quick Demo
# Your AI assistant can now do this:
"Load the sales data and remove duplicates"
"Filter for Q4 2024 transactions over $10,000"
"Calculate correlation between price and quantity"
"Fill missing values with the median"
"Export as Excel with the analysis"
# All with automatic history tracking and undo capability!
🚀 Quick Start (2 minutes)
Installing via Smithery
To install csv-editor for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @santoshray02/csv-editor --client claude
Fastest Installation (Recommended)
# Install uv if needed (one-time setup)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone and run
git clone https://github.com/santoshray02/csv-editor.git
cd csv-editor
uv sync
uv run csv-editor
Configure Your AI Assistant
Add to your claude_desktop_config.json:
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
Windows:
%APPDATA%\Claude\claude_desktop_config.json
Linux:
~/.config/Claude/claude_desktop_config.json
{
"mcpServers": {
"csv-editor": {
"command": "uv",
"args": ["tool", "run", "csv-editor"],
"env": {
"CSV_MAX_FILE_SIZE": "1073741824"
}
}
}
}
Any MCP-capable client works with stdio transport. See MCP_CONFIG.md for per-client setup.
ChatGPT Connectors require remote Streamable HTTP with OAuth, which is tracked on the roadmap but not yet in v2.0.0. Use stdio-based clients (Claude Desktop, Claude Code, Cursor, etc.) in the meantime.
💡 Real-World Use Cases
📊 Data Analyst Workflow
# Morning: Load yesterday's data
session = load_csv("daily_sales.csv")
# Clean: Remove duplicates and fix types
remove_duplicates(session_id)
change_column_type("date", "datetime")
fill_missing_values(strategy="median", columns=["revenue"])
# Analyze: Get insights
get_statistics(columns=["revenue", "quantity"])
detect_outliers(method="iqr", threshold=1.5)
get_correlation_matrix(min_correlation=0.5)
# Report: Export cleaned data
export_csv(format="excel", file_path="clean_sales.xlsx")
🏭 ETL Pipeline
# Extract from multiple sources
load_csv_from_url("https://api.example.com/data.csv")
# Transform with complex operations
filter_rows(conditions=[
{"column": "status", "operator": "==", "value": "active"},
{"column": "amount", "operator": ">", "value": 1000}
])
add_column(name="quarter", formula="Q{(month-1)//3 + 1}")
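# Aside (plain Python, hypothetical, not a tool call): the "quarter" formula
# above maps a 1-12 month number to its quarter with integer division:
#   month = 11                      # November
#   quarter = (month - 1) // 3 + 1  # Q4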
group_by_aggregate(group_by=["quarter"], aggregations={
"amount": ["sum", "mean"],
"customer_id": "count"
})
# Load to different formats
export_csv(format="parquet") # For data warehouse
export_csv(format="json")     # For API
🔍 Data Quality Assurance
# Validate incoming data
validate_schema(schema={
"customer_id": {"type": "integer", "required": True},
"email": {"type": "string", "pattern": r"^[^@]+@[^@]+\.[^@]+$"},
"age": {"type": "integer", "min": 0, "max": 120}
})
# Quality scoring
quality_report = check_data_quality()
# Returns: overall_score, missing_data%, duplicates, outliers
# Anomaly detection
anomalies = find_anomalies(methods=["statistical", "pattern"])
🎨 Core Features
Data Operations
Load & Export: CSV, JSON, Excel, Parquet, HTML, Markdown
Transform: Filter, sort, group, pivot, join
Clean: Remove duplicates, handle missing values, fix types
Calculate: Add computed columns, aggregations
Analysis Tools
Statistics: Descriptive stats, correlations, distributions
Outliers: IQR, Z-score, custom thresholds
Profiling: Complete data quality reports
Validation: Schema checking, quality scoring
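For reference, the IQR rule behind detect_outliers is Tukey's fence test. A minimal pure-Python sketch of the idea (illustrative only, not the server's actual implementation):

```python
from statistics import quantiles

def iqr_outliers(values, threshold=1.5):
    """Flag values outside [Q1 - t*IQR, Q3 + t*IQR] (Tukey's rule)."""
    q1, _, q3 = quantiles(values, n=4)          # lower and upper quartiles
    iqr = q3 - q1
    lo, hi = q1 - threshold * iqr, q3 + threshold * iqr
    return [v for v in values if v < lo or v > hi]

print(iqr_outliers([1, 2, 3, 4, 5, 100]))       # the extreme value is flagged
```

Raising the threshold (e.g. 3.0) flags only more extreme points, which is what the tool's threshold parameter controls.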
Productivity Features
Auto-Save: Never lose work with configurable strategies
History: Full undo/redo with operation tracking
Sessions: Multi-user support with isolation
Performance: Stream processing for large files
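"Stream processing" here refers to the familiar chunked-read pattern: process a bounded slice of rows at a time so the whole file never sits in memory. A hypothetical pure-Python sketch of the concept (not the server's internals):

```python
import csv
from itertools import islice

def iter_csv_chunks(path, chunk_size=10_000):
    """Yield lists of up to chunk_size row-dicts from a CSV file."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        while chunk := list(islice(reader, chunk_size)):
            yield chunk
```

Each chunk can be filtered or aggregated and then discarded before the next one is read, keeping memory flat regardless of file size.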
📚 Available Tools
Server info (2)
health_check — health status + active session count
get_server_info — capabilities, supported formats, limits
I/O operations (7)
load_csv — Load from file
load_csv_from_url — Load from URL
load_csv_from_content — Load from string
export_csv — Export to various formats (csv, tsv, json, excel, parquet, html, markdown)
get_session_info — Session details
list_sessions — Active sessions
close_session — Cleanup
Data manipulation (10)
filter_rows — Complex filtering
sort_data — Multi-column sort
select_columns — Column selection
rename_columns — Rename columns
add_column — Add computed columns
remove_columns — Remove columns
update_column — Update values
change_column_type — Type conversion
fill_missing_values — Handle nulls
remove_duplicates — Deduplicate
Analysis (7)
get_statistics — Statistical summary
get_column_statistics — Column stats
get_correlation_matrix — Correlations
group_by_aggregate — Group operations
get_value_counts — Frequency counts
detect_outliers — Find outliers (IQR, Z-score)
profile_data — Data profiling
Validation (3)
validate_schema — Schema validation
check_data_quality — Quality metrics + overall score
find_anomalies — Anomaly detection
Auto-save (4)
configure_auto_save — Setup auto-save strategy
disable_auto_save — Turn off auto-save
get_auto_save_status — Check status
trigger_manual_save — Force a save now
History (6)
undo — Step back one operation
redo — Step forward after undo
get_history — View operations log
restore_to_operation — Time travel to a specific operation
clear_history — Reset history
export_history — Export operations log
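Conceptually, snapshot-based undo/redo is just two stacks of states. A hypothetical sketch of the mechanism (the server's actual implementation may differ):

```python
class History:
    """Snapshot-based undo/redo: two stacks, current state on top of _undo."""

    def __init__(self, initial):
        self._undo = [initial]
        self._redo = []

    @property
    def current(self):
        return self._undo[-1]

    def apply(self, new_state):
        self._undo.append(new_state)
        self._redo.clear()          # a new edit invalidates the redo stack

    def undo(self):
        if len(self._undo) > 1:
            self._redo.append(self._undo.pop())
        return self.current

    def redo(self):
        if self._redo:
            self._undo.append(self._redo.pop())
        return self.current
```

restore_to_operation is then just repeated undo/redo until the target snapshot is on top.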
⚙️ Configuration
Environment variables
| Variable | Default | Description |
| --- | --- | --- |
| CSV_MAX_FILE_SIZE | | Maximum file size (megabytes) |
| | | Session timeout |
| CSV_EDITOR_CSV_HISTORY_DIR | | Directory for persisted operation history |
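These variables are read from the server's process environment. A sketch of the usual read-with-fallback pattern (the fallback values here are illustrative assumptions, not the real defaults):

```python
import os

# Hypothetical fallbacks for illustration only
max_file_mb = int(os.environ.get("CSV_MAX_FILE_SIZE", "1024"))
history_dir = os.environ.get(
    "CSV_EDITOR_CSV_HISTORY_DIR",
    os.path.expanduser("~/.csv_editor/history"),
)
```

Set them in your MCP client config (see the "env" block in the Quick Start example) or export them in the shell before launching the server.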
Auto-Save Strategies
CSV Editor automatically saves your work with configurable strategies:
Overwrite (default) - Update original file
Backup - Create timestamped backups
Versioned - Maintain version history
Custom - Save to specified location
# Configure auto-save
configure_auto_save(
strategy="backup",
backup_dir="/backups",
max_backups=10
)
🛠️ Advanced Installation Options
Using pip
git clone https://github.com/santoshray02/csv-editor.git
cd csv-editor
pip install -e .
Using pipx (Global)
pipx install git+https://github.com/santoshray02/csv-editor.git
From PyPI (once v2.0.0 is live)
pip install csv-editor # latest
pip install csv-editor==2.0.0 # pinned
# Or with uv:
uv tool install csv-editor
From GitHub
# Latest main
pip install git+https://github.com/santoshray02/csv-editor.git
# Specific release
pip install git+https://github.com/santoshray02/csv-editor.git@v2.0.0
# Or with uv
uv pip install git+https://github.com/santoshray02/csv-editor.git@v2.0.0
🧪 Development
Running tests
uv run pytest tests/ -v # Run tests
uv run pytest tests/ --cov=src/csv_editor # With coverage
uv run ruff check src/ tests/ # Lint
uv run black --check src/ tests/ # Format check
uv run mypy src/                          # Type check
CI runs the full pytest matrix on Python 3.11–3.14 for every push to main — see .github/workflows/test.yml.
Project Structure
csv-editor/
├── src/csv_editor/ # Core implementation
│ ├── tools/ # MCP tool implementations
│ ├── models/ # Data models
│ └── server.py # MCP server
├── tests/ # Test suite
├── examples/ # Usage examples
└── docs/              # Documentation
🤝 Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Quick Contribution Guide
Fork the repository
Create a feature branch
Make your changes with tests
Run uv run pytest tests/ and uv run ruff check src/ tests/
Submit a pull request
📈 Roadmap
Post-v2.0.0 priorities (see the 2026 relevance audit for context):
pandas 3.0 / numpy 2.4 — Copy-on-Write migration, Arrow-backed default strings (follow-up to v2.0.0).
DuckDB + Polars engines — swappable backends with DuckDB as the default for files >100 MB (closes the large-file gap).
MCP async Tasks + Resource Links — non-blocking load_csv / export_csv / profile_data for GB files; paginated large results.
Remote HTTP + OAuth (CIMD) — enables ChatGPT Connectors and VS Code Copilot remote usage.
Elicitation — prompt for ambiguous CSV dialect / encoding / dtype at load time instead of failing.
Docs migration — Docusaurus → MkDocs-Material with mkdocstrings for auto-generated API docs.
💬 Support
Issues: GitHub Issues
Discussions: GitHub Discussions
Documentation: Wiki
📄 License
MIT License - see LICENSE file
🙏 Acknowledgments
Built with FastMCP and pandas.
Ready to supercharge your AI's data capabilities? Get started in 2 minutes →