<div align="center">

# mcp-csv-analyst

**MCP server for querying and analyzing CSV files**

[npm](https://www.npmjs.com/package/mcp-csv-analyst) · [License: MIT](https://opensource.org/licenses/MIT)

Give Claude (or any MCP client) the ability to load, query, filter, aggregate, and analyze CSV data.

</div>

---
## Features
- **csv_describe** - Load a CSV and get schema, row count, column types, and statistics
- **csv_filter** - Filter rows by column conditions (eq, gt, lt, contains, etc.)
- **csv_aggregate** - Compute sum, avg, min, max, count, or median on numeric columns
- **csv_group_by** - Group by a column and aggregate another
- **csv_sample** - Get sample rows with offset/limit
- **csv_unique** - Get unique values and their counts for any column
## Quick Start
### With Claude Desktop
Add to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "csv-analyst": {
      "command": "npx",
      "args": ["-y", "mcp-csv-analyst"]
    }
  }
}
```
### With Claude Code
```bash
claude mcp add csv-analyst -- npx -y mcp-csv-analyst
```
### Manual Install
```bash
npm install -g mcp-csv-analyst
```
## Tools
### csv_describe
Load a CSV file and get an overview of its structure.
**Parameters:**
- `file_path` (string) - Absolute path to CSV file
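Example call (the path is the sample dataset used in the conversation at the end of this README):

```json
{
  "file_path": "/data/sales.csv"
}
```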
### csv_filter
Filter rows by a column condition.
**Parameters:**
- `file_path` (string) - Path to CSV
- `column` (string) - Column to filter on
- `operator` (enum) - `eq`, `neq`, `gt`, `gte`, `lt`, `lte`, `contains`, `starts_with`
- `value` (string) - Value to compare
- `limit` (number, optional) - Max rows to return (default 50)
- `sort_by` (string, optional) - Column to sort by
- `sort_dir` (enum, optional) - `asc` or `desc`
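For instance, to pull the largest orders for one region of the sample sales data (the region value and limit are illustrative):

```json
{
  "file_path": "/data/sales.csv",
  "column": "region",
  "operator": "eq",
  "value": "West",
  "limit": 20,
  "sort_by": "total",
  "sort_dir": "desc"
}
```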
### csv_aggregate
Compute an aggregate on a numeric column.
**Parameters:**
- `file_path` (string) - Path to CSV
- `column` (string) - Numeric column
- `operation` (enum) - `sum`, `avg`, `min`, `max`, `count`, `median`
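Example: the overall average price in the sample dataset:

```json
{
  "file_path": "/data/sales.csv",
  "column": "price",
  "operation": "avg"
}
```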
### csv_group_by
Group rows and compute an aggregate.
**Parameters:**
- `file_path` (string) - Path to CSV
- `group_column` (string) - Column to group by
- `agg_column` (string) - Column to aggregate
- `operation` (enum) - `sum`, `avg`, `count`, `min`, `max`
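Example: total sales summed per region, as in the conversation below:

```json
{
  "file_path": "/data/sales.csv",
  "group_column": "region",
  "agg_column": "total",
  "operation": "sum"
}
```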
### csv_sample
Get sample rows from a CSV.
**Parameters:**
- `file_path` (string) - Path to CSV
- `count` (number, optional) - Number of rows (default 10)
- `offset` (number, optional) - Starting row offset
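Example: five rows starting at row 100 (count and offset values are illustrative):

```json
{
  "file_path": "/data/sales.csv",
  "count": 5,
  "offset": 100
}
```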
### csv_unique
Get unique values with counts.
**Parameters:**
- `file_path` (string) - Path to CSV
- `column` (string) - Column name
- `limit` (number, optional) - Max values (default 50)
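Example: the distinct products and how often each appears (the limit is illustrative):

```json
{
  "file_path": "/data/sales.csv",
  "column": "product",
  "limit": 25
}
```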
## Example Conversation
> **You:** Analyze the sales data in /data/sales.csv
>
> **Claude:** Let me look at the structure of your CSV first...
> *Uses csv_describe to examine the file*
>
> The file has 10,000 rows with columns: date, product, region, quantity, price, total.
> Let me compute some key metrics...
> *Uses csv_group_by to sum total by region*
> *Uses csv_aggregate to get the overall average price*
## License
MIT