MCP Tabular Data Analysis Server

by K02D

A Model Context Protocol (MCP) server that provides powerful tools for analyzing numeric and tabular data. Works with CSV files and SQLite databases.

Features

Core Tools

  • list_data_files - List available CSV and SQLite files in the data directory

  • describe_dataset - Generate comprehensive statistics for a dataset (shape, types, distributions, missing values)

  • detect_anomalies - Find outliers using Z-score or IQR methods

  • compute_correlation - Calculate correlation matrices between numeric columns

  • filter_rows - Filter data using various operators (eq, gt, lt, contains, etc.)

  • group_aggregate - Group data and compute aggregations (sum, mean, count, etc.)

  • query_sqlite - Execute SQL queries on SQLite databases

  • list_tables - List all tables and schemas in a SQLite database

Advanced Analytics Tools (NEW!)

  • create_pivot_table - Create Excel-style pivot tables with flexible aggregations

  • data_quality_report - Comprehensive data quality assessment with scores and recommendations

  • analyze_time_series - Time series analysis with trends, seasonality, and moving averages

  • generate_chart - Create visualizations (bar, line, scatter, histogram, pie, box plots)

  • merge_datasets - Join/merge two datasets (inner, left, right, outer joins)

  • statistical_test - Hypothesis testing (t-test, ANOVA, chi-squared, correlation tests)

  • auto_insights - Automatically discover interesting patterns and insights

  • export_data - Export filtered/transformed data to new CSV files

Installation

Prerequisites

  • Python 3.10+

  • uv (recommended) or pip

Install with uv

cd mcp-tabular
uv sync

Install with pip

cd mcp-tabular
pip install -e .

Usage

Running the Server Directly

# With uv
uv run mcp-tabular

# With pip installation
mcp-tabular

Configure with Claude Desktop

Quick Setup:

  1. Find your Claude Desktop config file:

    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

    • Windows: %APPDATA%\Claude\claude_desktop_config.json

    • Linux: ~/.config/Claude/claude_desktop_config.json

  2. Add this configuration (replace /Users/kirondeb/mcp-tabular with your actual path):

{ "mcpServers": { "tabular-data": { "command": "/Users/kirondeb/mcp-tabular/.venv/bin/python", "args": [ "-m", "mcp_tabular.server" ] } } }
  1. Restart Claude Desktop completely (quit and reopen)

  2. Test it by asking Claude: "Describe the dataset in data/sample_sales.csv"

📖 For detailed instructions and troubleshooting, see CONNECT_TO_CLAUDE_DESKTOP.md

📝 For example prompts to try, see TEST_PROMPTS.md

Sample Data

The project includes sample data for testing:

  • data/sample_sales.csv - Sales transaction data

  • data/sample.db - SQLite database with customers, orders, and products tables

To create the SQLite sample database:

python scripts/create_sample_db.py
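
The script ships with the project and defines its own schema. Purely as an illustration of the kind of database it builds, a minimal sketch using Python's sqlite3 module might look like the following; only the three table names come from this README, the column names are assumptions.

```python
# Illustrative only: the real scripts/create_sample_db.py ships with the project.
# Table names match the README; the columns below are assumptions.
import sqlite3

with sqlite3.connect("data/sample.db") as conn:
    conn.executescript(
        """
        CREATE TABLE IF NOT EXISTS customers (
            customer_id    INTEGER PRIMARY KEY,
            name           TEXT,
            lifetime_value REAL
        );
        CREATE TABLE IF NOT EXISTS products (
            product_id INTEGER PRIMARY KEY,
            name       TEXT,
            price      REAL
        );
        CREATE TABLE IF NOT EXISTS orders (
            order_id    INTEGER PRIMARY KEY,
            customer_id INTEGER REFERENCES customers(customer_id),
            product_id  INTEGER REFERENCES products(product_id),
            quantity    INTEGER
        );
        """
    )
```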

Path Resolution

Important: All file paths are resolved relative to the project root directory. This means:

  • Relative paths like data/sample_sales.csv work from any working directory

  • Absolute paths also work as expected

  • The server automatically resolves paths relative to where mcp_tabular is installed (see the sketch below)
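
A minimal sketch of that kind of resolution, assuming a helper living inside src/mcp_tabular/ (illustrative only, not the server's actual code):

```python
from pathlib import Path

# Hypothetical helper. Assuming this module sits in src/mcp_tabular/, parents[2]
# walks up to the project root (src/mcp_tabular -> src -> project root).
PROJECT_ROOT = Path(__file__).resolve().parents[2]

def resolve_path(user_path: str) -> Path:
    """Return absolute paths unchanged; join relative ones onto the project root."""
    p = Path(user_path)
    return p if p.is_absolute() else PROJECT_ROOT / p
```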

Tool Examples

List Data Files

Discover available data files:

list_data_files()

Returns all CSV and SQLite files in the data directory with metadata.

Describe Dataset

Get comprehensive statistics about a dataset:

describe_dataset(file_path="data/sample_sales.csv")

Returns shape, column types, numeric statistics (mean, std, median, skew, kurtosis), categorical value counts, and a sample preview.
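
To spot-check these numbers outside the server, the same statistics can be computed directly with pandas; a rough standalone equivalent (not the server's internals):

```python
import pandas as pd

df = pd.read_csv("data/sample_sales.csv")

print(df.shape)                      # rows, columns
print(df.dtypes)                     # column types
print(df.isna().sum())               # missing values per column
numeric = df.select_dtypes(include="number")
print(numeric.agg(["mean", "std", "median", "skew", "kurtosis"]))
print(df.head())                     # sample preview
```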

Detect Anomalies

Find outliers in numeric columns:

detect_anomalies(
    file_path="data/sample_sales.csv",
    column="total_sales",
    method="zscore",
    threshold=3.0
)

Supports zscore and iqr methods.
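
Both methods are standard; a sketch of how they are commonly computed with pandas (illustrative, and the server's thresholds or details may differ):

```python
import pandas as pd

df = pd.read_csv("data/sample_sales.csv")
col = df["total_sales"]

# Z-score: flag values more than `threshold` standard deviations from the mean.
threshold = 3.0
z = (col - col.mean()) / col.std()
zscore_outliers = df[z.abs() > threshold]

# IQR: flag values beyond 1.5 * IQR outside the quartiles.
q1, q3 = col.quantile(0.25), col.quantile(0.75)
iqr = q3 - q1
iqr_outliers = df[(col < q1 - 1.5 * iqr) | (col > q3 + 1.5 * iqr)]

print(len(zscore_outliers), len(iqr_outliers))
```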

Compute Correlation

Analyze relationships between numeric columns:

compute_correlation(file_path="data/sample_sales.csv", method="pearson")

Returns full correlation matrix and top correlations ranked by strength.
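
A comparable standalone computation with pandas (illustrative, not necessarily the server's internals):

```python
import pandas as pd

df = pd.read_csv("data/sample_sales.csv")
numeric = df.select_dtypes(include="number")
corr = numeric.corr(method="pearson")
print(corr.round(3))

# Rank each column pair once by absolute correlation strength.
cols = list(corr.columns)
pairs = [
    (cols[i], cols[j], corr.iloc[i, j])
    for i in range(len(cols))
    for j in range(i + 1, len(cols))
]
for a, b, r in sorted(pairs, key=lambda p: abs(p[2]), reverse=True)[:5]:
    print(f"{a} vs {b}: r = {r:.3f}")
```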

Filter Rows

Filter data based on conditions:

filter_rows(
    file_path="data/sample_sales.csv",
    column="category",
    operator="eq",
    value="Electronics"
)

Operators: eq, ne, gt, gte, lt, lte, contains, startswith, endswith
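
One plausible way to map these operator names onto pandas boolean masks (a sketch; the server's own dispatch may differ):

```python
import pandas as pd

# Possible operator-to-mask mapping; shown for illustration only.
OPS = {
    "eq":         lambda s, v: s == v,
    "ne":         lambda s, v: s != v,
    "gt":         lambda s, v: s > v,
    "gte":        lambda s, v: s >= v,
    "lt":         lambda s, v: s < v,
    "lte":        lambda s, v: s <= v,
    "contains":   lambda s, v: s.astype(str).str.contains(str(v), na=False),
    "startswith": lambda s, v: s.astype(str).str.startswith(str(v), na=False),
    "endswith":   lambda s, v: s.astype(str).str.endswith(str(v), na=False),
}

df = pd.read_csv("data/sample_sales.csv")
filtered = df[OPS["eq"](df["category"], "Electronics")]
print(len(filtered))
```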

Group & Aggregate

Group data and compute aggregations:

group_aggregate(
    file_path="data/sample_sales.csv",
    group_by=["category", "region"],
    aggregations={"total_sales": ["sum", "mean"], "quantity": ["count"]}
)
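
The aggregation spec has the same shape as a pandas groupby/agg call, so a rough standalone equivalent (not the server's code) is:

```python
import pandas as pd

df = pd.read_csv("data/sample_sales.csv")

result = (
    df.groupby(["category", "region"])
      .agg({"total_sales": ["sum", "mean"], "quantity": ["count"]})
      .reset_index()
)
print(result)
```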

Query SQLite

Execute SQL queries on databases:

query_sqlite(
    db_path="data/sample.db",
    query="SELECT * FROM customers WHERE lifetime_value > 1000"
)
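
The same query can be run by hand with the standard library and pandas, which is handy for sanity-checking results:

```python
import sqlite3

import pandas as pd

# Run the example query directly against the sample database.
conn = sqlite3.connect("data/sample.db")
df = pd.read_sql_query("SELECT * FROM customers WHERE lifetime_value > 1000", conn)
conn.close()
print(df.head())
```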

List Tables

Explore SQLite database structure:

list_tables(db_path="data/sample.db")

Advanced Analytics Examples

Create Pivot Table

Create Excel-style pivot tables for business analysis:

create_pivot_table(
    file_path="data/sample_sales.csv",
    index=["region"],
    columns=["category"],
    values="total_sales",
    aggfunc="sum"
)
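
The parameters line up with pandas.pivot_table, so a standalone sketch of the same pivot (not necessarily how the server builds it) is:

```python
import pandas as pd

df = pd.read_csv("data/sample_sales.csv")

pivot = pd.pivot_table(
    df,
    index=["region"],
    columns=["category"],
    values="total_sales",
    aggfunc="sum",
    fill_value=0,   # optional; added here for readability
)
print(pivot)
```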

Data Quality Report

Get a comprehensive quality assessment before analysis:

data_quality_report(file_path="data/sample_sales.csv")

Returns completeness score, duplicate detection, outlier analysis, and an overall quality grade (A-F).
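
Two of the ingredients that typically feed such a grade, completeness and duplicate counts, are easy to spot-check yourself (the server's exact scoring is its own):

```python
import pandas as pd

df = pd.read_csv("data/sample_sales.csv")

completeness = 1 - df.isna().sum().sum() / df.size   # share of non-missing cells
duplicate_rows = int(df.duplicated().sum())
print(f"completeness: {completeness:.1%}, duplicate rows: {duplicate_rows}")
```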

Time Series Analysis

Analyze trends and seasonality in time-based data:

analyze_time_series(
    file_path="data/sample_sales.csv",
    date_column="order_date",
    value_column="total_sales",
    freq="M",
    include_forecast=True
)
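
A standalone sketch of the monthly aggregation and moving-average part with pandas (forecasting omitted, and not necessarily how the server computes it):

```python
import pandas as pd

df = pd.read_csv("data/sample_sales.csv", parse_dates=["order_date"])

# Monthly totals ("M" frequency) plus a 3-period moving average to smooth the trend.
monthly = df.set_index("order_date")["total_sales"].resample("M").sum()
moving_avg = monthly.rolling(window=3).mean()

print(pd.DataFrame({"total_sales": monthly, "moving_avg_3": moving_avg}))
```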

Generate Charts

Create visualizations returned as base64 images:

generate_chart(
    file_path="data/sample_sales.csv",
    chart_type="bar",
    x_column="category",
    y_column="total_sales",
    title="Sales by Category"
)

Supported chart types: bar, line, scatter, histogram, pie, box
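
Charts come back as base64-encoded images; one way to produce a comparable bar chart and encode it yourself with matplotlib (a sketch, not the server's code):

```python
import base64
import io

import matplotlib
matplotlib.use("Agg")  # render off-screen, no display required
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("data/sample_sales.csv")
totals = df.groupby("category")["total_sales"].sum()

fig, ax = plt.subplots()
totals.plot(kind="bar", ax=ax, title="Sales by Category")

buf = io.BytesIO()
fig.savefig(buf, format="png", bbox_inches="tight")
plt.close(fig)
encoded = base64.b64encode(buf.getvalue()).decode("ascii")
print(encoded[:60], "...")
```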

Merge Datasets

Combine data from multiple sources:

merge_datasets(
    file_path_left="data/orders.csv",
    file_path_right="data/customers.csv",
    on=["customer_id"],
    how="left"
)
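
The underlying operation is a pandas-style join; with any two CSVs that share a customer_id column (data/orders.csv and data/customers.csv above are illustrative names, not part of the bundled sample data), a standalone equivalent is roughly:

```python
import pandas as pd

orders = pd.read_csv("data/orders.csv")        # hypothetical inputs from the example
customers = pd.read_csv("data/customers.csv")

merged = orders.merge(customers, on=["customer_id"], how="left")
print(merged.head())
```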

Statistical Testing

Perform hypothesis tests for data science:

statistical_test(
    file_path="data/sample_sales.csv",
    test_type="ttest_ind",
    column1="total_sales",
    group_column="region",
    alpha=0.05
)

Supported tests: ttest_ind, ttest_paired, chi_squared, anova, mann_whitney, pearson, spearman
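
These map onto standard scipy.stats routines; for example, an independent t-test comparing total_sales across two region groups (assuming the sample data contains at least two regions) can be run directly:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("data/sample_sales.csv")

# Split total_sales by region and compare the first two groups.
# equal_var=False uses Welch's variant; the server's defaults may differ.
groups = [g["total_sales"].dropna() for _, g in df.groupby("region")]
t_stat, p_value = stats.ttest_ind(groups[0], groups[1], equal_var=False)

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant: {p_value < alpha}")
```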

Auto Insights

Automatically discover interesting patterns:

auto_insights(file_path="data/sample_sales.csv")

Returns insights about correlations, outliers, skewed distributions, missing data, and more.

Export Data

Export filtered data to a new CSV:

export_data(
    file_path="data/sample_sales.csv",
    output_name="electronics_sales",
    filter_column="category",
    filter_operator="eq",
    filter_value="Electronics",
    sort_by="total_sales",
    sort_ascending=False
)
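
The filter-sort-export combination is plain pandas; assuming the export lands beside the source data as data/electronics_sales.csv (the server decides the actual output location), a standalone equivalent is:

```python
import pandas as pd

df = pd.read_csv("data/sample_sales.csv")

subset = (
    df[df["category"] == "Electronics"]
    .sort_values("total_sales", ascending=False)
)
subset.to_csv("data/electronics_sales.csv", index=False)   # assumed output path
```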

Development

Run Tests

uv run pytest

Project Structure

mcp-tabular/
├── src/
│   └── mcp_tabular/
│       ├── __init__.py
│       └── server.py            # Main MCP server implementation
├── data/
│   ├── sample_sales.csv         # Sample CSV data
│   └── sample.db                # Sample SQLite database
├── scripts/
│   └── create_sample_db.py
├── pyproject.toml
├── claude_desktop_config.json
└── README.md

License

MIT
