Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., `@Data Analytics MCP Toolkit Generate a bar chart of monthly sales from the sales.csv file`.
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Data Analytics MCP Toolkit
An MCP (Model Context Protocol) server that exposes data visualization and simple machine learning tools. When an external LLM calls the toolkit, it can use the high-level run_analytics tool to describe intent and data; the server selects and runs the appropriate pipeline (visualization or ML) and returns charts or metrics.
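The intent-based routing step can be sketched roughly as follows. The keyword sets and pipeline names here are illustrative assumptions for the sketch, not the server's actual logic:

```python
# Hypothetical sketch of how run_analytics might route a natural-language
# intent to a pipeline. Keywords and pipeline names are assumptions.
VIZ_KEYWORDS = {"plot", "chart", "show", "distribution", "histogram"}
ML_KEYWORDS = {"predict", "regression", "classify", "cluster"}

def route_intent(intent: str) -> str:
    """Pick a pipeline name from a natural-language intent string."""
    words = set(intent.lower().split())
    if words & ML_KEYWORDS:
        return "ml"
    # Default to visualization: a chart is the safest fallback output.
    return "visualization"
```

In this sketch, "predict price from square_feet" routes to the ML pipeline, while "show distribution of sales" falls through to visualization.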
Features
- **Data:** `load_data` (CSV/JSON string or URL), `clean_data` (drop NA, optional normalize)
- **Visualization:** `plot_bar`, `plot_line`, `plot_scatter`, `plot_histogram`, `plot_box`, `plot_heatmap` (each returns a base64 PNG)
- **ML:** `train_test_split`, `train_linear_regression`, `train_logistic_regression`, `train_kmeans`, plus `evaluate_regression`, `evaluate_classification`, `evaluate_clustering`
- **Pipeline:** `run_analytics(intent, data_source)`: intent-based routing to the right pipeline
Install
From the project root, ensure `src` is on `PYTHONPATH` when running the server (or install the package in editable mode).
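For example, assuming the sources live under `./src` (adjust the path to your layout):

```shell
# From the repo root: put the src directory on PYTHONPATH for this shell
export PYTHONPATH="$PWD/src:$PYTHONPATH"
```

Alternatively, `pip install -e .` makes the package importable without touching `PYTHONPATH`.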
Run the MCP server
stdio (for Cursor / IDE):
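A likely invocation, assuming the `data_analytics_mcp.server` module path mentioned later in this section:

```shell
# Start the MCP server on stdio from the repo root
python -m data_analytics_mcp.server
```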
Or with uv:
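Assuming the same module path, the `uv` equivalent would be:

```shell
# uv resolves the project environment before launching the server
uv run python -m data_analytics_mcp.server
```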
(If using a `pyproject.toml` that sets packages under `src`, install first with `pip install -e .`, then run `python -m data_analytics_mcp.server` from the repo root.)
Cursor MCP configuration
Add the server to Cursor (e.g. in Cursor Settings → MCP, or project .cursor/mcp.json):
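A minimal sketch of such an entry; the server key `data-analytics` and the stdio command are assumptions, and the `cwd` placeholder must be replaced with your checkout path:

```json
{
  "mcpServers": {
    "data-analytics": {
      "command": "python",
      "args": ["-m", "data_analytics_mcp.server"],
      "cwd": "/absolute/path/to/repo"
    }
  }
}
```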
Use the full path for `cwd`. If you installed the package (`pip install -e .`), you can use:
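For example (server key and entry shape are assumptions, mirroring a typical `.cursor/mcp.json`; with an editable install, no `cwd` is needed):

```json
{
  "mcpServers": {
    "data-analytics": {
      "command": "python",
      "args": ["-m", "data_analytics_mcp.server"]
    }
  }
}
```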
Usage
- **One-shot:** Call `run_analytics` with a natural-language intent (e.g. "show distribution of sales", "predict price from square_feet", "cluster into 4 groups") and the data as a CSV/JSON string or URL. The server returns either a chart (base64 image) or ML metrics and a short model summary.
- **Step-by-step:** Use `load_data` to get a `data_id`, then call `clean_data`, `plot_*`, or `train_test_split` → `train_*` → `evaluate_*` as needed. Use the resources `analytics://pipelines` and `analytics://pipelines/visualization` (etc.) to see pipeline descriptions.
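Since the `plot_*` tools return charts as base64-encoded PNG strings, a client needs to decode the payload before viewing it. A minimal sketch (the variable and function names are illustrative, not part of the toolkit's API):

```python
import base64

def save_chart(b64_png: str, path: str) -> int:
    """Decode a base64 PNG payload to a file; returns bytes written."""
    data = base64.b64decode(b64_png)
    with open(path, "wb") as f:
        return f.write(data)

# Round-trip demo with a stand-in payload; a real response carries a full
# PNG image, not just the 8-byte signature used here.
payload = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode("ascii")
```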