# Data Analytics MCP Toolkit
## Server Configuration

Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| PYTHONPATH | No | Ensures the `src` directory is on the Python module search path so the server's modules resolve. | |
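Since PYTHONPATH is the only documented configuration knob, a typical launch looks like the sketch below; the `src` location relative to the working directory and the entrypoint module name are assumptions, not documented values.

```shell
# Put the repo's src/ directory on the Python module search path
# before launching the server.
export PYTHONPATH="$PWD/src:$PYTHONPATH"
# python -m server   # hypothetical entrypoint; check the repo for the real one
```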
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |
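The flags above make up the capabilities object an MCP server advertises during initialization. A minimal sketch of that JSON, using exactly the values from the table:

```python
import json

# Capabilities object as advertised during MCP initialization,
# assembled from the values in the table above.
capabilities = {
    "tools": {"listChanged": False},
    "prompts": {"listChanged": False},
    "resources": {"subscribe": False, "listChanged": False},
    "experimental": {},
}
print(json.dumps(capabilities, indent=2))
```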
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| load_data | Load a dataset from a source in the given format. Returns data_id for later steps. |
| clean_data | Clean a loaded dataset, referenced by data_id. |
| plot_bar | Bar chart: x_column as categories, y_column as values (or count of x if y_column omitted). |
| plot_line | Line chart: x_column on x-axis, one or more y_columns as lines. |
| plot_scatter | Scatter plot of x_column vs y_column. |
| plot_histogram | Histogram of a numeric column (distribution). |
| plot_box | Box plot: single numeric column, or all numeric columns if column is omitted. |
| plot_heatmap | Heatmap of correlation matrix. If columns omitted, uses all numeric columns. |
| train_test_split | Split a dataset (data_id, target_column) into train and test sets. |
| train_linear_regression | Fit a linear regression model. Returns model_id for evaluate_regression. |
| train_logistic_regression | Fit a logistic regression classifier. Returns model_id for evaluate_classification. |
| train_kmeans | Fit K-means clustering. Returns model_id for evaluate_clustering. |
| evaluate_regression | Compute MSE and R² for a regression model on test data. |
| evaluate_classification | Compute accuracy for a classification model on test data. |
| evaluate_clustering | Compute silhouette score for a clustering model on test data. |
| run_analytics | Run a full pipeline from a natural-language intent and a data_source (e.g. 'predict Y from X'). |
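evaluate_regression reports MSE and R². As a quick reminder of what those metrics mean, here are the plain formulas applied to toy numbers (illustration only, not this server's implementation):

```python
# MSE and R² computed by hand on toy data (illustration only).
y_true = [3.0, 5.0, 7.0]
y_pred = [2.5, 5.0, 7.5]

n = len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
mse = ss_res / n                                            # mean squared error
mean = sum(y_true) / n
ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
r2 = 1 - ss_res / ss_tot                                    # coefficient of determination
print(mse, r2)
```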
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| list_pipelines | List available analytics pipelines with short descriptions. |
| pipeline_visualization | Steps: 1) load_data(source, format) 2) clean_data(data_id) 3) plot_histogram/plot_bar/plot_line/plot_scatter/plot_box/plot_heatmap(data_id, column(s)). Or use run_analytics(intent, data_source). |
| pipeline_regression | Steps: 1) load_data 2) clean_data 3) train_test_split(data_id, target_column) 4) train_linear_regression(train_data_id, target_column) 5) evaluate_regression(model_id, test_data_id). Or use run_analytics(intent, data_source) with intent like 'predict Y from X'. |
| pipeline_classification | Steps: 1) load_data 2) clean_data 3) train_test_split 4) train_logistic_regression 5) evaluate_classification. Or use run_analytics with intent like 'classify' or 'predict category'. |
| pipeline_clustering | Steps: 1) load_data 2) clean_data 3) train_kmeans(data_id, n_clusters) 4) evaluate_clustering(model_id, data_id). Or use run_analytics with intent like 'cluster into k groups'. |
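The regression pipeline above can be sketched as a tool-call sequence. `call_tool` below is a hypothetical stand-in for an MCP client, and the stubbed return ids exist only so the sketch runs end to end:

```python
# Sketch of the `pipeline_regression` tool-call order. `call_tool` is a
# hypothetical MCP-client stand-in; the stub records the call sequence
# and fakes the ids that the real tools would return.
calls = []

def call_tool(name, **args):
    calls.append(name)
    return {"data_id": "d1", "train_data_id": "tr1",
            "test_data_id": "te1", "model_id": "m1"}

d = call_tool("load_data", source="data.csv", format="csv")
call_tool("clean_data", data_id=d["data_id"])
split = call_tool("train_test_split", data_id=d["data_id"], target_column="y")
model = call_tool("train_linear_regression",
                  train_data_id=split["train_data_id"], target_column="y")
call_tool("evaluate_regression", model_id=model["model_id"],
          test_data_id=split["test_data_id"])
print(calls)
```

The classification and clustering pipelines follow the same shape, swapping in their own train/evaluate tools.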
## MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/ChenJellay/trying_IBM_MCP'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.