
Autonomous Analyst

🧠 Overview

Autonomous Analyst is a local, agentic AI pipeline that:

  • Analyzes tabular data
  • Detects anomalies with Mahalanobis distance (see the sketch after this list)
  • Uses a local LLM (llama3.2:1b via Ollama) to generate interpretive summaries
  • Logs results to ChromaDB for semantic recall
  • Is fully orchestrated via the Model Context Protocol (MCP)
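
A quick illustration of the detection step: the Mahalanobis distance measures how far each row lies from the dataset's mean, scaled by the covariance between features. The sketch below is a minimal, self-contained version of that idea; the function name and the chi-squared cutoff are illustrative choices, not lifted from this repo's tools/outlier_detection.py:

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """Return a boolean mask marking rows of X as outliers."""
    mean = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # pseudo-inverse for stability
    diff = X - mean
    # Squared Mahalanobis distance of each row from the mean
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    # Under normality, d2 follows a chi-squared distribution with d degrees of freedom
    threshold = chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > threshold
```

In this project the same idea is applied to the feature_1/feature_2 columns the dashboard analyzes (see Dashboard Flow below).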

⚙️ Features

| Component | Description |
| --- | --- |
| FastAPI Web UI | Friendly dashboard for synthetic or uploaded datasets |
| MCP Tool Orchestration | Each process step is exposed as a callable MCP tool |
| Anomaly Detection | Mahalanobis distance-based outlier detection |
| Visual Output | Saved scatter plot of inliers vs. outliers |
| Local LLM Summarization | Insights generated using llama3.2:1b via Ollama |
| Vector Store Logging | Summaries are stored in ChromaDB for persistent memory |
| Agentic Planning Tool | A dedicated LLM tool (autonomous_plan) determines next steps based on dataset context |
| Agentic Flow | LLM + memory + tool use + automatic reasoning + context awareness |

🧪 Tools Defined (via MCP)

| Tool Name | Description | LLM Used |
| --- | --- | --- |
| generate_data | Create synthetic tabular data (Gaussian + categorical) | No |
| analyze_outliers | Label rows using Mahalanobis distance | No |
| plot_results | Save a plot visualizing inliers vs. outliers | No |
| summarize_results | Interpret and explain outlier distribution | Yes (llama3.2:1b) |
| summarize_data_stats | Describe dataset trends | Yes (llama3.2:1b) |
| log_results_to_vector_store | Store summaries in ChromaDB for future reference | No |
| search_logs | Retrieve relevant past sessions using vector search | Optional ⚠️ |
| autonomous_plan | Run the full pipeline and use the LLM to recommend next actions automatically | Yes |
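
Because every step is a standard MCP tool, any MCP client can drive the pipeline. Below is a hedged sketch using the official mcp Python SDK's streamable-HTTP client; the /mcp endpoint path is the SDK default, and the tool arguments are assumptions rather than this server's documented schema:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # Assumes the server from "Getting Started" is listening locally;
    # /mcp is the SDK's default mount path for streamable HTTP.
    async with streamablehttp_client("http://localhost:8000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # Hypothetical arguments -- check each tool's input schema first.
            result = await session.call_tool("generate_data", {"n_rows": 500})
            print(result.content)

asyncio.run(main())
```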

🤖 Agentic Capabilities

  • Autonomy: LLM-guided execution path selection with autonomous_plan
  • Tool Use: Dynamically invokes registered MCP tools via LLM inference
  • Reasoning: Generates technical insights from dataset conditions and outlier analysis
  • Memory: Persists and recalls knowledge using ChromaDB vector search
  • LLM: Powered by Ollama with llama3.2:1b (temperature = 0.1 for near-deterministic output); see the example below
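
As an illustration of the LLM setting above, this is the shape of a call through the ollama Python client with the temperature pinned at 0.1; the prompt text is invented and the repo's actual prompts may differ:

```python
import ollama

# Low temperature keeps the generated summaries near-deterministic.
response = ollama.chat(
    model="llama3.2:1b",
    messages=[
        {"role": "system", "content": "You are a data analyst. Be concise."},
        {"role": "user", "content": "Explain why 12 of 500 rows were flagged as outliers."},
    ],
    options={"temperature": 0.1},
)
print(response["message"]["content"])
```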

🚀 Getting Started

1. Clone and Set Up

```bash
git clone https://github.com/MadMando/mcp-autonomous-analyst.git
cd mcp-autonomous-analyst
conda create -n mcp-agentic python=3.11 -y
conda activate mcp-agentic
pip install uv
uv pip install -r requirements.txt
```

2. Start the MCP Server

```bash
mcp run server.py --transport streamable-http
```
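
Under the hood, an MCP server like server.py typically registers each pipeline step with the SDK's FastMCP class. A stripped-down sketch of that pattern (the tool body is a placeholder, not this repo's real implementation):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("autonomous-analyst")

@mcp.tool()
def generate_data(n_rows: int = 500) -> str:
    """Create synthetic tabular data and save it under data/."""
    ...  # placeholder: the real logic lives in tools/synthetic_data.py
    return f"Generated {n_rows} rows."

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```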

3. Start the Web Dashboard

```bash
uvicorn web:app --reload --port 8001
```

Then visit: http://localhost:8001


🌐 Dashboard Flow

  • Step 1: Upload your own dataset or click Generate Synthetic Data
  • Step 2: The system runs anomaly detection on feature_1 vs feature_2
  • Step 3: Visual plot of outliers is generated
  • Step 4: Summaries are created via LLM
  • Step 5: Results are optionally logged to the vector store for recall (see the sketch below)
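
Step 5 corresponds to the log_results_to_vector_store and search_logs tools. A minimal sketch of that pattern with the chromadb client; the storage path, collection name, and example documents are assumptions:

```python
import chromadb

client = chromadb.PersistentClient(path="chroma_db")  # on-disk store
collection = client.get_or_create_collection("analysis_summaries")

# Log a summary for later semantic recall
collection.add(
    ids=["session-001"],
    documents=["12 of 500 rows flagged as outliers; cluster near feature_1 > 3."],
    metadatas=[{"n_outliers": 12}],
)

# Later: retrieve the most relevant past sessions by semantic similarity
hits = collection.query(query_texts=["sessions with many outliers"], n_results=3)
print(hits["documents"])
```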

📁 Project Layout

```
📦 autonomous-analyst/
├── server.py              # MCP server
├── web.py                 # FastAPI + MCP client (frontend logic)
├── tools/
│   ├── synthetic_data.py
│   ├── outlier_detection.py
│   ├── plotter.py
│   ├── summarizer.py
│   └── vector_store.py
├── static/                # Saved plot
├── data/                  # Uploaded or generated dataset
├── requirements.txt
├── .gitignore
└── README.md
```

📚 Tech Stack

  • MCP SDK: mcp
  • LLM Inference: Ollama running llama3.2:1b
  • UI Server: FastAPI + Uvicorn
  • Memory: ChromaDB vector database
  • Data: pandas, matplotlib, scikit-learn

✅ .gitignore Additions

```
__pycache__/
*.pyc
*.pkl
.env
static/
data/
```

🙌 Acknowledgements

This project wouldn't be possible without the incredible work of the open-source community. Special thanks to:

| Tool / Library | Purpose | Repository |
| --- | --- | --- |
| 🧠 Model Context Protocol (MCP) | Agentic tool orchestration & execution | modelcontextprotocol/python-sdk |
| 💬 Ollama | Local LLM inference engine (llama3.2:1b) | ollama/ollama |
| 🔍 ChromaDB | Vector database for logging and retrieval | chroma-core/chroma |
| 🌐 FastAPI | Interactive, fast web interface | tiangolo/fastapi |
| Uvicorn | ASGI server powering the FastAPI backend | encode/uvicorn |

💡 If you use this project, please consider starring or contributing to the upstream tools that make it possible.

This repo was created with the assistance of a local RAG LLM pipeline running llama3.2:1b.
