Autonomous Analyst

by MadMando

🧠 Overview

Autonomous Analyst is a local, agentic AI pipeline that:

  • Analyzes tabular data
  • Detects anomalies with Mahalanobis distance
  • Uses a local LLM (llama3.2:1b via Ollama) to generate interpretive summaries
  • Logs results to ChromaDB for semantic recall
  • Is fully orchestrated via the Model Context Protocol (MCP)

⚙️ Features

| Component | Description |
| --- | --- |
| FastAPI Web UI | Friendly dashboard for synthetic or uploaded datasets |
| MCP Tool Orchestration | Each process step is exposed as a callable MCP tool |
| Anomaly Detection | Mahalanobis distance-based outlier detection |
| Visual Output | Saved scatter plot of inliers vs. outliers |
| Local LLM Summarization | Insights generated using llama3.2:1b via Ollama |
| Vector Store Logging | Summaries are stored in ChromaDB for persistent memory |
| Agentic Planning Tool | A dedicated LLM tool (autonomous_plan) determines next steps based on dataset context |
| Agentic Flow | LLM + memory + tool use + automatic reasoning + context awareness |

🧪 Tools Defined (via MCP)

| Tool Name | Description | LLM Used |
| --- | --- | --- |
| generate_data | Create synthetic tabular data (Gaussian + categorical) | — |
| analyze_outliers | Label rows using Mahalanobis distance | — |
| plot_results | Save a plot visualizing inliers vs. outliers | — |
| summarize_results | Interpret and explain outlier distribution | ✅ llama3.2:1b |
| summarize_data_stats | Describe dataset trends | ✅ llama3.2:1b |
| log_results_to_vector_store | Store summaries in ChromaDB for future reference | — |
| search_logs | Retrieve relevant past sessions using vector search | ⚠️ optional |
| autonomous_plan | Run the full pipeline and recommend next actions automatically | ✅ llama3.2:1b |
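The analyze_outliers tool rests on the Mahalanobis distance, which measures how far each row sits from the sample mean in units of the data's own covariance. A self-contained sketch of the idea (the threshold and synthetic data below are illustrative, not the project's defaults):

```python
import numpy as np

def mahalanobis_outliers(X: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Label each row of X as an outlier (True) when its Mahalanobis
    distance from the sample mean exceeds `threshold`."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse guards against a singular covariance
    diff = X - mu
    # Squared distance d_i^2 = (x_i - mu)^T  Sigma^{-1}  (x_i - mu), row by row.
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return np.sqrt(d2) > threshold

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
X[:5] += 8  # inject five obvious outliers
labels = mahalanobis_outliers(X)  # first five rows are flagged
```

Unlike a per-feature z-score, this accounts for correlation between feature_1 and feature_2, so a point can be flagged even when neither coordinate is extreme on its own.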

🤖 Agentic Capabilities

  • Autonomy: LLM-guided execution path selection with autonomous_plan
  • Tool Use: Dynamically invokes registered MCP tools via LLM inference
  • Reasoning: Generates technical insights from dataset conditions and outlier analysis
  • Memory: Persists and recalls knowledge using ChromaDB vector search
  • LLM: Powered by Ollama with llama3.2:1b (temperature = 0.1, deterministic)

🚀 Getting Started

1. Clone and Set Up

```bash
git clone https://github.com/MadMando/mcp-autonomous-analyst.git
cd mcp-autonomous-analyst
conda create -n mcp-agentic python=3.11 -y
conda activate mcp-agentic
pip install uv
uv pip install -r requirements.txt
```

2. Start the MCP Server

```bash
mcp run server.py --transport streamable-http
```

3. Start the Web Dashboard

```bash
uvicorn web:app --reload --port 8001
```

Then visit: http://localhost:8001


🌐 Dashboard Flow

  • Step 1: Upload your own dataset or click Generate Synthetic Data
  • Step 2: The system runs anomaly detection on feature_1 vs feature_2
  • Step 3: Visual plot of outliers is generated
  • Step 4: Summaries are created via LLM
  • Step 5: Results are optionally logged to vector store for recall
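Step 4's summarization amounts to a call against the local Ollama HTTP API. A hedged sketch (the prompt wording and helper names are illustrative, and the project may use a client library rather than raw HTTP):

```python
import requests  # assumes the `requests` package is available

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_prompt(n_total: int, n_outliers: int) -> str:
    """Compose the instruction sent to llama3.2:1b."""
    return (
        f"The dataset has {n_total} rows, of which {n_outliers} were flagged "
        "as Mahalanobis outliers. Explain what this distribution suggests."
    )

def summarize(n_total: int, n_outliers: int) -> str:
    """Call the local Ollama server (must be running) for an interpretive summary."""
    payload = {
        "model": "llama3.2:1b",
        "prompt": build_prompt(n_total, n_outliers),
        "stream": False,
        "options": {"temperature": 0.1},  # near-deterministic, matching the project settings
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]
```

Because everything stays on localhost, no dataset statistics ever leave the machine.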

📁 Project Layout

```
📦 autonomous-analyst/
├── server.py              # MCP server
├── web.py                 # FastAPI + MCP client (frontend logic)
├── tools/
│   ├── synthetic_data.py
│   ├── outlier_detection.py
│   ├── plotter.py
│   ├── summarizer.py
│   └── vector_store.py
├── static/                # Saved plot
├── data/                  # Uploaded or generated dataset
├── requirements.txt
├── .gitignore
└── README.md
```

📚 Tech Stack

  • MCP SDK: mcp
  • LLM Inference: Ollama running llama3.2:1b
  • UI Server: FastAPI + Uvicorn
  • Memory: ChromaDB vector database
  • Data: pandas, matplotlib, scikit-learn

✅ .gitignore Additions

```
__pycache__/
*.pyc
*.pkl
.env
static/
data/
```

🙌 Acknowledgements

This project wouldn't be possible without the incredible work of the open-source community. Special thanks to:

| Tool / Library | Purpose | Repository |
| --- | --- | --- |
| 🧠 Model Context Protocol (MCP) | Agentic tool orchestration & execution | modelcontextprotocol/python-sdk |
| 💬 Ollama | Local LLM inference engine (llama3.2:1b) | ollama/ollama |
| 🔍 ChromaDB | Vector database for logging and retrieval | chroma-core/chroma |
| 🌐 FastAPI | Interactive, fast web interface | tiangolo/fastapi |
| Uvicorn | ASGI server powering the FastAPI backend | encode/uvicorn |

💡 If you use this project, please consider starring or contributing to the upstream tools that make it possible.

This repo was created with the assistance of a local RAG LLM running llama3.2:1b.

