Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., `@email-insights Show me a breakdown of email topics and their urgency`.
That's it! The server will respond to your query, and you can continue using it as needed.
# email-insights

An MCP server that exposes email signal analytics to Claude Desktop.
## Project Structure

```
email-insights/
├── data/
│   └── emails.csv            # Raw email data (id, from, subject, body, date)
├── database/
│   └── signals.db            # SQLite database (created after running ingestion)
├── ingestion/
│   ├── parse_csv.py          # Step 1: Load emails from CSV
│   ├── extract_signals.py    # Step 2: Call local LLM to extract signals
│   └── store_signals.py      # Step 3: Write signals to SQLite (run this)
├── mcp_server/
│   ├── server.py             # MCP server: registers tools and starts listening
│   └── tools.py              # SQLite query functions (no MCP logic here)
├── requirements.txt
└── README.md
```

## Setup
### 1. Install dependencies

```bash
pip install -r requirements.txt
```

### 2. Start LM Studio
- Open LM Studio and load any instruction-following model (Llama 3, Mistral, etc.)
- Start the local server: Local Server → Start Server
- Default URL: `http://localhost:1234/v1`
- Copy the model identifier string and paste it into `ingestion/extract_signals.py` as `LOCAL_MODEL`
### 3. Run the ingestion pipeline

```bash
python ingestion/store_signals.py
```

This reads `data/emails.csv`, sends each email to your local LLM for signal extraction, and stores the results in `database/signals.db`.
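Step 1 of that pipeline is plain CSV parsing. A minimal sketch of what `parse_csv.py` might do, using only the standard library (the function name `load_emails` is illustrative, not the project's actual code; the field names come from the project tree's comment):

```python
import csv
import io

def load_emails(csv_text: str) -> list[dict]:
    """Parse raw email rows (id, from, subject, body, date) from CSV text."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# Usage with an inline sample instead of data/emails.csv
sample = "id,from,subject,body,date\n1,hr@acme.com,Interview,See you Monday,2024-05-01\n"
emails = load_emails(sample)
print(emails[0]["subject"])
# Interview
```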
### 4. Connect Claude Desktop

Add this server to your Claude Desktop config:

Mac: `~/Library/Application Support/Claude/claude_desktop_config.json`

```json
{
  "mcpServers": {
    "email-insights": {
      "command": "python",
      "args": ["/absolute/path/to/email-insights/mcp_server/server.py"]
    }
  }
}
```

Restart Claude Desktop. You should see `email-insights` in the tools list.
## MCP Tools

| Tool | Description |
| --- | --- |
| | Query signals with optional date/topic/tone filters |
| | Count of emails per topic category |
| | Breakdown by sender type with urgency stats |
| | Search signals by keyword |
## SQLite Schema

```sql
CREATE TABLE signals (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email_id TEXT UNIQUE,
    topic TEXT,              -- job application | recruiter outreach | rejection | interview | networking | other
    tone TEXT,               -- positive | neutral | negative
    sender_type TEXT,        -- recruiter | company HR | networking contact | university | other
    urgency TEXT,            -- high | medium | low
    requires_action INTEGER, -- 0 or 1
    date TEXT                -- ISO format: YYYY-MM-DD
);
```

## What to Learn from the Code
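As a quick sanity check, the schema can be exercised with Python's built-in `sqlite3` module (in-memory here for illustration; the real database lives at `database/signals.db`, and the sample row is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE signals (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email_id TEXT UNIQUE,
    topic TEXT,
    tone TEXT,
    sender_type TEXT,
    urgency TEXT,
    requires_action INTEGER,
    date TEXT
)
""")

# Insert one example signal row
conn.execute(
    "INSERT INTO signals (email_id, topic, tone, sender_type, urgency, requires_action, date) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("msg-001", "interview", "positive", "recruiter", "high", 1, "2024-05-01"),
)

# Count emails per topic, as a topic-breakdown query would
rows = conn.execute("SELECT topic, COUNT(*) FROM signals GROUP BY topic").fetchall()
print(rows)
# [('interview', 1)]
```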
### mcp_server/server.py

- `FastMCP("email-insights")` — creates the server instance with a display name
- `@mcp.tool()` — registers the decorated function as a callable MCP tool
- Docstrings matter — Claude reads them to decide when and how to call each tool
- Type hints — FastMCP uses them to build the JSON input schema Claude receives
- `mcp.run()` — starts the stdio loop; Claude Desktop communicates via stdin/stdout
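Taken together, those pieces fit roughly like this (a minimal sketch assuming the `mcp` Python SDK's `FastMCP` class; the tool name `topic_breakdown` and its parameters are illustrative, not the project's actual code):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("email-insights")  # display name shown in Claude Desktop

@mcp.tool()
def topic_breakdown(start_date: str = "", end_date: str = "") -> str:
    """Count emails per topic category, optionally filtered by ISO date range.

    Claude reads this docstring to decide when to call the tool; the type
    hints above become the JSON input schema it receives.
    """
    # In the real project this delegates to a query helper in tools.py
    # and returns a JSON string for Claude to parse.
    ...

if __name__ == "__main__":
    mcp.run()  # stdio loop: Claude Desktop talks over stdin/stdout
```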
### mcp_server/tools.py

- Completely separate from MCP — plain Python functions returning JSON strings
- Parameterized SQL queries prevent injection: `WHERE topic LIKE ?` with `params`
- `sqlite3.Row` factory lets you access columns by name: `row["topic"]`
- Returns JSON strings so Claude can parse and reason about the data
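A sketch of the pattern those bullets describe (the function name `query_by_topic` and the trimmed column list are illustrative; the real helpers live in `mcp_server/tools.py`):

```python
import json
import sqlite3

def query_by_topic(conn: sqlite3.Connection, topic_pattern: str) -> str:
    """Return matching signals as a JSON string, using a parameterized query."""
    conn.row_factory = sqlite3.Row  # access columns by name: row["topic"]
    params = (topic_pattern,)
    cursor = conn.execute(
        "SELECT email_id, topic, urgency FROM signals WHERE topic LIKE ?",
        params,  # placeholder binding prevents SQL injection
    )
    results = [dict(row) for row in cursor.fetchall()]
    return json.dumps(results)  # JSON string for Claude to parse

# Usage with a throwaway in-memory table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signals (email_id TEXT, topic TEXT, urgency TEXT)")
conn.execute("INSERT INTO signals VALUES ('msg-001', 'interview', 'high')")
print(query_by_topic(conn, "inter%"))
# [{"email_id": "msg-001", "topic": "interview", "urgency": "high"}]
```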
### ingestion/extract_signals.py

- `OpenAI(base_url="http://localhost:1234/v1")` — points the client at LM Studio
- Low `temperature=0.1` — more deterministic output, better for structured JSON
- Strips markdown code fences the LLM might wrap around its JSON response
- Falls back to safe defaults if parsing fails — pipeline never crashes on one bad email
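The fence-stripping and safe-fallback behavior can be sketched with the standard library alone (the helper name `parse_llm_json` and the default values are illustrative, not the project's actual code):

```python
import json

# Illustrative safe defaults; extract_signals.py defines its own
SAFE_DEFAULTS = {
    "topic": "other",
    "tone": "neutral",
    "sender_type": "other",
    "urgency": "low",
    "requires_action": 0,
}

def parse_llm_json(raw: str) -> dict:
    """Strip markdown fences the LLM may add, then parse; fall back on failure."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (``` or ```json) and the closing fence
        lines = text.splitlines()
        if lines[-1].strip() == "```":
            lines = lines[:-1]
        text = "\n".join(lines[1:])
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return dict(SAFE_DEFAULTS)  # never crash the pipeline on one bad email

print(parse_llm_json('```json\n{"topic": "interview", "urgency": "high"}\n```'))
# {'topic': 'interview', 'urgency': 'high'}
print(parse_llm_json("not json at all")["topic"])
# other
```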