# MCP Server Template
A generic, production-ready scaffold for building Model Context Protocol (MCP) servers with Python and FastMCP.
This template preserves the architecture, patterns, and best practices of a real production MCP server — stripped of all domain-specific code so you can fork it and build your own.
It also serves as an onboarding project and a reference codebase for coding agents (e.g. Claude, Cursor, Copilot). The structure, inline annotations, and documentation are intentionally designed so that an AI agent can read the codebase, understand the conventions, and rapidly scaffold new tools, workflows, and packages without human hand-holding.
## Architecture

```
mcp-template/
├── packages/
│   ├── equator/        # Prompt-toolkit TUI foundation — base layer for all terminal UIs
│   ├── beetle/         # Live log interpreter — ingests logs, explains them with a local LLM
│   ├── tropical/       # MCP protocol inspector — browse tools, resources, and prompts
│   ├── lab_mouse/      # Pydantic-AI agent REPL — tests whether the LLM uses your tools correctly
│   └── mcp_shared/     # Shared utilities (response builders, schemas, logging)
├── mcp_server/         # Main MCP server
│   └── src/mcp_server/
│       ├── __main__.py     # Server entry point
│       ├── instructions/   # Agent instructions (4-layer framework)
│       ├── tool_box/       # Tool registration + _tools_template reference
│       └── workflows/      # Multi-step workflow orchestration
├── tests/
│   ├── unit/           # Unit tests for packages
│   └── agentic/        # Agentic integration tests (requires a running server)
├── start.py            # One-command startup: mcp_server + lab_mouse
└── docs/               # Architecture and best-practices documentation
```

## Key Design Decisions
- **mcp_shared** — All tools use shared response builders (`SummaryResponse`, `ErrorResponse`) and the `ResponseFormat` enum to control output verbosity and token usage.
- **_tools_template** — A fully annotated reference implementation. Every architectural decision is documented inline. Read this before creating your first tool.
- **Docstring Registry** — Tool descriptions are versioned separately from logic, enabling A/B testing and prompt engineering without touching business logic.
- **ToolNames Registry** — All tool names are constants, never inline strings, which makes it possible for one tool to suggest calls to another in its responses.
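To make the response-builder idea concrete, here is a minimal sketch of the pattern. The class and field names below are illustrative assumptions, not the actual `mcp_shared` API — read the package source for the real builders:

```python
from dataclasses import dataclass, field
from enum import Enum


class ResponseFormat(Enum):
    """Verbosity levels the calling agent can request (illustrative)."""
    CONCISE = "concise"
    DETAILED = "detailed"


@dataclass
class SummaryResponse:
    """Sketch of a shared success envelope in the mcp_shared style."""
    summary: str
    details: dict = field(default_factory=dict)

    def render(self, fmt: ResponseFormat) -> dict:
        # CONCISE drops the details payload to save tokens
        if fmt is ResponseFormat.CONCISE:
            return {"summary": self.summary}
        return {"summary": self.summary, "details": self.details}


resp = SummaryResponse("8 sections found", {"sections": ["Summary", "Experience"]})
print(resp.render(ResponseFormat.CONCISE))   # summary only
print(resp.render(ResponseFormat.DETAILED))  # summary + details
```

The point of the pattern: every tool returns the same envelope, and the agent chooses how many tokens it wants to spend per call.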
## Quickstart

### Prerequisites

### 1. Install
```shell
git clone <your-repo-url> mcp-template
cd mcp-template
uv sync --all-packages
```

### 2. Pull Ollama models
Both lab_mouse and beetle use local models via Ollama running at http://localhost:11434 (standard Ollama port — no configuration needed if Ollama is installed normally).
| Agent | Purpose | Default model | Recommended |
| --- | --- | --- | --- |
| lab_mouse | Agent REPL — calls MCP tools, reasons over results | `phi4-mini:3.8b` | `qwen3:4b` |
| beetle | Log interpreter — narrates live log output | `phi4-mini:3.8b` | |
```shell
# Minimum — default model for both agents
ollama pull phi4-mini:3.8b

# Recommended — better reasoning for lab_mouse
ollama pull qwen3:4b

# Low-memory alternative
ollama pull qwen3:1.7b
```

Set the model for each agent in `.env` (or as environment variables):
```shell
AGENT_MODEL=ollama:qwen3:4b         # lab_mouse (default: ollama:phi4-mini:3.8b)
BEETLE_MODEL=ollama:phi4-mini:3.8b  # beetle (default: ollama:phi4-mini:3.8b)

# Only needed if Ollama is NOT on the default port (11434)
# OLLAMA_BASE_URL=http://localhost:11434
```

Using a cloud model instead of Ollama? Set `AGENT_MODEL` to any pydantic-ai model string and add the corresponding API key. No Ollama is required for lab_mouse in that case.
| Provider | `AGENT_MODEL` example | API key env var |
| --- | --- | --- |
| Google Gemini | `google-gla:gemini-2.0-flash` | `GEMINI_API_KEY` |
| Anthropic | `anthropic:claude-sonnet-4-6` | `ANTHROPIC_API_KEY` |
| OpenAI | `openai:gpt-4o` | `OPENAI_API_KEY` |
### 3. Configure environment

```shell
cp .env.sample .env
# Edit .env — at minimum set AGENT_MODEL if not using the default
```

### 4. Start everything
One command — starts mcp_server in a new terminal, waits for it to be ready, then launches lab_mouse:

```shell
uv run python start.py
```

Or start them separately:
```shell
# Terminal 1
uv run mcp_server

# Terminal 2 (once the server is ready)
uv run lab_mouse
```

Health check:

```shell
curl http://127.0.0.1:8000/healthcheck
# → OK
```

### 5. Use beetle (optional)
From inside lab_mouse, type /beetle to open beetle in a new terminal, pre-loaded with current logs and wired for live forwarding.
## equator TUI guide
All terminal apps — lab_mouse and beetle — share the same TUI built on equator.
### Layout

```
┌──────────────────────── lab_mouse ─────────────────────────┐
│                                                            │
│  Conversation history                                      │
│                                                            │
│  ▏ ((o)) what sections does the resume have?               │
│                                                            │
│  ▏ ))o(( ⚙ md_list_sections(document="RESUME")…            │
│  ▏ ))o(( ⚙ md_list_sections… ✓ 8 sections found            │
│  ▏ ))o(( I found 8 sections: Summary, Experience, …        │
│                                                            │
├────────────────────────────────────────────────────────────┤
│  AGENT · 312 chars                                         │
│  Preview: I found 8 sections: Summary, Experience, …       │
│  Tools: 1/1 completed            F2 inspect                │
├────────────────────────────────────────────────────────────┤
│  [INF] httpx: POST /mcp 200                                │
│  [DBG] mcp: tool result received                           │
├────────────────────────────────────────────────────────────┤
│  > type here                                               │
│                                                            │
├────────────────────────────────────────────────────────────┤
│  ollama:qwen3:4b | MCP: ✓ | ●DBG ●INF ●WRN ●ERR ●CRT       │
│  Context ████░░░░░░░░░░░░░░░░░░ 4,200 / 32,768 (13%)       │
│  TAB = toggle help                                         │
└────────────────────────────────────────────────────────────┘
```

**Visual identity:**
| Symbol | Meaning |
| --- | --- |
| `((o))` | You (the user) |
| `))o((` | The agent |
| | beetle |
### Key bindings

| Key | Action |
| --- | --- |
| | Send message |
| | Insert newline |
| ↑ / ↓ | Navigate conversation history (when input is empty) |
| ↑ / ↓ | Scroll inspector content (when in expanded inspect mode) |
| Esc | Clear message cursor (return to auto-follow) |
| | Toggle logs panel |
| | Toggle internal logs panel |
| | Page through logs (when logs panel is open) |
| ← / → | Cycle tool calls (when a message is selected) |
| F2 | Toggle inspect / detail expansion |
| Tab | Toggle help sidebar |
| | Navigate model selector (when open) |
| | Confirm model selection |
| | Cancel model selector |
| | Quit |
### Slash commands

Type any /command in the input. Tab-completion is available.

**Universal (lab_mouse + beetle):**

| Command | Description |
| --- | --- |
| | Show key bindings and all commands in the logs panel |
| | Show current active log levels |
| | Show only ERR and CRT |
| | Enable all five levels (default on startup) |
| | Silence all levels |
| | Clear the conversation history |
| | Clear the logs panel |
| | Quit |
**lab_mouse only:**

| Command | Description |
| --- | --- |
| `/beetle` | Launch beetle in a new terminal with live log forwarding |
| | Open tropical inspector, auto-connected to the active MCP server |
| | Open tropical connected to a specific URL |
| | List tools from connected MCP servers |
| | Switch model inline |
### Log levels

The status bar shows which levels are active (filled dot = on, empty = off). All levels are on by default:

```
●DBG ●INF ●WRN ●ERR ●CRT
```

### Inspect mode (F2)
When a message is selected (↑ / ↓):

- **Compact:** one summary line shown below the message — role, timing, tool count.
- **Expanded** (press F2): full tool args + results with JSON syntax highlighting. ← / → cycles through tool calls within the same turn; ↑ / ↓ scrolls through long inspector content. F2 again collapses; Esc clears the selection entirely.
## Developer Toolchain

### equator — TUI foundation
The shared prompt-toolkit base that beetle and lab_mouse are built on. Not a standalone tool — a library. Use it directly if you want to wrap your own pydantic-ai agent in a full terminal interface:
```python
import equator
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

agent = Agent(
    "openai:gpt-4o",
    toolsets=[MCPServerStreamableHTTP("http://localhost:8000/mcp")],
)
equator.run(agent, name="my-tester")
```

The lower layers (protocol.py, state.py, components/) have no pydantic-ai dependency — any async backend that implements SessionProtocol can drive the TUI. beetle uses this to run a completely different session type with the same rendering infrastructure.
Custom commands are registered via CommandRegistry and passed to equator.run(). Three kinds: ACTION (executes TUI-side logic), PROMPT (pre-fills the input box), SCRIPT (sends a fixed message to the agent).
See packages/equator/README.md for the full reference.
### beetle — live log interpreter
Wraps any Python process with a full-screen TUI that ingests logs over TCP and interprets them in plain language using a local LLM. No API keys required — runs on Ollama.
```shell
uv run beetle   # listens on localhost:9020
```

Wire your application with the built-in handler:
```python
import logging

from beetle.log_server import BeetleHandler

logging.getLogger().addHandler(BeetleHandler())
```

Or use the zero-dependency snippet if you don't want beetle as a project dependency:
```python
import json
import logging
import socket
import traceback


class BeetleHandler(logging.Handler):
    def __init__(self, host="localhost", port=9020):
        super().__init__()
        self._sock = socket.create_connection((host, port))

    def emit(self, record):
        # Format the exception attached to the record, if any
        exc = "".join(traceback.format_exception(*record.exc_info)) if record.exc_info else None
        data = json.dumps({
            "level": record.levelno,
            "name": record.name,
            "msg": record.getMessage(),
            "exc": exc,
        }) + "\n"
        try:
            self._sock.sendall(data.encode())
        except OSError:
            self.handleError(record)


logging.getLogger().addHandler(BeetleHandler())
```

Options:
```shell
beetle --port 9021       # custom port (default: 9020)
beetle --logs ./app.log  # pre-load a log file on startup
beetle --no-server       # disable TCP listener (static analysis)
cat app.log | beetle     # pipe mode

BEETLE_MODEL=ollama:phi4-mini:3.8b  # interpreter model (default: ollama:phi4-mini:3.8b)
```

See packages/beetle/README.md for the full reference.
### tropical — MCP protocol inspector
A full-screen TUI for raw MCP protocol inspection. Browse tools, resources, and prompts; execute requests; view responses with syntax highlighting and markdown rendering. No API keys required.
```shell
uv run tropical                                         # standalone
uv run tropical connect-http http://localhost:8000/mcp  # connect directly
uv run tropical connect-http http://localhost:8000/mcp --header "Authorization=Bearer <token>"
```

Supports STDIO, HTTP (Streamable), and TCP transports. Server configs persist in ~/.config/tropical/servers.yaml.
### lab_mouse — agent REPL
An interactive terminal agent connected to your MCP server. Tests whether the LLM actually uses your tools correctly — not just whether the tools return the right data.
```shell
uv run python start.py   # starts both mcp_server and lab_mouse
```

or manually:

```shell
uv run mcp_server   # Terminal 1
uv run lab_mouse    # Terminal 2
```

## Running Tests

```shell
uv run pytest
```

Agentic tests require a running server:
```shell
# Terminal 1: start the server
uv run mcp_server

# Terminal 2: run agentic tests
uv run pytest tests/agentic/ -v
```

Coverage threshold: 80% (enforced in CI).
## How to Create a New Tool

1. Create a feature folder under mcp_server/src/mcp_server/tool_box/:

   ```
   tool_box/
   └── my_feature/
       ├── __init__.py
       ├── tools.py          # add_tool(mcp) function
       ├── schemas.py        # Pydantic input/output models
       ├── tool_names.py     # ToolNames constants
       └── docstrings/
           ├── __init__.py   # DOCSTRINGS registry
           └── my_tool_docs.py
   ```

2. Use _tools_template/tools.py as your reference — every architectural decision is annotated.

3. Register your tool in tool_box/__init__.py:

   ```python
   from .my_feature.tools import add_tool as add_my_feature_tool

   def register_all_tools(mcp):
       add_template_tool(mcp)
       add_my_feature_tool(mcp)  # ← add here
   ```

4. Add your tool name to the root ToolNames registry in tool_box/tool_names.py.
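A sketch of what the pieces look like wired together. The tool name, schema, and `FakeMCP` stand-in below are hypothetical — the real contract is in _tools_template/tools.py — but the shape of `add_tool(mcp)` plus a `ToolNames` constant is the pattern the steps above describe:

```python
class ToolNames:
    # All tool names are constants — never inline strings
    MY_TOOL = "my_feature_my_tool"


def add_tool(mcp) -> None:
    """Register this feature's tools on the given server instance."""

    @mcp.tool(name=ToolNames.MY_TOOL)
    def my_tool(query: str) -> dict:
        """The docstring is the agent-facing contract (see the registry)."""
        return {"summary": f"processed {query!r}"}


class FakeMCP:
    """Minimal stand-in that records registrations, for demonstration only."""

    def __init__(self):
        self.tools = {}

    def tool(self, name):
        def decorator(fn):
            self.tools[name] = fn
            return fn
        return decorator


mcp = FakeMCP()
add_tool(mcp)
print(sorted(mcp.tools))  # ['my_feature_my_tool']
```

Because registration goes through `add_tool(mcp)`, the feature folder stays self-contained and `register_all_tools` remains a one-line-per-feature list.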
## How to Write Effective Tool Docstrings
See docs/TOOLS_BEST_PRACTICES.md for the full guide. Key principles:
- **Everything is a prompt** — function names, argument names, docstrings, and responses all shape agent behavior.
- **Examples are contracts** — show the agent what success looks like; it will follow the pattern.
- **Flat arguments > nested** — agents struggle with deeply nested inputs; prefer flat Pydantic models.
- **ResponseFormat enum** — give agents control over output verbosity to manage token budgets.
- **Token budget** — allocate a maximum token budget per tool before you write it.
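The "flat arguments > nested" principle is easiest to see side by side. This sketch uses stdlib dataclasses as stand-ins for the Pydantic models; the field names are invented for illustration:

```python
from dataclasses import dataclass


# Nested: the agent must emit correctly structured nested JSON —
# one extra level of braces is one extra way to fail.
@dataclass
class Filters:
    status: str
    max_results: int


@dataclass
class NestedSearchInput:
    query: str
    filters: Filters


# Flat: every field is a top-level argument with a sensible default,
# so the agent can fill only what it needs.
@dataclass
class FlatSearchInput:
    query: str
    status: str = "open"
    max_results: int = 10


flat = FlatSearchInput(query="billing errors")
print(flat.status, flat.max_results)  # open 10
```

The flat form also makes each argument name visible in the tool schema at the top level — and since everything is a prompt, that visibility matters.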
## How to Write Agent Instructions
See docs/MCP_INSTRUCTIONS_FRAMEWORK.md for the 4-layer framework:
1. **Mental Model** — domain-specific interpretive lens
2. **Categories** — mutually exclusive use-case classification slots
3. **Procedural Knowledge** — tool chains and guard rails per category
4. **Examples** — few-shot intent → action demonstrations
Edit mcp_server/src/mcp_server/instructions/instructions.py to replace the generic template with your domain instructions.
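A hedged sketch of how the four layers might be laid out in instructions.py — the actual template's structure and section wording may differ; the categories and the example chain below are invented placeholders (the `md_list_sections` call echoes the earlier TUI screenshot):

```python
# Illustrative 4-layer instructions skeleton; replace each layer
# with your domain's content.
INSTRUCTIONS = """\
## Mental Model
Treat every request as an operation on <your domain objects>.

## Categories
1. LOOKUP — read-only questions about existing data
2. MUTATE — requests that change state

## Procedural Knowledge
LOOKUP: call list/read tools first; never guess identifiers.
MUTATE: confirm the target exists via a LOOKUP chain before writing.

## Examples
User: "what sections does the resume have?"
  -> md_list_sections(document="RESUME")
"""


def build_instructions() -> str:
    return INSTRUCTIONS
```

Keeping the layers as labelled sections makes it obvious which layer a future edit belongs to, which is the main value of the framework.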
## VS Code Debugging
Add to .vscode/launch.json:
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "MCP Server",
      "type": "python",
      "request": "launch",
      "module": "mcp_server",
      "justMyCode": false,
      "env": {
        "PYTHONPATH": "${workspaceFolder}/mcp_server/src:${workspaceFolder}/packages/mcp_shared/src"
      }
    }
  ]
}
```

## Documentation
| Document | Description |
| --- | --- |
| docs/TOOLS_BEST_PRACTICES.md | Best practices for designing MCP tools |
| docs/MCP_INSTRUCTIONS_FRAMEWORK.md | 4-layer agent instructions design framework |
| | UV workspace mechanics and package management |
| | Creating and consuming workspace packages |
| | Python and UV external resources |
| packages/equator/README.md | equator full reference |
| packages/beetle/README.md | beetle full reference |