mcp-filesystem-readonly
A read-only filesystem FastMCP server. Configure a root directory and let AI assistants browse its contents via MCP tools.
MCP (Model Context Protocol) is an open standard that lets AI assistants call external tools and services. This server implements MCP over HTTP so any MCP-compatible AI application can reach it.
Prerequisites
Docker — for the Docker Compose deployment path
uv — for the source deployment path (see Installing uv)
Node.js — required for the git commit hooks, which use commitlint to enforce Conventional Commits
Customising the Template
1. Copy the template
On GitHub — click Use this template → Create a new repository. This creates a clean copy with no fork relationship and no template history.
Without GitHub — clone, strip the history, and reinitialise:
```shell
git clone https://github.com/sesopenko/mcp-base.git my-project
cd my-project
rm -rf .git
git init
git add .
git commit -m "chore: bootstrap from mcp-base template"
```
2. Customise identity values
Edit project.env to set your own values (Docker image name, package name, project name, description), then run the setup script to substitute them throughout the repository:
```shell
bash scripts/apply-project-config.sh
```
The script is idempotent — safe to run multiple times.
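The script itself isn't shown here, but the idempotency property is worth illustrating: a plain text substitution is safe to re-run as long as no replacement value contains the placeholder it replaces. A minimal sketch in Python — the `SUBSTITUTIONS` mapping below is illustrative, not the script's actual contents:

```python
from pathlib import Path

# Hypothetical placeholder -> value mapping; the real names come from project.env.
SUBSTITUTIONS = {"mcp-base": "mcp-filesystem-readonly"}

def apply_config(path: Path) -> None:
    """Replace each placeholder in the file; re-running is a no-op."""
    text = path.read_text()
    for old, new in SUBSTITUTIONS.items():
        # Idempotent because "mcp-filesystem-readonly" does not contain "mcp-base".
        text = text.replace(old, new)
    path.write_text(text)
```

Running `apply_config` a second time leaves the file unchanged, which is exactly the guarantee the setup script advertises.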
Quick Start
Option A — Docker Compose
Create a docker-compose.yml:
```yaml
services:
  mcp-filesystem-readonly:
    image: sesopenko/mcp-filesystem-readonly:latest
    ports:
      - "8080:8080"
    volumes:
      - ./config.toml:/config/config.toml:ro
      - /mnt/video:/mnt/video:ro
    restart: unless-stopped
```
Copy the example config and edit it:
```shell
cp config.toml.example config.toml
```
Start the server:
```shell
docker compose up -d
```
Option B — Run from Source
Install uv if you haven't already.
Install dependencies:
```shell
uv sync
```
Copy the example config and edit it:
```shell
cp config.toml.example config.toml
```
Start the server:
```shell
uv run python -m mcp_base
```
Security
This server has no authentication on its MCP endpoint. It is designed for LAN use only.
Do not expose this server directly to the internet.
If you need to access it remotely, place it behind a reverse proxy that handles TLS termination and access control. Configuring a reverse proxy is outside the scope of this project.
Configuration
Create a config.toml in the working directory (or pass --config <path>):
```toml
[server]
host = "0.0.0.0"
port = 8080

[logging]
level = "info"

[filesystem]
roots = "/mnt/video"
```
[server]
| Key | Default | Description |
|-----|---------|-------------|
| `host` | `0.0.0.0` | Address the MCP server listens on. |
| `port` | `8080` | Port the MCP server listens on. |
[logging]
| Key | Default | Description |
|-----|---------|-------------|
| `level` | `info` | Log verbosity. |
[filesystem]
| Key | Required | Description |
|-----|----------|-------------|
| `roots` | yes | Comma-separated list of absolute paths exposed via the MCP tools. |
Connecting an AI Application
This server uses the Streamable HTTP MCP transport. Clients communicate via HTTP POST with streaming responses — opening the endpoint in a browser will return a Not Acceptable error, which is expected.
Point your MCP-compatible AI application at the server's MCP endpoint:
```
http://<host>:<port>/mcp
```
For example, if the server is running on 192.168.1.10 with the default port:
```
http://192.168.1.10:8080/mcp
```
Consult your AI application's documentation for how to register an MCP server. Ensure it supports the Streamable HTTP transport (most modern MCP clients do).
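If you want a quick sanity check without a full MCP client, you can hand-build the JSON-RPC initialize request the Streamable HTTP transport expects. This sketch only constructs the request; actually sending it is left to you, and the exact `protocolVersion` and capability fields are assumptions that depend on the MCP spec revision your client targets:

```python
import json

MCP_ENDPOINT = "http://192.168.1.10:8080/mcp"  # adjust host/port to your deployment

# Streamable HTTP clients POST JSON-RPC messages and must accept
# both plain JSON and SSE responses.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

# JSON-RPC 2.0 "initialize" request that opens an MCP session.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # assumption: pin to the revision your client supports
        "capabilities": {},
        "clientInfo": {"name": "probe", "version": "0.1"},
    },
}

body = json.dumps(initialize)
```

Note the `Accept` header: omitting `text/event-stream` is the usual cause of the Not Acceptable error mentioned above.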
Example System Prompt
XML is preferred over markdown for system prompts because explicit named tags give unambiguous semantic meaning — the AI always knows exactly what each block contains. Markdown headings require inference and are more likely to be misinterpreted.
Copy and adapt this prompt to give your AI assistant clear guidance on using the tools.
Tip — let an LLM write this for you. XML-structured system prompts are effective but unfamiliar to most developers and tedious to write by hand. A quick conversation with any capable LLM (describe your tools, what they do, and how you want the assistant to behave) will produce a well-structured prompt you can drop straight in. The results are often better than anything written manually as plain text or markdown.
XML tags act like labeled folders — the model knows exactly where each piece of information starts and stops
Training data is full of structured markup, so models already "think" in tags naturally
Tags prevent the model from confusing your instructions with the content it's working on
```xml
<system>
<role>
You are a helpful assistant with access to a read-only filesystem MCP server.
Use the available tools to browse and describe files at the user's request.
</role>
<tools>
<tool name="health_check">Check that the MCP server is running and reachable.</tool>
<tool name="list_root_paths">Return the configured root directory paths. Call this first to discover the starting points for file listing.</tool>
<tool name="list_inclusion_filters">Return all available filters for list_folder. Filters reduce token cost by excluding metadata and sidecar files (like .nfo, .jpg, .srt). Built-in filters include "all", "folders_only", "videos", "music", and "pictures".</tool>
<tool name="list_folder">List the contents of a directory. Requires an absolute path within one of the configured roots and a filter name from list_inclusion_filters. Returns name, size_mb, and is_folder for each entry.</tool>
</tools>
<guidelines>
<item>Call health_check if the user asks whether the server is available.</item>
<item>Call list_root_paths before attempting to list files so you know where to start.</item>
<item>Call list_inclusion_filters to discover available filters before calling list_folder.</item>
<item>Use list_folder with a path returned by list_root_paths and an appropriate filter name to browse the filesystem efficiently.</item>
<item>Prefer specific filters (videos, music, pictures) over "all" to reduce token usage when browsing media directories.</item>
<item>Do not guess paths — only navigate to paths you have discovered through the tools.</item>
</guidelines>
</system>
```
Available Tools
| Tool | Description |
|------|-------------|
| `health_check` | Returns a health status confirming the server is running and reachable. |
| `list_root_paths` | Returns the configured root directory paths. Call this first to discover where to start listing. |
| `list_inclusion_filters` | Returns all available filters for `list_folder`. |
| `list_folder` | Lists the contents of a directory within one of the configured roots. Requires an absolute path and a filter name from `list_inclusion_filters`. Returns name, size_mb, and is_folder for each entry. |
Architecture
The template follows a clean three-layer separation:
| File | Purpose |
|------|---------|
| `src/mcp_base/tools.py` | Pure Python functions — one function per tool, no framework coupling |
| `src/mcp_base/server.py` | FastMCP wiring — registers tool functions with `@mcp.tool()` |
| | TOML config loading — typed dataclasses for each config section |
| | Structured logger factory |
Adding a tool
1. Add a function to `src/mcp_base/tools.py` with a Google-style docstring and full type annotations.
2. Import the function in `src/mcp_base/server.py` and register it with `@mcp.tool()`.
3. Add a unit test in `tests/unit/`.
4. Add a row to the Available Tools table in this README.
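As an illustration of step 1, here is what a pure, framework-free tool function might look like. The function name `count_entries` and its signature are hypothetical, not part of the template; the root-containment check mirrors the read-only, roots-restricted behaviour described above:

```python
import os

def count_entries(path: str, roots: list[str]) -> int:
    """Count the entries directly inside a directory.

    Args:
        path: Absolute directory path to inspect.
        roots: Configured root paths; path must fall under one of them.

    Returns:
        The number of entries directly inside the directory.

    Raises:
        ValueError: If path resolves outside every configured root.
    """
    resolved = os.path.realpath(path)
    # Refuse paths outside the configured roots, including symlink escapes.
    if not any(resolved == r or resolved.startswith(r.rstrip("/") + "/") for r in roots):
        raise ValueError(f"{path} is outside the configured roots")
    return len(os.listdir(resolved))
```

Because the function has no FastMCP imports, it can be unit-tested directly (step 3); server.py would then apply `@mcp.tool()` to the imported function (step 2).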
Running Tests
```shell
uv run pytest tests/unit/
```
Contributing / Maintaining
See MAINTAINERS.md for setup, development commands, AI agent rails, and how to run tests.
License
Copyright (c) Sean Esopenko 2026
This project is licensed under the GNU General Public License v3.0.
Acknowledgement: Riding on the Backs of Giants
This project was built with the assistance of Claude Code, an AI coding assistant developed by Anthropic.
AI assistants like Claude are trained on enormous amounts of data — much of it written by the open-source community: the libraries, tools, documentation, and decades of shared knowledge that developers have contributed freely. Without that foundation, tools like this would not be possible.
In recognition of that debt, this project is released under the GNU General Public License v3.0. The GPL ensures that this code — and any derivative work — remains open source. It is a small act of reciprocity: giving back to the commons that made it possible.
To every developer who ever pushed a commit to a public repo, wrote a Stack Overflow answer, or published a package under an open license — thank you.