Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Basic MCP fetch the latest article about AI from TechCrunch".
That's it! The server will respond to your query, and you can continue using it as needed.
basic mcp
The goal of this is just to write a simple MCP server with a handful of tools that can be plugged into a local LLM.
It's just for experimentation, but it's also straightforward enough to be human-readable (for those trying to learn).
Learning
In src/basic_mcp/main.py you can see how straightforward it is to make an MCP server with FastMCP. It's little more than a decorator call per tool, and the docstrings are what get handed to the LLM.
Note that MCP servers are (in some ways) just polite suggestions of text to LLMs, so without the docstrings here it actually won't work (or will work many orders of magnitude worse).
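As a rough sketch of the pattern (the tool here is made up for illustration, and depending on which FastMCP package you have, the import may be from mcp.server.fastmcp instead):

```python
from fastmcp import FastMCP

mcp = FastMCP("basic-mcp")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""  # this docstring is the text the LLM actually sees
    return a + b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```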
In src/basic_mcp/tools/web_tools.py there's a simple 'fetch article' tool. This is mostly there so I have a placeholder/structure for extra tools as they're needed.
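Continuing the sketch above, such a tool might look roughly like this (the name fetch_article and the stdlib-only implementation are assumptions, not the actual code):

```python
from urllib.request import urlopen

@mcp.tool()
def fetch_article(url: str) -> str:
    """Fetch the raw contents of an article at the given URL."""
    with urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")
```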
install
As with everything, you should set this up in a virtualenv.
Then it's just a pip install -e .
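For example (assuming a POSIX shell and Python's built-in venv module):

```sh
python -m venv .venv
source .venv/bin/activate
pip install -e .
```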
running
cd src/basic_mcp && python main.py
uvx
This can be called with uvx --from /path/to/basic-mcp/ basic-mcp if it needs to be launched that way (for example, with Jan).
Also note that if uvx is installed in a virtualenv, it should be called with the full path to the binary inside the virtualenv's bin directory.
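Putting those together (paths here are placeholders):

```sh
# uvx on the PATH:
uvx --from /path/to/basic-mcp/ basic-mcp

# uvx installed inside a virtualenv:
/path/to/venv/bin/uvx --from /path/to/basic-mcp/ basic-mcp
```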
Output
Note that the output transport is stdio, not a web server. This can be changed easily enough inside the main function by removing the transport parameter; it will then default to being a web server.
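A sketch of what that switch looks like, assuming the FastMCP instance is named mcp (sse is one of FastMCP's web transports):

```python
if __name__ == "__main__":
    # stdio transport, as shipped:
    mcp.run(transport="stdio")
    # or serve over HTTP/SSE instead:
    # mcp.run(transport="sse")
```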