Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@LLM Tool-Calling Assistant calculate the monthly payment for a $250,000 mortgage at 4.5% interest over 30 years".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
This project connects a local LLM (e.g. Qwen) to tools such as a calculator or a knowledge base via the MCP protocol. The assistant automatically detects and calls these tools to help answer user queries.
📦 Features

🔧 Tool execution through MCP server
🧠 Local LLM integration via HTTP or OpenAI SDK
📚 Knowledge base support (`data.json`)
⚡ Supports `stdio` and `sse` transports
📂 Project Files
| File | Description |
| --- | --- |
| `server.py` | Registers tools and starts the MCP server |
| `client-http.py` | Uses raw HTTP requests for LLM + tool call logic |
| `clientopenai.py` | Uses OpenAI-compatible SDK for LLM + tool call logic |
| `client-stdio.py` | MCP client using stdio |
| `client-sse.py` | MCP client using SSE |
| `data.json` | Q&A knowledge base |
📥 Installation
Requirements
Python 3.8+
Install dependencies from `requirements.txt`:
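Assuming a standard Python environment, run this from the project root:

```bash
pip install -r requirements.txt
```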
🚀 Getting Started
1. Run the MCP server
This launches your tool server with functions like `add`, `multiply`, and `get_knowledge_base`.
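Assuming the server entry point is `server.py` (the file referenced later in this README):

```bash
python server.py
```

For orientation, here is a minimal sketch of what such a server can look like using the `FastMCP` helper from the official `mcp` Python SDK; the tool names match the ones above, but the actual `server.py` may differ:

```python
import json
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("LLM Tool-Calling Assistant")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

@mcp.tool()
def get_knowledge_base() -> str:
    """Return the Q&A knowledge base from data.json as a JSON string."""
    with open("data.json") as f:
        return json.dumps(json.load(f))

if __name__ == "__main__":
    mcp.run(transport="stdio")  # switch to transport="sse" for the SSE client
```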
2. Start a client
Option A: HTTP client (local LLM via raw API)
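Using the file name referenced in the Configuration section below:

```bash
python client-http.py
```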
Option B: OpenAI SDK client
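Using the other client file from the Configuration section:

```bash
python clientopenai.py
```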
Option C: stdio transport
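Assuming the stdio client is named `client-stdio.py` (name inferred from the project layout above):

```bash
python client-stdio.py
```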
Option D: SSE transport
Make sure `server.py` sets:
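With a `FastMCP`-style server like the sketch above, that means starting the server with the SSE transport (an assumption about the actual code):

```python
mcp.run(transport="sse")
```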
Then run:
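Restart the server with that transport, then start the SSE client (assuming it is named `client-sse.py`, matching the project layout above):

```bash
python server.py      # in one terminal
python client-sse.py  # in another terminal
```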
💬 Example Prompts
Math Tool Call
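An illustrative prompt that exercises the math tools (exact phrasing is up to you):

```
Add 15 and 27, then multiply the result by 2.
```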
Response (illustrative; your model's wording will differ):
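```
15 + 27 = 42, and 42 × 2 = 84.
```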
Knowledge Base Question
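For example, assuming `data.json` holds policy Q&A entries like the sample shown below:

```
What is the return policy?
```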
Response will include the relevant answer from `data.json`.
📄 Example: `data.json`
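The real file's contents are project-specific; this is a minimal sketch of the expected question/answer structure:

```json
[
  {
    "question": "What is the return policy?",
    "answer": "Items can be returned within 30 days of purchase."
  },
  {
    "question": "What are the support hours?",
    "answer": "Support is available Monday through Friday, 9am to 5pm."
  }
]
```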
🔧 Configuration
Inside `client-http.py` or `clientopenai.py`, update the following:
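The exact variable names depend on the client code; a typical OpenAI-SDK-style setup looks like this (the endpoint, key, and model values below are placeholders for your local server):

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # your local LLM's OpenAI-compatible endpoint
    api_key="local",                       # many local servers accept any non-empty key
)
MODEL = "qwen2.5"                          # whatever model name your server exposes
```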
Make sure your local LLM serves an OpenAI-compatible API endpoint.
🧹 Cleanup
Clients handle tool calls and responses automatically. You can stop the server or client using Ctrl+C.
🪪 License
MIT License. See the `LICENSE` file.