# MCP Command and Search Server

MCP-based client-server app for command execution and Brave Search.
## Overview
This project implements a client-server architecture using MCP (Model Context Protocol) to handle user prompts, determine their intent using a Large Language Model (LLM), and route them to the appropriate service for execution. The system consists of two main components:
- Client: Processes user input, sends it to the LLM, and forwards the request to the appropriate server based on the LLM's decision.
- Server: Handles requests based on the tool specified in the LLM's JSON response. It either executes system commands or fetches web data using the Brave Search API.
The LLM determines whether the user request requires command execution or a web search. If the prompt is unclear, the LLM asks follow-up questions before generating a structured JSON response specifying the tool name (`command_execution` or `fetch_web_data`) and its required arguments.
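For example, a routing response from the LLM might look like the following; the exact key names (`tool`, `arguments`) and the argument schema are illustrative assumptions, not a fixed contract from this project:

```json
{
  "tool": "command_execution",
  "arguments": {
    "command": "touch test.txt"
  }
}
```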
## Flow Diagram

<img src="flow_diagram.png" alt="MCP Client-Server Flow" width="600" height="700">

## Working
- User Input: The user enters a prompt in the CLI.
- Client Processing: The client forwards the prompt to the LLM.
- LLM Decision:
  - If the intent is unclear, the LLM asks follow-up questions.
  - It generates a JSON response specifying the tool name and required arguments.
- Client Routing:
  - If the tool is `command_execution`:
    - The request is sent to the Command Server.
    - The Command Server executes the command using Python's `subprocess` module (a sketch follows this list).
    - A success or failure response is returned.
  - If the tool is `fetch_web_data`:
    - The request is sent to the Fetch Web Data Server.
    - The server queries the Brave Search API for relevant results.
    - The search results are returned to the client.
- Client Response: The client sends the final response back to the user via the CLI.
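As a rough illustration of the Command Server side, here is a minimal sketch built on the MCP Python SDK's FastMCP helper and the standard `subprocess` module; the server name, tool signature, and file name are assumptions, not the project's actual code:

```python
# command_server.py: minimal sketch, not the project's actual implementation
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("command-server")  # server name is an assumption


@mcp.tool()
def command_execution(command: str) -> str:
    """Run a shell command and return its output or an error message."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    if result.returncode == 0:
        return result.stdout or "Command executed successfully."
    return f"Command failed: {result.stderr}"


if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```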
## Prerequisites
- Python 3.x
- pip (Python package manager)
- Virtual environment setup (optional, but recommended)
- Install UV/UVX: UV Installation Guide
- Brave Search API: Brave Search API Key
- Any of the following LLMs: a local Ollama model (e.g., llama3 or llama2) or a Groq API model
## Installation
1. Clone the repository
2. Create a virtual environment and activate it
3. Install dependencies
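   For example (the repository URL is a placeholder and the requirements file name is an assumption):

   ```bash
   git clone <repository-url>
   cd <repository-directory>
   python3 -m venv venv
   source venv/bin/activate   # Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```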
4. Configure the LLM Model
### Using Ollama Model
- Install the Ollama CLI tool by following the instructions in the Ollama Installation Guide.
- Verify that Ollama is installed and the required model is available.
- Specify the model in the client command (llama3 or llama2), as sketched below.
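A rough sketch of these steps (the `client.py` entry point and `--model` flag are assumptions about this project's CLI):

```bash
# Verify the Ollama installation and make sure the model is available
ollama --version
ollama pull llama3
ollama list

# Hypothetical client invocation selecting the model
python client.py --model llama3
```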
### Using Groq Model
- Create a `.env` file to store the Groq API key.
- Add your Groq API key to the `.env` file:
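For example (`GROQ_API_KEY` is a conventional variable name; match whatever name the client code reads):

```
GROQ_API_KEY=your_groq_api_key_here
```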
5. Configure the Brave Search API
Add your Brave API key to the `.env` file:
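For example (`BRAVE_API_KEY` is a conventional variable name; match whatever name the project's code reads):

```
BRAVE_API_KEY=your_brave_api_key_here
```

For reference, a minimal query against Brave's documented web-search endpoint looks roughly like this; the wrapper function itself is illustrative, not the project's server code:

```python
# Illustrative only; requires `pip install requests`
import os

import requests


def fetch_web_data(query: str) -> dict:
    """Query the Brave Search web endpoint and return the JSON payload."""
    response = requests.get(
        "https://api.search.brave.com/res/v1/web/search",
        params={"q": query},
        headers={
            "Accept": "application/json",
            "X-Subscription-Token": os.environ["BRAVE_API_KEY"],
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```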
## Run
- For the Ollama model
- For the Groq model
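The exact invocation depends on the client's CLI; a hypothetical pair of commands (script name and flags are assumptions) might look like:

```bash
# Ollama (local model)
python client.py --model-type ollama --model llama3

# Groq (API-based; reads GROQ_API_KEY from .env)
python client.py --model-type groq
```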
## Test
Give a query to the client (e.g., `touch test.txt`, `create text file with test`, `rm test.txt file`, etc.).
## 🚀 Docker Project Setup Guide

### 📌 Steps to Run the Code
1️⃣ Clone the Git Repository
2️⃣ Edit Configuration for Model Selection
Modify the `config.ini` file to specify the model type and name:
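A hypothetical layout (the section and key names are assumptions; mirror whatever the repository's `config.ini` actually defines):

```ini
[model]
type = ollama   ; or: groq
name = llama3
```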
3️⃣ Build the Docker Containers
4️⃣ Run the Model Client
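Assuming the project ships a Docker Compose file (a guess based on the plural "containers"; the service name `client` is hypothetical), steps 3 and 4 might look like:

```bash
docker compose build        # build the containers
docker compose up -d        # start the servers in the background
docker compose run client   # run the model client interactively
```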