MCP Command and Search Server

An MCP-based client-server application for command execution and Brave Search

Overview

This project implements a client-server architecture using MCP (Model Context Protocol) to handle user prompts, determine their intent using a Large Language Model (LLM), and route them to the appropriate service for execution. The system consists of two main components:

  • Client: Processes user input, sends it to the LLM, and forwards the request to the appropriate server based on the LLM's decision.
  • Server: Handles requests based on the tool specified in the LLM's JSON response. It either executes system commands or fetches web data using the Brave Search API.

The LLM determines whether the user request requires command execution or web search. If the prompt is unclear, the LLM asks follow-up questions before generating a structured JSON response specifying the tool name (command_execution or fetch_web_data) and its required arguments.
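For example, a routed request might produce a response of this shape (the field names here are illustrative assumptions, not the project's fixed schema):

```json
{
  "tool": "command_execution",
  "arguments": {
    "command": "touch test.txt"
  }
}
```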

Flow Diagram

<img src="flow_diagram.png" alt="MCP Client-Server Flow" width="600" height="700">

Working

  1. User Input: The user enters a prompt in the CLI.
  2. Client Processing: The client forwards the prompt to the LLM.
  3. LLM Decision:
    • If the intent is unclear, the LLM asks follow-up questions.
    • It generates a JSON response specifying the tool name and required arguments.
  4. Client Routing:
    • If the tool is command_execution:
      • The request is sent to the Command Server.
      • The Command Server executes the command using Python’s subprocess module (see the sketch after this list).
      • A success or failure response is returned.
    • If the tool is fetch_web_data:
      • The request is sent to the Fetch Web Data Server.
      • The server queries the Brave Search API for relevant results.
      • The search results are returned to the client.
  5. Client Response: The client sends the final response back to the user via the CLI.
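
For illustration, here is a minimal sketch of the Command Server's execution step, assuming it shells out with Python's subprocess.run (the function name, timeout, and message format are assumptions, not the project's exact code):

```python
# Minimal sketch of the command_execution tool's core logic.
# Assumes a plain shell command string; the real MCP server wraps
# this in a tool handler.
import subprocess

def run_command(command: str) -> str:
    """Run a shell command and return a success or failure message."""
    try:
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=30
        )
    except subprocess.TimeoutExpired:
        return "Failure: command timed out"
    if result.returncode == 0:
        return f"Success: {result.stdout.strip()}"
    return f"Failure: {result.stderr.strip()}"
```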

Prerequisites

  • Python 3.x and pip
  • uv (used to create the virtual environment and run the client)
  • An Ollama installation or a Groq API key, depending on the model you use
  • A Brave Search API key
  • Docker and Docker Compose (only for the Docker setup)

Installation

1. Clone the repository

git clone https://github.com/mjunaid46/mcp
cd mcp

2. Create a virtual environment and activate it

# Create a virtual environment
uv venv

# Activate the virtual environment
# On Unix or macOS:
source .venv/bin/activate
# On Windows:
.venv\Scripts\activate

3. Install dependencies

pip install -r requirements.txt
python -m ensurepip

4. Configure the LLM Model

Using Ollama Model

  1. Install the Ollama CLI tool by following the instructions at Ollama Installation Guide.
  2. Verify the installation by listing the available models:
    ollama list
  3. Specify the model in the client command (llama3 or llama2):
    uv run client/client.py server/command_server.py server/web_server.py ollama llama3

Using Groq Model

  1. Create a .env file to store your Groq API key:
    touch .env
  2. Add your Groq API key to the .env file:
    GROQ_API_KEY=<your_groq_api_key_here>

5. Configure the Brave Search API

Add your Brave Search API key to the .env file:

BRAVE_SEARCH_API_KEY=<your_brave_search_api_key_here>
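
For illustration, a minimal sketch of the kind of query the Fetch Web Data Server sends, assuming the requests and python-dotenv packages are installed; the helper name and the result fields kept are assumptions, and the endpoint shown is Brave's public web search API:

```python
# Hypothetical sketch of a Brave Search lookup; the real server wraps
# this in an MCP tool.
import os

import requests
from dotenv import load_dotenv

load_dotenv()  # reads BRAVE_SEARCH_API_KEY from .env

def fetch_web_data(query: str) -> list[dict]:
    """Return title/URL pairs for a web search query."""
    response = requests.get(
        "https://api.search.brave.com/res/v1/web/search",
        headers={
            "Accept": "application/json",
            "X-Subscription-Token": os.environ["BRAVE_SEARCH_API_KEY"],
        },
        params={"q": query},
        timeout=10,
    )
    response.raise_for_status()
    # "web" -> "results" is the web-results section of Brave's payload.
    results = response.json().get("web", {}).get("results", [])
    return [{"title": r.get("title"), "url": r.get("url")} for r in results]
```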

Run

  • For using Ollama Model
uv run client/client.py server/command_server.py server/web_server.py ollama llama3
  • For using Groq Model
uv run client/client.py server/command_server.py server/web_server.py groq

Test

Give a query to the client (e.g., touch test.txt, create a text file with test, rm test.txt, etc.).

# Try the prompts below one by one to test.
What is the capital of Pakistan?
What is MCP?
Create a file in my present working directory.

🚀 Docker Project Setup Guide

📌 Steps to Run the Code

1️⃣ Clone the Git Repository

git clone https://github.com/mjunaid46/mcp/
cd mcp

2️⃣ Edit Configuration for Model Selection

Modify the config.ini file to specify the model type and name:

[settings]
model_type = ollama   # Change to "groq" if using Groq
model_name = llama3   # Update model name if needed
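
For reference, a hypothetical sketch of how the client could read these settings with Python's configparser; note that inline_comment_prefixes is needed because of the inline # comments above:

```python
# Read model settings from config.ini using the standard library.
import configparser

config = configparser.ConfigParser(inline_comment_prefixes=("#", ";"))
config.read("config.ini")

model_type = config["settings"]["model_type"]  # e.g. "ollama" or "groq"
model_name = config["settings"]["model_name"]  # e.g. "llama3"
print(model_type, model_name)
```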

3️⃣ Build the Docker Containers

docker-compose build

4️⃣ Run the Model Client

docker-compose run pull-model-client