
Universal MCP Server

by amin2179

MCP Server for LLMs

This is a basic implementation of an MCP (Model Context Protocol) server that can be used with LLMs, including GGUF models.

Features

  • Resource management (create, read, update, delete)

  • Tool management (create, read, delete)

  • Prompt handling with LLM integration

  • RESTful API

  • Automatic GGUF model detection and loading

  • Model selection via configuration

  • Built-in and dynamic tool support

  • Web data fetching and file downloading

  • Secure command execution

  • File read/write operations

  • System information retrieval

Setup

  1. Create a virtual environment:

    python3 -m venv venv
  2. Activate the virtual environment:

    source venv/bin/activate
  3. Install dependencies:

    pip install -r requirements.txt

Configuration

The server can be configured using the config.json file:

  • server: Server host and port settings

  • logging: Logging level

  • llm: LLM configuration including:

    • model_path: Directory to search for GGUF models

    • model_file: Specific model file to use (optional, will use first found if not specified)

    • n_ctx: Context window size

    • n_threads: Number of CPU threads to use

  • cors: CORS settings for web integration

By default, the server looks for GGUF models in /home/kali/.lmstudio/models/lmstudio-community/. You can change this in the config file.
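
For reference, a config.json using the options above might look like the following. The nested field names and the cors shape are assumptions inferred from this list, so check the shipped config.json for the exact schema:

```json
{
  "server": {
    "host": "localhost",
    "port": 3000
  },
  "logging": {
    "level": "INFO"
  },
  "llm": {
    "model_path": "/home/kali/.lmstudio/models/lmstudio-community/",
    "model_file": "my-model.Q4_K_M.gguf",
    "n_ctx": 4096,
    "n_threads": 4
  },
  "cors": {
    "allow_origins": ["*"]
  }
}
```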

To see what models are available, run:

python check_models.py

Running the Server

You can start the server in several ways:

  1. Using the run script (recommended):

    ./run.sh
  2. Directly with Python:

    python main.py
  3. In the background with logging:

    nohup python3 main.py > server.log 2>&1 &

The server will start on http://localhost:3000.
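
Once the server is up, a quick way to confirm it is responding is to hit the root endpoint, which returns server information. A minimal sketch using only the standard library (the exact response body depends on the server):

```python
import json
import urllib.request

def get_server_info(base_url="http://localhost:3000"):
    """Fetch GET / and decode the server-information response."""
    with urllib.request.urlopen(f"{base_url}/") as response:
        return json.loads(response.read())
```

With the server running, `print(get_server_info())` should print the server-information JSON.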

Stopping the Server

To stop the server, you can use the stop script:

./stop.sh

Or if running in the foreground, use Ctrl+C to stop it.

If you started it manually in the background, you can stop it with:

pkill -f "python3 main.py"

API Endpoints

  • GET / - Server information

  • GET /resources - List all resources

  • GET /resources/{uri} - Get a specific resource

  • POST /resources - Create a new resource

  • PUT /resources/{uri} - Update a resource

  • DELETE /resources/{uri} - Delete a resource

  • GET /tools - List all tools

  • GET /tools/{name} - Get a specific tool

  • POST /tools - Create a new tool

  • DELETE /tools/{name} - Delete a tool

  • POST /tools/{name}/execute - Execute a tool

  • POST /prompts - Handle a prompt request
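
As a sketch of calling these endpoints from Python (standard library only), the helper below POSTs a JSON body. The resource field names in the example payload are assumptions, so match them to the server's Pydantic models:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000"

def post_json(path, payload):
    """POST a JSON payload to the server and return the decoded response."""
    request = urllib.request.Request(
        f"{BASE_URL}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Hypothetical resource body -- the field names are assumptions.
new_resource = {"uri": "notes/hello", "name": "Hello note", "content": "Hi!"}
```

With the server running, `post_json("/resources", new_resource)` would create the resource, and the same helper works for the other POST endpoints.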

Tool Support

The server includes several built-in tools:

  1. get_current_time - Get the current date and time

  2. calculate - Perform basic arithmetic calculations

  3. search_resources - Search for resources containing specific text

  4. get_system_info - Get comprehensive system information including OS, memory, CPU, and disk usage

Additionally, the server includes dynamically loaded tools from the tools/ directory:

  1. fetch_web_data - Fetch data from a web URL

  2. download_file - Download a file from a URL

  3. execute_command - Execute a shell command (restricted to a whitelist of safe commands)

  4. read_file - Read content from a file (restricted to temporary directories)

  5. write_file - Write content to a file (restricted to temporary directories)

You can create custom tool plugins in the tools/ directory. See tools/example_tool.py for an example.
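
Tools are invoked through POST /tools/{name}/execute. The sketch below calls the built-in calculate tool; both the "arguments" envelope and the "expression" parameter name are assumptions, so adjust them to the server's request model:

```python
import json
import urllib.request

def execute_tool(name, arguments, base_url="http://localhost:3000"):
    """POST to /tools/{name}/execute with a JSON arguments body.

    The {"arguments": ...} envelope is an assumption -- match it to
    the server's request model.
    """
    request = urllib.request.Request(
        f"{base_url}/tools/{name}/execute",
        data=json.dumps({"arguments": arguments}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

With the server running, `execute_tool("calculate", {"expression": "2 + 2"})` would run the arithmetic tool and return its result.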

Security Considerations

The server implements several security measures:

  • File operations are restricted to temporary directories only

  • Command execution is limited to a whitelist of safe commands

  • Web requests send browser-like headers to reduce the chance of being blocked by remote servers
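
The server's actual whitelist lives in its source, but the idea behind the command restriction can be sketched like this (the command set here is illustrative, not the server's real list):

```python
import shlex

# Illustrative whitelist -- not the server's actual list.
SAFE_COMMANDS = {"ls", "pwd", "date", "whoami", "uname", "df"}

def is_safe_command(command_line):
    """Allow a command only if its executable is on the whitelist."""
    parts = shlex.split(command_line)
    return bool(parts) and parts[0] in SAFE_COMMANDS
```

Checking only the first token keeps the policy simple: `is_safe_command("ls -la")` passes, while `is_safe_command("rm -rf /")` is rejected.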

Testing

You can test the server with the provided test clients:

  1. Basic test client:

    python test_client.py
  2. Enhanced tool test client:

    python test_tool_client.py
  3. Comprehensive tools test client:

    python test_comprehensive_tools.py
  4. LLM test client:

    python llm_client.py
  5. MCP LLM test client:

    python mcp_llm_client.py
  6. Model checking script:

    python check_models.py

Make sure the server is running before launching the test clients.

Integration with LLMs

To use this MCP server with LLMs:

  1. Start the server

  2. Configure your LLM application to connect to http://localhost:3000

  3. Use the MCP API to manage resources and tools that your LLM can access

  4. The server will automatically use GGUF models from LM Studio or Ollama

The server automatically detects and loads the specified GGUF model from the configured model directory. It uses llama-cpp-python for inference.
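
A prompt round-trip through POST /prompts might look like the following sketch; the "prompt" field name is an assumption, so check the server's request model:

```python
import json
import urllib.request

def send_prompt(prompt, base_url="http://localhost:3000"):
    """POST a prompt to /prompts and return the decoded response.

    The {"prompt": ...} body shape is an assumption.
    """
    request = urllib.request.Request(
        f"{base_url}/prompts",
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

With the server running and a GGUF model loaded, `send_prompt("Summarize MCP in one sentence.")` would return the model's completion.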

Extending the Server

To add more functionality:

  1. Add new endpoints in main.py

  2. Implement additional business logic

  3. Add new Pydantic models for request/response validation

  4. Create custom tool plugins in the tools/ directory
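
For step 4, a custom tool plugin might be structured roughly like the sketch below. The real interface is defined by tools/example_tool.py, so the attribute and function names here are assumptions to adapt:

```python
# tools/word_count.py -- hypothetical plugin. The attribute and function
# names are assumptions; mirror tools/example_tool.py in practice.

TOOL_NAME = "word_count"
TOOL_DESCRIPTION = "Count the words in a piece of text."

def execute(arguments):
    """Entry point the server could call with the tool's arguments."""
    text = arguments.get("text", "")
    return {"word_count": len(text.split())}
```

Dropping a file like this into tools/ is what lets the server pick up new tools dynamically, alongside the bundled fetch_web_data, download_file, and friends.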
