# hpc-mcp

This project provides MCP tools for HPC, designed to integrate with LLMs. My initial plan is to integrate with LLMs called from IDEs such as Cursor and VSCode.
## Quick Start Guide

This project uses uv for dependency management and installation. If you don't have uv installed, follow the installation instructions on their website. Once uv is installed, you can install the dependencies and run the tests with the following command:
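A typical uv workflow looks like the following; the exact commands for this project may differ, and the use of pytest here is an assumption:

```shell
# Install the project's dependencies into a local virtual environment
uv sync

# Run the test suite (assumes the project uses pytest)
uv run pytest
```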
## Adding the MCP Server

### Cursor

- Open Cursor and go to settings.
- Click `Tools & Integrations`.
- Click `Add Custom MCP`.

**Note:** This will open your system-wide MCP settings (`$HOME/.cursor/mcp.json`). If you prefer to set this on a project-by-project basis, you can create a local configuration using `<path/to/project/root>/.cursor/mcp.json`.

- Add the following configuration:
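A minimal sketch of what the configuration might look like, assuming the server is launched via uv with a fastmcp entry point named `server.py` (the entry point name is an assumption; adjust the path to your clone):

```json
{
  "mcpServers": {
    "hpc-mcp": {
      "command": "uv",
      "args": ["run", "--directory", "<path/to>/hpc-mcp", "fastmcp", "run", "server.py"]
    }
  }
}
```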
### VSCode

- Open the command palette (Ctrl+Shift+P) and select `MCP: Add Server...`.
- Choose the option `command (stdio)`, since the server will be run locally.
- Type the command to run the MCP server.
- Select a reasonable name for the server, e.g. "HpcMcp" (camel case is a convention).
- Select whether to add the server locally or globally.
- You can tune the settings by opening `settings.json` (global settings) or `.vscode/settings.json` (workspace settings).
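The command entered in the third step might look like the following, assuming the server is run through uv with a fastmcp entry point named `server.py` (a hypothetical name; substitute your actual path and entry point):

```shell
# Hypothetical launch command for the MCP server over stdio
uv run --directory <path/to>/hpc-mcp fastmcp run server.py
```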
### Zed

- Open Zed and go to settings.
- Open general settings (`CTRL-ALT-C`).
- Under the section Model Context Protocol (MCP) Servers, click `Add Custom Server`.
- Add the following text (changing `<path/to>/hpc-mcp` to your actual path):
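Zed's MCP configuration format has varied across versions; a sketch of a `settings.json` entry, assuming the `context_servers` key and a fastmcp entry point named `server.py` (both assumptions):

```json
{
  "context_servers": {
    "hpc-mcp": {
      "command": {
        "path": "uv",
        "args": ["run", "--directory", "<path/to>/hpc-mcp", "fastmcp", "run", "server.py"]
      }
    }
  }
}
```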
## Test the MCP Server

Test the MCP server using our simple example:

- Open a terminal and `cd example/simple`.
- Build the example using `make`; this should generate `segfault.exe`.
- Type the following prompt into your IDE LLM agent.
- The agent should ask your permission to run the `debug_crash` MCP tool. Accept, and you should get a response like the following.
## Running local LLMs with Ollama

To run the `hpc-mcp` MCP tool with a local Ollama model, use the Zed text editor. It should automatically detect locally running Ollama models and make them available. As long as you have installed the `hpc-mcp` MCP server in Zed (see instructions here), it should be available to your models. For more info on Ollama integration with Zed, see Zed's documentation.
**Note:** Not all models support calling MCP tools. I managed to have success with `qwen3:latest`.
## Core Dependencies

- python
- uv
- fastmcp