Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@HexagonML ModelManager MCP Server list all my deployed models".
That's it! The server will respond to your query, and you can continue using it as needed.
HexagonML ModelManager MCP Server
This is a HexagonML MCP server that provides a Model Context Protocol interface to HexagonML ModelManager tools.
Local Development
Prerequisites
Python: python3
Virtualenv: recommended (the project uses .venv in examples)
Environment variables
The server reads configuration from environment variables (and will also load a local .env file automatically).
SECRET_KEY: secret key for ModelManager API auth
MM_API_BASE_URL: base URL for the ModelManager API (example: http://localhost:8000)
OUTPUT_DIR: directory where generated HTML outputs are written
HOST (optional): defaults to 0.0.0.0
PORT (optional): defaults to 9000
Example .env:
SECRET_KEY=your-secret-key
MM_API_BASE_URL=http://localhost:8000
OUTPUT_DIR=./output
HOST=0.0.0.0
PORT=9000

Run the server
From the repo root:
python3 server/mm_mcp_server.py

To run with the FastMCP inspector (dev mode):
fastmcp dev server/mm_mcp_server.py

Troubleshooting
Port in use (FastMCP inspector): If you see Proxy Server PORT IS IN USE (commonly 6277), stop the previous inspector process and retry.
Missing env vars: The server will exit with a message listing the missing required variables.
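The missing-variable check can be sketched roughly as follows (a simplified illustration of the documented behavior, not the server's actual code):

```python
import os

# Required variables per the configuration section above.
REQUIRED_VARS = ["SECRET_KEY", "MM_API_BASE_URL", "OUTPUT_DIR"]

def missing_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# On startup, the server exits with a message listing these names if any are missing.
```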
Configuration for MCP integration on a host (e.g., Windsurf, VS Code, Claude Desktop)
Local Configuration
{
"mcpServers": {
"hex-mm-mcp": {
"command": "hex-mm-mcp/.venv/bin/mcp",
"args": ["run", "hex-mm-mcp/server/mm_mcp_server.py"]
}
}
}

Docker Configuration
For Dev (Using Local URL)
Run the ModelManager server bound to all interfaces (host 0.0.0.0, port 8000):

python manage.py runserver 0.0.0.0:8000

Get your system's IP addresses using the hostname -I command, e.g.:

192.168.10.75 172.17.0.1 2400:1a00:4b26:2af0:8f53:ede1:ec3a:c59b 2400:1a00:4b26:2af0:9139:c926:2fb5:6008

Use the first IP address from the list, e.g. 192.168.10.75.
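If you want to script that selection, a small helper (illustrative only) could be:

```python
def pick_first_ip(hostname_output: str) -> str:
    """Pick the first whitespace-separated address from `hostname -I` output."""
    parts = hostname_output.split()
    if not parts:
        raise ValueError("hostname -I returned no addresses")
    return parts[0]

# pick_first_ip("192.168.10.75 172.17.0.1") -> "192.168.10.75"
```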
Replace your-api-base-url with http://&lt;hostIP&gt;:8000:
{
"mcpServers": {
"hex-mm-mcp-docker": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"--network=host",
"-e",
"SECRET_KEY",
"-e",
"MM_API_BASE_URL",
"-e",
"OUTPUT_DIR",
"<image-name>:<tag>"
],
"env": {
"SECRET_KEY": "your-secret-key",
"MM_API_BASE_URL": "your-api-base-url",
"OUTPUT_DIR": "your-output-dir"
}
}
}
}

Docker Commands
Build Image
docker build --platform=linux/amd64 -t modelmanagerdev/mcp:version_id .
Run Container
docker run --platform=linux/amd64 -d --name mm-mcp -p 9000:9000 --env-file .env modelmanagerdev/mcp:v6

Model Insights Tools
The server exposes Model Insights endpoints via MCP tools.
Create Insight
Tool name: create_insight
Input: data (dict)
Backend: POST /api/mmanager-modelinsights/create_insight/
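The tool forwards the payload to the backend endpoint above. A rough stdlib sketch of building that request (the Authorization header name and Bearer scheme here are assumptions, not confirmed by this README):

```python
import json
import os
import urllib.request

def build_create_insight_request(data: dict) -> urllib.request.Request:
    """Build the POST request for the create_insight backend endpoint."""
    base_url = os.environ["MM_API_BASE_URL"].rstrip("/")
    url = base_url + "/api/mmanager-modelinsights/create_insight/"
    return urllib.request.Request(
        url,
        data=json.dumps(data).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Auth header and scheme assumed; check the ModelManager API docs.
            "Authorization": "Bearer " + os.environ["SECRET_KEY"],
        },
        method="POST",
    )

# To send: urllib.request.urlopen(build_create_insight_request({...}))
```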
Get Insights
Tool name: get_insight
Input: usecase_id (str)
Backend: GET /api/mmanager-modelinsights/get_insights/?usecase_id=...
Response shape:
If the backend returns a dict, the tool returns that dict.
If the backend returns a list, the tool returns { "data": [...] }.
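That response handling can be sketched as follows (an illustration of the stated behavior, not the server's actual code):

```python
def normalize_insights(payload):
    """Return dict responses unchanged; wrap list responses as {"data": [...]}."""
    if isinstance(payload, dict):
        return payload
    if isinstance(payload, list):
        return {"data": payload}
    raise TypeError("unexpected response type: " + type(payload).__name__)
```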