This server provides a Model Context Protocol (MCP) interface for running Garak LLM vulnerability scans against various AI models.
Key Capabilities:

- **Discover Available Models**: List supported model types (Ollama, OpenAI, HuggingFace, GGML) and browse specific models within each platform
- **Browse Security Probes**: View all available Garak attack probes and vulnerability tests
- **Run Vulnerability Scans**: Execute security attacks against specified models using chosen probes to identify vulnerabilities
- **Retrieve Scan Reports**: Access the file path to the latest scan results for review
- **Automated Testing**: Integrate into development workflows through MCP-compatible tools like Claude Desktop and Cursor, with support for CLI operations and GitHub Actions for scheduled scanning
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Garak-MCP run an attack on llama2 using the encoding probe"
That's it! The server will respond to your query, and you can continue using it as needed.
MCP Server For Garak LLM Vulnerability Scanner
A lightweight MCP (Model Context Protocol) server for Garak.
Example:
https://github.com/user-attachments/assets/f6095d26-2b79-4ef7-a889-fd6be27bbbda
Tools Provided
Overview
| Name | Description |
| ---- | ----------- |
| `list_model_types` | List all available model types (ollama, openai, huggingface, ggml) |
| `list_models` | List all available models for a given model type |
| `list_garak_probes` | List all available Garak attacks/probes |
| `get_report` | Get the report of the last run |
| `run_attack` | Run an attack with a given model and probe |
Detailed Description
list_model_types
List all available model types that can be used for attacks
Returns a list of supported model types (ollama, openai, huggingface, ggml)
list_models
List all available models for a given model type
Input parameters:
`model_type` (string, required): The type of model to list (ollama, openai, huggingface, ggml)
Returns a list of available models for the specified type
list_garak_probes
List all available Garak attacks/probes
Returns a list of available probes/attacks that can be run
get_report
Get the report of the last run
Returns the path to the report file
run_attack
Run an attack with the given model and probe
Input parameters:
`model_type` (string, required): The type of model to use
`model_name` (string, required): The name of the model to use
`probe_name` (string, required): The name of the attack/probe to use
Returns a list of vulnerabilities found
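Under the hood, MCP hosts invoke tools with a JSON-RPC 2.0 `tools/call` request. As an illustrative sketch (your MCP host builds and sends this for you; the request `id` is arbitrary, and the tool/argument names mirror the parameters documented above), a `run_attack` call looks roughly like:

```python
import json

# Sketch of the JSON-RPC 2.0 message an MCP host sends to invoke run_attack.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_attack",
        "arguments": {
            "model_type": "ollama",
            "model_name": "llama2",
            "probe_name": "encoding",
        },
    },
}

print(json.dumps(request, indent=2))
```

The `arguments` object is where the three required parameters above are supplied; omitting any of them should cause the server to reject the call.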
Prerequisites
Python 3.11 or higher: This project requires Python 3.11 or newer.
```shell
# Check your Python version
python --version
```

Install uv: A fast Python package installer and resolver.

```shell
pip install uv
```

Or use Homebrew:

```shell
brew install uv
```

Optional: Ollama: If you want to run attacks on Ollama models, make sure the Ollama server is running.

```shell
ollama serve
```

Installation

Clone this repository:

```shell
git clone https://github.com/BIGdeadLock/Garak-MCP.git
```

Configure your MCP host (Claude Desktop, Cursor, etc.):
```json
{
  "mcpServers": {
    "garak-mcp": {
      "command": "uv",
      "args": ["--directory", "path-to/Garak-MCP", "run", "garak-server"],
      "env": {}
    }
  }
}
```
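A malformed config file is a common setup failure. As a quick sanity check (a minimal sketch; `check_config` is a hypothetical helper, and the file path is whatever your MCP host uses), you can verify that your edited config still parses and defines the server:

```python
import json
import tempfile

def check_config(path):
    """Return True if the file parses as JSON and defines a garak-mcp server."""
    with open(path) as f:
        cfg = json.load(f)  # raises json.JSONDecodeError if the edit broke the file
    return "garak-mcp" in cfg.get("mcpServers", {})

# Demo on a temporary copy of the snippet above:
snippet = '''{
  "mcpServers": {
    "garak-mcp": {
      "command": "uv",
      "args": ["--directory", "path-to/Garak-MCP", "run", "garak-server"],
      "env": {}
    }
  }
}'''
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(snippet)
    config_path = f.name
print(check_config(config_path))  # True
```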
Tested on:
Cursor
Claude Desktop
Running Vulnerability Scans
You can run Garak vulnerability scans directly using the included CLI tool.
Prerequisites for Scanning
Ollama must be running:

```shell
ollama serve
```

Pull a model to scan:

```shell
ollama pull llama2
```
Using the CLI Scanner
After installation, you can use the garak-scan command:
```shell
# List available Ollama models
uv run garak-scan --list-models

# Scan a specific model with all probes
uv run garak-scan --model llama2

# Scan with specific probes
uv run garak-scan --model llama2 --probes encoding

# Scan with custom output directory
uv run garak-scan --model llama2 --output-dir ./my_scans

# Run multiple parallel attempts
uv run garak-scan --model llama2 --parallel-attempts 4
```

Scan Results
Scan results are saved in the output/ directory (or your specified directory) as JSONL files. Each scan creates a timestamped report file:
`output/scan_llama2_20250125_143022.report.jsonl`

GitHub Actions Integration
This repository includes a GitHub Actions workflow that automatically runs vulnerability scans:
Triggers: Push to main/master, pull requests, weekly schedule (Mondays at 2am UTC)
Manual runs: Go to Actions → Garak LLM Vulnerability Scan → Run workflow
Custom options: Specify model and probes when running manually
Results: Scan results are uploaded as workflow artifacts
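The JSONL reports, whether produced locally or downloaded from workflow artifacts, contain one JSON object per line. A minimal sketch for summarizing a report (assuming each entry carries an `entry_type` field, which is the usual Garak report layout; adjust key names to your Garak version):

```python
import json
import tempfile
from collections import Counter

def summarize_report(path):
    """Tally report entries by their 'entry_type' field (assumed key name)."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            if line.strip():
                counts[json.loads(line)["entry_type"]] += 1
    return counts

# Demo on a tiny synthetic report (real Garak reports carry many more fields):
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"entry_type": "attempt"}\n{"entry_type": "eval"}\n')
    report_path = f.name
print(summarize_report(report_path))  # Counter({'attempt': 1, 'eval': 1})
```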
To enable automated scanning:
Ensure the workflow file exists at `.github/workflows/garak-scan.yml`
Push to your repository
Check the Actions tab to view scan results
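If the workflow file is missing, a minimal sketch of what `.github/workflows/garak-scan.yml` could contain follows. The trigger schedule mirrors the one described above (Mondays at 2am UTC); the job steps, model, and probe names are assumptions based on the CLI usage in this README, not the repository's actual workflow:

```yaml
name: Garak LLM Vulnerability Scan

on:
  push:
    branches: [main, master]
  pull_request:
  schedule:
    - cron: "0 2 * * 1"   # Mondays at 2am UTC
  workflow_dispatch:

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        run: pip install uv
      - name: Run scan (model and probes are illustrative)
        run: uv run garak-scan --model llama2 --probes encoding
      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: garak-scan-results
          path: output/
```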
Future Steps
Add support for Smithery AI: Docker and config
Improve Reporting
Test and validate OpenAI models (GPT-3.5, GPT-4)
Test and validate HuggingFace models
Test and validate local GGML models