Provides AI-powered tools for Zig programming language development including code generation from natural language descriptions, code debugging and analysis, and explanations of language features and standard library documentation.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@followed by the MCP server name and your instructions, e.g., "@Zignetanalyze this Zig code for syntax errors"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
ZigNet
MCP Server for Zig β Intelligent code analysis, validation, and documentation powered by a fine-tuned LLM
ZigNet integrates with Claude (and other MCP-compatible LLMs) to provide real-time Zig code analysis without leaving your chat interface.
π― Features
MCP Tools
Analyze Zig code for syntax errors, type mismatches, and semantic issues using zig ast-check.
Example usage:
User: "Analyze this Zig code"
Claude: [calls analyze_zig tool]
Response: "β
Syntax: Valid | Type Check: PASS | Warnings: 0"Capabilities:
Lexical analysis (tokenization)
Syntax parsing (AST generation)
Type checking and validation
Semantic error detection
Line/column error reporting
Validate and format Zig code using zig fmt, generating clean, idiomatic output.
Example:
// Input (messy)
fn add(a:i32,b:i32)i32{return a+b;}
// Output (formatted)
fn add(a: i32, b: i32) i32 {
return a + b;
}Capabilities:
Code formatting (2-space indentation)
Syntax validation
Best practices enforcement
Preserves semantics
Retrieve Zig documentation and explanations for language features using a fine-tuned LLM.
Example:
Query: "comptime"
Response: "comptime enables compile-time evaluation in Zig..."Powered by:
Fine-tuned Qwen2.5-Coder-7B model
13,756 examples from Zig 0.13-0.15
Specialized on advanced Zig idioms (comptime, generics, error handling)
Get intelligent code fix suggestions for Zig errors using AI-powered analysis.
Example:
// Error: "Type mismatch: cannot assign string to i32"
var x: i32 = "hello";
// Suggestions:
// Option 1: var x: []const u8 = "hello"; // If you meant string
// Option 2: var x: i32 = 42; // If you meant integerFeatures:
Context-aware suggestions
Multiple fix options
Explanation of the issue
Zig idiom recommendations
π Usage
ZigNet is an MCP server β configure it once in your MCP client, then use it naturally in conversation.
Configuration file location:
macOS:
~/Library/Application Support/Claude/claude_desktop_config.jsonLinux:
~/.config/Claude/claude_desktop_config.jsonWindows:
%APPDATA%\Claude\claude_desktop_config.json
Add this:
{
"mcpServers": {
"zignet": {
"command": "npx",
"args": ["-y", "zignet"]
}
}
}Then restart Claude Desktop and start using:
You: "Analyze this Zig code for errors"
[paste code]
Claude: [uses analyze_zig tool]
"Found 1 type error: variable 'x' expects i32 but got []const u8"Method 1: VS Code Marketplace (coming soon)
Open VS Code Extensions (
Ctrl+Shift+X/Cmd+Shift+X)Search for
@mcp zignetClick Install
Restart VS Code
Method 2: Manual configuration (available now)
Install GitHub Copilot extension (if not already installed)
Open Copilot settings
Add to MCP servers config:
{
"mcpServers": {
"zignet": {
"command": "npx",
"args": ["-y", "zignet"]
}
}
}Then restart VS Code and Copilot will have access to ZigNet tools.
What happens after configuration?
First use:
npxdownloads and caches ZigNet automaticallyZig compiler: Downloads on-demand (supports Zig 0.13, 0.14, 0.15)
Tools available:
analyze_zig,compile_zig(+get_zig_docs,suggest_fixcoming soon)Zero maintenance: Updates automatically via
npx -y zignet
βοΈ Configuration
GPU Selection (Multi-GPU Systems)
If you have multiple GPUs (e.g., AMD + NVIDIA), you can control which GPU ZigNet uses via environment variables.
Windows (PowerShell):
$env:ZIGNET_GPU_DEVICE="0"
npx -y zignetmacOS/Linux:
export ZIGNET_GPU_DEVICE="0"
npx -y zignetVS Code MCP Configuration with GPU selection:
{
"mcpServers": {
"zignet": {
"command": "npx",
"args": ["-y", "zignet"],
"env": {
"ZIGNET_GPU_DEVICE": "0"
}
}
}
}Claude Desktop configuration with GPU selection:
macOS/Linux (~/.config/Claude/claude_desktop_config.json):
{
"mcpServers": {
"zignet": {
"command": "npx",
"args": ["-y", "zignet"],
"env": {
"ZIGNET_GPU_DEVICE": "0"
}
}
}
}Windows (%APPDATA%\Claude\claude_desktop_config.json):
{
"mcpServers": {
"zignet": {
"command": "npx",
"args": ["-y", "zignet"],
"env": {
"ZIGNET_GPU_DEVICE": "0"
}
}
}
}GPU Device Values:
"0"- Use first GPU only (e.g., RTX 4090)"1"- Use second GPU only"0,1"- Use both GPUsNot set - Use all available GPUs (default)
Identify your GPUs:
# NVIDIA GPUs
nvidia-smi
# Output shows GPU indices:
# GPU 0: NVIDIA RTX 4090
# GPU 1: AMD Radeon 6950XT (won't be used by CUDA anyway)Advanced Configuration
All configuration options can be set via environment variables:
Variable | Default | Description |
| auto | GPU device selection (CUDA_VISIBLE_DEVICES) |
| 35 | Number of model layers on GPU (0=CPU only) |
|
| Custom model path |
| true | Auto-download model from HuggingFace |
| 4096 | LLM context window size |
| 0.7 | LLM creativity (0.0-1.0) |
| 0.9 | LLM sampling parameter |
| 0.13.0,0.14.0,0.15.2 | Supported Zig versions |
| 0.15.2 | Default Zig version |
See .env.example for detailed examples.
ποΈ Architecture
βββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Claude / MCP Client β
ββββββββββββββββββββββ¬βββββββββββββββββββββββββββββββββ
β MCP Protocol (JSON-RPC)
ββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββ
β ZigNet MCP Server (TypeScript) β
β ββββββββββββββββββββββββββββββββββββββββββββββββ β
β β Tool Handlers β β
β β - analyze_zig β β
β β - compile_zig β β
β β - get_zig_docs β β
β β - suggest_fix β β
β βββββββββββββββ¬βββββββββββββββββββββββββββββββββ β
β βΌ β
β ββββββββββββββββββββββββββββββββββββββββββββββββ β
β β Zig Compiler Integration β β
β β - zig ast-check (syntax + type validation) β β
β β - zig fmt (official formatter) β β
β β - Auto-detects system Zig installation β β
β β - Falls back to downloading if needed β β
β βββββββββββββββ¬βββββββββββββββββββββββββββββββββ β
β βΌ β
β ββββββββββββββββββββββββββββββββββββββββββββββββ β
β β Fine-tuned LLM (Qwen2.5-Coder-7B) β β
β β - Documentation lookup β β
β β - Intelligent suggestions β β
β ββββββββββββββββββββββββββββββββββββββββββββββββ β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββWhy this architecture?
Official Zig compiler (100% accurate, always up-to-date) instead of custom parser
System integration (uses existing Zig installation if available)
LLM-powered suggestions (get_zig_docs, suggest_fix) for intelligence
No external API calls (local inference via node-llama-cpp)
Fast (< 100ms for validation, < 2s for LLM suggestions)
Note: When Zig releases a new version (e.g., 0.16.0), ZigNet will need to re-train the LLM model on updated documentation and examples.
π§ͺ Development Status
Component | Status | Notes |
Zig Compiler Wrapper | β Complete | ast-check + fmt integration |
System Zig Detection | β Complete | Auto-detects installed Zig versions |
Multi-version Cache | β Complete | Downloads Zig 0.13-0.15 on demand |
MCP Server | β Complete | All 4 tools fully implemented |
LLM Fine-tuning | β Complete | Trained on 13,756 Zig examples |
get_zig_docs | β Complete | LLM-powered documentation lookup |
suggest_fix | β Complete | LLM-powered intelligent suggestions |
GGUF Conversion | β Complete | Q4_K_M quantized (4.4GB) |
E2E Testing | β Complete | 27/27 tests passing (8.7s) |
Claude Integration | β³ Planned | Final deployment to Claude Desktop |
Current Phase: Ready for deployment - All core features complete
π§ͺ Testing
Running Tests
# Run all tests (unit + E2E)
pnpm test
# Run only E2E tests
pnpm test tests/e2e/mcp-integration.test.ts
# Run deterministic tests only (no LLM required)
SKIP_LLM_TESTS=1 pnpm test tests/e2e
# Watch mode for development
pnpm test:watchTest Coverage
E2E Test Suite: 27 tests covering all MCP tools
Tool | Tests | Type | Pass Rate |
analyze_zig | 4 | Deterministic | 100% |
compile_zig | 3 | Deterministic | 100% |
get_zig_docs | 5 | LLM-powered | 100% |
suggest_fix | 5 | LLM-powered | 100% |
Integration | 3 | Mixed | 100% |
Performance | 3 | Stress tests | 100% |
Edge Cases | 4 | Error paths | 100% |
Execution time: 8.7 seconds (without LLM model, deterministic only)
With LLM model: ~60-120 seconds (includes model loading + inference)
Test Behavior
Deterministic tests (12 tests): Always run, use Zig compiler directly
LLM tests (15 tests): Auto-skip if model not found, graceful degradation
CI/CD ready: Runs on GitHub Actions without GPU requirements
For detailed testing guide, see tests/e2e/README.md
π¦ Project Structure
zignet/
βββ src/
β βββ config.ts # Environment-based configuration
β βββ mcp-server.ts # MCP protocol handler
β βββ zig/
β β βββ manager.ts # Multi-version Zig download/cache
β β βββ executor.ts # zig ast-check + fmt wrapper
β βββ llm/
β β βββ model-downloader.ts # Auto-download GGUF from HuggingFace
β β βββ session.ts # node-llama-cpp integration
β βββ tools/
β βββ analyze.ts # analyze_zig tool (COMPLETE)
β βββ compile.ts # compile_zig tool (COMPLETE)
β βββ docs.ts # get_zig_docs tool (COMPLETE)
β βββ suggest.ts # suggest_fix tool (COMPLETE)
βββ scripts/
β βββ train-qwen-standard.py # Fine-tuning script (COMPLETE)
β βββ scrape-zig-repos.js # Dataset collection
β βββ install-zig.js # Zig version installer
β βββ test-config.cjs # Config system tests
βββ data/
β βββ training/ # 13,756 examples (train/val/test)
β βββ zig-docs/ # Scraped documentation
βββ models/
β βββ zignet-qwen-7b/ # Fine-tuned model + LoRA adapters
βββ tests/
β βββ *.test.ts # Unit tests (lexer, parser, etc.)
β βββ e2e/
β βββ mcp-integration.test.ts # 27 E2E tests
β βββ README.md # Testing guide
βββ docs/
β βββ AGENTS.md # Detailed project spec
β βββ DEVELOPMENT.md # Development guide
β βββ TESTING.md # Testing documentation
βββ README.md # This fileπ€ Model Details
Base Model: Qwen/Qwen2.5-Coder-7B-Instruct
Fine-tuning: QLoRA (4-bit) on 13,756 Zig examples
Dataset: 97% real-world repos (Zig 0.13-0.15), 3% documentation
Training: RTX 3090 (24GB VRAM), 3 epochs, ~8 hours
Output: fulgidus/zignet-qwen2.5-coder-7b (HuggingFace)
Quantization: Q4_K_M (~4GB GGUF for node-llama-cpp)
Why Qwen2.5-Coder-7B?
Best Zig syntax understanding (benchmarked vs 14 models)
Modern idioms (comptime, generics, error handling)
Fast inference (~15-20s per query post-quantization)
π Benchmarks
Model | Pass Rate | Avg Time | Quality | Notes |
Qwen2.5-Coder-7B | 100% | 29.58s | βββββ | SELECTED - Best idioms |
DeepSeek-Coder-6.7B | 100% | 27.86s | βββββ | Didactic, verbose |
Llama3.2-3B | 100% | 12.27s | ββββ | Good balance |
CodeLlama-7B | 100% | 24.61s | βββ | Confuses Zig/Rust |
Qwen2.5-Coder-0.5B | 100% | 3.94s | β | Invents syntax |
Full benchmarks: scripts/test-results/
π οΈ Development
# Run tests
pnpm test
# Run specific component tests
pnpm test -- lexer
pnpm test -- parser
pnpm test -- type-checker
# Watch mode
pnpm test:watch
# Linting
pnpm lint
pnpm lint:fix
# Build
pnpm buildπ€ Contributing
See AGENTS.md for detailed project specification and development phases.
Current needs:
Testing on diverse Zig codebases
Edge case discovery (parser/type-checker)
Performance optimization
Documentation improvements
π License
WTFPL v2 β Do What The Fuck You Want To Public License
π Links
Repository: https://github.com/fulgidus/zignet
Model (post-training): https://huggingface.co/fulgidus/zignet-qwen2.5-coder-7b
MCP Protocol: https://modelcontextprotocol.io
Zig Language: https://ziglang.org
Status: β Phase 4 Complete - Ready for deployment (fine-tuning complete, E2E tests passing)