Qwen3-Coder MCP Server

by keithah
start-qwen3-optimized.sh (2.22 kB)
#!/bin/bash
# Qwen3-Coder optimized startup script for 64GB RAM Mac
# This script optimizes Ollama settings for maximum performance

echo "Starting Qwen3-Coder with optimizations for 64GB RAM..."

# Kill any existing Ollama processes
pkill -f ollama

# Set environment variables for optimal performance on 64GB RAM
export OLLAMA_NUM_PARALLEL=8          # Increased parallel requests for more RAM
export OLLAMA_MAX_LOADED_MODELS=4     # Can keep more models in memory
export OLLAMA_FLASH_ATTENTION=1       # Enable flash attention for efficiency
export OLLAMA_KV_CACHE_TYPE=q8_0      # High quality KV cache
export OLLAMA_MAX_VRAM=0              # Use system RAM instead of GPU VRAM
export OLLAMA_HOST=0.0.0.0:11434      # Allow external connections
export OLLAMA_KEEP_ALIVE=24h          # Keep models loaded for 24 hours
export OLLAMA_RUNNERS_DIR=/tmp/ollama # Use fast temp directory for runners

# Create runners directory
mkdir -p /tmp/ollama

echo "Environment variables set:"
echo "  OLLAMA_NUM_PARALLEL=$OLLAMA_NUM_PARALLEL"
echo "  OLLAMA_MAX_LOADED_MODELS=$OLLAMA_MAX_LOADED_MODELS"
echo "  OLLAMA_FLASH_ATTENTION=$OLLAMA_FLASH_ATTENTION"
echo "  OLLAMA_KV_CACHE_TYPE=$OLLAMA_KV_CACHE_TYPE"
echo "  OLLAMA_KEEP_ALIVE=$OLLAMA_KEEP_ALIVE"

# Start Ollama server in background
echo "Starting Ollama server..."
ollama serve &
OLLAMA_PID=$!

# Wait for server to start
sleep 5

# Pre-load Qwen3-Coder model
echo "Pre-loading Qwen3-Coder model..."
ollama run qwen3-coder:30b "Ready to code!" > /dev/null 2>&1 &

echo "✅ Qwen3-Coder setup complete!"
echo "Server PID: $OLLAMA_PID"
echo "Model: qwen3-coder:30b"
echo "Host: http://localhost:11434"
echo ""
echo "To use with Claude Code:"
echo "1. Restart Claude Code to load the MCP server"
echo "2. Use the qwen3_* tools in your conversations"
echo ""
echo "Available tools:"
echo "  - qwen3_code_review: Review code quality"
echo "  - qwen3_code_explain: Explain code functionality"
echo "  - qwen3_code_generate: Generate new code"
echo "  - qwen3_code_fix: Fix bugs in code"
echo "  - qwen3_code_optimize: Optimize code performance"
echo ""
echo "Press Ctrl+C to stop the server"

# Wait for user interrupt
wait $OLLAMA_PID
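Before pointing Claude Code at the MCP server, it can help to confirm that the Ollama instance started by the script is actually reachable and that qwen3-coder:30b responds. The snippet below is a minimal sketch using Ollama's standard HTTP API (the /api/tags and /api/generate endpoints); the host and model name come from the script above, everything else is illustrative.

#!/bin/bash
# Minimal health-check sketch for the Ollama instance started above.
# Assumes Ollama's standard HTTP API on the host/port set in the script.

HOST="http://localhost:11434"
MODEL="qwen3-coder:30b"

# 1. Is the server up? /api/tags lists locally available models.
if ! curl -sf "$HOST/api/tags" > /dev/null; then
    echo "❌ Ollama server is not responding at $HOST"
    exit 1
fi
echo "✅ Ollama server is up"

# 2. Is the model present locally?
if ! curl -sf "$HOST/api/tags" | grep -q "$MODEL"; then
    echo "⚠️  $MODEL not found locally; run: ollama pull $MODEL"
    exit 1
fi

# 3. Does the model answer? Send a tiny non-streaming generate request.
curl -sf "$HOST/api/generate" \
    -d "{\"model\": \"$MODEL\", \"prompt\": \"Say OK\", \"stream\": false}" \
    | grep -q '"response"' && echo "✅ $MODEL responded"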

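The script's closing instructions assume the MCP server has already been registered with Claude Code. As a hedged illustration only: Claude Code can register stdio MCP servers via its claude mcp add command, but the entry point shown below is a hypothetical placeholder, since the actual launch command for this repository isn't shown on this page.

# Hypothetical registration sketch; the real entry point for
# keithah/qwen3-coder-mcp may differ (check the repository's README).
claude mcp add qwen3-coder -- node /path/to/qwen3-coder-mcp/index.js

# Verify the server is listed, then restart Claude Code as the script suggests.
claude mcp list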
MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/keithah/qwen3-coder-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.