Qwen MCP Tool
Model Context Protocol server for Qwen CLI integration. This tool enables AI assistants like Claude to leverage Qwen's powerful code analysis and large context window capabilities through the MCP protocol.
Features
Large Context Windows: Leverage Qwen's massive token capacity for analyzing large files and entire codebases
File Analysis: Use @filename or @directory syntax to include file contents in your queries
Sandbox Mode: Safely execute code and run tests in isolated environments
Multiple Models: Support for various Qwen models (qwen3-coder-plus, qwen3-coder-turbo, etc.)
Flexible Approval Modes: Control tool execution with plan/default/auto-edit/yolo modes
MCP Protocol: Seamless integration with MCP-compatible AI assistants
Prerequisites
Node.js v16 or higher
Qwen CLI installed and configured (qwen-code)
Installation
Quick Setup (Recommended)
Use Claude Code's built-in MCP installer:
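A minimal sketch using the `claude mcp add` command; the server name and npm package name here are assumptions, so substitute the actual published package:

```shell
# Register the server with Claude Code via npx
# (qwen-mcp-tool is an assumed placeholder for the published package)
claude mcp add qwen -- npx -y qwen-mcp-tool
```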
This single command configures everything automatically!
Via Global Install
Install via npm:
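For example, assuming the package is published under the name `qwen-mcp-tool`:

```shell
# Package name is an assumed placeholder
npm install -g qwen-mcp-tool
```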
Then add to Claude Code MCP settings (~/.config/claude/mcp_settings.json):
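A minimal configuration sketch, assuming the standard `mcpServers` settings format and that the package installs a `qwen-mcp-tool` binary:

```json
{
  "mcpServers": {
    "qwen": {
      "command": "qwen-mcp-tool"
    }
  }
}
```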
Via npx (Manual Configuration)
Manually configure to use npx without installing:
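A configuration sketch for the npx route (package name assumed), again using the standard `mcpServers` format:

```json
{
  "mcpServers": {
    "qwen": {
      "command": "npx",
      "args": ["-y", "qwen-mcp-tool"]
    }
  }
}
```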
From Source (Development)
Clone and install dependencies:
Build the project:
Link locally:
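The three steps above might look like the following; the repository URL is a placeholder, so use the project's actual repo:

```shell
# Clone and install dependencies (URL is a placeholder)
git clone https://github.com/<owner>/qwen-mcp-tool.git
cd qwen-mcp-tool
npm install

# Build the project
npm run build

# Link locally so the binary is available on your PATH
npm link
```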
Available Tools
ask-qwen
The main tool for interacting with Qwen AI.
Parameters:
prompt (required): Your question or instruction
  Use @filename to include a file's contents
  Use @directory to include all files in a directory
model (optional): Model to use (qwen3-coder-plus, qwen3-coder-turbo, etc.)
sandbox (optional): Enable sandbox mode for safe code execution
approvalMode (optional): Control tool execution approval
  plan: Analyze tool calls without executing
  default: Prompt for approval (default behavior)
  auto-edit: Auto-approve file edits
  yolo: Auto-approve all tool calls
yolo (optional): Shortcut for approvalMode='yolo'
allFiles (optional): Include all files in the current directory as context
debug (optional): Enable debug mode
Examples:
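As an illustrative sketch, the arguments of one ask-qwen call might look like this (the file path is hypothetical):

```json
{
  "prompt": "Explain what @src/server.ts does and suggest improvements",
  "model": "qwen3-coder-plus",
  "sandbox": true,
  "approvalMode": "plan"
}
```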
ping
Simple echo test to verify the connection.
Parameters:
prompt(optional): Message to echo (defaults to "Pong!")
Help
Display Qwen CLI help information.
Parameters: None
Configuration
The tool uses the following default models:
Primary: qwen3-coder-plus
Fallback: qwen3-coder-turbo (used if primary hits quota limits)
You can override these by specifying the model parameter in your requests.
Usage with Claude Code
Once installed as an MCP server, you can use it within Claude Code:
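For example, you could type natural-language requests like these in a Claude Code session (file paths hypothetical):

```
use qwen to explain what @src/server.ts does
ask qwen to review @src/ with sandbox mode enabled
```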
Claude will automatically use the ask-qwen tool with the appropriate parameters.
Project Structure
How It Works
The MCP server listens for tool calls via stdio transport
When a tool is called, the server validates the arguments using Zod schemas
For ask-qwen, the prompt is passed to the Qwen CLI with appropriate flags
File references (@filename) are handled by Qwen's built-in file processing
Output is captured and returned to the MCP client
If quota limits are hit, the server automatically falls back to the turbo model
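The quota fallback described above can be sketched as follows. This is a hypothetical illustration, not the server's actual implementation: `Runner` stands in for the real CLI invocation, and matching on the word "quota" in the error message is an assumption.

```typescript
// Sketch of falling back from the primary to the turbo model on quota errors.
type Runner = (prompt: string, model: string) => Promise<string>;

const PRIMARY = "qwen3-coder-plus";
const FALLBACK = "qwen3-coder-turbo";

async function askQwen(
  runner: Runner,
  prompt: string,
  model: string = PRIMARY
): Promise<string> {
  try {
    return await runner(prompt, model);
  } catch (err) {
    // If the primary model reports a quota error, retry once on the turbo model.
    if (model === PRIMARY && String(err).toLowerCase().includes("quota")) {
      return runner(prompt, FALLBACK);
    }
    throw err;
  }
}
```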
Comparison with Gemini MCP Tool
This tool is inspired by gemini-mcp-tool but adapted for Qwen CLI:
Feature | Gemini MCP | Qwen MCP
--- | --- | ---
File references | ✅ | ✅ (more advanced)
Sandbox mode | ✅ | ✅
Multiple models | ✅ | ✅
Approval modes | ❌ | ✅
Directory traversal | Basic | Advanced (git-aware)
Multimodal support | Limited | Images, PDFs, audio, video
Troubleshooting
"Qwen CLI not found"
Make sure the Qwen CLI is installed and available in your PATH:
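For example (the npm package name for the Qwen Code CLI is assumed; check the qwen-code project for the current one):

```shell
# Verify the CLI is on your PATH
which qwen
qwen --version

# If missing, install the Qwen Code CLI (assumed package name)
npm install -g @qwen-code/qwen-code
```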
"Command timed out"
For very large files or codebases, the analysis may take longer than the default 10-minute timeout. Consider:
Using .qwenignore to exclude unnecessary files
Breaking down large queries into smaller chunks
Using approvalMode: "plan" to analyze without executing
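A sample .qwenignore, assuming it uses gitignore-style patterns:

```
# Exclude bulky or generated content from analysis
node_modules/
dist/
*.min.js
```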
"Invalid tool arguments"
Check that your arguments match the tool schema. Use the Help tool to see available options.
License
MIT
Contributing
Contributions are welcome! Please feel free to submit issues or pull requests.
Credits
Inspired by gemini-mcp-tool by jamubc. Built for use with Qwen Code.