keithah

Qwen3-Coder MCP Server

qwen3_code_review

Review code for quality and issues using AI analysis. Submit code with its programming language to receive feedback on improvements and potential problems.
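An illustrative request shows the expected argument shape. This is a sketch: the JSON-RPC envelope follows the MCP convention, and the argument values are invented, not taken from the server.

```javascript
// Hypothetical tools/call payload an MCP client might send for this
// tool. Only 'code' is required; 'language' is an optional hint.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "qwen3_code_review",
    arguments: {
      code: "function add(a, b) { return a + b }", // code under review (made up)
      language: "javascript"                       // optional language hint
    }
  }
};
console.log(request.params.name);
```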

Instructions

Review code using Qwen3-Coder

Input Schema

Name       Required   Description                        Default
code       Yes        The code to review                 —
language   No         Programming language of the code   —
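The required/optional split above can be checked mechanically. A minimal sketch — the validateArgs helper is hypothetical, not part of the server:

```javascript
// The tool's input schema, as published by the server.
const inputSchema = {
  type: "object",
  properties: {
    code: { type: "string", description: "The code to review" },
    language: { type: "string", description: "Programming language of the code" }
  },
  required: ["code"]
};

// Hypothetical helper: check only that every required key is a string.
function validateArgs(args, schema) {
  return schema.required.every((key) => typeof args[key] === "string");
}

console.log(validateArgs({ code: "x = 1", language: "python" }, inputSchema)); // true
console.log(validateArgs({ language: "python" }, inputSchema)); // false: 'code' missing
```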

Implementation Reference

  • The handler logic for the 'qwen3_code_review' tool. It constructs a prompt with the provided code and language, then calls the shared callQwen3Coder function to get the AI response.
          case "qwen3_code_review":
            prompt = `Please review the following ${args.language || 'code'} and provide feedback on code quality, potential bugs, best practices, and suggestions for improvement:
    
    \`\`\`${args.language || ''}
    ${args.code}
    \`\`\`
    
    Please provide a detailed code review with specific suggestions.`;
            result = await callQwen3Coder(prompt);
            break;
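The fallbacks in that prompt template can be seen in isolation. A sketch, where buildReviewPrompt is a hypothetical extraction of the handler's string construction, not a function in the server:

```javascript
// Hypothetical extraction of the handler's prompt construction, showing
// the `args.language || 'code'` and `args.language || ''` fallbacks.
function buildReviewPrompt(args) {
  return `Please review the following ${args.language || 'code'} and provide feedback on code quality, potential bugs, best practices, and suggestions for improvement:

\`\`\`${args.language || ''}
${args.code}
\`\`\`

Please provide a detailed code review with specific suggestions.`;
}

// With no language, the prompt falls back to the generic word "code"
// and an unlabelled code fence.
const prompt = buildReviewPrompt({ code: "x = 1" });
console.log(prompt.startsWith("Please review the following code")); // true
```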
  • The input schema defining the parameters for the qwen3_code_review tool: requires 'code' string, optional 'language' string.
    inputSchema: {
      type: "object",
      properties: {
        code: {
          type: "string",
          description: "The code to review"
        },
        language: {
          type: "string",
          description: "Programming language of the code"
        }
      },
      required: ["code"]
    }
  • Registration of the qwen3_code_review tool in the ListTools response, including name, description, and schema.
    {
      name: "qwen3_code_review",
      description: "Review code using Qwen3-Coder",
      inputSchema: {
        type: "object",
        properties: {
          code: {
            type: "string",
            description: "The code to review"
          },
          language: {
            type: "string",
            description: "Programming language of the code"
          }
        },
        required: ["code"]
      }
    },
  • Shared helper function that spawns Ollama process to run qwen3-coder:30b model with the given prompt and returns the output.
    async function callQwen3Coder(prompt, options = {}) {
      return new Promise((resolve, reject) => {
        const ollamaProcess = spawn('ollama', ['run', 'qwen3-coder:30b', prompt], {
          stdio: ['pipe', 'pipe', 'pipe']
        });

        let output = '';
        let error = '';

        ollamaProcess.stdout.on('data', (data) => {
          output += data.toString();
        });

        ollamaProcess.stderr.on('data', (data) => {
          error += data.toString();
        });

        // Set timeout for long-running requests
        const timer = setTimeout(() => {
          ollamaProcess.kill();
          reject(new Error('Request timeout'));
        }, options.timeout || 120000); // 2 minutes default timeout

        ollamaProcess.on('close', (code) => {
          clearTimeout(timer); // cancel the pending kill once the process exits
          if (code === 0) {
            resolve(output.trim());
          } else {
            reject(new Error(`Ollama process exited with code ${code}: ${error}`));
          }
        });
      });
    }
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'using Qwen3-Coder' but doesn't disclose behavioral traits like whether it's read-only, what the output format is, potential rate limits, or any side effects. This leaves the agent uncertain about the tool's operation and results.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words, making it appropriately sized. However, it's not front-loaded with critical details like purpose differentiation or usage context, which slightly reduces its effectiveness despite the brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a code review tool with no annotations and no output schema, the description is incomplete. It fails to explain what the review entails, what the output might look like, or how it differs from sibling tools, leaving significant gaps for an AI agent to understand and use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already documents both parameters ('code' and 'language') adequately. The description doesn't add any meaning beyond this, such as examples or constraints, but it doesn't need to compensate as the schema provides sufficient baseline information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Review code using Qwen3-Coder' states the action (review) and resource (code) but is vague about what 'review' entails. It doesn't differentiate from siblings like 'explain', 'fix', 'optimize', or 'generate', leaving the specific purpose unclear beyond a generic code analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like qwen3_code_explain or qwen3_code_fix. The description lacks context about scenarios where a 'review' is appropriate, such as for quality assessment, bug detection, or style checking, leaving the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
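One way to act on these critiques is sketched below. The revised description is hypothetical wording, not from the server; it folds in the behavior disclosure, output format, timeout, and usage guidance the review found missing, all grounded in the implementation shown above.

```javascript
// Hypothetical revised registration addressing the review's findings.
const revisedTool = {
  name: "qwen3_code_review",
  description:
    "Review code for bugs, style issues, and best-practice violations using the " +
    "local qwen3-coder:30b model via Ollama. Read-only: never modifies the code. " +
    "Returns free-form text feedback with specific suggestions. Use for quality " +
    "assessment of existing code; prefer a fix or generate tool when you need " +
    "changed code. Requests time out after 2 minutes.",
  inputSchema: {
    type: "object",
    properties: {
      code: { type: "string", description: "The source code to review" },
      language: {
        type: "string",
        description: "Programming language, e.g. 'python'; omit to let the model infer it"
      }
    },
    required: ["code"]
  }
};
console.log(revisedTool.description.includes("Read-only")); // true
```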


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/keithah/qwen3-coder-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.