qwen_max

Generate text content using the Qwen Max language model with configurable parameters for tailored responses.

Instructions

Generate text using Qwen Max model

Input Schema

| Name | Required | Description | Default |
|-------------|----------|------------------------------------------|---------|
| prompt | Yes | The text prompt to generate content from | (none) |
| max_tokens | No | Maximum number of tokens to generate | 8192 |
| temperature | No | Sampling temperature (0-2) | 0.7 |
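For illustration, a `tools/call` request carrying these parameters might look like the sketch below. The request `id` and the sample argument values are assumptions; in practice an MCP client library constructs this JSON-RPC envelope for you.

```typescript
// Illustrative JSON-RPC 2.0 request an MCP client would send to invoke qwen_max
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "qwen_max",
    arguments: {
      prompt: "Summarize the MCP specification in one paragraph.",
      max_tokens: 512,   // optional; the tool defaults to 8192
      temperature: 0.3   // optional; defaults to 0.7, must be in [0, 2]
    }
  }
};

console.log(JSON.stringify(request, null, 2));
```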

Implementation Reference

  • MCP CallToolRequestSchema handler that validates the tool name 'qwen_max', extracts arguments, calls the Qwen Max model through the OpenAI client, and returns the generated text.

```typescript
this.server.setRequestHandler(
  CallToolRequestSchema,
  async (request) => {
    if (request.params.name !== "qwen_max") {
      throw new McpError(
        ErrorCode.MethodNotFound,
        `Unknown tool: ${request.params.name}`
      );
    }
    const { prompt, max_tokens = 8192, temperature = 0.7 } =
      request.params.arguments as QwenMaxArgs;
    try {
      const completion = await this.openai.chat.completions.create({
        model: "qwen-max-latest",
        messages: [{ role: "user", content: prompt }],
        max_tokens,
        temperature
      });
      return {
        content: [{
          type: "text",
          text: completion.choices[0].message.content || ""
        }]
      };
    } catch (error: any) {
      console.error("Qwen API Error:", error);
      throw new McpError(
        ErrorCode.InternalError,
        `Qwen API error: ${error.message}`
      );
    }
  }
);
```
  • JSON schema for qwen_max tool inputs, defining prompt (required), max_tokens, and temperature parameters.

```typescript
inputSchema: {
  type: "object",
  properties: {
    prompt: {
      type: "string",
      description: "The text prompt to generate content from"
    },
    max_tokens: {
      type: "number",
      description: "Maximum number of tokens to generate",
      default: 8192
    },
    temperature: {
      type: "number",
      description: "Sampling temperature (0-2)",
      default: 0.7,
      minimum: 0,
      maximum: 2
    }
  },
  required: ["prompt"]
}
```
  • TypeScript interface defining the arguments for the qwen_max tool handler.

```typescript
interface QwenMaxArgs {
  prompt: string;
  max_tokens?: number;
  temperature?: number;
}
```
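The schema's constraints can be checked before a call with a small guard function. The `validateArgs` helper below is illustrative, not part of the server source; it mirrors the schema's requirements (prompt required, temperature within [0, 2]) and applies the documented defaults.

```typescript
interface QwenMaxArgs {
  prompt: string;
  max_tokens?: number;
  temperature?: number;
}

// Mirrors the input schema: prompt is required, temperature must be in [0, 2],
// and missing optionals fall back to the schema defaults (8192 and 0.7).
function validateArgs(args: Partial<QwenMaxArgs>): QwenMaxArgs {
  if (typeof args.prompt !== "string" || args.prompt.length === 0) {
    throw new Error("prompt is required and must be a non-empty string");
  }
  const temperature = args.temperature ?? 0.7;
  if (temperature < 0 || temperature > 2) {
    throw new Error("temperature must be between 0 and 2");
  }
  return {
    prompt: args.prompt,
    max_tokens: args.max_tokens ?? 8192,
    temperature
  };
}
```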
  • src/index.ts:59-89 (registration)
    MCP ListToolsRequestSchema handler that registers the qwen_max tool with its description and input schema.

```typescript
this.server.setRequestHandler(
  ListToolsRequestSchema,
  async () => ({
    tools: [{
      name: "qwen_max",
      description: "Generate text using Qwen Max model",
      inputSchema: {
        type: "object",
        properties: {
          prompt: {
            type: "string",
            description: "The text prompt to generate content from"
          },
          max_tokens: {
            type: "number",
            description: "Maximum number of tokens to generate",
            default: 8192
          },
          temperature: {
            type: "number",
            description: "Sampling temperature (0-2)",
            default: 0.7,
            minimum: 0,
            maximum: 2
          }
        },
        required: ["prompt"]
      }
    }]
  })
);
```
  • src/index.ts:33-36 (registration)
    Initialization of the MCP Server instance named 'qwen_max' with tool capabilities.

```typescript
this.server = new Server(
  { name: "qwen_max", version: "1.0.0" },
  { capabilities: { tools: {} } }
);
```
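The call-handling logic above can be exercised in isolation by swapping the OpenAI client for a stub. Everything below (the stub shape, the standalone `handleQwenMax` function, the canned reply) is an illustrative sketch, not part of the server source; it only demonstrates the argument defaulting and response extraction the handler performs.

```typescript
// Stub standing in for the OpenAI-compatible client the server uses
const stubClient = {
  chat: {
    completions: {
      create: async (_req: object) => ({
        choices: [{ message: { content: "stubbed reply" } }]
      })
    }
  }
};

// Mirrors the handler body: apply defaults, call the model, return the text
async function handleQwenMax(args: {
  prompt: string;
  max_tokens?: number;
  temperature?: number;
}): Promise<string> {
  const { prompt, max_tokens = 8192, temperature = 0.7 } = args;
  const completion = await stubClient.chat.completions.create({
    model: "qwen-max-latest",
    messages: [{ role: "user", content: prompt }],
    max_tokens,
    temperature
  });
  return completion.choices[0].message.content || "";
}

handleQwenMax({ prompt: "Hello" }).then((text) => console.log(text));
```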
MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/66julienmartin/MCP-server-Qwen_Max'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.