# qwen_max
Generate text content using the Qwen Max language model with configurable parameters for tailored responses.
## Instructions

Generate text using the Qwen Max model.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The text prompt to generate content from | |
| max_tokens | No | Maximum number of tokens to generate | 8192 |
| temperature | No | Sampling temperature (0-2) | 0.7 |
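When the optional parameters are omitted, the handler fills in the defaults via destructuring. A minimal sketch of that behavior (the `withDefaults` helper is illustrative, not part of the source; the default values mirror the handler):

```typescript
interface QwenMaxArgs {
  prompt: string;       // required
  max_tokens?: number;  // handler default: 8192
  temperature?: number; // handler default: 0.7
}

// Mirrors the handler's destructuring-with-defaults pattern
function withDefaults(args: QwenMaxArgs): Required<QwenMaxArgs> {
  const { prompt, max_tokens = 8192, temperature = 0.7 } = args;
  return { prompt, max_tokens, temperature };
}

console.log(withDefaults({ prompt: "Hello" }));
// { prompt: 'Hello', max_tokens: 8192, temperature: 0.7 }
```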
## Implementation Reference
- **src/index.ts:93-126** (handler): MCP `CallToolRequestSchema` handler that validates the tool name `qwen_max`, extracts the arguments, calls the Qwen Max model through the OpenAI-compatible client, and returns the generated text.

  ```typescript
  async (request) => {
    if (request.params.name !== "qwen_max") {
      throw new McpError(
        ErrorCode.MethodNotFound,
        `Unknown tool: ${request.params.name}`
      );
    }

    const { prompt, max_tokens = 8192, temperature = 0.7 } =
      request.params.arguments as QwenMaxArgs;

    try {
      const completion = await this.openai.chat.completions.create({
        model: "qwen-max-latest",
        messages: [{ role: "user", content: prompt }],
        max_tokens,
        temperature
      });

      return {
        content: [{
          type: "text",
          text: completion.choices[0].message.content || ""
        }]
      };
    } catch (error: any) {
      console.error("Qwen API Error:", error);
      throw new McpError(
        ErrorCode.InternalError,
        `Qwen API error: ${error.message}`
      );
    }
  }
  );
  ```
- **src/index.ts:65-86** (schema): JSON schema for the `qwen_max` tool inputs, defining `prompt` (required), `max_tokens`, and `temperature`.

  ```typescript
  inputSchema: {
    type: "object",
    properties: {
      prompt: {
        type: "string",
        description: "The text prompt to generate content from"
      },
      max_tokens: {
        type: "number",
        description: "Maximum number of tokens to generate",
        default: 8192
      },
      temperature: {
        type: "number",
        description: "Sampling temperature (0-2)",
        default: 0.7,
        minimum: 0,
        maximum: 2
      }
    },
    required: ["prompt"]
  }
  ```
- **src/index.ts:22-26** (schema): TypeScript interface defining the arguments for the `qwen_max` tool handler.

  ```typescript
  interface QwenMaxArgs {
    prompt: string;
    max_tokens?: number;
    temperature?: number;
  }
  ```
- **src/index.ts:59-89** (registration): MCP `ListToolsRequestSchema` handler that registers the `qwen_max` tool with its description and input schema.

  ```typescript
  this.server.setRequestHandler(
    ListToolsRequestSchema,
    async () => ({
      tools: [{
        name: "qwen_max",
        description: "Generate text using Qwen Max model",
        inputSchema: {
          type: "object",
          properties: {
            prompt: {
              type: "string",
              description: "The text prompt to generate content from"
            },
            max_tokens: {
              type: "number",
              description: "Maximum number of tokens to generate",
              default: 8192
            },
            temperature: {
              type: "number",
              description: "Sampling temperature (0-2)",
              default: 0.7,
              minimum: 0,
              maximum: 2
            }
          },
          required: ["prompt"]
        }
      }]
    })
  );
  ```
- **src/index.ts:33-36** (registration): Initialization of the MCP `Server` instance named `qwen_max` with tool capabilities.

  ```typescript
  this.server = new Server(
    { name: "qwen_max", version: "1.0.0" },
    { capabilities: { tools: {} } }
  );
  ```
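Clients reach this handler through an MCP `tools/call` request. A sketch of what such a request payload might look like for this tool (the prompt text and parameter values here are purely illustrative):

```typescript
// Hypothetical MCP tools/call payload targeting qwen_max.
// Only `prompt` is required; omitted fields fall back to the schema defaults.
const callRequest = {
  method: "tools/call",
  params: {
    name: "qwen_max",
    arguments: {
      prompt: "Summarize the plot of Hamlet in two sentences.",
      max_tokens: 256,  // optional, schema default: 8192
      temperature: 0.3  // optional, schema default: 0.7
    }
  }
};

console.log(JSON.stringify(callRequest, null, 2));
```

The handler at `src/index.ts:93-126` rejects any `params.name` other than `qwen_max` with a `MethodNotFound` error before touching the arguments.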