Qwen Max MCP Server

qwen_max

Generates text with the Qwen Max language model, configurable via a prompt, a token limit, and a sampling temperature. Useful for producing structured outputs through MCP integration.

Instructions

Generate text using Qwen Max model

Input Schema

Name         Required  Description                               Default
prompt       Yes       The text prompt to generate content from  (none)
max_tokens   No        Maximum number of tokens to generate      8192
temperature  No        Sampling temperature (0-2)                0.7
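For reference, a call to this tool travels over MCP's JSON-RPC `tools/call` method. The sketch below builds such a request in Python; the `id` value and the prompt text are illustrative placeholders, not values from this server's documentation:

```python
import json

# Illustrative MCP tools/call request for the qwen_max tool.
# The id and prompt are placeholders; parameter defaults follow the schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "qwen_max",
        "arguments": {
            "prompt": "Write a haiku about autumn",  # required
            "max_tokens": 1024,                      # optional, default 8192
            "temperature": 0.7,                      # optional, 0-2, default 0.7
        },
    },
}
print(json.dumps(request, indent=2))
```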

Input Schema (JSON Schema)

{
  "type": "object",
  "required": ["prompt"],
  "properties": {
    "prompt": {
      "type": "string",
      "description": "The text prompt to generate content from"
    },
    "max_tokens": {
      "type": "number",
      "default": 8192,
      "description": "Maximum number of tokens to generate"
    },
    "temperature": {
      "type": "number",
      "default": 0.7,
      "minimum": 0,
      "maximum": 2,
      "description": "Sampling temperature (0-2)"
    }
  }
}
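A client can check arguments against this schema before sending a request. The following is a minimal hand-rolled sketch of that validation (the function name is hypothetical, and it covers only the constraints shown in the schema above):

```python
# Minimal client-side validation of qwen_max arguments, mirroring the
# JSON Schema above: prompt is required, temperature must lie in [0, 2],
# and defaults are filled in for omitted optional fields.
def validate_qwen_max_args(args: dict) -> dict:
    if not isinstance(args.get("prompt"), str):
        raise ValueError("'prompt' is required and must be a string")
    out = dict(args)
    out.setdefault("max_tokens", 8192)
    out.setdefault("temperature", 0.7)
    if not isinstance(out["max_tokens"], (int, float)):
        raise ValueError("'max_tokens' must be a number")
    temp = out["temperature"]
    if not isinstance(temp, (int, float)) or not 0 <= temp <= 2:
        raise ValueError("'temperature' must be a number between 0 and 2")
    return out

checked = validate_qwen_max_args({"prompt": "Write a haiku about autumn"})
print(checked["max_tokens"], checked["temperature"])  # defaults: 8192 0.7
```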

MCP directory API

All information about MCP servers is available through our MCP directory API. For example:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/66julienmartin/MCP-server-Qwen_Max'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.