sampleLLM
Generates a text response from a language model by sending a prompt and a token limit through the MCP sampling integration.
Instructions
Demonstrates the MCP sampling capability: the tool requests that the MCP client sample from an LLM on its behalf and returns the result.

Args:
- prompt: The prompt to send to the LLM
- maxTokens: Maximum number of tokens to generate (default: 100)
Returns: The generated LLM response text
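A minimal sketch of how a tool like this can be implemented with the official Python MCP SDK's FastMCP. The parameter names mirror this page's schema; the server name and overall setup are assumptions for illustration, not taken from this page.

```python
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP("demo")  # assumed server name

@mcp.tool()
async def sampleLLM(prompt: str, maxTokens: int = 100, ctx: Context = None) -> str:
    """Ask the MCP client to sample from an LLM on this tool's behalf."""
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=prompt),
            )
        ],
        max_tokens=maxTokens,
    )
    # The client answers with a CreateMessageResult; keep only text content.
    if isinstance(result.content, TextContent):
        return result.content.text
    return str(result.content)
```

The key point is that the server never talks to a model directly: `ctx.session.create_message` sends a sampling request back to the connected MCP client, which chooses and invokes the LLM.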
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The prompt to send to the LLM | |
| maxTokens | No | Maximum number of tokens to generate | 100 |
| ctx | No | MCP request context, injected automatically by the server (not supplied by callers) | |
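For the caller's side, a sketch of invoking the tool over stdio with the Python MCP SDK's client session. The server command and the stub `sampling_callback` are assumptions; a client must register some sampling callback, or the server's sampling request cannot be fulfilled.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.shared.context import RequestContext
from mcp.types import CreateMessageRequestParams, CreateMessageResult, TextContent

async def sampling_callback(
    context: RequestContext, params: CreateMessageRequestParams
) -> CreateMessageResult:
    # The client fulfills the server's sampling request here; a real client
    # would forward params.messages to its own LLM instead of this stub.
    return CreateMessageResult(
        role="assistant",
        content=TextContent(type="text", text="Stubbed LLM reply."),
        model="stub-model",
        stopReason="endTurn",
    )

async def main() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])  # assumed launch command
    async with stdio_client(params) as (read, write):
        async with ClientSession(
            read, write, sampling_callback=sampling_callback
        ) as session:
            await session.initialize()
            result = await session.call_tool(
                "sampleLLM", {"prompt": "Write a haiku.", "maxTokens": 50}
            )
            print(result.content)

asyncio.run(main())
```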