sampleLLM

Generate text responses from a language model by providing a prompt and a token limit, using the MCP sampling feature.

Instructions

Demonstrates the MCP sampling feature: the tool asks the MCP client to sample from an LLM on its behalf.

Args:
  prompt: The prompt to send to the LLM
  maxTokens: Maximum number of tokens to generate (default: 100)

Returns: The generated LLM response text
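For context, the sketch below shows roughly how a tool like this can be written with the MCP Python SDK's FastMCP API, where the sampling request is sent to the client via ctx.session.create_message. This is a minimal, hedged reconstruction under those assumptions; the server name and function body are illustrative, not this server's actual source.

```python
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

# Hypothetical server name; the real everything-server may differ.
mcp = FastMCP("everything-server")

@mcp.tool()
async def sampleLLM(prompt: str, maxTokens: int = 100, ctx: Context = None) -> str:
    """Ask the MCP client to sample from an LLM on this tool's behalf."""
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=prompt),
            )
        ],
        max_tokens=maxTokens,
    )
    # The client returns the sampled message; pass its text back through.
    return result.content.text if result.content.type == "text" else ""

if __name__ == "__main__":
    mcp.run()
```

Because the sampling request is delegated to the client, the server itself needs no model credentials; the connected client chooses and invokes the LLM.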

Input Schema

| Name      | Required | Description                           | Default |
|-----------|----------|---------------------------------------|---------|
| prompt    | Yes      | The prompt to send to the LLM         |         |
| maxTokens | No       | Maximum number of tokens to generate  | 100     |
| ctx       | No       |                                       |         |
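To illustrate the schema in use, here is a sketch of calling the tool from a Python MCP client over stdio. The launch command, prompt, and the stub sampling callback are placeholder assumptions; only prompt is required, and maxTokens falls back to 100 when omitted.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.shared.context import RequestContext
from mcp.types import CreateMessageRequestParams, CreateMessageResult, TextContent

async def handle_sampling(
    context: RequestContext, params: CreateMessageRequestParams
) -> CreateMessageResult:
    # A real client would forward params.messages to an actual LLM; this stub
    # returns a canned reply so the example is self-contained.
    return CreateMessageResult(
        role="assistant",
        content=TextContent(type="text", text="Stub LLM reply"),
        model="stub-model",
    )

async def main() -> None:
    # Placeholder launch command; start the server however it is packaged.
    params = StdioServerParameters(command="uv", args=["run", "everything-server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write, sampling_callback=handle_sampling) as session:
            await session.initialize()
            # Only "prompt" is required; "maxTokens" defaults to 100 if omitted.
            result = await session.call_tool(
                "sampleLLM",
                arguments={"prompt": "Write a haiku about MCP.", "maxTokens": 60},
            )
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```

The client must register a sampling callback; without one, the server's create_message request would be rejected and the tool call would fail.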

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/kcbabo/everything-server'
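The endpoint returns JSON describing the server. As a hedged sketch, the response could be consumed from Python like this; the name and description fields read below are assumptions about the payload shape, not documented guarantees.

```python
import json
import urllib.request

URL = "https://glama.ai/api/mcp/v1/servers/kcbabo/everything-server"

# Fetch the server record and parse the JSON body.
with urllib.request.urlopen(URL) as resp:
    server = json.load(resp)

# Field names are assumptions; inspect the full payload for the actual shape.
print(server.get("name"), "-", server.get("description"))
```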

If you have feedback or need assistance with the MCP directory API, please join our Discord server.