
execution-run-mcp (Official)

compute

Execute LLM requests by burning Shells; the cost of each AI-generated response is calculated from the model and token usage.

Instructions

Execute an LLM request by burning Shells. The cost is calculated based on the model and token usage. Returns the LLM response content and the cost in Shells.

Input Schema

Name         Required  Default  Description
model        Yes       -        Model identifier (e.g., 'gemini-2.0-flash', 'gpt-4o', 'claude-3-5-sonnet-latest')
messages     Yes       -        Conversation messages
temperature  No        -        Sampling temperature (0-2)
maxTokens    No        -        Maximum tokens to generate
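For illustration, a tools/call request to the compute tool could look like the sketch below. The method, name, and arguments fields follow the standard MCP JSON-RPC shape; the role/content message structure and all sample values are assumptions for the example, not taken from the server's schema.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compute",
    "arguments": {
      "model": "gemini-2.0-flash",
      "messages": [
        { "role": "user", "content": "Summarize the MCP protocol in one sentence." }
      ],
      "temperature": 0.7,
      "maxTokens": 256
    }
  }
}

Per the instructions above, a successful call returns the model's response content together with the cost in Shells.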


MCP directory API

We provide all the information about MCP servers via our MCP directory API:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/execution-run/execution-run-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.