# compute
Execute LLM requests by burning Shells; returns the AI-generated response along with the cost, calculated from the model and token usage.
## Instructions
Execute an LLM request by burning Shells. The cost is calculated based on the model and token usage. Returns the LLM response content and the cost in Shells.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | Model identifier (e.g., 'gemini-2.0-flash', 'gpt-4o', 'claude-3-5-sonnet-latest') | |
| messages | Yes | Conversation messages | |
| temperature | No | Sampling temperature (0–2) | |
| maxTokens | No | Maximum tokens to generate | |
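As a rough illustration of the schema above, the sketch below builds a `compute` input payload and estimates the Shell cost. The `build_request` and `estimate_cost` helpers and the per-token rates are hypothetical, invented for this example; actual cost calculation depends on the provider's pricing, which this document does not specify.

```python
# Hypothetical per-model rates in Shells per 1,000 tokens (assumed values,
# NOT the real pricing for this tool).
HYPOTHETICAL_RATES = {
    "gemini-2.0-flash": 0.1,
    "gpt-4o": 2.5,
    "claude-3-5-sonnet-latest": 3.0,
}

def build_request(model, messages, temperature=None, max_tokens=None):
    """Assemble an input payload matching the schema above.

    `model` and `messages` are required; `temperature` and `maxTokens`
    are optional and omitted from the payload when not given.
    """
    payload = {"model": model, "messages": messages}
    if temperature is not None:
        if not 0 <= temperature <= 2:
            raise ValueError("temperature must be between 0 and 2")
        payload["temperature"] = temperature
    if max_tokens is not None:
        payload["maxTokens"] = max_tokens
    return payload

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Estimate Shells burned, assuming a flat per-token rate per model."""
    rate = HYPOTHETICAL_RATES[model]
    return (prompt_tokens + completion_tokens) / 1000 * rate

request = build_request(
    "gemini-2.0-flash",
    [{"role": "user", "content": "Summarize this paragraph."}],
    temperature=0.7,
    max_tokens=256,
)
```

Omitting `temperature` and `maxTokens` from the payload (rather than sending `null`) keeps the request minimal, matching the table where both fields are marked not required.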