Chat with LLM
atlas_chat
Send chat completion requests to LLM models using the Atlas Cloud API. Configure models, messages, and parameters to generate AI responses for various applications.
Instructions
Send a chat completion request to an LLM model via Atlas Cloud API (OpenAI-compatible format).
Args:
model (string, required): The LLM model ID (e.g., "deepseek-ai/deepseek-v3.2", "qwen/qwen3-32b")
messages (array, required): Array of message objects with "role" and "content" fields. Roles: "system", "user", "assistant"
temperature (number, optional): Sampling temperature, 0-2. Default: 1
max_tokens (number, optional): Maximum tokens in the response
top_p (number, optional): Nucleus sampling parameter, 0-1. Default: 1
Returns: The LLM response including the generated message, token usage, and finish reason.
Examples:
model="deepseek-ai/deepseek-v3.2", messages=[{"role": "user", "content": "Hello"}]
model="qwen/qwen3-32b", messages=[{"role": "system", "content": "You are a helpful assistant"}, {"role": "user", "content": "Explain quantum computing"}], temperature=0.7
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | LLM model ID | |
| messages | Yes | Array of chat messages | |
| temperature | No | Sampling temperature, 0-2 | 1 |
| max_tokens | No | Maximum tokens in the response | |
| top_p | No | Nucleus sampling parameter, 0-1 | 1 |
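For reference, the schema fields map one-to-one onto the JSON body of an OpenAI-compatible chat completions request. The sketch below shows that mapping with a raw HTTP call; the URL path and auth header layout are assumptions based on the OpenAI-compatible format, not Atlas Cloud documentation.

```python
# Sketch of the raw HTTP form of the request; the URL and Authorization
# header are assumptions based on the OpenAI-compatible format.
import os
import requests

payload = {
    "model": "deepseek-ai/deepseek-v3.2",  # required
    "messages": [                          # required
        {"role": "user", "content": "Hello"},
    ],
    "temperature": 1,   # optional, 0-2
    "max_tokens": 256,  # optional
    "top_p": 1,         # optional, 0-1
}

resp = requests.post(
    "https://api.atlascloud.example/v1/chat/completions",  # hypothetical URL
    headers={"Authorization": f"Bearer {os.environ['ATLAS_API_KEY']}"},
    json=payload,
    timeout=60,
)
data = resp.json()
print(data["choices"][0]["message"]["content"])  # generated message
print(data["choices"][0]["finish_reason"])       # finish reason
print(data["usage"])                             # token usage
```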