Glama

saptiva_chat

Send chat requests to AI models for generating responses, reasoning, and tool-compatible outputs using customizable parameters.

Instructions

Send a chat completion request to Saptiva AI models. Supports multiple models, including Saptiva Turbo (fast), Cortex (reasoning), Legacy (tool-compatible), and more.

Input Schema

| Name        | Required | Description                                                                            | Default       |
|-------------|----------|----------------------------------------------------------------------------------------|---------------|
| model       | No       | Model to use. Options: Saptiva Turbo, Saptiva Cortex, Saptiva Ops, Saptiva Legacy, Saptiva KAL | Saptiva Turbo |
| messages    | Yes      | Array of message objects with role and content                                          | —             |
| max_tokens  | No       | Maximum tokens to generate                                                              | —             |
| temperature | No       | Sampling temperature (0.0 to 1.0)                                                       | —             |
| top_p       | No       | Top-p sampling parameter (0.0 to 1.0)                                                   | —             |
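
As a sketch, a client might assemble the arguments for a saptiva_chat call like this. The helper function, its defaults, and the validation rules are assumptions for illustration; only the parameter names and ranges come from the schema above, and the exact wire format depends on the MCP client.

```python
# Illustrative sketch only: build an arguments dict for the saptiva_chat
# tool from the parameters in the schema above. Validation rules and the
# helper itself are assumptions, not part of the documented API.

def build_chat_args(messages, model="Saptiva Turbo",
                    max_tokens=None, temperature=None, top_p=None):
    """Assemble a saptiva_chat argument dict, checking parameter ranges."""
    if not messages:
        raise ValueError("messages is required and must be non-empty")
    for m in messages:
        if "role" not in m or "content" not in m:
            raise ValueError("each message needs 'role' and 'content'")
    args = {"model": model, "messages": messages}
    if max_tokens is not None:
        args["max_tokens"] = int(max_tokens)
    if temperature is not None:
        if not 0.0 <= temperature <= 1.0:
            raise ValueError("temperature must be in [0.0, 1.0]")
        args["temperature"] = temperature
    if top_p is not None:
        if not 0.0 <= top_p <= 1.0:
            raise ValueError("top_p must be in [0.0, 1.0]")
        args["top_p"] = top_p
    return args

# Example: a minimal request using the default (fast) model
example = build_chat_args(
    [{"role": "user", "content": "Summarize MCP in one sentence."}],
    temperature=0.3,
)
```

Only messages is required; omitted optional parameters are left out of the payload so the server can apply its own defaults.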

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/LaraArias/MCP-Saptiva'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.