chat_completion
Send chat completion requests to AI models with configurable parameters such as temperature and max_tokens. Returns the raw API response without format conversion, suitable for enterprise AI integration.
Instructions
Sends a chat completion request to the configured AI API provider (ANTHROPIC). Supports parameters such as model, messages, temperature, max_tokens, and stream. Returns the raw response from the API without format conversion.
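A minimal request body under the input schema below might look like the following sketch. The payload shape comes from the schema; the mechanism for actually invoking the tool is assumed and not shown.

```python
# Minimal chat_completion request body. Only "messages" is required;
# the other fields are shown with their documented defaults.
request = {
    "model": "claude-3-sonnet-20240229",  # default model
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize nucleus sampling in one sentence."},
    ],
    "temperature": 0.7,  # 0 to 2; default 0.7
    "max_tokens": 4096,  # default 4096
}
```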
Input Schema
Name | Required | Description | Default |
---|---|---|---|
frequency_penalty | No | Penalizes new tokens based on their frequency (-2 to 2) | |
max_tokens | No | Maximum number of tokens to generate (minimum: 1) | 4096 |
messages | Yes | Array of message objects with role and content | |
model | No | Model to use for completion | claude-3-sonnet-20240229 |
presence_penalty | No | Penalizes new tokens based on whether they already appear in the text (-2 to 2) | |
response_format | No | Format of the response (OpenAI only). Supports json_object and json_schema types. | |
stop | No | Up to 4 sequences where the API will stop generating further tokens | |
stream | No | Whether to stream the response | false |
temperature | No | Controls randomness in the response (0 to 2) | 0.7 |
top_p | No | Controls diversity via nucleus sampling (0 to 1) | |
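For OpenAI-backed deployments, response_format can request structured output. Below is a sketch of a json_schema value; the schema name and fields are illustrative, not part of this tool's specification.

```python
# Illustrative response_format asking for strict JSON that matches a schema.
# "type" is required; "json_schema" is required when type is "json_schema".
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "sentiment",  # hypothetical schema name
        "strict": True,       # request strict validation
        "schema": {
            "type": "object",
            "properties": {"label": {"type": "string"}},
            "required": ["label"],
        },
    },
}
```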
Input Schema (JSON Schema)
{
"properties": {
"frequency_penalty": {
"description": "Penalizes new tokens based on their frequency",
"maximum": 2,
"minimum": -2,
"type": "number"
},
"max_tokens": {
"description": "Maximum number of tokens to generate (default: 4096)",
"minimum": 1,
"type": "number"
},
"messages": {
"description": "Array of message objects with role and content",
"items": {
"properties": {
"content": {
"type": "string"
},
"role": {
"enum": [
"system",
"user",
"assistant"
],
"type": "string"
}
},
"required": [
"role",
"content"
],
"type": "object"
},
"type": "array"
},
"model": {
"description": "Model to use for completion (default: claude-3-sonnet-20240229)",
"type": "string"
},
"presence_penalty": {
"description": "Penalizes new tokens based on whether they appear in the text",
"maximum": 2,
"minimum": -2,
"type": "number"
},
"response_format": {
"description": "Format of the response (OpenAI only). Supports json_object and json_schema types.",
"properties": {
"json_schema": {
"description": "JSON schema definition (required when type is json_schema)",
"properties": {
"name": {
"description": "Name of the schema",
"type": "string"
},
"schema": {
"description": "JSON schema object",
"type": "object"
},
"strict": {
"description": "Whether to use strict validation",
"type": "boolean"
}
},
"required": [
"name",
"schema"
],
"type": "object"
},
"type": {
"description": "The type of response format",
"enum": [
"text",
"json_object",
"json_schema"
],
"type": "string"
}
},
"required": [
"type"
],
"type": "object"
},
"stop": {
"description": "Up to 4 sequences where the API will stop generating further tokens",
"oneOf": [
{
"type": "string"
},
{
"items": {
"type": "string"
},
"type": "array"
}
]
},
"stream": {
"default": false,
"description": "Whether to stream the response",
"type": "boolean"
},
"temperature": {
"description": "Controls randomness in the response (default: 0.7)",
"maximum": 2,
"minimum": 0,
"type": "number"
},
"top_p": {
"description": "Controls diversity via nucleus sampling",
"maximum": 1,
"minimum": 0,
"type": "number"
}
},
"required": [
"messages"
],
"type": "object"
}
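Because stop accepts either a single string or an array of strings (the oneOf in the schema above), a caller may want to normalize it before dispatch. A minimal sketch, assuming no particular client library:

```python
def normalize_stop(stop):
    """Coerce the oneOf `stop` value (str, list of str, or None) to a list,
    enforcing the documented limit of 4 stop sequences."""
    if stop is None:
        return []
    if isinstance(stop, str):
        stop = [stop]
    if len(stop) > 4:
        raise ValueError("stop accepts at most 4 sequences")
    return stop
```

Normalizing once at the boundary keeps downstream request-building code free of type checks.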