# BridgeML_API
Generate text dynamically by sending POST requests with customizable parameters such as temperature, max tokens, and a message sequence, letting you integrate conversational AI capabilities into your application.
## Instructions
Make a POST request to `bridgeml/codellama` with a JSON body containing the fields described in the input schema below.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| frequency_penalty | No | Frequency penalty value | |
| max_tokens | No | Maximum number of tokens to generate | |
| messages | No | List of messages | |
| stream | No | Flag indicating if the response should be streamed | |
| temperature | No | Temperature for text generation | |
| top_p | No | Top P sampling value | |
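A request body built from the schema above might look like the sketch below. The base URL is a placeholder (the source does not give a full host), and the parameter values are illustrative assumptions, not defaults from the API:

```python
import json

# Hypothetical base URL -- substitute your actual BridgeML endpoint host.
BASE_URL = "https://api.example.com/bridgeml/codellama"

# Build the JSON body from the input-schema fields; all fields are optional.
payload = {
    "messages": [
        {"role": "user", "content": "Write a haiku about bridges."}
    ],
    "temperature": 0.7,        # sampling temperature for text generation
    "max_tokens": 128,         # maximum number of tokens to generate
    "top_p": 0.9,              # top-p (nucleus) sampling value
    "frequency_penalty": 0.0,  # frequency penalty value
    "stream": False,           # set True to receive a streamed response
}

body = json.dumps(payload)
print(body)

# To actually send the request (requires the third-party `requests` package):
# import requests
# resp = requests.post(BASE_URL, json=payload, timeout=30)
# print(resp.json())
```

The send itself is left commented out since the endpoint host is an assumption; the uncommented part only shows how the schema fields map onto the JSON body.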