BridgeML_API
Generate text by sending POST requests to the BridgeML API on the API-Market MCP Server. Supply user and assistant messages, tune parameters such as temperature and max_tokens, and receive natural-language completions for dynamic applications.
Instructions
Make a POST request to the `bridgeml/codellama/bridgeml/codellama` endpoint.
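A minimal sketch of the call in Python. The base URL and the `x-magicapi-key` header are assumptions, not confirmed by this page; substitute whatever your API-Market credentials specify.

```python
import requests

# Assumed base URL and auth header name; verify both against your
# API-Market account before relying on this sketch.
BASE_URL = "https://prod.api.market/api/v1"
API_KEY = "your-api-market-key"

response = requests.post(
    f"{BASE_URL}/bridgeml/codellama/bridgeml/codellama",
    headers={"x-magicapi-key": API_KEY, "Content-Type": "application/json"},
    json={
        "messages": [
            {"role": "user", "content": "hello"},
            {"role": "assistant", "content": ""},
        ]
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```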
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| frequency_penalty | No | Frequency penalty value | |
| max_tokens | No | Maximum number of tokens to generate | |
| messages | No | List of messages | |
| stream | No | Flag indicating if response should be streamed | |
| temperature | No | Temperature for text generation | |
| top_p | No | Top P sampling value | |
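For reference, here is a request body that exercises every optional field from the table, using the example values from the schema below (the message text itself is illustrative):

```python
# Request body with every optional field; values are the schema's
# example values, not documented defaults.
payload = {
    "messages": [
        {"role": "user", "content": "Write a haiku about rivers."},
        {"role": "assistant", "content": ""},
    ],
    "temperature": 1,        # higher values -> more random sampling
    "top_p": 1,              # nucleus-sampling cutoff
    "max_tokens": 256,       # cap on generated tokens
    "frequency_penalty": 0,  # penalizes repeated tokens
    "stream": False,         # set True to request a streamed response
}
```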
Input Schema (JSON Schema)
{
  "properties": {
    "frequency_penalty": {
      "description": "Frequency penalty value",
      "example": 0,
      "type": "number"
    },
    "max_tokens": {
      "description": "Maximum number of tokens to generate",
      "example": 256,
      "type": "number"
    },
    "messages": {
      "description": "List of messages",
      "example": [
        {
          "content": "hello",
          "role": "user"
        },
        {
          "content": "",
          "role": "assistant"
        }
      ],
      "items": {
        "properties": {
          "content": {
            "description": "Content of the message",
            "type": "string"
          },
          "role": {
            "description": "Role of the message sender",
            "enum": [
              "user",
              "assistant"
            ],
            "type": "string"
          }
        },
        "type": "object"
      },
      "type": "array"
    },
    "stream": {
      "description": "Flag indicating if response should be streamed",
      "example": false,
      "type": "boolean"
    },
    "temperature": {
      "description": "Temperature for text generation",
      "example": 1,
      "type": "number"
    },
    "top_p": {
      "description": "Top P sampling value",
      "example": 1,
      "type": "number"
    }
  },
  "type": "object"
}
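Because the schema above is machine-readable, a request body can be validated locally before it is sent. A sketch using the third-party `jsonschema` package (an assumption; any JSON Schema validator works), with the schema abbreviated for brevity:

```python
import jsonschema

# Abbreviated copy of the input schema above; paste the full version
# in practice.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "messages": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "content": {"type": "string"},
                    "role": {"type": "string", "enum": ["user", "assistant"]},
                },
            },
        },
        "frequency_penalty": {"type": "number"},
        "max_tokens": {"type": "number"},
        "temperature": {"type": "number"},
        "top_p": {"type": "number"},
        "stream": {"type": "boolean"},
    },
}

payload = {"messages": [{"role": "user", "content": "hello"}], "stream": False}

try:
    # Raises ValidationError if the payload violates the schema,
    # e.g. a role outside the user/assistant enum.
    jsonschema.validate(instance=payload, schema=INPUT_SCHEMA)
except jsonschema.ValidationError as err:
    print(f"Invalid request body: {err.message}")
```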