
API-Market MCP Server

by Noveum

BridgeML_API

Enables text generation via POST requests with customizable parameters such as temperature, max_tokens, and a sequence of messages, for integrating conversational AI capabilities.

Instructions

Make a POST request to bridgeml/codellama/bridgeml/codellama
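
For illustration, a minimal sketch of such a request in Python. The base URL and the authentication header name are placeholders (assumptions, not part of the published schema); substitute the values from your API-Market account.

import requests

# Placeholders: the real base URL and auth header name come from your
# API-Market account; they are assumptions here, not part of the schema.
BASE_URL = "https://<api-market-host>/bridgeml/codellama/bridgeml/codellama"
API_KEY = "<your-api-market-key>"

# Request body built from the example values in the input schema below.
payload = {
    "messages": [
        {"role": "user", "content": "hello"},
        {"role": "assistant", "content": ""},
    ],
    "temperature": 1,
    "max_tokens": 256,
    "top_p": 1,
    "frequency_penalty": 0,
    "stream": False,
}

response = requests.post(
    BASE_URL,
    json=payload,
    headers={"x-api-key": API_KEY},  # header name is an assumption
    timeout=60,
)
response.raise_for_status()
print(response.json())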

Input Schema

Name               Required  Description                                      Default
frequency_penalty  No        Frequency penalty value                          -
max_tokens         No        Maximum number of tokens to generate             -
messages           No        List of messages                                 -
stream             No        Flag indicating if response should be streamed   -
temperature        No        Temperature for text generation                  -
top_p              No        Top P sampling value                             -

Input Schema (JSON Schema)

{ "properties": { "frequency_penalty": { "description": "Frequency penalty value", "example": 0, "type": "number" }, "max_tokens": { "description": "Maximum number of tokens to generate", "example": 256, "type": "number" }, "messages": { "description": "List of messages", "example": [ { "content": "hello", "role": "user" }, { "content": "", "role": "assistant" } ], "items": { "properties": { "content": { "description": "Content of the message", "type": "string" }, "role": { "description": "Role of the message sender", "enum": [ "user", "assistant" ], "type": "string" } }, "type": "object" }, "type": "array" }, "stream": { "description": "Flag indicating if response should be streamed", "example": false, "type": "boolean" }, "temperature": { "description": "Temperature for text generation", "example": 1, "type": "number" }, "top_p": { "description": "Top P sampling value", "example": 1, "type": "number" } }, "type": "object" }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Noveum/api-market-mcp-server'
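
The same lookup from Python, assuming the endpoint returns a JSON document (a sketch; the shape of the response is not guaranteed here, so the code only lists the top-level fields):

import requests

resp = requests.get(
    "https://glama.ai/api/mcp/v1/servers/Noveum/api-market-mcp-server",
    timeout=30,
)
resp.raise_for_status()
server = resp.json()
print(sorted(server))  # inspect which top-level fields are available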

If you have feedback or need assistance with the MCP directory API, please join our Discord server.