
n8n-workflow-builder-mcp

by ifmelate
langchain_lmOpenAi.json (4.33 kB)
{
  "nodeType": "@n8n/n8n-nodes-langchain.lmOpenAi",
  "displayName": "OpenAI Model",
  "description": "For advanced usage with an AI chain",
  "version": 1,
  "properties": [
    {
      "name": "deprecated",
      "displayName": "This node is using OpenAI completions which are now deprecated. Please use the OpenAI Chat Model node instead.",
      "type": "notice",
      "default": ""
    },
    {
      "name": "model",
      "displayName": "Model",
      "type": "resourceLocator",
      "default": "{ mode: 'list', value: 'gpt-3.5-turbo-instruct' }",
      "description": "The model which will generate the completion. <a href=\"https://beta.openai.com/docs/models/overview\">Learn more</a>.",
      "required": true
    },
    {
      "name": "notice",
      "displayName": "When using non OpenAI models via Base URL override, not all models might be chat-compatible or support other features, like tools calling or JSON response format.",
      "type": "notice",
      "default": "",
      "displayOptions": "{\n\t\t\t\t\tshow: {\n\t\t\t\t\t\t'/options.baseURL': [{ _cnd: { exists: true } }]\n\t\t\t\t\t}\n\t\t\t\t}"
    },
    {
      "name": "options",
      "displayName": "Options",
      "type": "collection",
      "default": {},
      "description": "Additional options to add",
      "placeholder": "Add Option",
      "options": [
        {
          "name": "baseURL",
          "displayName": "Base URL",
          "type": "string",
          "default": "https://api.openai.com/v1",
          "description": "Override the default base URL for the API"
        },
        {
          "name": "frequencyPenalty",
          "displayName": "Frequency Penalty",
          "type": "number",
          "default": 0,
          "description": "Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim"
        },
        {
          "name": "maxTokens",
          "displayName": "Maximum Number of Tokens",
          "type": "number",
          "description": "The maximum number of tokens to generate in the completion. Most models have a context length of 2048 tokens (except for the newest models, which support 32,768)."
        },
        {
          "name": "presencePenalty",
          "displayName": "Presence Penalty",
          "type": "number",
          "default": 0,
          "description": "Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics"
        },
        {
          "name": "temperature",
          "displayName": "Sampling Temperature",
          "type": "number",
          "default": 0.7,
          "description": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive."
        },
        {
          "name": "timeout",
          "displayName": "Timeout",
          "type": "number",
          "default": 60000,
          "description": "Maximum amount of time a request is allowed to take in milliseconds"
        },
        {
          "name": "maxRetries",
          "displayName": "Max Retries",
          "type": "number",
          "default": 2,
          "description": "Maximum number of retries to attempt"
        },
        {
          "name": "topP",
          "displayName": "Top P",
          "type": "number",
          "default": 1,
          "description": "Controls diversity via nucleus sampling: 0.5 means half of all likelihood-weighted options are considered. We generally recommend altering this or temperature but not both."
        }
      ],
      "typeOptions": {
        "minValue": -2,
        "maxValue": 2
      }
    }
  ],
  "credentialsConfig": [
    { "name": "openAiApi", "required": true },
    { "name": "deprecated", "required": true },
    { "name": "notice", "required": false },
    { "name": "options", "required": false }
  ],
  "io": {
    "inputs": [],
    "outputs": [],
    "outputNames": [ "Model" ],
    "hints": {}
  },
  "wiring": {
    "role": "model",
    "requires": [],
    "optional": [],
    "consumedBy": [ "AiAgent", "AiChain" ],
    "consumes": [],
    "produces": []
  }
}
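Because the spec above is plain JSON, a workflow-builder tool can mine it directly, e.g. to pre-fill sensible defaults for the node. A minimal sketch (not part of this repository; the helper name and the trimmed spec dict are illustrative) that collects the default value of every entry in the node's "options" collection:

```python
# Illustrative sketch: extract option defaults from an n8n node spec
# shaped like langchain_lmOpenAi.json. The dict below is a trimmed
# excerpt of the file, embedded so the example is self-contained.
node_spec = {
    "nodeType": "@n8n/n8n-nodes-langchain.lmOpenAi",
    "properties": [
        {
            "name": "options",
            "type": "collection",
            "options": [
                {"name": "baseURL", "default": "https://api.openai.com/v1"},
                {"name": "frequencyPenalty", "default": 0},
                {"name": "temperature", "default": 0.7},
                {"name": "timeout", "default": 60000},
                {"name": "maxRetries", "default": 2},
                {"name": "topP", "default": 1},
            ],
        }
    ],
}

def option_defaults(spec: dict) -> dict:
    """Map each entry in the node's 'options' collection to its default.

    Entries without a "default" key (like maxTokens in the full spec)
    are skipped rather than guessed.
    """
    defaults = {}
    for prop in spec.get("properties", []):
        if prop.get("type") == "collection":
            for opt in prop.get("options", []):
                if "default" in opt:
                    defaults[opt["name"]] = opt["default"]
    return defaults

print(option_defaults(node_spec)["temperature"])  # 0.7
```

Skipping options with no declared default mirrors the spec itself, where maxTokens carries a description but no default value.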

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ifmelate/n8n-workflow-builder-mcp'
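The same request can be issued from Python. A minimal sketch using only the standard library (the helper name is illustrative; actually sending the request needs network access, so the example only builds it):

```python
# Illustrative sketch: the Python equivalent of the curl call above.
# Call urllib.request.urlopen(req) to actually send it.
import urllib.request

BASE = "https://glama.ai/api/mcp/v1/servers"

def build_server_request(owner: str, repo: str) -> urllib.request.Request:
    """Build a GET request for one MCP server's directory entry."""
    url = f"{BASE}/{owner}/{repo}"
    return urllib.request.Request(url, method="GET")

req = build_server_request("ifmelate", "n8n-workflow-builder-mcp")
print(req.full_url)
# https://glama.ai/api/mcp/v1/servers/ifmelate/n8n-workflow-builder-mcp
```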

If you have feedback or need assistance with the MCP directory API, please join our Discord server.