
activepieces

translation.json (3.69 kB)
{ "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.", "Ask Deepseek": "Ask Deepseek", "Ask Deepseek anything you want!": "Ask Deepseek anything you want!", "Model": "Model", "Question": "Question", "Frequency penalty": "Frequency penalty", "Maximum Tokens": "Maximum Tokens", "Presence penalty": "Presence penalty", "Response Format": "Response Format", "Temperature": "Temperature", "Top P": "Top P", "Memory Key": "Memory Key", "Roles": "Roles", "The model which will generate the completion.": "The model which will generate the completion.", "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.", "The maximum number of tokens to generate. Possible values are between 1 and 8192.": "The maximum number of tokens to generate. Possible values are between 1 and 8192.", "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.", "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself", "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.", "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.", "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. 
Keep it empty to leave Deepseek without memory of previous messages.", "Array of roles to specify more accurate response": "Array of roles to specify more accurate response", "Text": "Text", "JSON": "JSON" }
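
The file is a flat map from English source strings to display strings, which makes lookups trivial. The TypeScript sketch below shows one way such a map could be consumed; `loadTranslations` and `translate` are hypothetical helper names and the file path is an assumption, not part of the Activepieces i18n API.

// Minimal sketch of consuming a flat source-string -> display-string map
// like translation.json above. `loadTranslations` and `translate` are
// hypothetical helpers, not Activepieces API.
import { readFileSync } from 'node:fs';

function loadTranslations(path: string): Record<string, string> {
  return JSON.parse(readFileSync(path, 'utf8'));
}

function translate(table: Record<string, string>, source: string): string {
  // Fall back to the English source string when no entry exists.
  return table[source] ?? source;
}

const table = loadTranslations('./translation.json');
console.log(translate(table, 'Temperature')); // prints the display label

Because every key doubles as the fallback, missing entries degrade gracefully to the English source text rather than breaking the UI.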

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/activepieces/activepieces'
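
The same endpoint can be called from code. A minimal TypeScript sketch follows, using the URL from the curl command above; the response shape is not documented here, so it is typed as `unknown`.

// Sketch of querying the MCP directory API from TypeScript instead of curl.
// Requires a runtime with a global fetch (Node 18+ or a browser).
async function fetchServerInfo(): Promise<unknown> {
  const res = await fetch(
    'https://glama.ai/api/mcp/v1/servers/activepieces/activepieces',
  );
  if (!res.ok) {
    throw new Error(`MCP directory API request failed: ${res.status}`);
  }
  return res.json();
}

fetchServerInfo().then((info) => console.log(info));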

If you have feedback or need assistance with the MCP directory API, please join our Discord server.