mcp-google-sheets

zh.json (3.4 kB)
{
  "Mistral AI provides state-of-the-art open-weight and hosted language models for text generation, embeddings, and reasoning tasks.": "Mistral AI provides state-of-the-art open-weight and hosted language models for text generation, embeddings, and reasoning tasks.",
  "You can obtain your API key from the Mistral AI dashboard. Go to https://console.mistral.ai, generate an API key, and paste it here.": "You can obtain your API key from the Mistral AI dashboard. Go to https://console.mistral.ai, generate an API key, and paste it here.",
  "Ask Mistral": "Ask Mistral",
  "Create Embeddings": "Create Embeddings",
  "Upload File": "Upload File",
  "List Models": "List Models",
  "Custom API Call": "自定义 API 调用",
  "Ask Mistral anything you want!": "Ask Mistral anything you want!",
  "Creates new embedding in Mistral AI.": "Creates new embedding in Mistral AI.",
  "Upload a file to Mistral AI (e.g., for fine-tuning or context storage).": "Upload a file to Mistral AI (e.g., for fine-tuning or context storage).",
  "Retrieves a list of available Mistral AI models.": "Retrieves a list of available Mistral AI models.",
  "Make a custom API call to a specific endpoint": "向特定端点发起自定义 API 调用",
  "Model": "Model",
  "Question": "Question",
  "Temperature": "Temperature",
  "Top P": "Top P",
  "Max Tokens": "Max Tokens",
  "Random Seed": "Random Seed",
  "Timeout (ms)": "Timeout (ms)",
  "Input": "Input",
  "File": "文件",
  "Purpose": "Purpose",
  "Method": "方法",
  "Headers": "请求头",
  "Query Parameters": "查询参数",
  "Body": "正文内容",
  "Response is Binary ?": "Response is Binary?",
  "No Error on Failure": "失败时不报错",
  "Timeout (in seconds)": "超时(秒)",
  "Select a Mistral model. List is fetched live from your account.": "Select a Mistral model. List is fetched live from your account.",
  "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
  "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
  "The input text for which to create an embedding.": "The input text for which to create an embedding.",
  "The file to upload (max 512MB).For fine tuning purspose provide .jsonl file.": "The file to upload (max 512MB). For fine-tuning purposes, provide a .jsonl file.",
  "Purpose of the file.": "Purpose of the file.",
  "Authorization headers are injected automatically from your connection.": "授权头自动从您的连接中注入。",
  "Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc.",
  "fine-tune": "fine-tune",
  "batch": "batch",
  "ocr": "ocr",
  "GET": "GET",
  "POST": "POST",
  "PATCH": "PATCH",
  "PUT": "PUT",
  "DELETE": "DELETE",
  "HEAD": "HEAD"
}
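A file like zh.json is a flat catalog mapping English source strings to their translations; entries whose value equals the key are simply not yet translated. The actual Activepieces i18n mechanism is not shown here, but a minimal sketch of consuming such a catalog (the `translate` helper is hypothetical) looks like:

```python
import json

def translate(catalog: dict, text: str) -> str:
    """Look up a source string; fall back to the English text if untranslated."""
    return catalog.get(text, text)

# A small excerpt of the catalog above, parsed from JSON.
catalog = json.loads('{"File": "\u6587\u4ef6", "Method": "\u65b9\u6cd5"}')

print(translate(catalog, "File"))     # translated entry
print(translate(catalog, "Unknown"))  # missing key falls back to source string
```

Because lookup is keyed on the exact source string, the keys must stay byte-for-byte identical to the strings in the code, even when they contain typos; only the values are safe to polish.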

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/activepieces/activepieces'
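The same endpoint can be called from any HTTP client. A minimal Python sketch, assuming only the URL shape shown in the curl example (the response schema is not documented here, so it is parsed as generic JSON):

```python
import json
import urllib.request

BASE_URL = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, repo: str) -> str:
    """Build the MCP directory API URL for a given server."""
    return f"{BASE_URL}/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """Fetch a server's metadata; raises urllib.error.HTTPError on failure."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# Example (requires network access):
# info = fetch_server("activepieces", "activepieces")
```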

If you have feedback or need assistance with the MCP directory API, please join our Discord server.