mcp-google-sheets

id.json (5.77 kB)
{ "Groq": "Groq", "Use Groq's fast language models and audio processing capabilities.": "Use Groq's fast language models and audio processing capabilities.", "Enter your Groq API Key": "Enter your Groq API Key", "Ask AI": "Ask AI", "Transcribe Audio": "Transcribe Audio", "Translate Audio": "Translate Audio", "Custom API Call": "Custom API Call", "Ask Groq anything using fast language models.": "Ask Groq anything using fast language models.", "Transcribes audio into text in the input language.": "Transcribes audio into text in the input language.", "Translates audio into English text.": "Translates audio into English text.", "Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint", "Model": "Model", "Question": "Question", "Temperature": "Temperature", "Maximum Tokens": "Maximum Tokens", "Top P": "Top P", "Frequency penalty": "Frequency penalty", "Presence penalty": "Presence penalty", "Memory Key": "Memory Key", "Roles": "Roles", "Audio File": "Audio File", "Language": "Language", "Prompt": "Prompt", "Response Format": "Response Format", "Method": "Method", "Headers": "Headers", "Query Parameters": "Query Parameters", "Body": "Body", "No Error on Failure": "No Error on Failure", "Timeout (in seconds)": "Timeout (in seconds)", "The model which will generate the completion.": "The model which will generate the completion.", "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.", "The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.", "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.", "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.", "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.", "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. 
Keep it empty to leave Groq without memory of previous messages.", "Array of roles to specify more accurate response": "Array of roles to specify more accurate response", "The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.", "The model to use for transcription.": "The model to use for transcription.", "The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.", "An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.", "The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.", "The format of the transcript output.": "The format of the transcript output.", "The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.", "The model to use for translation.": "The model to use for translation.", "An optional text in English to guide the model's style or continue a previous audio segment.": "An optional text in English to guide the model's style or continue a previous audio segment.", "The format of the translation output.": "The format of the translation output.", "Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.", "JSON": "JSON", "Text": "Text", "Verbose JSON": "Verbose JSON", "GET": "GET", "POST": "POST", "PATCH": "PATCH", "PUT": "PUT", "DELETE": "DELETE", "HEAD": "HEAD" }
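The file is a flat key-to-string map: the English source text is the lookup key, and the value is the localized string (in this id.json, presumably the Indonesian locale by its ISO 639-1 code, every value is still identical to its key, i.e. untranslated). A minimal TypeScript sketch of how such a locale map might be consumed; the translate helper and the file path are illustrative assumptions, not part of any actual Glama or Activepieces code:

import { readFileSync } from "node:fs";

// Load the flat key -> translation map (path is hypothetical).
const messages: Record<string, string> = JSON.parse(
  readFileSync("i18n/id.json", "utf8"),
);

// Fall back to the English source string when no translation exists.
function translate(source: string): string {
  return messages[source] ?? source;
}

console.log(translate("Enter your Groq API Key"));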

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/activepieces/activepieces'
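The same endpoint can be called from code. A minimal TypeScript sketch, assuming the endpoint returns a JSON body; the response shape is not documented here, so it is parsed as untyped data:

// Fetch a server's metadata from the Glama MCP directory API.
const response = await fetch(
  "https://glama.ai/api/mcp/v1/servers/activepieces/activepieces",
);
if (!response.ok) {
  throw new Error(`Request failed: ${response.status}`);
}
const server: unknown = await response.json();
console.log(server);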

If you have feedback or need assistance with the MCP directory API, please join our Discord server.