
mcp-google-sheets

translation.json (6.11 kB)
{ "Transform unstructured website content into clean, AI-ready data": "Transform unstructured website content into clean, AI-ready data", "\n You can obtain API key from [API Section](https://app.dumplingai.com/api-keys).": "\n You can obtain API key from [API Section](https://app.dumplingai.com/api-keys).", "Web Search": "Web Search", "Search News": "Search News", "Generate Image": "Generate Image", "Scrape Website": "Scrape Website", "Crawl Website": "Crawl Website", "Extract Document Data": "Extract Document Data", "Custom API Call": "Custom API Call", "Search the web and optionally retrieve content from top results.": "Search the web and optionally retrieve content from top results.", "Search for news articles using Google News.": "Search for news articles using Google News.", "Generate images based on a text prompt using AI.": "Generate images based on a text prompt using AI.", "Scrapes data from a specified URL and format the result.": "Scrapes data from a specified URL and format the result.", "Crawl a website and return structured content from multiple pages.": "Crawl a website and return structured content from multiple pages.", "Extract structured data from documents using vision-capable AI.": "Extract structured data from documents using vision-capable AI.", "Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint", "Search Query": "Search Query", "Country": "Country", "Location": "Location", "Language": "Language", "Date Range": "Date Range", "Page Number": "Page Number", "Scrape Results": "Scrape Results", "Number of Results to Scrape": "Number of Results to Scrape", "Scrape Format": "Scrape Format", "Clean Output": "Clean Output", "Model": "Model", "Prompt": "Prompt", "Aspect Ratio": "Aspect Ratio", "Number of Images": "Number of Images", "Seed": "Seed", "Output Format": "Output Format", "URL": "URL", "Clean Output ?": "Clean Output ?", "Render JavaScript ?": "Render JavaScript ?", "Page Limit": "Page Limit", "Crawl Depth": "Crawl Depth", "File": "File", "Extraction Prompt": "Extraction Prompt", "JSON Mode": "JSON Mode", "Method": "Method", "Headers": "Headers", "Query Parameters": "Query Parameters", "Body": "Body", "Response is Binary ?": "Response is Binary ?", "No Error on Failure": "No Error on Failure", "Timeout (in seconds)": "Timeout (in seconds)", "Two-letter country code for location bias (e.g., \"US\" for United States).": "Two-letter country code for location bias (e.g., \"US\" for United States).", "Specific location to focus the search (e.g., \"New York, NY\").": "Specific location to focus the search (e.g., \"New York, NY\").", "Language code for the search results (e.g., \"en\" for English).": "Language code for the search results (e.g., \"en\" for English).", "Filter results by date.": "Filter results by date.", "Page number for paginated results.": "Page number for paginated results.", "Whether to scrape top search results.": "Whether to scrape top search results.", "Number of top results to scrape (max: 10).": "Number of top results to scrape (max: 10).", "Format of scraped content": "Format of scraped content", "Whether the scraped output should be cleaned.": "Whether the scraped output should be cleaned.", "The search query for Google News.": "The search query for Google News.", "Country code for location bias (e.g., \"US\" for United States).": "Country code for location bias (e.g., \"US\" for United States).", "The model to use for image generation": "The model to use for image generation", "The text prompt for 
image generation": "The text prompt for image generation", "Aspect ratio of the generated image": "Aspect ratio of the generated image", "Number of images to generate (1-4)": "Number of images to generate (1-4)", "Seed for reproducible results": "Seed for reproducible results", "Format of the generated image": "Format of the generated image", "The format of the output": "The format of the output", "Whether the output should be cleaned.": "Whether the output should be cleaned.", "Whether to render JavaScript before scraping.": "Whether to render JavaScript before scraping.", "The website URL to crawl.": "The website URL to crawl.", "Maximum number of pages to crawl.": "Maximum number of pages to crawl.", "Depth of crawling (distance between base URL path and sub paths).": "Depth of crawling (distance between base URL path and sub paths).", "Format of the output content.": "Format of the output content.", "File URL or base64-encoded file.": "File URL or base64-encoded file.", "The prompt describing what data to extract from the document.": "The prompt describing what data to extract from the document.", "Whether to return the result in JSON format.": "Whether to return the result in JSON format.", "Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.", "Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..", "Any Time": "Any Time", "Past Hour": "Past Hour", "Past Day": "Past Day", "Past Week": "Past Week", "Past Month": "Past Month", "Past Year": "Past Year", "Markdown": "Markdown", "HTML": "HTML", "Screenshot": "Screenshot", "FLUX.1-schnell": "FLUX.1-schnell", "FLUX.1-dev": "FLUX.1-dev", "FLUX.1-pro": "FLUX.1-pro", "FLUX.1.1-pro": "FLUX.1.1-pro", "recraft-v3": "recraft-v3", "Square (1:1)": "Square (1:1)", "Landscape 16:9": "Landscape 16:9", "Landscape 21:9": "Landscape 21:9", "Landscape 3:2": "Landscape 3:2", "Landscape 4:3": "Landscape 4:3", "Portrait 2:3": "Portrait 2:3", "Portrait 3:4": "Portrait 3:4", "Portrait 4:5": "Portrait 4:5", "Portrait 9:16": "Portrait 9:16", "Portrait 9:21": "Portrait 9:21", "WebP": "WebP", "JPG": "JPG", "PNG": "PNG", "Text": "Text", "Raw": "Raw", "GET": "GET", "POST": "POST", "PATCH": "PATCH", "PUT": "PUT", "DELETE": "DELETE", "HEAD": "HEAD" }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/activepieces/activepieces'
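The same request can be made from code. The sketch below assumes only what the curl example shows: an unauthenticated GET to that URL returning a JSON document. It makes no assumption about the response schema and simply pretty-prints whatever the directory returns.

import json
import urllib.request

# Same request as the curl example above; the JSON response shape is
# not assumed, it is just pretty-printed as returned.
URL = "https://glama.ai/api/mcp/v1/servers/activepieces/activepieces"

with urllib.request.urlopen(URL, timeout=30) as resp:
    server_info = json.load(resp)

print(json.dumps(server_info, indent=2))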

If you have feedback or need assistance with the MCP directory API, please join our Discord server.