Glama

mcp-google-sheets

vi.json (6.04 kB)
{
  "Exa": "Exa",
  "AI-powered search and content extraction from the web.": "AI-powered search and content extraction from the web.",
  "Obtain your API key from [Dashboard Setting](https://dashboard.exa.ai/api-keys).": "Obtain your API key from [Dashboard Setting](https://dashboard.exa.ai/api-keys).",
  "Get Contents": "Get Contents",
  "Ask AI": "Ask AI",
  "Perform Search": "Perform Search",
  "Find Similar Links": "Find Similar Links",
  "Custom API Call": "Custom API Call",
  "Retrieve clean HTML content from specified URLs.": "Retrieve clean HTML content from specified URLs.",
  "Provides direct answers to queries by summarizing results.": "Provides direct answers to queries by summarizing results.",
  "Search the web using semantic or keyword-based search.": "Search the web using semantic or keyword-based search.",
  "Find pages similar to a given URL.": "Find pages similar to a given URL.",
  "Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
  "URLs": "URLs",
  "Return Full Text": "Return Full Text",
  "Livecrawl Option": "Livecrawl Option",
  "Livecrawl Timeout (ms)": "Livecrawl Timeout (ms)",
  "Number of Subpages": "Number of Subpages",
  "Subpage Target": "Subpage Target",
  "Query": "Query",
  "Include Text Content": "Include Text Content",
  "Model": "Model",
  "Search Type": "Search Type",
  "Category": "Category",
  "Number of Results": "Number of Results",
  "Include Domains": "Include Domains",
  "Exclude Domains": "Exclude Domains",
  "Start Crawl Date": "Start Crawl Date",
  "End Crawl Date": "End Crawl Date",
  "Start Published Date": "Start Published Date",
  "End Published Date": "End Published Date",
  "Include Text": "Include Text",
  "Exclude Text": "Exclude Text",
  "URL": "URL",
  "Start Crawl Date (ISO)": "Start Crawl Date (ISO)",
  "End Crawl Date (ISO)": "End Crawl Date (ISO)",
  "Start Published Date (ISO)": "Start Published Date (ISO)",
  "End Published Date (ISO)": "End Published Date (ISO)",
  "Method": "Method",
  "Headers": "Headers",
  "Query Parameters": "Query Parameters",
  "Body": "Body",
  "No Error on Failure": "No Error on Failure",
  "Timeout (in seconds)": "Timeout (in seconds)",
  "Array of URLs to crawl": "Array of URLs to crawl",
  "If true, returns full page text. If false, disables text return.": "If true, returns full page text. If false, disables text return.",
  "Options for livecrawling pages.": "Options for livecrawling pages.",
  "Timeout for livecrawling in milliseconds.": "Timeout for livecrawling in milliseconds.",
  "Number of subpages to crawl.": "Number of subpages to crawl.",
  "Keyword(s) to find specific subpages.": "Keyword(s) to find specific subpages.",
  "Ask a question to get summarized answers from the web.": "Ask a question to get summarized answers from the web.",
  "If true, includes full text content from the search results": "If true, includes full text content from the search results",
  "Choose the Exa model to use for the answer.": "Choose the Exa model to use for the answer.",
  "Search query to find related articles and data.": "Search query to find related articles and data.",
  "Type of search to perform.": "Type of search to perform.",
  "Category of data to focus the search on.": "Category of data to focus the search on.",
  "Number of results to return (max 100).": "Number of results to return (max 100).",
  "Limit results to only these domains.": "Limit results to only these domains.",
  "Exclude results from these domains.": "Exclude results from these domains.",
  "Only include results crawled after this ISO date.": "Only include results crawled after this ISO date.",
  "Only include results crawled before this ISO date.": "Only include results crawled before this ISO date.",
  "Only include results published after this ISO date.": "Only include results published after this ISO date.",
  "Only include results published before this ISO date.": "Only include results published before this ISO date.",
  "Strings that must be present in the text of results.": "Strings that must be present in the text of results.",
  "Strings that must not be present in the text of results.": "Strings that must not be present in the text of results.",
  "Reference URL to find semantically similar links.": "Reference URL to find semantically similar links.",
  "List of domains to include in results.": "List of domains to include in results.",
  "List of domains to exclude from results.": "List of domains to exclude from results.",
  "Include links crawled after this date (ISO format).": "Include links crawled after this date (ISO format).",
  "Include links crawled before this date (ISO format).": "Include links crawled before this date (ISO format).",
  "Only include links published after this date (ISO format).": "Only include links published after this date (ISO format).",
  "Only include links published before this date (ISO format).": "Only include links published before this date (ISO format).",
  "Strings that must be present in the webpage text (max 1 string of up to 5 words).": "Strings that must be present in the webpage text (max 1 string of up to 5 words).",
  "Strings that must not be present in the webpage text (max 1 string of up to 5 words).": "Strings that must not be present in the webpage text (max 1 string of up to 5 words).",
  "Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
  "Never": "Never",
  "Fallback": "Fallback",
  "Always": "Always",
  "Auto": "Auto",
  "Exa Pro": "Exa Pro",
  "Keyword": "Keyword",
  "Neural": "Neural",
  "Company": "Company",
  "Research Paper": "Research Paper",
  "News": "News",
  "PDF": "PDF",
  "GitHub": "GitHub",
  "Tweet": "Tweet",
  "Personal Site": "Personal Site",
  "LinkedIn Profile": "LinkedIn Profile",
  "Financial Report": "Financial Report",
  "GET": "GET",
  "POST": "POST",
  "PATCH": "PATCH",
  "PUT": "PUT",
  "DELETE": "DELETE",
  "HEAD": "HEAD"
}
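Locale files like this one map English source strings to their translations, with the English text itself serving as the lookup key; in this vi.json the values are still the untranslated English defaults. A minimal sketch of how such a map might be consumed, assuming a key-equals-source convention (the `translate` helper and its fallback behavior are illustrative, not part of the file):

```python
import json

# A few entries excerpted from vi.json; the English source string is the key.
RAW = """{
  "Exa": "Exa",
  "Perform Search": "Perform Search",
  "Number of results to return (max 100).": "Number of results to return (max 100)."
}"""

translations = json.loads(RAW)

def translate(source: str) -> str:
    """Return the localized string, falling back to the English source text
    when a key has no entry in the locale file."""
    return translations.get(source, source)

print(translate("Perform Search"))      # key present in the map
print(translate("Some unseen string"))  # missing key: falls back to the source text
```

The fallback means an incomplete locale file degrades gracefully to English rather than raising an error.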

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/activepieces/activepieces'
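The same endpoint can be called from any HTTP client. A minimal Python sketch, where the `server_url` and `fetch_server` helpers are illustrative (the path layout is taken from the curl example above; the shape of the JSON response is not documented here):

```python
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1"

def server_url(owner: str, server: str) -> str:
    """Build the directory endpoint for a given MCP server."""
    return f"{BASE}/servers/{owner}/{server}"

def fetch_server(owner: str, server: str) -> dict:
    """Fetch and decode a server's metadata (response schema assumed to be JSON)."""
    with urllib.request.urlopen(server_url(owner, server)) as resp:
        return json.load(resp)

print(server_url("activepieces", "activepieces"))
# https://glama.ai/api/mcp/v1/servers/activepieces/activepieces
```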

If you have feedback or need assistance with the MCP directory API, please join our Discord server.