Glama
133,407 tools. Last updated 2026-05-12 23:29

"notion" matching MCP tools:

  • Search the company's connected knowledge across every source — Drive, SharePoint, Confluence, Slack, Notion — with cited answers, lifecycle awareness, and refusal-on-weak-context. Returns ranked chunks with source attribution, authority scores, and coverage level. Use `mode=synthesis_lite` (Qwen3.5 Flash) or `mode=synthesis_pro` (Qwen3 Max) for a written answer with [n] citations; use the default `standard` for a structured chunk list. `quick` is faster and cheaper; `deep` is slower but more thorough. Synthesis modes consume more Knowledge Tokens than structured modes, so pick the cheapest mode that answers the question. Responses are capped at 25,000 tokens per Claude Connectors policy; if a response is truncated, structured metadata carries `truncated: true` and a `query_id` so the agent can call `get_source_detail` for full provenance. (See the usage sketch after this list.)
    Connector
  • Connect a third-party provider (Zernio, Resend, GA4, Search Console, HubSpot, Stripe, Linear, Notion, Slack) to this workspace. USE WHEN the user wants to wire up publishing, email sending, or analytics readback. For OAuth providers (ga4 / search_console / hubspot), it returns an authorizeUrl that the agent surfaces to the user; for API-key providers (zernio / resend), it returns instructions for the set-key tool. Without this, publish/send/measure tools return 'configure first' errors. (See the connection-handling sketch after this list.)
    Connector
  • FREE live threat assessment sample — current threat level, confidence score, event distribution, and scan freshness for a monitored location. Proves data is live and continuously updated. No flagged items or entities (upgrade to get_threat_summary for full detail). Try location='culpeper-town' or browse_catalog path='ThreatIntel' for all locations.
    Connector
  • Find outliers and anomalies in structured data — ideal as a second step after pulling records from Google Sheets, Airtable, Supabase, Notion databases, HubSpot, financial APIs, GitHub, NPM, or any source that returns rows of JSON. Fully stateless: send known-good rows as training and suspect rows as test in ONE call. Returns per-row anomaly scores, confidence levels, and the top features explaining WHY each row was flagged. Works on JSON objects, numbers, text, and arrays; no separate training step required. Typical workflow (see the workflow sketch after this list):
    (1) Pull data from another tool (e.g. Google Sheets, a Supabase query, HubSpot deals).
    (2) Pass the first N rows as training (the normal baseline).
    (3) Pass the remaining or new rows as test.
    (4) Report which rows are anomalous and why.
    Examples:
    - Spreadsheet QA: pull 500 sales rows from Sheets → train on the first 400 → test the last 100 → flag outlier entries.
    - Financial screening: get ratios for 50 stocks from a financial API → find the anomalous ones.
    - CRM hygiene: pull HubSpot deals → flag deals with unusual discount/value patterns.
    - Dependency audit: get NPM package metrics → flag packages with anomalous quality scores.
    - Commit review: pull GitHub commit metadata → flag unusual commit patterns.
    Connector
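
The mode and truncation contract in the knowledge-search entry above suggests a small client-side pattern: pick the cheapest mode, then follow up on `query_id` when `truncated: true` comes back. A minimal sketch, assuming a generic `call_tool(name, arguments)` helper standing in for whatever MCP client is in use; the tool name `search_knowledge` and the `depth` argument are illustrative, while the mode values, the 25,000-token cap, `truncated`, `query_id`, and `get_source_detail` come from the entry.

```python
# Sketch only: pick the cheapest mode that answers the question, then handle the
# documented truncation metadata. `call_tool(name, arguments)` stands in for the
# MCP client in use; the tool name "search_knowledge" and the "depth" argument
# are assumptions, while the mode values, the 25,000-token cap, `truncated`,
# `query_id`, and `get_source_detail` are taken from the entry above.

def search_company_knowledge(call_tool, question: str, need_written_answer: bool = False) -> dict:
    # Synthesis modes cost more Knowledge Tokens, so only request one when the
    # caller actually needs a written, cited answer.
    mode = "synthesis_lite" if need_written_answer else "standard"
    result = call_tool("search_knowledge", {"query": question, "mode": mode, "depth": "quick"})

    # Responses are capped at 25,000 tokens; on truncation the metadata carries
    # `truncated: true` and a `query_id` for fetching full provenance.
    if result.get("truncated"):
        result["provenance"] = call_tool("get_source_detail", {"query_id": result["query_id"]})
    return result
```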
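
The provider-connect entry above distinguishes OAuth providers, which hand back an authorizeUrl, from API-key providers, which hand back instructions for a set-key tool. A sketch of how an agent might branch on that response, under the same assumed `call_tool` helper; the tool name `connect_provider` and every response field other than `authorizeUrl` are illustrative.

```python
# Sketch only: surface the right next step depending on provider type.
# Per the entry above, OAuth providers (ga4 / search_console / hubspot) return an
# authorizeUrl; API-key providers (zernio / resend) return set-key instructions.
# The tool name and the "instructions" field are assumptions.

def wire_up_provider(call_tool, provider: str) -> str:
    result = call_tool("connect_provider", {"provider": provider})
    if "authorizeUrl" in result:
        # OAuth flow: the agent surfaces the authorization link to the user.
        return f"Authorize {provider} here: {result['authorizeUrl']}"
    # API-key flow: relay the returned set-key instructions instead.
    return result.get("instructions", f"Set an API key for {provider} via the set-key tool.")
```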
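
The anomaly-detection entry describes a single stateless call that carries both the known-good baseline and the suspect rows. A sketch of its "Spreadsheet QA" example under the same assumed `call_tool` helper; the tool name `detect_anomalies`, the argument names, and the response field names are assumptions based on the description, not a documented schema.

```python
# Sketch only of the "Spreadsheet QA" workflow: train on the first 400 of 500
# pulled rows, test the last 100, and report which rows were flagged and why.
# Tool, argument, and response field names are assumptions from the entry above.

def flag_outlier_rows(call_tool, rows: list[dict], n_train: int = 400, threshold: float = 0.8) -> list[dict]:
    result = call_tool("detect_anomalies", {
        "training": rows[:n_train],   # known-good rows: the normal baseline
        "test": rows[n_train:],       # suspect rows scored in the same call
    })
    # Per the entry, each scored row carries an anomaly score, a confidence
    # level, and the top features explaining why it was flagged.
    return [row for row in result["results"] if row["anomaly_score"] >= threshold]
```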

Matching MCP Servers

Matching MCP Connectors

  • A Notion workspace is a collaborative environment where teams can organize work, manage projects,…

  • Markdown-first MCP server for Notion API with 9 composite tools and 39+ actions.

  • Connect a third-party provider (Zernio, Resend, GA4, Search Console, HubSpot, Stripe, Linear, Notion, Slack) to this workspace. USE WHEN the user wants to wire up publishing, email sending, or analytics readback. For OAuth providers (ga4 / search_console / hubspot) returns an authorizeUrl the agent surfaces to the user. For API-key providers (zernio / resend) returns instructions for the set-key tool. Without this, publish/send/measure tools return 'configure first' errors.
    Connector
  • Tool Server Info: Composio connects 500+ apps for seamless cross-app automation: Slack, GitHub, Notion, Google Workspace (Gmail, Sheets, Drive, Calendar), Microsoft (Outlook, Teams), X/Twitter, Figma, Web Search / Deep research, Browser tool (scrape URLs, browser automation), Meta apps (Instagram, Meta Ads), TikTok, AI tools like Nano Banana & Veo3, and more. Use this tool to discover relevant tools plus the recommended plan and common pitfalls for reliable execution. Always call this tool first whenever a user mentions or implies an external app, service, or workflow; never say "I don't have access to X/Y app" before calling it.
    Usage guidelines:
    - Use this tool whenever kicking off a task. Re-run it when you need additional tools or plans due to missing details, errors, or a changed use case.
    - If the user pivots to a different use case in the same chat, you MUST call this tool again with the new use case and generate a new session_id.
    - Specify the use_case with a normalized description of the problem, query, or task. Be clear and precise. Queries can be simple single-app actions or multiple linked queries for complex cross-app workflows.
    - Pass known_fields along with use_case as a string of key–value hints (for example, "channel_name: general") to help the search resolve missing details such as IDs.
    Splitting guidelines (Important):
    1. Atomic queries: 1 query = 1 tool call. Include hidden prerequisites (e.g., add "get Linear issue" before "update Linear issue").
    2. Include app names: if the user names a toolkit, include it in every sub-query so intent stays scoped (e.g., "fetch Gmail emails", "reply to Gmail email").
    3. English input: translate non-English prompts while preserving intent and identifiers.
    Example: for the user query "send an email to John welcoming him and create a meeting invite for tomorrow", the search call is queries: [{use_case: "send an email to someone", known_fields: "recipient_name: John"}, {use_case: "create a meeting invite", known_fields: "meeting_date: tomorrow"}].
    Plan review checklist (Important):
    - The response includes a detailed execution plan and common pitfalls. You MUST review this plan carefully, adapt it to your current context, and generate your own final step-by-step plan before execution. Execute the steps in order; skipping or ignoring required steps can lead to unexpected failures.
    - Check the plan and pitfalls for input-parameter nuances (required fields, IDs, formats, limits). Before executing any tool, review its COMPLETE input schema and provide STRICTLY schema-compliant arguments to avoid invalid-input errors.
    - Determine whether pagination is needed; if a response returns a pagination token and completeness is implied, paginate until exhaustion and do not return partial results.
    Response:
    - Tools & Input Schemas: the response lists toolkits (apps) and tools suitable for the task, along with their tool_slug, description, input schema / schemaRef, and related tools for prerequisites, alternatives, or next steps. NOTE: tools that carry a schemaRef instead of an input_schema require a call to RUBE_GET_TOOL_SCHEMAS to load the full input_schema before use.
    - Connection Info: if a toolkit has an active connection, the response includes it along with any available current-user information. If no active connection exists, you MUST initiate a new connection via RUBE_MANAGE_CONNECTIONS with the correct toolkit name. DO NOT execute any toolkit tool without an ACTIVE connection.
    - Time Info: the response includes the current UTC time, which can be referenced if needed.
    - Execution: the returned tools are called via RUBE_MULTI_EXECUTE_TOOL. Ensure each tool execution specifies the correct tool_slug and arguments exactly as defined by the tool's input schema.
    - Memory: the response includes a memory parameter with relevant information about the use case and the known fields, which can be used to determine the flow of execution. Any user preferences in memory must be adhered to.
    SESSION: ALWAYS set this parameter first for any workflow. Pass session: {generate_id: true} for new workflows OR session: {id: "EXISTING_ID"} to continue, and use the returned session_id in ALL subsequent meta tool calls. (A condensed sketch of this search, schema, connection, and execute sequence follows this entry.)
    Connector
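
The Composio entry above prescribes a fixed meta-tool order: search with a session id, load full schemas for tools that only ship a schemaRef, make sure each toolkit has an active connection, then execute through RUBE_MULTI_EXECUTE_TOOL. A condensed sketch of that order for the email-plus-meeting example in the entry, using the same assumed `call_tool` helper as the sketches above; RUBE_GET_TOOL_SCHEMAS, RUBE_MANAGE_CONNECTIONS, and RUBE_MULTI_EXECUTE_TOOL are quoted from the entry, while the search tool's own name and all response field shapes are assumptions.

```python
# Condensed sketch of the prescribed flow: search -> load missing schemas ->
# ensure connections -> execute. RUBE_GET_TOOL_SCHEMAS, RUBE_MANAGE_CONNECTIONS,
# and RUBE_MULTI_EXECUTE_TOOL are named in the entry above; "RUBE_SEARCH_TOOLS"
# and every response field name ("session_id", "tools", "toolkit",
# "connections", "plan") are assumptions.

def run_email_and_meeting_workflow(call_tool) -> list[dict]:
    # 1. Always start with a tool search, generating a new session id, and split
    #    the request into atomic per-app queries as the guidelines require.
    search = call_tool("RUBE_SEARCH_TOOLS", {
        "session": {"generate_id": True},
        "queries": [
            {"use_case": "send an email to someone", "known_fields": "recipient_name: John"},
            {"use_case": "create a meeting invite", "known_fields": "meeting_date: tomorrow"},
        ],
    })
    session_id = search["session_id"]

    # 2. Tools that ship a schemaRef instead of a full input_schema need their
    #    schemas loaded via RUBE_GET_TOOL_SCHEMAS before they can be executed.
    missing = [t["tool_slug"] for t in search["tools"] if "schemaRef" in t]
    if missing:
        call_tool("RUBE_GET_TOOL_SCHEMAS", {"session": {"id": session_id}, "tool_slugs": missing})

    # 3. Never execute a toolkit tool without an ACTIVE connection; initiate one
    #    through RUBE_MANAGE_CONNECTIONS when it is missing.
    for toolkit in {t["toolkit"] for t in search["tools"]}:
        if not search["connections"].get(toolkit, {}).get("active"):
            call_tool("RUBE_MANAGE_CONNECTIONS", {"session": {"id": session_id}, "toolkit": toolkit})

    # 4. Execute each planned step with schema-compliant arguments, reusing the
    #    same session id for every meta tool call.
    return [
        call_tool("RUBE_MULTI_EXECUTE_TOOL", {
            "session": {"id": session_id},
            "tool_slug": step["tool_slug"],
            "arguments": step["arguments"],
        })
        for step in search["plan"]
    ]
```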