Server Details

Notion MCP Pack

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-notion_connect
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 10 of 10 tools scored. Lowest: 3.2/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes: notion_* tools handle Notion operations, the memory tools (remember, recall, forget) handle storage, and ask_pipeworx and discover_tools provide meta-functionality. However, ask_pipeworx could overlap with the other tools if used for database queries, and discover_tools is redundant when only 10 tools are exposed.

Naming Consistency: 3/5

The notion_* tools follow a consistent verb_noun pattern (e.g., notion_get_database). However, ask_pipeworx and discover_tools break this pattern by using verbs first without the 'notion' prefix. Memory tools (remember, recall, forget) use single verbs, mixing conventions.

Tool Count: 4/5

10 tools is a reasonable count for a Notion integration, covering core operations. The inclusion of memory and meta-tools (ask_pipeworx, discover_tools) is slightly excessive for the server's main purpose, but not severely over-scoped.

Completeness: 3/5

The Notion operations cover basic CRUD: list pages, get page, get database, query database, search. Missing are create, update, delete operations for pages/databases, and block manipulation. This is a notable gap for a Notion server, limiting agents to read-only workflows.

Available Tools

10 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
  question (required): Your question or request in natural language
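With a single free-text argument, a call to this tool is trivial to construct. A minimal sketch, assuming the standard MCP tools/call envelope (the envelope shape comes from the MCP specification, not from this page; the sample question is taken from the description's examples):

```python
# Hypothetical MCP "tools/call" request for ask_pipeworx. Only the tool
# name and the "question" argument come from the schema shown above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
```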
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description explains that the tool picks the best data source, fills arguments, and returns results. This clarifies its internal behavior. However, it doesn't disclose potential limitations like latency, cost implications, or data freshness. No annotations are provided, so the description carries full burden; it does well but could be more transparent about edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: three sentences, each adding value. It front-loads the core action and provides examples. The closing examples could be trimmed if they proved redundant, but overall it is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple schema (1 param, no output schema), the description is nearly complete. It explains the tool's function, usage pattern, and gives examples. It doesn't mention return format, error handling, or scope limitations, but these are less critical for a natural language interface tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning beyond the input schema by explaining that the single parameter 'question' should be a natural language request, and provides examples. Schema coverage is 100%, so baseline is 3. The description adds context about how the question is processed (tool selection, argument filling), justifying a higher score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to answer plain English questions using the best available data source. It distinguishes itself from siblings by emphasizing that it selects the right tool and fills arguments automatically, which is unique among the listed tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool: when you want to ask a question in plain English without browsing tools or learning schemas. It provides examples that clarify the range of queries, and implicitly advises against calling sibling tools directly when this tool can handle the routing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
  query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
  limit (optional): Maximum number of tools to return (default 20, max 50)
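The limit parameter has a documented default and ceiling, so a caller can clamp it client-side. A small sketch of building the argument payload under those schema-stated bounds (the helper name is hypothetical):

```python
def discover_args(query: str, limit: int = 20) -> dict:
    """Build a discover_tools argument dict, clamping limit to the
    range the schema documents (default 20, max 50)."""
    return {"query": query, "limit": max(1, min(limit, 50))}

# Example payload for a discovery query from the schema's own examples
args = discover_args("find trade data between countries", limit=5)
```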
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It discloses that the tool returns the most relevant tools with names and descriptions, and that it uses natural language queries. It does not specify whether the search is case-sensitive or note any performance implications, but the behavioral description is sufficient for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with clear front-loading: the first sentence states the core purpose, and the second provides usage guidance. Every word is meaningful, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema, the description does not explain the return format beyond 'names and descriptions', which is adequate for a search tool. It also provides important context about when to use it (500+ tools). It could be slightly improved by mentioning pagination or sorting, but overall it is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds context by explaining that the query is a 'Natural language description', but does not elaborate on the limit parameter beyond what the schema already provides. Thus, it meets the baseline without adding significant extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Search', 'Returns') and a clear resource ('Pipeworx tool catalog'). It distinguishes itself by instructing to call this tool first when many tools are available, setting it apart from other tools that perform different tasks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones'), which provides clear guidance. It implies the tool is for discovery rather than direct action, effectively guiding the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: A)

Delete a stored memory by key.

Parameters (JSON Schema)
  key (required): Memory key to delete
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the action is 'delete,' implying it is destructive, but does not mention irreversibility, permissions needed, or what happens if the key does not exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words, perfectly concise for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 required parameter, no output schema), the description is adequate but could mention return behavior (e.g., success confirmation) or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description adds no extra meaning beyond the schema. The parameter 'key' is described similarly in both, so the description provides no additional semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (delete) and the resource (stored memory by key), distinguishing it from sibling tools like recall and remember which are for reading or storing memories.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like recall or remember. The description does not mention prerequisites or when deletion is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

notion_get_database (Grade: B)

Get a Notion database schema by ID. Returns all properties, field types, and configuration to understand structure.

Parameters (JSON Schema)
  database_id (required): Notion database ID (UUID format, with or without dashes)
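The schema accepts IDs with or without dashes. A sketch of normalizing a bare 32-character ID into dashed form, assuming the standard 8-4-4-4-12 UUID grouping (that grouping is a general UUID convention, not something this page specifies):

```python
def normalize_notion_id(raw: str) -> str:
    """Return a Notion database or page ID in dashed UUID form.
    Accepts either the dashed or the bare 32-hex-character form."""
    s = raw.replace("-", "").lower()
    if len(s) != 32 or any(c not in "0123456789abcdef" for c in s):
        raise ValueError("expected a 32-character hex Notion ID")
    # Regroup as 8-4-4-4-12, the standard UUID layout
    return f"{s[:8]}-{s[8:12]}-{s[12:16]}-{s[16:20]}-{s[20:]}"
```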
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description states it returns schema, properties, and metadata but does not disclose behavioral traits like read-only nature, rate limits, or what happens if database not found.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that is clear and front-loaded with action and object. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With only one parameter and no output schema, the description adequately explains input and output, but lacks details on error handling or additional constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a good description for database_id. The description adds context that the ID is UUID format and flexible with dashes, which is helpful beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it gets a Notion database by ID and returns schema, properties, and metadata. Its verb and object implicitly separate it from siblings like notion_get_page (gets a page) and notion_query_database (queries a database), though it never draws the contrast explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives no explicit guidance on when to use this tool versus alternatives, but the purpose is clear: retrieving database metadata. Sibling names suggest other tools handle pages and queries, implying this one is for the database object itself.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

notion_get_page (Grade: A)

Get a Notion page by ID. Returns full properties, metadata, and content structure for reading or editing.

Parameters (JSON Schema)
  page_id (required): Notion page ID (UUID format, with or without dashes)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description carries full burden. It states the tool retrieves a page and returns properties/metadata, but does not disclose any behavioral traits such as rate limits, required permissions, or whether it supports archived pages. Adequate but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is two sentences, concise and front-loaded with the core action. No extraneous information, but could be more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description is adequate. However, it does not specify the return format or potential errors, which could be useful for an agent. Without annotations, slightly more detail would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter 'page_id', which has a description. The tool description adds no additional meaning beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it gets a Notion page by ID, returning properties and metadata. The verb 'Get' and resource 'page' are specific, distinguishing it from siblings like notion_get_database and notion_query_database.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description does not explicitly state when to use this tool vs alternatives like notion_search or notion_list_pages. It implies usage for retrieving a single page by ID, but lacks guidance on when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

notion_list_pages (Grade: A)

List all accessible pages in your Notion workspace. Returns titles and IDs to discover available content.

Parameters (JSON Schema)
  page_size (optional): Number of results to return (default 10, max 100)
  start_cursor (optional): Pagination cursor for next page of results
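The start_cursor parameter implies standard cursor pagination. A sketch of draining every page, assuming the tool's result mirrors Notion's usual results/next_cursor shape (this page shows no output schema, so that shape is an assumption) and that call_tool stands in for whatever MCP client invocation you use:

```python
def list_all_pages(call_tool, page_size: int = 100) -> list:
    """Follow start_cursor until the server stops returning one.
    Assumes each result carries "results" and an optional "next_cursor";
    adjust to the server's real output shape."""
    pages, cursor = [], None
    while True:
        args = {"page_size": page_size}
        if cursor is not None:
            args["start_cursor"] = cursor
        result = call_tool("notion_list_pages", args)
        pages.extend(result.get("results", []))
        cursor = result.get("next_cursor")
        if not cursor:
            return pages
```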
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries full burden. It states it uses search with a page filter, which hints at behavior, but does not disclose limitations like potential missing pages due to integration access or search indexing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, clear and to the point. Could be slightly more concise by removing 'Uses search with page filter' if that is already implied by the tool name, but it is still efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with 2 parameters and no output schema, the description is reasonably complete. It explains the scope (accessible pages) and method (search with page filter). No further details are needed given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both page_size and start_cursor have descriptions). The description adds no extra parameter meaning beyond what the schema already provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool lists pages the integration has access to, using search with a page filter. This clearly distinguishes it from sibling tools like notion_get_page (single page by ID), notion_query_database (queries a specific database), and notion_search (general search).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when you need to list all accessible pages, but does not explicitly state when not to use it or mention alternatives like notion_query_database for database-scoped queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

notion_query_database (Grade: A)

Query a Notion database with filters and sorting (e.g., status='Done', sort by date). Returns matching rows with property values.

Parameters (JSON Schema)
  database_id (required): Notion database ID to query
  filter (optional): Notion filter object (e.g., { "property": "Status", "select": { "equals": "Done" } })
  sorts (optional): Array of sort objects (e.g., [{ "property": "Created", "direction": "descending" }])
  page_size (optional): Number of results to return (default 10, max 100)
  start_cursor (optional): Pagination cursor for next page of results
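The filter and sorts examples in the schema follow Notion's query syntax. A sketch of a complete argument payload combining them (the database ID is a made-up placeholder):

```python
# Hypothetical notion_query_database arguments: rows with Status = "Done",
# newest first. Filter/sort shapes are taken from the schema's examples.
arguments = {
    "database_id": "d9824bdc-8445-4327-be8b-5b47500af6ce",  # placeholder
    "filter": {"property": "Status", "select": {"equals": "Done"}},
    "sorts": [{"property": "Created", "direction": "descending"}],
    "page_size": 10,  # schema default; max 100
}
```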
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the burden of behavioral disclosure. It does not mention side effects (likely read-only), rate limits, authentication needs, or error conditions. However, 'query' suggests a read operation, and the description is consistent with that. It does not add rich behavioral context beyond the obvious, so a 3 is appropriate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded, efficiently communicating the core purpose. However, it loses a point for omitting important usage details, such as pagination and the default page size, that could have been added without much extra length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the moderate complexity (5 params, nested objects), the description is minimal. It covers the basic purpose but lacks details on pagination (start_cursor, page_size defaults), response format (no output schema), and error handling. For a query tool with no annotations and no output schema, more context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already describes all parameters well. The description adds no extra meaning beyond summarizing the tool's purpose. Baseline 3 is correct because the schema does the heavy lifting; the description does not compensate with additional context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool queries a Notion database with optional filters and sorts, and returns matching pages/rows. It uses a specific verb ('query') and resource ('Notion database'), and the optionality of filters/sorts is explicit. The tool name 'notion_query_database' is distinct from siblings like 'notion_search' or 'notion_list_pages', and the description reinforces this by specifying filters and sorts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for querying a database with filters and sorts, but does not explicitly state when to use this tool versus alternatives like 'notion_list_pages' or 'notion_search'. No guidance is given on when not to use it or what scenarios favor other tools. With siblings that perform similar functions, this lack of differentiation is a gap.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
  key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that the tool can list all keys when key is omitted, which is a key behavioral trait. However, it doesn't mention what happens if key doesn't exist (error vs. null).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences that front-load the primary action. Every word adds value; no wasted space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with a single optional parameter and no output schema, the description is complete. It explains both modes of use (by key or listing). Minor gap: behavior on missing key.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the single parameter 'key' is already described in the schema. The description adds no additional semantics beyond what the schema provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It distinguishes itself from sibling tools like 'remember' and 'forget' by focusing on retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use the tool ('to retrieve context you saved earlier') and implies when not to (omit key to list all). It contrasts with 'remember' and 'forget' via the tool names and context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
  value (required): Value to store (any text: findings, addresses, preferences, notes)
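Together with recall and forget, this tool forms a store/read/delete lifecycle over string keys. A minimal in-process stand-in illustrates the semantics (real calls go through the MCP server; the descriptions do not document overwrite or missing-key behavior, so the dict defaults below are assumptions):

```python
# In-memory stand-in for the server's session store.
store = {}

def remember(key, value):
    store[key] = value          # silently overwriting is an assumption

def recall(key=None):
    if key is None:             # omit key to list all stored memories
        return dict(store)
    return store.get(key)       # None for a missing key is an assumption

def forget(key):
    store.pop(key, None)        # ignoring a missing key is an assumption

remember("target_ticker", "AAPL")
assert recall("target_ticker") == "AAPL"
forget("target_ticker")
assert recall() == {}
```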
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses memory persistence (authenticated vs 24-hour) and that it's session-based. Could add more about storage limits or overwrite behavior, but still strong.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states purpose, second explains when and for whom. No wasted words, front-loaded with core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and simple key-value structure, description covers what agent needs: what it does, when to use, and behavioral differences. Lacks mention of overwrite behavior or value size limits, but sufficient for a simple memory tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already provides clear descriptions for both parameters (key with examples, value with purpose). Description reinforces usage context without repeating schema, adding value by framing as session memory.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'store' and resource 'key-value pair in session memory'. Distinguishes from siblings like 'recall' (retrieval) and 'forget' (deletion).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use this to save intermediate findings, user preferences, or context across tool calls', and notes persistence differences for authenticated vs anonymous users.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
