Glama

Server Details

A Notion workspace is a collaborative environment where teams can organize work, manage projects,…

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: smithery-ai/mcp-servers
GitHub Stars: 95

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

10 tools
append_blocks (Grade: C)

Append blocks to a page

Parameters (JSON Schema)

Name    | Required | Description                      | Default
blocks  | Yes      | Array of block objects to append |
page_id | Yes      | ID of the page                   |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to state whether this is a destructive operation, if it requires specific permissions, or what the append semantics are (end of page only?). It does not address idempotency or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely brief at five words. While efficient and free of filler, it is arguably underspecified rather than concise; still, its single-sentence structure is appropriately front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a content-modification tool with no output schema or annotations, the description is insufficient. It omits return value details, side effects, block structure specifics, and error handling that would be necessary for confident agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, documenting 'page_id' and 'blocks' adequately. The description adds no semantic depth beyond the schema (e.g., block object structure, ID format), meriting the baseline score for well-documented schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic verb (append) and target (blocks to a page), avoiding complete tautology. However, it fails to differentiate from sibling 'update_page' or clarify what constitutes a 'block' in this context, leaving the scope vague.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like 'update_page' or 'create_page'. No mention of prerequisites, positioning (end vs. insertion), or workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
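Taken together, the review's suggestions point toward a fuller definition. The following is a hypothetical sketch, as plain Python data, of what an append_blocks description covering behavior, purpose, and usage guidance might look like; the wording is illustrative, not the server's actual text:

```python
# Hypothetical rewrite of the append_blocks tool definition, folding in
# the behavioral and usage details the review asks for. Illustrative only.
append_blocks_tool = {
    "name": "append_blocks",
    "description": (
        "Append an array of block objects to the end of an existing page. "
        "Additive and non-destructive: existing content is never modified. "
        "Requires write access to the page. Use update_page to change page "
        "properties, or delete_blocks to remove content."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "page_id": {"type": "string", "description": "ID of the page"},
            "blocks": {
                "type": "array",
                "description": "Array of block objects to append",
            },
        },
        "required": ["page_id", "blocks"],
    },
}
```

One added sentence each for side effects, permissions, and sibling tools would address most of the gaps flagged above.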

create_database (Grade: C)

Create a new database

Parameters (JSON Schema)

Name       | Required | Description                | Default
title      | Yes      | Title of the database      |
parent_id  | Yes      | ID of the parent page      |
properties | Yes      | Database properties schema |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention mutation side effects, idempotency behavior, error conditions (e.g., duplicate titles), or permission requirements beyond the implied 'Create' verb.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief at four words, but it adds no actionable guidance or distinguishing information beyond the tool name itself.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a creation tool with three required parameters—including a complex nested properties object with additionalProperties—no annotations, and no output schema, the description is inadequate as it fails to explain the parent-child relationship or the expected structure of the properties schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters (title, parent_id, properties). The description adds no additional semantic context (e.g., that parent_id refers to a page, or that properties defines the database schema), warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Create a new database' restates the tool name (create_database) without distinguishing from siblings like list_databases or get_database, fitting the definition of tautology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as list_databases, nor does it mention prerequisites like the parent_id requirement or when creation is preferred over querying existing databases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_page (Grade: D)

Create a new page

Parameters (JSON Schema)

Name        | Required | Description                                   | Default
title       | No       | Title of the page                             |
content     | No       | Content in markdown format                    |
parent_id   | Yes      | ID of the parent page or database             |
properties  | No       | Page properties (required for database pages) |
parent_type | Yes      | Type of parent (database or page)             |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not disclose whether the operation is idempotent, what happens if the parent_id doesn't exist, what the tool returns (page ID? full object?), or any side effects like notifications or permissions changes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While only four words long, this represents under-specification rather than effective conciseness. The single sentence fails to earn its place by providing only tautological information that duplicates the tool name, wasting the opportunity to front-load critical context about the page hierarchy or parent requirements.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters (including nested objects and conditional requirements), no annotations, and no output schema, the description is completely inadequate. It fails to explain the relationship between parent_type and parent_id, the requirement for properties when creating database pages, or the expected behavior of the created page in the system.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting all 5 parameters including the markdown format for content and the database/page enum for parent_type. The description adds no parameter semantics beyond the schema, but the schema is sufficiently descriptive to meet baseline expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Create a new page' is a tautology that restates the tool name (create_page). While it identifies the core verb and resource, it fails to distinguish from sibling tool 'create_database' or specify what kind of page system this operates within (implied by schema to be a database/page hierarchy but not stated explicitly).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like 'append_blocks' (when to create new vs. add to existing) or 'create_database'. No mention of prerequisites such as needing an existing parent page/database ID, or that 'properties' is required when parent_type is 'database'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
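The conditional rule the review highlights ('properties' is required only when parent_type is 'database') can be enforced client-side before the call. A minimal pre-flight check, assuming only the parameter names shown in the schema above (validate_create_page_args is a hypothetical helper, not part of the server):

```python
# Hypothetical pre-flight validation for create_page arguments, encoding
# the conditional requirement the schema implies: 'properties' is required
# when parent_type is 'database'.
def validate_create_page_args(args: dict) -> list[str]:
    errors = []
    if not args.get("parent_id"):
        errors.append("parent_id is required")
    if args.get("parent_type") not in ("page", "database"):
        errors.append("parent_type must be 'page' or 'database'")
    if args.get("parent_type") == "database" and not args.get("properties"):
        errors.append("properties is required when parent_type is 'database'")
    return errors
```

An empty list means the call is at least structurally safe to attempt; anything else should be fixed before invoking the tool.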

delete_blocks (Grade: C)

Delete blocks from a page

Parameters (JSON Schema)

Name     | Required | Description               | Default
block_id | Yes      | ID of the block to delete |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It fails to mention that deletion is destructive/permanent, whether child blocks are recursively deleted, or error conditions. Additionally, the description uses plural 'blocks' while the schema only accepts a single block_id, creating ambiguity about batch deletion capability.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief (five words) and front-loaded, but for a destructive operation with no annotations, this brevity constitutes under-specification rather than efficient communication. Every word earns its place, yet critical information is missing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a destructive mutation tool with no annotations and no output schema, the description is inadequate. It omits permanence warnings, return value information (success/failure indicators), and the aforementioned plural/singular discrepancy leaves the actual capability ambiguous.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal value regarding the block_id parameter itself, though 'from a page' provides ownership context. The plural 'blocks' in the description slightly conflicts with the singular block_id parameter in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Delete') and resource ('blocks') and provides context ('from a page'). However, it fails to distinguish from the sibling tool 'append_blocks' or clarify when deletion is preferred over updating.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus alternatives, prerequisites (e.g., permissions), or warnings about permanent deletion. The description stands alone without operational context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_database (Grade: C)

Retrieve a database by ID

Parameters (JSON Schema)

Name        | Required | Description                    | Default
database_id | Yes      | ID of the database to retrieve |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. While 'Retrieve' implies a read operation, the description lacks disclosure of error behavior (what happens if ID invalid?), idempotency, auth requirements, or return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient at five words with zero redundancy. Every word earns its place. However, the brevity approaches under-specification given the lack of annotations and output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple single-parameter getter, but significant gaps remain: no return value description, no error handling context, and no differentiation from query_database despite the sibling relationship.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the parameter 'database_id' already described as 'ID of the database to retrieve'. The description adds no parameter syntax details or examples, but baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb ('Retrieve'), resource ('database'), and scope ('by ID'). The 'by ID' phrasing implicitly distinguishes this from sibling tools like list_databases and query_database, though it doesn't explicitly name them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like query_database (for filtering) or list_databases (for enumeration). No prerequisites or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
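The "use X versus Y" guidance the review asks for could be as short as one added sentence. A hypothetical example (illustrative wording, not the server's text):

```python
# Hypothetical usage-guidance sentence for the get_database description,
# distinguishing it from its sibling tools by role.
GET_DATABASE_GUIDANCE = (
    "Use get_database to fetch the schema and metadata of a known database "
    "by ID; use list_databases to discover available databases; use "
    "query_database to retrieve rows with filters and sorting."
)
```

Appending a sentence like this would likely lift the Usage Guidelines score with minimal token cost.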

get_page (Grade: C)

Retrieve a page by ID

Parameters (JSON Schema)

Name    | Required | Description                | Default
page_id | Yes      | ID of the page to retrieve |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Retrieve' implies a read-only operation, the description fails to specify error behavior (e.g., what happens if the ID doesn't exist), authentication requirements, rate limits, or the structure of the returned page data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at five words with no filler. However, given the lack of annotations and output schema, this brevity leaves critical gaps in documentation, suggesting it may be undersized for proper tool invocation despite not being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool lacks both annotations and an output schema, yet the description doesn't compensate by describing the return format, page structure, or error states. For a retrieval tool, omitting what gets returned represents a significant completeness gap despite the simple single-parameter input.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage ('ID of the page to retrieve'), so the baseline is 3. The description implies the ID parameter with 'by ID' but adds no additional semantic context—such as ID format or where to obtain it—beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Retrieve') and resource ('page') and identifies the lookup method ('by ID'), which distinguishes it from search-based siblings like 'search'. However, it doesn't clarify the difference between this and 'get_database' beyond the resource name itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search' (which finds pages by query) or 'get_database'. It omits prerequisites such as needing a valid page ID from prior calls.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_databases (Grade: B)

List all accessible databases

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation and mentions 'accessible' to hint at permission scoping, but lacks details on pagination limits, rate limiting, return structure, or what determines database accessibility.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four words long with zero redundancy. It is front-loaded with the action verb and every word serves a purpose. The brevity is appropriate for a zero-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (no parameters, simple listing operation) and lack of output schema, the description is minimally adequate. However, it could improve by clarifying the return format or pagination behavior since no output schema exists to document these aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline score of 4. The description does not need to compensate for missing schema documentation since there are no parameters to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') and resource ('databases') to clearly define the scope. However, it does not explicitly differentiate from siblings like 'get_database' (singular retrieval) or 'query_database' (content querying), though the plural form implies bulk retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_database', 'query_database', or 'search'. There are no stated prerequisites, exclusions, or conditions for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_database (Grade: C)

Query a database with filters and sorting

Parameters (JSON Schema)

Name         | Required | Description                 | Default
sorts        | No       | Sorting parameters          |
filter       | No       | Filter conditions           |
page_size    | No       | Number of results per page  |
database_id  | Yes      | ID of the database to query |
start_cursor | No       | Pagination cursor           |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. While it mentions filtering and sorting capabilities, it omits critical behavioral details: read-only status, pagination mechanics (despite 'start_cursor' parameter), result limits, or what the query returns (records, pages, metadata).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The seven-word description is efficiently structured and front-loaded with the verb, but it is inappropriately concise given the tool's complexity (5 parameters, pagination support, nested objects). It leaves critical behavioral and contextual information to inference.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters including pagination cursors and nested filter objects, the description is inadequate. With no output schema provided, the description fails to indicate what data structure is returned or how pagination behaves, leaving significant gaps in the agent's understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description references 'filters and sorting' which maps to the filter and sorts parameters, but adds no semantic detail beyond the schema's 'Filter conditions' and 'Sorting parameters' descriptions. No compensation needed for the complex nested filter object structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (query), resource (database), and key capabilities (filters, sorting). However, it fails to distinguish from the sibling 'search' tool, which likely performs similar retrieval functions but with different semantics (global vs. structured).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions available features (filters, sorting) but provides no guidance on when to use this tool versus alternatives like 'search' or 'get_database'. No prerequisites or exclusions are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
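Because the description leaves pagination mechanics to inference, an agent must guess how start_cursor and page_size interact. One plausible loop, under stated assumptions:

```python
# Sketch of cursor pagination for query_database. The call_tool wrapper and
# the response fields ("results", "next_cursor") are assumptions modeled on
# typical cursor-based APIs; the page does not document the actual output.
def query_all(call_tool, database_id, filter=None, page_size=100):
    results, cursor = [], None
    while True:
        args = {"database_id": database_id, "page_size": page_size}
        if filter is not None:
            args["filter"] = filter
        if cursor is not None:
            args["start_cursor"] = cursor  # resume from the previous page
        page = call_tool("query_database", args)
        results.extend(page.get("results", []))
        cursor = page.get("next_cursor")
        if not cursor:  # no cursor returned: last page reached
            return results
```

Documenting exactly this contract (what is returned, and how to detect the last page) in the tool description would remove the guesswork.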

update_page (Grade: C)

Update an existing page

Parameters (JSON Schema)

Name       | Required | Description              | Default
page_id    | Yes      | ID of the page to update |
properties | Yes      | Updated page properties  |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. While 'Update' implies mutation, it doesn't specify if this is a partial update (merging properties) or full replacement, doesn't mention idempotency, permissions required, or side effects. Significant gap for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely terse (4 words) but under-specified rather than elegantly concise. No structure issues, but the brevity wastes opportunity to provide necessary behavioral context. Neither verbose nor effectively structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with nested object parameters (properties with additionalProperties), the description is insufficient. It doesn't explain what page properties are valid, update semantics, or expected behavior given the lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear parameter descriptions ('ID of the page to update', 'Updated page properties'). The description adds no semantic value beyond the schema, but meets the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Update an existing page' is tautological, restating the tool name 'update_page' with minimal addition ('existing'). It fails to distinguish from siblings like 'append_blocks' or 'create_page' regarding content modification scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus 'append_blocks' (for adding content) or 'create_page' (for new pages). No prerequisites or error conditions mentioned (e.g., what happens if page_id doesn't exist).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

