smithery-notion
Server Details
A Notion workspace is a collaborative environment where teams can organize work, manage projects,…
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: smithery-ai/mcp-servers
- GitHub Stars: 95
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
10 tools

append_blocks (Quality: C)
Append blocks to a page
| Name | Required | Description | Default |
|---|---|---|---|
| blocks | Yes | Array of block objects to append | |
| page_id | Yes | ID of the page | |
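As a rough sketch of what a call's arguments might look like based on the table above (the IDs and the block object shape are hypothetical; the server's schema is authoritative):

```python
# Hypothetical arguments for an append_blocks call. The block object
# shape below is an assumption, not the server's documented format.
append_blocks_args = {
    "page_id": "hypothetical-page-id",  # required: ID of the page
    "blocks": [                         # required: block objects to append
        {"type": "paragraph", "text": "Example content"},
    ],
}

# Both parameters are required by the schema.
required = {"page_id", "blocks"}
assert required <= set(append_blocks_args)
```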
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to state whether this is a destructive operation, if it requires specific permissions, or what the append semantics are (end of page only?). It does not address idempotency or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely brief at five words. While efficient without filler, it is arguably underspecified rather than concise; however, the single sentence structure is appropriately front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a content-modification tool with no output schema or annotations, the description is insufficient. It omits return value details, side effects, block structure specifics, and error handling that would be necessary for confident agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, documenting 'page_id' and 'blocks' adequately. The description adds no semantic depth beyond the schema (e.g., block object structure, ID format), meriting the baseline score for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic verb (append) and target (blocks to a page), avoiding complete tautology. However, it fails to differentiate from sibling 'update_page' or clarify what constitutes a 'block' in this context, leaving the scope vague.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'update_page' or 'create_page'. No mention of prerequisites, positioning (end vs. insertion), or workflow context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_database (Quality: C)
Create a new database
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Title of the database | |
| parent_id | Yes | ID of the parent page | |
| properties | Yes | Database properties schema | |
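A sketch of a create_database call, assuming hypothetical IDs and a guessed shape for the properties schema (the table only says "Database properties schema"):

```python
# Hypothetical arguments for create_database. The properties schema
# shape is an assumption; consult the server's schema for the real format.
create_database_args = {
    "title": "Project Tracker",           # required: title of the database
    "parent_id": "hypothetical-page-id",  # required: ID of the parent page
    "properties": {                       # required: properties schema (assumed shape)
        "Name": {"type": "title"},
        "Status": {"type": "select"},
    },
}
assert {"title", "parent_id", "properties"} <= set(create_database_args)
```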
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention mutation side effects, idempotency behavior, error conditions (e.g., duplicate titles), or permission requirements beyond the implied 'Create' verb.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief at four words, but lacks structural value as it contains no actionable guidance or distinguishing information beyond the tool name itself.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with three required parameters—including a complex nested properties object with additionalProperties—no annotations, and no output schema, the description is inadequate as it fails to explain the parent-child relationship or the expected structure of the properties schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters (title, parent_id, properties). The description adds no additional semantic context (e.g., that parent_id refers to a page, or that properties defines the database schema), warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create a new database' restates the tool name (create_database) without distinguishing from siblings like list_databases or get_database, fitting the definition of tautology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as list_databases, nor does it mention prerequisites like the parent_id requirement or when creation is preferred over querying existing databases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_page (Quality: D)
Create a new page
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | Title of the page | |
| content | No | Content in markdown format | |
| parent_id | Yes | ID of the parent page or database | |
| properties | No | Page properties (required for database pages) | |
| parent_type | Yes | Type of parent (database or page) | |
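The conditional requirement noted in the table (properties is required only for database pages) can be sketched as follows; all IDs and property shapes here are hypothetical:

```python
# Hypothetical arguments for create_page. Per the table, parent_id and
# parent_type are required; properties is required only when the parent
# is a database. IDs and property shapes are made up for illustration.
create_page_args = {
    "parent_id": "hypothetical-database-id",
    "parent_type": "database",                # "database" or "page"
    "title": "Weekly notes",
    "content": "# Heading\nBody text",        # markdown content
    "properties": {"Status": "In progress"},  # required for database parents
}

# A cautious caller checks the conditional requirement before sending.
if create_page_args["parent_type"] == "database":
    assert "properties" in create_page_args
```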
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not disclose whether the operation is idempotent, what happens if the parent_id doesn't exist, what the tool returns (page ID? full object?), or any side effects like notifications or permissions changes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While only four words long, this represents under-specification rather than effective conciseness. The single sentence fails to earn its place by providing only tautological information that duplicates the tool name, wasting the opportunity to front-load critical context about the page hierarchy or parent requirements.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters (including nested objects and conditional requirements), no annotations, and no output schema, the description is completely inadequate. It fails to explain the relationship between parent_type and parent_id, the requirement for properties when creating database pages, or the expected behavior of the created page in the system.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 5 parameters including the markdown format for content and the database/page enum for parent_type. The description adds no parameter semantics beyond the schema, but the schema is sufficiently descriptive to meet baseline expectations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create a new page' is a tautology that restates the tool name (create_page). While it identifies the core verb and resource, it fails to distinguish from sibling tool 'create_database' or specify what kind of page system this operates within (implied by schema to be a database/page hierarchy but not stated explicitly).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'append_blocks' (when to create new vs. add to existing) or 'create_database'. No mention of prerequisites such as needing an existing parent page/database ID, or that 'properties' is required when parent_type is 'database'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_blocks (Quality: C)
Delete blocks from a page
| Name | Required | Description | Default |
|---|---|---|---|
| block_id | Yes | ID of the block to delete | |
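Despite the plural tool name, the schema takes a single block ID, so a batch delete presumably means one call per block. A minimal sketch, with a hypothetical ID:

```python
# Hypothetical arguments for delete_blocks: the schema accepts exactly
# one block_id, so deleting several blocks means repeating the call.
delete_blocks_args = {"block_id": "hypothetical-block-id"}
assert set(delete_blocks_args) == {"block_id"}
```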
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. It fails to mention that deletion is destructive/permanent, whether child blocks are recursively deleted, or error conditions. Additionally, the description uses plural 'blocks' while the schema only accepts a single block_id, creating ambiguity about batch deletion capability.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (5 words) and front-loaded, but for a destructive operation with no annotations, this brevity constitutes under-specification rather than efficient communication. Every word earns its place, but critical information is missing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a destructive mutation tool with no annotations and no output schema, the description is inadequate. It omits permanence warnings, return value information (success/failure indicators), and the aforementioned plural/singular discrepancy leaves the actual capability ambiguous.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal value regarding the block_id parameter itself, though 'from a page' provides ownership context. The plural 'blocks' in the description slightly conflicts with the singular block_id parameter in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Delete') and resource ('blocks') and provides context ('from a page'). However, it fails to distinguish from the sibling tool 'append_blocks' or clarify when deletion is preferred over updating.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternatives, prerequisites (e.g., permissions), or warnings about permanent deletion. The description stands alone without operational context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_database (Quality: C)
Retrieve a database by ID
| Name | Required | Description | Default |
|---|---|---|---|
| database_id | Yes | ID of the database to retrieve | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. While 'Retrieve' implies a read operation, the description lacks disclosure of error behavior (what happens if ID invalid?), idempotency, auth requirements, or return structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient at 5 words with zero redundancy. Every word earns its place. However, the brevity approaches under-specification given the lack of annotations and output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple single-parameter getter, but significant gaps remain: no return value description, no error handling context, and no differentiation from query_database despite the sibling relationship.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the parameter 'database_id' already described as 'ID of the database to retrieve'. The description adds no parameter syntax details or examples, but baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Retrieve'), resource ('database'), and scope ('by ID'). The 'by ID' phrasing implicitly distinguishes this from sibling tools like list_databases and query_database, though it doesn't explicitly name them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like query_database (for filtering) or list_databases (for enumeration). No prerequisites or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_page (Quality: C)
Retrieve a page by ID
| Name | Required | Description | Default |
|---|---|---|---|
| page_id | Yes | ID of the page to retrieve | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Retrieve' implies a read-only operation, the description fails to specify error behavior (e.g., what happens if the ID doesn't exist), authentication requirements, rate limits, or the structure of the returned page data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at five words with no filler. However, given the lack of annotations and output schema, this brevity leaves critical gaps in documentation, suggesting it may be undersized for proper tool invocation despite not being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool lacks both annotations and an output schema, yet the description doesn't compensate by describing the return format, page structure, or error states. For a retrieval tool, omitting what gets returned represents a significant completeness gap despite the simple single-parameter input.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage ('ID of the page to retrieve'), so the baseline is 3. The description implies the ID parameter with 'by ID' but adds no additional semantic context—such as ID format or where to obtain it—beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Retrieve') and resource ('page') and identifies the lookup method ('by ID'), which distinguishes it from search-based siblings like 'search'. However, it doesn't clarify the difference between this and 'get_database' beyond the resource name itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'search' (which finds pages by query) or 'get_database'. It omits prerequisites such as needing a valid page ID from prior calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_databases (Quality: B)
List all accessible databases
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation and mentions 'accessible' to hint at permission scoping, but lacks details on pagination limits, rate limiting, return structure, or what determines database accessibility.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four words long with zero redundancy. It is front-loaded with the action verb and every word serves a purpose. The brevity is appropriate for a zero-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (no parameters, simple listing operation) and lack of output schema, the description is minimally adequate. However, it could improve by clarifying the return format or pagination behavior since no output schema exists to document these aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description does not need to compensate for missing schema documentation since there are no parameters to document.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') and resource ('databases') to clearly define the scope. However, it does not explicitly differentiate from siblings like 'get_database' (singular retrieval) or 'query_database' (content querying), though the plural form implies bulk retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_database', 'query_database', or 'search'. There are no stated prerequisites, exclusions, or conditions for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
query_database (Quality: C)
Query a database with filters and sorting
| Name | Required | Description | Default |
|---|---|---|---|
| sorts | No | Sorting parameters | |
| filter | No | Filter conditions | |
| page_size | No | Number of results per page | |
| database_id | Yes | ID of the database to query | |
| start_cursor | No | Pagination cursor | |
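A sketch of a query_database call; only database_id is required, and the filter and sorts shapes below are assumptions (the schema labels them only "Filter conditions" and "Sorting parameters"):

```python
# Hypothetical arguments for query_database. The filter and sorts
# shapes are assumed for illustration; the ID is made up. start_cursor
# is omitted here because it comes from a previous page of results.
query_database_args = {
    "database_id": "hypothetical-database-id",  # required
    "filter": {"property": "Status", "equals": "Done"},        # assumed shape
    "sorts": [{"property": "Due", "direction": "ascending"}],  # assumed shape
    "page_size": 25,
}
assert "database_id" in query_database_args
```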
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. While it mentions filtering and sorting capabilities, it omits critical behavioral details: read-only status, pagination mechanics (despite 'start_cursor' parameter), result limits, or what the query returns (records, pages, metadata).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The seven-word description is efficiently structured and front-loaded with the verb, but it is inappropriately concise given the tool's complexity (5 parameters, pagination support, nested objects). It leaves critical behavioral and contextual information to inference.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters including pagination cursors and nested filter objects, the description is inadequate. With no output schema provided, the description fails to indicate what data structure is returned or how pagination behaves, leaving significant gaps in the agent's understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description references 'filters and sorting' which maps to the filter and sorts parameters, but adds no semantic detail beyond the schema's 'Filter conditions' and 'Sorting parameters' descriptions. No compensation needed for the complex nested filter object structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (query), resource (database), and key capabilities (filters, sorting). However, it fails to distinguish from the sibling 'search' tool, which likely performs similar retrieval functions but with different semantics (global vs. structured).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions available features (filters, sorting) but provides no guidance on when to use this tool versus alternatives like 'search' or 'get_database'. No prerequisites or exclusions are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Quality: C)
Search pages and databases
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort by last edited or created time | |
| query | Yes | Search query | |
| filter | No | Filter by object type (page or database) | |
| page_size | No | Number of results per page | |
| start_cursor | No | Pagination cursor | |
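A sketch of a search call; only query is required, and the filter and sort shapes are assumptions inferred from the column descriptions:

```python
# Hypothetical arguments for search. Only query is required; the
# filter and sort shapes below are assumptions, not documented formats.
search_args = {
    "query": "roadmap",                          # required search text
    "filter": {"object": "page"},                # assumed: restrict to pages
    "sort": {"timestamp": "last_edited_time"},   # assumed shape
    "page_size": 10,
}
assert "query" in search_args
```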
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but discloses nothing about behavioral traits: pagination behavior, rate limits, whether results are sorted by relevance, or if the operation is read-only (implied but not stated).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at four words with no redundancy. However, given the tool complexity (5 params, nested objects, pagination), it is insufficiently descriptive rather than appropriately lean.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with nested filter/sort objects (additionalProperties: {}), no output schema, and no annotations, the description is incomplete. It fails to explain valid filter values, sorting options, or return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with all five parameters documented (query, filter, sort, page_size, start_cursor). The description adds no parameter-specific guidance, meeting the baseline for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the verb (search) and target resources (pages and databases), distinguishing it from siblings like get_page or list_databases. However, it fails to specify the search semantics (e.g., full-text vs title-only, case sensitivity), keeping it at minimum viable clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus query_database (which searches within a specific database) or list_databases (which lists all). The agent cannot determine if search is the right tool for filtering by object type versus using the filter parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_page (Quality: C)
Update an existing page
| Name | Required | Description | Default |
|---|---|---|---|
| page_id | Yes | ID of the page to update | |
| properties | Yes | Updated page properties | |
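A sketch of an update_page call, with a hypothetical ID and an assumed property shape. Since the description does not say whether properties are merged or replaced, a cautious caller sends only the fields it intends to change:

```python
# Hypothetical arguments for update_page. Both parameters are required.
# Merge-vs-replace semantics are undocumented, so only the fields being
# changed are included here.
update_page_args = {
    "page_id": "hypothetical-page-id",
    "properties": {"Status": "Done"},  # assumed property shape
}
assert set(update_page_args) == {"page_id", "properties"}
```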
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. While 'Update' implies mutation, it doesn't specify if this is a partial update (merging properties) or full replacement, doesn't mention idempotency, permissions required, or side effects. Significant gap for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse (4 words) but under-specified rather than elegantly concise. No structure issues, but the brevity wastes opportunity to provide necessary behavioral context. Neither verbose nor effectively structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with nested object parameters (properties with additionalProperties), the description is insufficient. It doesn't explain what page properties are valid, update semantics, or expected behavior given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with clear parameter descriptions ('ID of the page to update', 'Updated page properties'). The description adds no semantic value beyond the schema, but meets the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update an existing page' is tautological, restating the tool name 'update_page' with minimal addition ('existing'). It fails to distinguish from siblings like 'append_blocks' or 'create_page' regarding content modification scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus 'append_blocks' (for adding content) or 'create_page' (for new pages). No prerequisites or error conditions mentioned (e.g., what happens if page_id doesn't exist).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.