Revit MCP
Server Details
Revit model integration via APS — elements, parameters, schedules, clashes, IFC export.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3/5 across all 8 tools.
Each tool has a clearly distinct purpose targeting specific Revit/APS operations: clash detection, IFC export, element retrieval, parameter extraction, sheet listing, view listing, schedule execution, and file upload/translation. No ambiguity exists between tools as they handle non-overlapping workflows.
All tools follow a consistent 'revit_verb_noun' snake_case pattern (e.g., revit_get_elements, revit_export_ifc, revit_upload). The naming convention is perfectly uniform across all eight tools, making them predictable and easily parsable.
With 8 tools, the server is well-scoped for its domain of Revit model interaction and APS translation. Each tool earns its place by covering essential operations like data extraction, export, upload, and analysis without bloat or redundancy.
The toolset provides strong coverage for core Revit/APS workflows including upload/translation, data retrieval (elements, parameters, sheets, views), export (IFC), and analysis (clash detection, schedules). Minor gaps might include update/delete operations or more advanced model manipulation, but agents can work effectively with the provided surface.
Available Tools
8 tools

revit_clash_detect (Grade: C)
Detect spatial clashes between two categories using bounding box overlap analysis + D1 VDC rules
| Name | Required | Description | Default |
|---|---|---|---|
| model_id | Yes | Base64-encoded URN | |
| category_a | Yes | First category (e.g. 'Mechanical Equipment') | |
| category_b | Yes | Second category (e.g. 'Structural Framing') | |
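Both category parameters are plain Revit category names, while model_id must be a base64-encoded APS URN. As a rough sketch, assuming URL-safe base64 with padding stripped (a common APS convention, not confirmed by this listing) and a hypothetical URN, the call arguments could be assembled like this:

```python
import base64

def encode_model_urn(urn):
    """Base64-encode an APS URN for use as model_id.

    URL-safe alphabet with padding stripped is an assumption here;
    check what this server actually expects.
    """
    return base64.urlsafe_b64encode(urn.encode()).decode().rstrip("=")

# Hypothetical URN, for illustration only.
urn = "urn:adsk.objects:os.object:my-bucket/MyBuilding.rvt"

arguments = {
    "model_id": encode_model_urn(urn),
    "category_a": "Mechanical Equipment",
    "category_b": "Structural Framing",
}
```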
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the analysis method but doesn't describe what the tool returns (e.g., clash report, list of elements), performance characteristics (e.g., processing time, memory usage), or side effects (e.g., whether it modifies the model). For a detection tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Detect spatial clashes between two categories') and adds technical specifics ('using bounding box overlap analysis + D1 VDC rules'). Every word contributes to understanding the tool's function without redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (clash detection with specific rules), lack of annotations, and no output schema, the description is incomplete. It doesn't explain what the output looks like (e.g., a report, JSON list), how clashes are reported, or any limitations (e.g., only works with certain model types). For a tool with no structured output information, more descriptive context is needed to be fully usable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters (model_id, category_a, category_b) with clear descriptions. The description adds no additional parameter information beyond what the schema provides, such as format details for categories or examples of D1 VDC rules. Baseline 3 is appropriate when the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Detect' and the resource 'spatial clashes between two categories', specifying the analysis method 'bounding box overlap analysis + D1 VDC rules'. It distinguishes from siblings like export_ifc or get_elements by focusing on clash detection rather than data retrieval or export operations. However, it doesn't explicitly differentiate from potential similar tools not in the sibling list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a loaded model), exclusions (e.g., not for non-spatial data), or suggest other tools for related tasks like revit_get_elements for element inspection. Usage is implied through the action described but lacks explicit context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revit_export_ifc (Grade: C)
Start IFC export translation job for a model
| Name | Required | Description | Default |
|---|---|---|---|
| model_id | Yes | Base64-encoded URN | |
| include_properties | No | Include property sets in IFC output | |
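Since include_properties is optional and the listing documents no default, a conservative client can omit the key when unset and let the server decide. A minimal sketch (the helper name is ours, not part of the server):

```python
def build_export_ifc_args(model_id, include_properties=None):
    """Build the arguments dict for revit_export_ifc.

    include_properties is only sent when the caller sets it explicitly,
    so the server's own (undocumented) default applies otherwise.
    """
    args = {"model_id": model_id}
    if include_properties is not None:
        args["include_properties"] = include_properties
    return args
```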
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Start IFC export translation job,' implying an asynchronous or batch process, but fails to describe critical traits like required permissions, whether it's destructive, rate limits, or the nature of the job (e.g., background task, immediate export). This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and wastes no space, making it highly concise and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of an export tool with no annotations and no output schema, the description is incomplete. It does not explain what the tool returns (e.g., job ID, status), error conditions, or behavioral aspects like async processing, which are essential for effective use in this context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting both parameters (model_id as 'Base64-encoded URN' and include_properties as 'Include property sets in IFC output'). The description adds no additional meaning beyond the schema, such as format details or usage examples, so it meets the baseline of 3 without compensating for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Start IFC export translation job') and target resource ('for a model'), providing a specific verb+resource combination. However, it does not distinguish this tool from its siblings (e.g., revit_clash_detect, revit_get_elements), which handle different operations like clash detection or data retrieval, leaving room for improvement in differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It lacks information on prerequisites (e.g., model availability), exclusions (e.g., when not to export), or comparisons to sibling tools, leaving the agent without context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revit_get_elements (Grade: B)
Get all elements from a translated Revit model by category (e.g. Walls, Doors, Windows)
| Name | Required | Description | Default |
|---|---|---|---|
| category | Yes | Revit category to filter (e.g. 'Walls', 'Doors', 'Windows', 'Structural Columns') | |
| model_id | Yes | Base64-encoded URN of the translated model | |
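The listing never documents what revit_get_elements returns. Assuming a list of element records with 'objectid', 'name', and 'category' fields (purely hypothetical; check the live response), a client might summarize results like this:

```python
def summarize_elements(elements):
    """Count elements per category from a hypothetical response shape."""
    counts = {}
    for el in elements:
        counts[el["category"]] = counts.get(el["category"], 0) + 1
    return counts

# Sample data standing in for an undocumented response.
sample = [
    {"objectid": 101, "name": "Basic Wall", "category": "Walls"},
    {"objectid": 102, "name": "Basic Wall", "category": "Walls"},
    {"objectid": 201, "name": "Single-Flush", "category": "Doors"},
]
```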
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves elements but does not describe return format, pagination, error conditions, or performance implications. While it implies a read-only operation, it misses critical details like whether it returns all elements at once or in batches, and any rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose with no wasted words. It is front-loaded with the core action and includes helpful examples, making it easy to parse quickly. Every part of the sentence contributes to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 2 parameters with full schema coverage but no annotations or output schema, the description is minimally adequate. It covers the basic purpose and parameter context but lacks details on return values, error handling, and behavioral traits. For a read operation with no output schema, more information on what is returned would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters in the input schema. The description adds minimal value by providing examples of categories (e.g., 'Walls, Doors, Windows'), but does not elaborate on parameter usage beyond what the schema already covers. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'elements from a translated Revit model', specifying filtering by category. It provides examples like 'Walls, Doors, Windows' to illustrate the category parameter. However, it does not explicitly differentiate from sibling tools such as revit_get_parameters or revit_get_views, which might retrieve different types of data from the model.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions filtering by category but offers no guidance on when to use this tool versus alternatives like revit_get_parameters or revit_get_sheets. It lacks explicit instructions on prerequisites, such as needing a translated model, or exclusions, such as not being suitable for non-element data. This leaves the agent without clear context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revit_get_parameters (Grade: B)
Get all parameters for elements in a category (or a specific element)
| Name | Required | Description | Default |
|---|---|---|---|
| category | Yes | Revit category to filter | |
| model_id | Yes | Base64-encoded URN | |
| element_id | No | Optional specific element objectid to query | |
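The interaction between category and element_id is undocumented: the description implies element_id narrows the query, but not whether category is then ignored. A defensive client can always send category and add element_id only when targeting one element (the helper name is hypothetical):

```python
def build_get_parameters_args(model_id, category, element_id=None):
    """Build arguments for revit_get_parameters.

    category is always included because the server's behavior when
    element_id is present is not documented.
    """
    args = {"model_id": model_id, "category": category}
    if element_id is not None:
        args["element_id"] = element_id
    return args
```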
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but lacks behavioral details. It doesn't disclose whether this is a read-only operation, potential performance impacts for large categories, authentication needs, rate limits, or error conditions. The phrase 'Get all parameters' hints at a comprehensive retrieval but doesn't specify format or limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. Every word earns its place, with no redundant or vague phrasing. It's appropriately sized for a straightforward retrieval tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete for a tool with 3 parameters. It doesn't explain what 'parameters' entail (e.g., property names/values), return format, or how results differ between category and element queries. For a data retrieval tool in a complex domain like Revit, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters (model_id, category, element_id). The description adds marginal value by implying the category/element_id choice, but doesn't provide additional semantics like parameter interactions (e.g., if element_id is provided, category might be ignored) or examples of valid categories.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'parameters for elements', specifying it can target either a category or specific element. It distinguishes itself from siblings like revit_get_elements (which retrieves elements themselves) by focusing on parameters. However, it doesn't explicitly contrast with all siblings (e.g., revit_get_sheets/views).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning 'category (or a specific element)', suggesting it's for retrieving parameters at either a categorical or individual level. However, it provides no explicit guidance on when to use this versus alternatives like revit_get_elements for element data, nor does it mention prerequisites or exclusions (e.g., model must be loaded).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revit_get_sheets (Grade: C)
List all sheets in a translated Revit model
| Name | Required | Description | Default |
|---|---|---|---|
| model_id | Yes | Base64-encoded URN | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states it 'List all sheets' but doesn't describe what 'sheets' are in Revit context, whether this is a read-only operation, if there are rate limits, or what the output format looks like. This is inadequate for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's front-loaded with the core purpose and appropriately sized for the tool's complexity, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what 'sheets' are in Revit, the return format, or behavioral traits like safety or limitations. For a tool with zero structured coverage, this leaves significant gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'model_id' documented as a 'Base64-encoded URN'. The description adds no additional parameter semantics beyond implying the model must be 'translated', which is useful but minimal. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List all sheets') and resource ('in a translated Revit model'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'revit_get_views' or 'revit_get_elements' which might also list model components, missing full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites like needing a translated model, nor does it compare to siblings such as 'revit_get_views' for similar listing functions, leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revit_get_views (Grade: C)
List all views (floor plans, sections, 3D views, etc.) in a translated Revit model
| Name | Required | Description | Default |
|---|---|---|---|
| model_id | Yes | Base64-encoded URN | |
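Both revit_get_sheets and revit_get_views take only a base64-encoded URN. A cheap client-side check before calling either tool can catch obviously malformed IDs; note that the 'urn:' prefix and the padding handling here are assumptions, not documented requirements:

```python
import base64
import binascii

def is_valid_model_id(model_id):
    """Check that model_id decodes as URL-safe base64 to a URN string.

    Re-adds stripped padding before decoding, a common APS convention.
    """
    try:
        padded = model_id + "=" * (-len(model_id) % 4)
        decoded = base64.urlsafe_b64decode(padded).decode()
    except (binascii.Error, UnicodeDecodeError):
        return False
    return decoded.startswith("urn:")
```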
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool lists views but does not describe the return format (e.g., list structure, pagination), performance characteristics (e.g., speed for large models), or error conditions (e.g., invalid model_id). This leaves significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('List all views') and includes relevant details (types of views, model context). There is no wasted language, and it is appropriately sized for a simple list tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It lacks details on return values (e.g., format of the view list), error handling, or behavioral traits like whether it's read-only or has side effects. For a tool with minimal structured data, the description should compensate more to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'model_id' documented as a 'Base64-encoded URN'. The description adds no additional parameter information beyond what the schema provides, such as examples or context for obtaining the model_id. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'views in a translated Revit model', specifying the types of views included (floor plans, sections, 3D views, etc.). It distinguishes from siblings like revit_get_elements or revit_get_sheets by focusing specifically on views, but does not explicitly contrast with them in the description text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., needing a translated model), exclusions, or compare it to sibling tools like revit_get_elements for broader element retrieval. Usage is implied by the tool name and description but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revit_run_schedule (Grade: C)
Extract schedule-like tabular data from model properties matching a category or keyword
| Name | Required | Description | Default |
|---|---|---|---|
| model_id | Yes | Base64-encoded URN | |
| schedule_name | Yes | Category or keyword to build schedule from (e.g. 'Walls', 'Doors', 'Rooms') | |
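Because revit_run_schedule has no output schema, the shape of the 'schedule-like tabular data' is unknown. If it comes back as a list of per-element dicts (an assumption for illustration), converting it to CSV is straightforward:

```python
import csv
import io

def schedule_to_csv(rows):
    """Serialize hypothetical schedule rows (list of dicts) to CSV text.

    Column order follows the first row's keys; the actual return shape
    of revit_run_schedule is undocumented.
    """
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```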
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool extracts data but doesn't clarify whether this is a read-only operation, if it requires specific permissions, what format the tabular data returns in, or if there are rate limits. For a data extraction tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized for a tool with two well-documented parameters and gets straight to the point with zero wasted verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a data extraction tool with no annotations and no output schema, the description is insufficient. It doesn't explain what 'schedule-like tabular data' means in practice, what format it returns, whether there are pagination considerations, or what happens with partial matches. Given the complexity of extracting structured data from a model, more context about the output would be valuable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so both parameters are well-documented in the schema itself. The description adds minimal value beyond the schema - it mentions 'category or keyword' which aligns with the schedule_name parameter description, but provides no additional context about parameter interactions, constraints, or examples beyond what's already in the structured fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Extract schedule-like tabular data') and target ('from model properties matching a category or keyword'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'revit_get_elements' or 'revit_get_parameters' that might also retrieve model data, leaving some ambiguity about when this specific schedule extraction is preferred.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing considerations, or compare it to sibling tools like 'revit_get_elements' or 'revit_get_parameters' that might serve similar data retrieval purposes. The agent receives no usage context beyond the basic purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revit_upload (Grade: B)
Upload Revit file to APS and translate to viewable. Provide a publicly accessible file_url.
| Name | Required | Description | Default |
|---|---|---|---|
| file_url | Yes | Public URL to download the .rvt file from | |
| file_name | Yes | Name for the file (e.g. 'MyBuilding.rvt') | |
| project_name | No | Optional project label | |
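The description's one hard requirement is a publicly accessible file_url. A client can pre-validate inputs against the parameter descriptions before calling the tool; these checks only mirror the listing, since the server's actual validation rules are undocumented:

```python
from urllib.parse import urlparse

def validate_upload_args(file_url, file_name, project_name=None):
    """Client-side sanity checks before calling revit_upload."""
    parsed = urlparse(file_url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError("file_url must be a public http(s) URL")
    if not file_name.lower().endswith(".rvt"):
        raise ValueError("file_name should name a .rvt file")
    args = {"file_url": file_url, "file_name": file_name}
    if project_name:
        args["project_name"] = project_name
    return args
```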
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the upload and translation actions but lacks details on permissions, rate limits, error handling, or what 'viewable' entails (e.g., format, accessibility). The requirement to provide a 'publicly accessible file_url' clarifies an input constraint but says nothing about what the tool returns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and outcome. It avoids redundancy and wastes no words, though it could be slightly more structured by separating the upload and translation steps for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (upload and translation with 3 parameters), no annotations, and no output schema, the description is minimally adequate. It states the purpose and hints at the output but lacks details on behavioral traits, error cases, or integration context, leaving gaps for an agent to infer usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds no additional meaning beyond what's in the schema (e.g., it doesn't clarify parameter interactions or usage examples). This meets the baseline of 3 when the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Upload Revit file to APS and translate to viewable') and the resource ('Revit file'), distinguishing it from siblings like clash detection, export, or data retrieval tools. It also spells out the key input requirement ('Provide a publicly accessible file_url'), going beyond just naming the action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a Revit file), exclusions, or comparisons to sibling tools like revit_export_ifc for different output formats. Usage is implied by the action but not explicitly contextualized.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
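Before publishing, you can sanity-check the manifest locally. This only mirrors the example above; Glama's connector.json schema may enforce additional rules:

```python
import json

def check_glama_manifest(text):
    """Minimal structural check for a /.well-known/glama.json payload:
    at least one maintainer, each with an email field."""
    data = json.loads(text)
    maintainers = data.get("maintainers", [])
    return bool(maintainers) and all("email" in m for m in maintainers)

manifest = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""
```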
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.