Ssemble AI Clipping
Server Details
Create AI-powered short-form video clips from YouTube videos. Supports webhook callbacks.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: ssembleinc/ssemble-mcp-server
- GitHub Stars: 4
- Server Listing: ssemble-mcp-server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
9 tools

create_short — Grade A
Create AI-generated short-form video clips from a YouTube video or uploaded file. Returns a request ID instantly. Processing takes 5-30 minutes. Costs 1 credit.
| Name | Required | Description | Default |
|---|---|---|---|
| end | Yes | End time in seconds (> start, max 1200s window) | |
| url | No | YouTube video URL | |
| music | No | Add background music | |
| start | Yes | Start time in seconds (>= 0) | |
| layout | No | Video framing layout | auto |
| ctaText | No | CTA text (max 200 chars, required when ctaEnabled=true) | |
| fileUrl | No | Public video file URL (alternative to url) | |
| language | No | Spoken language (ISO 639-1) | en |
| memeHook | No | Prepend a meme hook clip (2-5s attention grabber) | |
| gameVideo | No | Add split-screen gameplay overlay | |
| hookTitle | No | Add animated hook title at start | |
| musicName | No | Exact track name from list_music (case-sensitive) | |
| ctaEnabled | No | Show call-to-action text overlay | |
| noClipping | No | Skip AI clipping, process entire range as one clip | |
| templateId | No | Caption template ID from list_templates (24-char hex) | |
| webhookUrl | No | Optional webhook URL for completion/failure notifications. Receives a POST with results when processing finishes. | |
| musicVolume | No | Music volume 0-100 | |
| memeHookName | No | Exact meme hook name from list_meme_hooks (case-sensitive) | |
| gameVideoName | No | Exact game video name from list_game_videos (case-sensitive) | |
| captionLanguage | No | Caption language if different from spoken | |
| preferredLength | No | Target clip duration | under60sec |
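Since create_short costs a credit per call, the documented parameter constraints can be checked client-side before invoking the tool. A minimal sketch (the function name and payload shape are illustrative; the requirement that at least one of `url`/`fileUrl` be present is inferred from the table, not stated by the server):

```python
def validate_clip_args(args: dict) -> list[str]:
    """Check create_short arguments against the documented constraints.

    Returns a list of human-readable violations (empty means valid).
    """
    errors = []
    start, end = args.get("start"), args.get("end")

    # start and end are required; start >= 0, end > start, window <= 1200s
    if start is None or start < 0:
        errors.append("start must be >= 0")
    if end is None or (start is not None and end <= start):
        errors.append("end must be greater than start")
    if start is not None and end is not None and end - start > 1200:
        errors.append("clip window must not exceed 1200 seconds")

    # inferred: at least one video source, url or fileUrl
    if not args.get("url") and not args.get("fileUrl"):
        errors.append("provide either url or fileUrl")

    # ctaText is required when ctaEnabled is true, max 200 chars
    if args.get("ctaEnabled") and not args.get("ctaText"):
        errors.append("ctaText is required when ctaEnabled=true")
    if args.get("ctaText") and len(args["ctaText"]) > 200:
        errors.append("ctaText must be at most 200 characters")

    # musicVolume, when given, must be 0-100
    vol = args.get("musicVolume")
    if vol is not None and not 0 <= vol <= 100:
        errors.append("musicVolume must be between 0 and 100")

    return errors
```

Running the check before the tool call turns a wasted credit into an instant, local error message.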
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden and successfully discloses cost (1 credit), timing (5-30 min processing), and return behavior (instant request ID). Lacks details on error handling, retries, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste. Front-loaded with core purpose, followed by critical operational details (return behavior, timing, cost). Each sentence provides distinct, necessary information for agent decision-making.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given high schema richness (100% coverage, enum values, references to list_music/list_templates in param descriptions), the description appropriately focuses on high-level behavioral context rather than parameter enumeration. Could mention webhook or status checking pattern explicitly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema adequately documents all 21 parameters. The description mentions 'YouTube video or uploaded file' which maps to url/fileUrl, but adds minimal semantic detail beyond the comprehensive schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Create), resource (AI-generated short-form video clips), and input sources (YouTube video or uploaded file). Clearly distinguishes from sibling list/get/delete tools by establishing this as the creation entry point.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explains the asynchronous workflow (instant request ID return vs 5-30 minute processing), which is critical for correct invocation patterns. Does not explicitly reference get_status for polling or webhookUrl for callbacks, though the schema mentions the latter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_request — Grade A
Permanently delete a short creation request and all generated videos. Credits are NOT refunded. This action is irreversible.
| Name | Required | Description | Default |
|---|---|---|---|
| requestId | Yes | The request ID to delete (24-char hex). This action is irreversible and credits are NOT refunded. | |
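Because the delete is irreversible and credits are not refunded, an agent wrapper can refuse to build the call without an explicit confirmation flag. A hypothetical guard (the function and its case-insensitive hex check are illustrative, not part of the server's API):

```python
import re


def confirm_delete(request_id: str, confirm: bool = False) -> dict:
    """Build arguments for delete_request only after explicit confirmation.

    delete_request permanently removes the request and all generated
    videos, so refuse to construct the call without confirm=True.
    """
    if not re.fullmatch(r"[0-9a-fA-F]{24}", request_id):
        raise ValueError("requestId must be a 24-char hex string")
    if not confirm:
        raise PermissionError(
            "refusing irreversible delete without explicit confirmation"
        )
    return {"requestId": request_id}
```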
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden and excels by stating the action is 'permanently' destructive, specifying that 'all generated videos' are destroyed, and warning that 'credits are NOT refunded'—covering scope, side effects, and financial impact.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: first establishes the action and scope, second and third provide critical business-logic warnings (credits, irreversibility). Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter destructive operation, the description is complete. It covers the destruction scope (request + videos), permanence, and credit implications that structured fields cannot convey. No output schema exists to document.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (the requestId parameter is fully documented in the schema), establishing baseline 3. The tool description does not add parameter-specific semantics beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('delete') and resource ('short creation request and all generated videos'), clearly distinguishing it from sibling tools like list_requests, get_status, or create_short.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical usage context through warnings about irreversibility and credit non-refunds, implicitly guiding when to use (only when permanent removal is intended). However, it lacks explicit comparison to alternatives or specific when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_shorts (Get Shorts) — Grade A
Retrieve all generated short clips for a completed request. Provide a request ID, or omit it to see your recent requests and pick one.
| Name | Required | Description | Default |
|---|---|---|---|
| requestId | No | The request ID from create_short (24-char hex). Omit to see your recent requests. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully indicates the 'completed request' state requirement and implies read-only retrieval, but omits explicit safety declarations (read-only/destructive), error behaviors for incomplete requests, and return format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste: first states purpose, second provides parameter usage guidance. Front-loaded with the action verb and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers the single optional parameter's behavior and the tool's primary function. Given the simple schema (1 param, no nesting) and lack of output schema, the description suffices, though it could strengthen workflow context regarding the 'completed' prerequisite and relationship to 'get_status'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (requestId fully documented with type, pattern, and omission behavior). The description echoes the schema's guidance on omitting the parameter but adds minimal semantic value beyond what the structured schema already provides, meriting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Retrieve' with clear resource 'generated short clips' and scope 'for a completed request'. It clearly distinguishes from creation-oriented siblings like 'create_short' and status-checking tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on the two usage modes (with requestId vs omitted), explaining the 'recent requests' fallback behavior. Lacks explicit workflow guidance regarding when to use versus 'get_status' or 'list_requests' siblings, though 'completed request' implies a prerequisite check.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_status — Grade A
Check processing status of a short creation request. Provide a request ID, or omit it to see your recent requests and pick one.
| Name | Required | Description | Default |
|---|---|---|---|
| requestId | No | The request ID from create_short (24-char hex). Omit to see your recent requests. |
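When no webhookUrl is supplied, the asynchronous workflow (create_short returns a request ID, processing takes 5-30 minutes) implies a polling loop over get_status followed by get_shorts. A sketch under assumptions: `call_tool` stands in for whatever MCP client call you use, and the status values `'completed'`/`'failed'` are illustrative, not confirmed by the listing:

```python
import time


def wait_for_completion(call_tool, request_id,
                        poll_interval=30.0, timeout=1800.0,
                        sleep=time.sleep):
    """Poll get_status until the request completes, then fetch the clips.

    Raises RuntimeError on failure and TimeoutError if the deadline
    passes; sleep is injectable so the loop is testable without waiting.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_tool("get_status", {"requestId": request_id})
        if status.get("status") == "completed":
            return call_tool("get_shorts", {"requestId": request_id})
        if status.get("status") == "failed":
            raise RuntimeError(f"short creation failed: {status}")
        sleep(poll_interval)
    raise TimeoutError("request did not complete within timeout")
```

A 30-second interval against a 5-30 minute job keeps the call volume modest; prefer the webhookUrl callback when your agent can receive POSTs.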
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It effectively explains the behavioral difference when omitting the parameter (returns recent requests for selection), but fails to declare safety properties (read-only/idempotent) or error conditions that would help an agent understand execution risks.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose, second provides usage guidance. Every word earns its place; no redundancy with schema or title.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete for a single-parameter tool with 100% schema coverage. Explains the dual-mode behavior sufficiently. Minor gap: lacks explicit declaration of read-only safety (inferred but not stated) given absence of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100% and documents the technical pattern (24-char hex) and omission behavior, the description adds valuable workflow context ('pick one') explaining the user intent behind omitting the ID, which aids agent reasoning about multi-step interactions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Check') + resource ('processing status of a short creation request') clearly defines scope. Implicitly distinguishes from sibling list_requests by clarifying this tool's primary purpose is status checking (with listing recent requests only as a secondary helper for selection).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clearly documents the two usage modes: providing a request ID for specific status checks versus omitting it to view recent requests. However, it does not explicitly contrast with list_requests to clarify when to use that alternative instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_game_videos (List Game Videos) — Grade A
List available gameplay videos for split-screen overlays (content top, game bottom). Use the exact gameVideoName when creating shorts.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| limit | No | Items per page (1-100) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the output contains 'gameVideoName' (critical for the workflow), implying the return structure. However, it lacks explicit safety declarations (read-only nature), rate limits, or pagination behavior details despite being a list operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes purpose and context; second provides actionable usage guidance. Perfectly front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description compensates by mentioning the critical return field 'gameVideoName' needed for the create_short workflow. For a simple paginated list tool, this is adequate, though it could mention other returned fields (URLs, thumbnails, etc.).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% ('Page number', 'Items per page'), providing clear parameter documentation. The description adds no additional parameter semantics, but with full schema coverage, the baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity: specific verb 'List', specific resource 'gameplay videos', and specific use case 'split-screen overlays (content top, game bottom)'. Clearly distinguishes from siblings like list_music, list_templates, and list_meme_hooks by specifying the gameplay video context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage guidance by linking to sibling tool create_short ('when creating shorts') and specifying to 'use the exact gameVideoName'. Lacks explicit 'when not to use' or alternative tool comparisons, but the split-screen context provides clear selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_meme_hooks (List Meme Hooks) — Grade A
List available meme hook clips (2-5 second attention grabbers prepended to shorts). Use the exact memeHookName when creating shorts.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| limit | No | Items per page (1-100) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable context explaining what meme hooks are (attention grabbers prepended to shorts) and output usage constraints (exact names required). However, omits operational behaviors like read-only safety, pagination semantics, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First defines the resource and its purpose; second provides critical usage constraints for the output. Front-loaded with the most important information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description adequately explains domain-specific terminology ('meme hooks') and workflow integration. Mentions 'memeHookName' implying output structure. Could improve by explicitly stating it returns a paginated list of available hooks.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (page/limit well documented). Description adds no parameter-specific guidance, but with high schema coverage, baseline 3 is appropriate per scoring rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'List' with clear resource 'meme hook clips' and scope clarification (2-5 second attention grabbers prepended to shorts). Distinguishes from siblings like list_music or list_templates by defining the specific domain and use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear workflow context by stating to 'Use the exact memeHookName when creating shorts,' implicitly linking to the create_short sibling tool. However, lacks explicit 'when not to use' guidance or direct comparison to alternative list tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_music (List Music) — Grade A
List available background music tracks with names and durations. Use the exact musicName when creating shorts.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| limit | No | Items per page (1-100) | |
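The list tools (list_music, list_meme_hooks, list_game_videos, list_requests) all share the same page/limit pagination. A generic iterator sketch; since no output schema is published, the `'items'` response key is an assumption you may need to adjust:

```python
def iter_all_items(call_tool, tool_name, limit=100):
    """Yield every item from a paginated list tool.

    Assumes the response carries its page of results under an 'items'
    key; a short page (fewer than `limit` items) signals the end.
    """
    page = 1
    while True:
        result = call_tool(tool_name, {"page": page, "limit": limit})
        items = result.get("items", [])
        yield from items
        if len(items) < limit:  # short page means we've reached the end
            return
        page += 1
```

Exhausting the list matters here because musicName (like memeHookName and gameVideoName) must match an entry exactly, case included.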
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context by specifying that returned tracks include 'names and durations' (critical since no output schema exists). However, it omits safety profile information (read-only status, rate limits, pagination behavior) that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences total with zero waste. The first sentence establishes purpose and return value; the second provides workflow integration guidance. Information is front-loaded and every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter listing tool with 100% schema coverage but no output schema, the description adequately compensates by describing the return fields (names, durations) and workflow context (creating shorts). Missing only safety/rate limit details that would elevate it to a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage for both 'page' and 'limit' parameters. The description adds no additional parameter semantics (e.g., default pagination strategy, maximum results), warranting the baseline score of 3 for cases where the schema is self-documenting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'List[s] available background music tracks with names and durations' - specific verb (List), specific resource (background music tracks), and specific attributes returned (names, durations). It also distinguishes from siblings by mentioning the downstream use case 'when creating shorts', directly linking to the create_short sibling tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence 'Use the exact musicName when creating shorts' provides clear workflow guidance linking this tool to create_short. However, it lacks explicit when-not-to-use guidance or mention of alternatives like list_templates or list_game_videos for different asset types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_requests — Grade B
List all short creation requests with optional status filtering, pagination, and sorting.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| limit | No | Items per page (1-100) | |
| sortBy | No | Sort field | createdAt |
| status | No | Filter by status | |
| sortOrder | No | Sort direction | desc |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. While 'List' implies read-only, the description does not confirm safety, idempotency, rate limits, or error conditions (e.g., behavior when no requests exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with efficient structure: core action front-loaded ('List all short creation requests'), followed by optional modifiers. No redundant or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 5-parameter list operation with complete schema coverage. However, lacking both output schema and annotations, it could better clarify what constitutes a 'short creation request' entity versus the 'shorts' returned by sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. The description maps parameters to functional groups (status filtering, pagination, sorting) but adds no semantic details beyond schema (e.g., does not explain status enum values or pagination behavior).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (List) and resource (short creation requests) clearly. However, it does not explicitly differentiate from sibling 'get_shorts' or 'get_status', leaving ambiguity about whether this returns job metadata or completed content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes capabilities (filtering, pagination, sorting) but provides no explicit when-to-use guidance or alternatives. Does not clarify when to use this versus 'get_shorts' or 'get_status' for monitoring creation jobs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_templates (List Templates) — Grade A
List all available caption style templates with preview thumbnails, names, and IDs. Use the templateId when creating shorts.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return content (thumbnails, names, IDs) which is valuable behavioral context, but omits safety profile, rate limits, pagination behavior, or error handling that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences: first defines operation and return payload, second provides usage context. No redundant or filler content; every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless tool without output schema, description adequately compensates by documenting the three return fields (thumbnails, names, IDs) and workflow context (create_short). Minor gap regarding pagination or response format specifics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters, establishing baseline 4 per rubric. Description correctly focuses on output semantics rather than inventing parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the specific action ('List'), resource ('caption style templates'), and return structure ('preview thumbnails, names, and IDs'). It effectively distinguishes from sibling list tools by specifying 'caption style' templates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context by linking output to the sibling tool create_short ('Use the templateId when creating shorts'), indicating this tool is for obtaining resources needed by that workflow. Lacks explicit 'when not to use' guidance compared to naming alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
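Before publishing the well-known file, its shape can be sanity-checked locally. A hypothetical validator covering only the fields shown in the claim instructions (the real verifier may enforce more):

```python
import json


def check_glama_json(text: str, account_email: str) -> list[str]:
    """Validate a /.well-known/glama.json payload against the documented shape.

    Returns a list of problems; empty means the payload looks claimable.
    """
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]

    problems = []
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    elif not any(m.get("email") == account_email
                 for m in maintainers if isinstance(m, dict)):
        problems.append("no maintainer email matches the Glama account email")
    return problems
```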
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.