Server Details
An MCP server that lets you interact with the Cycloid.io Internal Developer Portal and Platform.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: cycloidio/cycloid-mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools (6 tools)

CYCLOID_BLUEPRINT_LIST (Grade: C)
List all available blueprints with their details. The LLM can filter the results based on user requirements.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | | table |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool lists blueprints, implying a read-only operation, but doesn't cover critical aspects like authentication needs, rate limits, pagination, or error handling. This leaves significant gaps in understanding how the tool behaves in practice.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, stating the core purpose in the first sentence. The second sentence adds some context but could be more efficient. Overall, it avoids unnecessary verbosity, though it could be slightly more structured to enhance clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a list operation with one parameter), lack of annotations, no output schema, and low schema coverage, the description is incomplete. It doesn't explain the parameter, return values, or behavioral traits, making it inadequate for the agent to fully understand how to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter ('format') with 0% description coverage, and the tool description does not mention or explain this parameter at all. Since schema coverage is low, the description fails to compensate by adding meaning beyond the schema, leaving the parameter undocumented and unclear in its purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('all available blueprints with their details'), making the tool's purpose immediately understandable. It distinguishes from siblings like CYCLOID_BLUEPRINT_STACK_CREATE (which creates rather than lists) but doesn't explicitly differentiate from other list tools like CYCLOID_CATALOG_REPO_LIST or CYCLOID_PIPELINE_LIST, which is why it doesn't reach a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance by mentioning that 'The LLM can filter the results based on user requirements,' which implies usage for listing blueprints, but it lacks explicit when-to-use instructions, prerequisites, or alternatives. No comparison to sibling tools is made, leaving the agent with little context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
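The gaps called out above can be closed in the tool definition itself. Below is a hedged sketch of a fuller declaration for CYCLOID_BLUEPRINT_LIST in the MCP tool shape (name / description / inputSchema); the 'json' alternative for `format`, the auth note, and the sibling-tool cross-references are illustrative assumptions, not the server's actual schema:

```python
# Hypothetical, improved definition for CYCLOID_BLUEPRINT_LIST.
# The enum-like "format" values and the behavioral notes in the
# description are assumptions for illustration only.
improved_tool = {
    "name": "CYCLOID_BLUEPRINT_LIST",
    "description": (
        "List all available blueprints with their details. "
        "Read-only; requires a valid Cycloid API key. "
        "Use CYCLOID_CATALOG_REPO_LIST for catalog repositories "
        "and CYCLOID_PIPELINE_LIST for pipelines."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "format": {
                "type": "string",
                "description": "Output rendering: 'table' (default) or 'json'.",
                "default": "table",
            }
        },
        "required": [],
    },
}

print(sorted(improved_tool))  # → ['description', 'inputSchema', 'name']
```

Documenting the default and the alternatives directly in the schema means the prose description no longer has to carry the parameter's semantics alone.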
CYCLOID_BLUEPRINT_STACK_CREATE (Grade: A)
Create a new Cycloid stack from a blueprint. CRITICAL: When elicitation context (ctx) is provided, the tool will ALWAYS use interactive elicitation to ask for parameters one by one, REGARDLESS of any parameters provided. The LLM should ONLY provide the 'ref' parameter and let elicitation handle the rest. DO NOT provide name, use_case, or service_catalog_source_canonical when elicitation is available. 🚨 CRITICAL: The LLM should NEVER provide default values, suggestions, or examples. Let the user make their own choices. Do NOT call this tool with guessed parameters.
| Name | Required | Description | Default |
|---|---|---|---|
| ref | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool uses interactive elicitation for parameters when context is provided, it requires only the 'ref' parameter in such cases, and it prohibits the LLM from providing default values or suggestions. This covers operational constraints and user interaction patterns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose but includes repetitive emphasis (e.g., 'CRITICAL' repeated, warnings in multiple sentences). While the information is necessary, the structure could be more streamlined to reduce redundancy and improve readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a creation tool with interactive elicitation), no annotations, and no output schema, the description does a good job covering usage rules and behavioral aspects. However, it lacks details on what happens after creation (e.g., success indicators or error handling), and the parameter 'ref' is not fully explained, leaving minor gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, so the description must compensate. It adds meaning by specifying that 'ref' is the only parameter to provide when elicitation is available, and it warns against providing other parameters like 'name' or 'use_case'. However, it does not explain what 'ref' represents (e.g., a blueprint identifier), leaving some semantic gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new Cycloid stack') and the resource ('from a blueprint'), making the purpose evident. However, it does not explicitly differentiate this tool from its sibling tools (e.g., CYCLOID_BLUEPRINT_LIST, CYCLOID_STACKFORMS_VALIDATE), which could involve similar resources but different operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when and how to use this tool, including critical instructions such as using only the 'ref' parameter when elicitation context is available, avoiding other parameters like 'name' or 'use_case', and not providing default values or guesses. It clearly distinguishes usage scenarios based on the presence of elicitation context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
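The rule the description enforces — pass only 'ref' when elicitation is available — can be made concrete with a minimal JSON-RPC tools/call payload. The blueprint ref value below is a made-up placeholder:

```python
import json

# Sketch of a tools/call request honoring the description's rule:
# send only "ref" and let interactive elicitation collect the rest.
# "example-org:sample-blueprint" is an invented placeholder ref.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "CYCLOID_BLUEPRINT_STACK_CREATE",
        "arguments": {"ref": "example-org:sample-blueprint"},
    },
}

payload = json.dumps(request)
# Deliberately absent: name, use_case, service_catalog_source_canonical.
print("name" in request["params"]["arguments"])  # → False: only "ref" is sent
```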
CYCLOID_CATALOG_REPO_LIST (Grade: B)
List all available service catalog repositories with their details. The LLM can filter the results based on user requirements.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | | table |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a listing operation, implying it's read-only and non-destructive, but doesn't mention any behavioral traits like rate limits, authentication needs, pagination, error handling, or what 'details' include. The description adds minimal value beyond the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that directly address the tool's purpose and a usage hint. It's front-loaded with the core functionality. There's no wasted text, though it could be slightly more structured (e.g., separating purpose from guidance more clearly).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (listing repositories with details), lack of annotations, no output schema, and minimal parameter explanation, the description is incomplete. It doesn't cover what 'details' include, how results are structured, potential limitations, or error cases. For a tool that likely returns structured data, more context is needed for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter ('format') with 0% description coverage in the schema itself. The description doesn't mention any parameters or add meaning beyond what the schema provides (e.g., it doesn't explain what 'format' controls or what 'table' means). However, with only one parameter and a default value provided in the schema, the baseline is 3 as the schema handles the minimal parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List all available service catalog repositories with their details.' This is a specific verb ('List') + resource ('service catalog repositories') combination. However, it doesn't explicitly differentiate from sibling tools like CYCLOID_BLUEPRINT_LIST or CYCLOID_PIPELINE_LIST, which also appear to list different resource types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some implied usage guidance: 'The LLM can filter the results based on user requirements.' This suggests the tool is for retrieving repositories that can then be filtered programmatically. However, it doesn't explicitly state when to use this tool versus alternatives (e.g., vs. CYCLOID_BLUEPRINT_LIST for blueprints) or mention any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
CYCLOID_EVENT_LIST (Grade: C)
List organization events with optional filters (begin, end, severity, type).
| Name | Required | Description | Default |
|---|---|---|---|
| end | No | | |
| type | No | | |
| begin | No | | |
| format | No | | json |
| severity | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool lists events with filters but lacks details on permissions, rate limits, pagination, or output format. For a tool with 5 parameters and no output schema, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and mentions key parameters. There's no wasted text, making it appropriately concise for its content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters, 0% schema coverage, no annotations, and no output schema, the description is incomplete. It doesn't explain return values, error handling, or behavioral nuances, making it inadequate for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists filter parameters (begin, end, severity, type) but omits 'format' and provides no details on data types, formats, or constraints. This adds minimal value beyond the schema's structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('organization events'), making the purpose understandable. However, it doesn't differentiate this tool from its siblings (like CYCLOID_PIPELINE_LIST or CYCLOID_BLUEPRINT_LIST), which also list different resources, so it doesn't fully distinguish itself in context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions optional filters but provides no guidance on when to use this tool versus alternatives. There's no indication of prerequisites, context, or comparisons with sibling tools, leaving the agent with minimal usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
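A hypothetical tools/call payload exercising all five parameters shows exactly what the description leaves undocumented: the timestamp format (epoch seconds are assumed below) and the valid severity/type values are guesses, not documented facts:

```python
import json

# Sketch of a filtered CYCLOID_EVENT_LIST call. Whether "begin"/"end"
# take epoch seconds or ISO 8601 strings is not documented; epoch
# seconds are an assumption, as are the "severity" and "type" labels.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "CYCLOID_EVENT_LIST",
        "arguments": {
            "begin": "1700000000",   # assumed epoch seconds
            "end": "1700086400",     # assumed epoch seconds
            "severity": "warn",      # assumed severity label
            "type": "Cycloid",       # assumed event type
            "format": "json",        # matches the schema default
        },
    },
}

print(json.dumps(request["params"]["arguments"], sort_keys=True))
```

An agent guessing at these formats on first call is precisely the failure mode the completeness critique above describes.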
CYCLOID_PIPELINE_LIST (Grade: C)
List all pipelines from Cycloid.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | | summary |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden but adds minimal behavioral context. It doesn't disclose traits like pagination, rate limits, authentication needs, or what 'all pipelines' entails (e.g., scope or filtering). The description is too basic for a tool with no annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, straightforward sentence with no wasted words, making it highly concise and front-loaded. It efficiently communicates the core purpose without unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and minimal parameter explanation, the description is incomplete. It lacks details on behavior, output format, and usage context, which are essential for a list tool with potential complexity in handling pipelines.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter ('format') with 0% description coverage, and the tool description provides no information about parameters. It doesn't explain what 'format' does, its possible values, or how it affects output, leaving semantics unclear despite the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('all pipelines from Cycloid'), making the purpose immediately understandable. It doesn't differentiate from sibling tools like CYCLOID_BLUEPRINT_LIST or CYCLOID_CATALOG_REPO_LIST, which also list different resources, so it lacks explicit sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description only states what it does, without mentioning context, prerequisites, or comparisons to sibling tools like CYCLOID_EVENT_LIST for other list operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
CYCLOID_STACKFORMS_VALIDATE (Grade: C)
Validate a StackForms (.forms.yml) file using the Cycloid CLI. This tool can validate StackForms configuration and provide detailed feedback for fixing issues.
| Name | Required | Description | Default |
|---|---|---|---|
| forms_content | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool validates and provides feedback, but doesn't disclose behavioral traits like whether it's read-only, requires authentication, has rate limits, or what 'detailed feedback' entails. This is inadequate for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, with two sentences that directly state the tool's function and capability. There's no unnecessary information, making it efficient for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and low parameter coverage, the description is incomplete. It doesn't explain what validation entails, the format of feedback, or error handling. For a validation tool, more context on behavior and outputs is needed to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage. The description adds minimal semantics by implying 'forms_content' is a StackForms file content, but doesn't specify format, encoding, or constraints. It doesn't compensate for the low schema coverage, leaving the parameter poorly understood.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Validate a StackForms (.forms.yml) file using the Cycloid CLI.' It specifies the verb (validate), resource (StackForms file), and method (Cycloid CLI). However, it doesn't explicitly differentiate from sibling tools, which are unrelated (e.g., listing blueprints, creating stacks).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description mentions it 'can validate StackForms configuration and provide detailed feedback for fixing issues,' but doesn't specify prerequisites, when validation is needed, or what happens after validation. Without annotations or context, the agent lacks usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
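A sketch of invoking the tool with the file content passed inline as 'forms_content'; the YAML body below is an invented minimal example, not a verified StackForms schema:

```python
import json

# Hypothetical call passing a .forms.yml document inline. The YAML
# content is a made-up stub used only to show that "forms_content"
# is the raw file text, not a file path.
forms_yml = """\
version: "2"
use_cases:
  - name: default
    sections: []
"""

request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "CYCLOID_STACKFORMS_VALIDATE",
        "arguments": {"forms_content": forms_yml},
    },
}

print(type(request["params"]["arguments"]["forms_content"]).__name__)  # → str
```

Whether the tool expects the literal YAML string (as assumed here) or some other encoding is exactly the constraint the description fails to state.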
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail: every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control: enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management: store and rotate API keys and OAuth tokens in one place
- Change alerts: get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption: public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics: see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback: users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.