Workflow
Server Details
Create, browse, remix, collaborate on, and run durable AI workflow nodes from MCP hosts.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: Jonnyton/Workflow
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 11 of 11 tools scored. Lowest: 2.8/5.
Each tool targets a distinct operation: specific reads, lists, searches, proposals, requests, and status inspection. No two tools have overlapping purposes.
All tools follow a consistent verb_workflow_noun pattern in snake_case, such as 'get_workflow_goal', 'list_workflow_runs', and 'search_workflow_wiki'. No deviations or mixed conventions.
11 tools cover a balanced set of operations for a workflow management domain: listing, reading, searching, proposing, and status. The count is appropriate without being excessive or sparse.
Missing update and delete operations for goals and universes, and no create/update/delete for wiki pages. The surface covers read and propose actions but lacks full lifecycle management, leaving notable gaps.
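The naming-consistency claim above can be checked mechanically. A minimal sketch over the 11 tool names listed on this page (the regex is an assumed formalization of the verb_workflow_noun convention, not something the server publishes):

```python
import re

# The 11 tool names as listed on this page.
TOOLS = [
    "get_workflow_goal", "get_workflow_status", "inspect_workflow_universe",
    "list_workflow_goals", "list_workflow_runs", "list_workflow_universes",
    "propose_workflow_goal", "read_workflow_wiki_page", "search_workflow_goals",
    "search_workflow_wiki", "submit_workflow_request",
]

# verb_workflow_noun in snake_case: a verb, the literal 'workflow', then a noun phrase.
PATTERN = re.compile(r"^[a-z]+_workflow_[a-z_]+$")
assert all(PATTERN.match(name) for name in TOOLS)
```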
Available Tools
11 tools

get_workflow_goal (Get Workflow Goal): Grade A, Read-only, Idempotent
Use this when the user wants details for a specific Workflow goal.
| Name | Required | Description | Default |
|---|---|---|---|
| goal_id | Yes | Goal identifier to inspect. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
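As a concrete illustration, the schema above maps onto a standard MCP `tools/call` request. A minimal sketch (the `goal_id` value is hypothetical; the shape of `result` is whatever the server's output schema defines):

```python
import json

# Standard MCP JSON-RPC envelope for invoking a tool.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_workflow_goal",
        # goal_id is the only parameter and it is required;
        # "goal-123" is a hypothetical identifier.
        "arguments": {"goal_id": "goal-123"},
    },
}
print(json.dumps(payload, indent=2))
```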
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds context about returning bound branches, but does not disclose behavior on not-found errors, authorization needs, or response format. With annotations covering the safety profile, a score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that conveys the essential information without any fluff, earning maximum conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, output schema present), the description sufficiently states what it returns. It could hint at error handling, but overall it is complete enough for an agent with access to the schema and annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage and a clear schema description for goal_id ('Goal identifier to inspect'), the description adds no further parameter meaning. Baseline 3 is correct as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Return one Workflow goal and its bound branches' clearly specifies the verb (Return), resource (Workflow goal), and scope (one goal with bound branches), distinguishing it from siblings like list_workflow_goals and search_workflow_goals.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings. There is no mention of prerequisites, typical use cases, or when to avoid it, leaving the agent to infer from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_workflow_status (Get Workflow Status): Grade A, Read-only, Idempotent
Use this when the user asks whether Workflow is reachable or safe to use.
| Name | Required | Description | Default |
|---|---|---|---|
| universe_id | No | Optional universe scope. Empty uses the active universe. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate the tool is read-only and non-destructive. The description adds value by specifying what information is returned (daemon status, routing evidence, safety caveats), which goes beyond the annotations and informs the agent about the output content.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that immediately conveys the core purpose. Every word is necessary, and there is no extraneous information. It is perfectly concise and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with one optional parameter and an output schema. The description covers what the tool returns, and the output schema handles return values. However, the mention of 'safety caveats' is vague and could be clarified. Overall, it is sufficiently complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers the only parameter (universe_id) with a description. The tool description does not add any additional information about the parameter, so it does not improve upon the schema's explanation. Baseline 3 is appropriate given 100% schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Return' and identifies three distinct resources: daemon status, routing evidence, and safety caveats. This clearly distinguishes it from siblings like get_workflow_goal or inspect_workflow_universe, which target different aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There is no mention of context, prerequisites, or when not to use it, leaving the agent to infer usage from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
inspect_workflow_universe (Inspect Workflow Universe): Grade A, Read-only, Idempotent
Use this when the user wants a summary of one Workflow universe.
| Name | Required | Description | Default |
|---|---|---|---|
| universe_id | No | Optional universe scope. Empty uses the active universe. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds 'summarize durable state', which provides context beyond the schema; no additional behavioral details are needed for a tool this simple.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no waste. Perfectly concise for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity, rich annotations, and presence of output schema, the description is sufficient for an agent to understand the tool's purpose and behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage and describes the parameter. The description repeats 'Optional universe scope' but adds no new information beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool inspects one universe and summarizes durable state, distinguishing it from siblings like list_workflow_universes which list universes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. Implied for single universe inspection, but no exclusion or sibling references.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_workflow_goals (List Workflow Goals): Grade A, Read-only, Idempotent
Use this when the user wants to browse existing shared Workflow goals.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Optional comma-separated tag filter. | |
| limit | No | Maximum number of goals to return. | |
| author | No | Optional author filter. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
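Per the schema above, `tags` is a single comma-separated string rather than a JSON array. A hedged sketch of the arguments for a filtered listing call (the author value is made up):

```python
# Build the optional filters for list_workflow_goals.
# tags: a comma-separated string per the schema, not a JSON array.
tags = ",".join(["planning", "research"])
arguments = {
    "tags": tags,                 # "planning,research"
    "limit": 10,                  # cap the number of goals returned
    "author": "example-author",   # hypothetical author filter value
}
```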
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover readOnlyHint, idempotentHint, and destructiveHint. The description adds 'shared' context but does not detail pagination, sorting, or other behavioral traits beyond what annotations state.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence without superfluous words, efficiently conveying the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With three optional parameters, rich annotations, and an output schema present, the description sufficiently covers the tool's functionality. The term 'shared' could be clarified but does not hinder understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all three parameters. The description adds no additional meaning beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'shared Workflow goals', which distinguishes it from sibling tools like get_workflow_goal (single goal) and search_workflow_goals (searching).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing all shared goals but provides no explicit guidance on when to use this versus alternatives like search_workflow_goals or get_workflow_goal.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_workflow_runs (List Workflow Runs): Grade A, Read-only, Idempotent
Use this when the user wants recent Workflow run history.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of runs to return. | |
| status | No | Optional run status filter. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds the context 'recent' and 'branch runs', but does not disclose pagination, ordering, or other behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is highly concise and front-loaded with purpose. Every word contributes meaning with no wasted space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, the presence of an output schema, and comprehensive annotations, the description is complete enough. It covers what the tool does and its non-destructive nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The tool description does not add any additional meaning beyond what the schema provides for 'limit' and 'status'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'recent Workflow branch runs'. It explicitly says the tool does not start or stop work, distinguishing it from mutation tools among its siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing runs but does not provide explicit guidance on when to use this tool versus siblings like 'get_workflow_status' or 'inspect_workflow_universe'. No alternatives are named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_workflow_universes (List Workflow Universes): Grade A, Read-only, Idempotent
Use this when the user wants to browse available Workflow universes.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of universes to return. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds 'without changing state', which is consistent but redundant, and it offers no behavioral details beyond the schema, such as pagination or limit handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no fluff, immediately clear what the tool does. Efficient use of words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple with one optional parameter and has an output schema. Description covers the core purpose. Could mention default limit or ordering, but not essential given schema and output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% and the single parameter 'limit' is well-described in the schema. Description adds no parameter information beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'List', the resource 'available Workflow universes', and adds 'without changing state'. It distinguishes from siblings like 'inspect_workflow_universe' and 'list_workflow_goals' by specifying the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description provides no guidance on when to use this tool versus alternatives like 'inspect_workflow_universe' or other list tools. No when-to-use or when-not-to-use context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
propose_workflow_goal (Propose Workflow Goal): Grade B
Use this when the user asks to create a shared Workflow goal proposal.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Human-readable goal name. | |
| tags | No | Optional comma-separated tags. | |
| visibility | No | Visibility value accepted by Workflow, usually public. | public |
| description | No | Optional goal description. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
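Only `name` is required above, and `visibility` defaults to `public`. A minimal sketch of the two argument forms, assuming the server applies the documented default (all values are illustrative):

```python
# Minimal proposal: just the required name; visibility falls back to "public".
minimal = {"name": "Quarterly roadmap sync"}

# Fuller form with the documented default spelled out explicitly.
full = {
    "name": "Quarterly roadmap sync",
    "tags": "planning,roadmap",   # comma-separated, per the schema
    "visibility": "public",       # documented default
    "description": "Track roadmap alignment across teams.",
}
```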
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=false, destructiveHint=false, openWorldHint=true. The description adds 'shared' and 'proposal' but does not explain mutational behavior, required permissions, or what happens after creation. Minimal additional transparency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, front-loading the key purpose. However, it could be slightly expanded to include usage guidance without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool with full schema coverage, annotations, and an output schema, the description is adequate but lacks contextual details like when to propose vs submit, or the nature of the proposal process.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description 'Create a shared Workflow goal proposal' does not add any parameter-specific details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Create' and the resource 'shared Workflow goal proposal', which distinguishes it from sibling tools like 'get_workflow_goal' (read) and 'submit_workflow_request' (submit, not propose). The word 'proposal' adds important nuance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives, such as when to propose vs submit. The description lacks context for selecting this tool over sibling tools like 'submit_workflow_request' or 'list_workflow_goals'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_workflow_wiki_page (Read Workflow Wiki Page): Grade B, Read-only, Idempotent
Use this when the user wants to read one Workflow wiki page.
| Name | Required | Description | Default |
|---|---|---|---|
| page | Yes | Wiki page slug or path. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, idempotentHint, and destructiveHint. The description adds no additional behavioral context such as error handling or prerequisites, but does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with a single sentence that captures the essence, though it could be slightly expanded for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of a detailed input schema, output schema, and appropriate annotations, the description is sufficient for a simple read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for the 'page' parameter. The description does not add additional meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Read' and the resource 'Workflow wiki page', but does not differentiate from sibling tools such as 'search_workflow_wiki' which might search across pages.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'search_workflow_wiki' or 'list_workflow_goals'. No when-not or prerequisite information provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_workflow_goals (Search Workflow Goals): Grade A, Read-only, Idempotent
Use this when the user wants to find Workflow goals by text or tag.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of goals to return. | |
| query | Yes | Search text. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, covering the safety profile. The description adds that it searches across name, description, and tag, but does not elaborate on return format or pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence with no wasted words, directly stating the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with two parameters and an output schema, the description is adequate. It could mention pagination or read-only nature, but annotations cover the latter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the description adds meaning by specifying that the query parameter searches name, description, and tag, going beyond the schema's 'Search text' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches workflow goals by name, description, or tag, distinguishing it from list_workflow_goals (listing all) and get_workflow_goal (retrieving a specific goal).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool vs alternatives like list_workflow_goals or get_workflow_goal, nor any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_workflow_wiki (Search Workflow Wiki): Grade A, Read-only, Idempotent
Use this when the user wants to search Workflow project knowledge.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search text. | |
| category | No | Optional wiki category filter. | |
| max_results | No | Maximum number of wiki hits to return. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, so safety and idempotency are covered. The description adds no additional behavioral context (e.g., pagination, rate limits, or query requirements) beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence with no extraneous words. It is front-loaded with the key verb and resource, though it could include a brief note on the type of content searched.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, the presence of a complete input schema, comprehensive annotations, and an output schema, the description is adequate. However, it omits details on the scope of the wiki (public) and the nature of results (e.g., snippets), which might be inferred but are not explicit.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with clear descriptions for all three parameters. The description does not add any extra meaning or examples beyond the schema, so it meets the baseline for good schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action ('Search') and the resource ('Workflow public knowledge wiki'), distinguishing it from sibling tools like 'read_workflow_wiki_page' and 'search_workflow_goals'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'read_workflow_wiki_page' for specific pages or 'search_workflow_goals' for goals. The name implies the usage but does not explicitly state criteria or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_workflow_request (Submit Workflow Request): Grade C
Use this when the user wants the Workflow daemon to handle a bounded request.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Request text to queue. | |
| branch_id | No | Optional target branch identifier. | |
| universe_id | No | Optional target universe. Empty uses the active universe. | |
| request_type | No | Workflow request type. | scene_direction |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
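To make the input schema concrete, here is a sketch of an MCP `tools/call` request for this tool. The argument values and the request id are illustrative; parameter names mirror the schema above.

```python
import json

# Hypothetical JSON-RPC payload invoking submit_workflow_request.
# The text and request_type values are made up for illustration.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "submit_workflow_request",
        "arguments": {
            "text": "Advance the current scene to the next beat.",
            "request_type": "scene_direction",  # the default shown in the schema
        },
    },
}

# Optional parameters (branch_id, universe_id) are simply omitted;
# per the schema, an empty universe_id falls back to the active universe.
body = json.dumps(payload)
```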
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations mark the tool as not destructive, not idempotent, and not read-only, which aligns with queuing. However, the description does not disclose side effects, auth needs, or rate limits, leaving agents uninformed about operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence: concise, but likely under-specified, with no structure and no front-loading of critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the presence of an output schema, the description is too brief for a tool with 4 parameters and multiple siblings. It omits details about return values, error conditions, and the meaning of 'bounded'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100%, with descriptions for all parameters. The description adds no meaning beyond the schema, so the baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses verb 'Queue' and resource 'bounded request for the Workflow daemon', clearly distinguishing it from sibling tools that retrieve or list workflow data. However, 'bounded request' may be ambiguous to an agent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, nor any prerequisites or conditions. The description implicitly suggests it's for queuing requests, but does not clarify when submission is appropriate or not.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
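The gaps called out above could be closed in the tool definition itself. A hypothetical rewrite of the description (the wording is illustrative, not the server's actual definition) might fold in side effects, the meaning of "bounded", and explicit usage guidance:

```python
# Sketch of a revised submit_workflow_request definition.
# This is an illustration of the review's recommendations, not the
# connector's real description text.
revised = {
    "name": "submit_workflow_request",
    "description": (
        "Queue a bounded (single, self-contained) request for the Workflow "
        "daemon to process asynchronously. Side effect: appends to the "
        "request queue; calling twice queues two requests. Use this when "
        "the user wants the daemon to act; use list_workflow_runs to check "
        "progress, and get_workflow_goal to read state without side effects."
    ),
}
```

A description in this shape answers the first three review questions directly: side effects, purpose, and when-to-use guidance are all stated rather than implied.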
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
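Before publishing, it can help to sanity-check the file's shape. The sketch below checks only the two fields shown above; the linked JSON schema at glama.ai is authoritative and may require more.

```python
import json

def check_glama_json(text: str) -> list[str]:
    """Return a list of problems with a /.well-known/glama.json document.
    Only validates the fields shown in the example above (an assumption);
    the published schema is the real source of truth."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("'maintainers' must be a non-empty list")
    else:
        for m in maintainers:
            if not isinstance(m, dict) or "email" not in m:
                problems.append("each maintainer needs an 'email' key")
    return problems

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
```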
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.