MCP Orchestrator Server
Server Quality Checklist
Latest release: v1.0.0
Server Coherence
- Disambiguation 5/5
Every tool has a clearly distinct purpose with no ambiguity. The tools target specific actions like creating, updating, deleting, completing, and retrieving tasks, with clear boundaries between them. For example, get_task_details and get_task_status serve different retrieval functions, and delete_task has a specific condition that distinguishes it.
- Naming Consistency 5/5
All tool names follow a consistent verb_noun pattern throughout, such as create_task, update_task, and get_task_details. There are no deviations in naming conventions, making the set predictable and easy to understand for an agent.
- Tool Count 5/5
With 7 tools, the count is well-scoped for a task management server, covering essential CRUD operations and status checks. Each tool earns its place by addressing a specific aspect of task handling, without being overly sparse or bloated.
- Completeness 5/5
The tool surface provides complete CRUD/lifecycle coverage for task management, including creation, retrieval, updating, deletion, completion, and status monitoring. There are no obvious gaps, and agents can perform all necessary operations without dead ends.
Tool Definition Quality: average 2.9/5 across 7 of 7 tools scored. Lowest: 2.3/5.
See the Tool Scores section below for per-tool breakdowns.
- No issues in the last 6 months
- No commit activity data available
- No stable releases found
- No critical vulnerability alerts
- No high-severity vulnerability alerts
- No code scanning findings
- CI status not available
This repository is licensed under MIT License.
This repository includes a README.md file.
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server. Add glama.json to the root of your repository:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```
Add related servers to improve discoverability.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
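For concreteness, here is a minimal sketch of that arithmetic in TypeScript. The weights and tier cutoffs come straight from the description above; the function and type names are ours, not Glama's actual implementation.

```typescript
// Per-tool TDQS: weighted mean of six dimensions, each scored 1-5.
// Weights are taken from the methodology described above.
type DimensionScores = {
  purposeClarity: number;          // 25%
  usageGuidelines: number;         // 20%
  behavioralTransparency: number;  // 20%
  parameterSemantics: number;      // 15%
  conciseness: number;             // 10%
  contextualCompleteness: number;  // 10%
};

function toolScore(d: DimensionScores): number {
  return (
    0.25 * d.purposeClarity +
    0.2 * d.usageGuidelines +
    0.2 * d.behavioralTransparency +
    0.15 * d.parameterSemantics +
    0.1 * d.conciseness +
    0.1 * d.contextualCompleteness
  );
}

// Server-level definition quality: 60% mean TDQS + 40% minimum TDQS,
// so a single poorly described tool pulls the score down.
function definitionQuality(toolScores: number[]): number {
  const mean = toolScores.reduce((a, b) => a + b, 0) / toolScores.length;
  return 0.6 * mean + 0.4 * Math.min(...toolScores);
}

// Overall: 70% definition quality + 30% coherence (mean of the four
// equally weighted coherence dimensions).
function overallScore(toolScores: number[], coherence: number[]): number {
  const coherenceMean = coherence.reduce((a, b) => a + b, 0) / coherence.length;
  return 0.7 * definitionQuality(toolScores) + 0.3 * coherenceMean;
}

function tier(score: number): "A" | "B" | "C" | "D" | "F" {
  if (score >= 3.5) return "A";
  if (score >= 3.0) return "B";
  if (score >= 2.0) return "C";
  if (score >= 1.0) return "D";
  return "F";
}
```

Plugging in this server's published numbers (mean TDQS 2.9, lowest 2.3, and 5/5 on all four coherence dimensions) gives 0.7 × (0.6 × 2.9 + 0.4 × 2.3) + 0.3 × 5.0 ≈ 3.36, which lands in tier B under the cutoffs above, assuming the "lowest" figure is the minimum TDQS.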
Tool Scores
create_task
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. 'Create a new task' implies a write/mutation operation but doesn't specify permissions needed, whether creation is idempotent, what happens on duplicate IDs, or what the response contains. For a mutation tool with zero annotation coverage, this leaves critical behavioral aspects undocumented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is maximally concise with a single three-word sentence that directly states the action. There's zero wasted language or unnecessary elaboration. While this conciseness comes at the cost of completeness, the description is perfectly structured for its limited content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 3 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain what happens after creation, what validation occurs, or how to interpret results. The agent lacks sufficient context to understand the tool's behavior and outcomes beyond the basic creation action.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all three parameters well-documented in the schema. The description adds no parameter information beyond what's already in the schema. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no param info in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 2/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create a new task' is a tautology that restates the tool name without adding specificity. It doesn't distinguish this tool from sibling tools like 'update_task' or 'complete_task' beyond the basic verb. While the verb 'create' is clear, the description lacks detail about what constitutes a 'task' in this context or what resources are involved.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 1/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'update_task' or 'complete_task'. There's no mention of prerequisites, appropriate contexts, or exclusions. The agent must infer usage entirely from the tool name and schema, which is insufficient for effective tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
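The fixes the reviewers are asking for can live entirely in the tool registration. Below is a hedged sketch using the TypeScript MCP SDK: the parameter names, types, and every word of the improved description are invented for illustration, since this page does not show the server's real schema.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "orchestrator-server", version: "1.0.0" });

server.tool(
  "create_task",
  // Hypothetical description addressing the critiques above: specific
  // purpose, sibling disambiguation, and behavioral disclosure.
  "Create a new task in the orchestrator's task list and return its ID. " +
    "Use this only for brand-new work items: use update_task to modify an " +
    "existing pending task and complete_task to mark one finished. " +
    "This is a non-idempotent write; each call creates a new task.",
  {
    // Hypothetical parameters; the real schema has three documented ones.
    description: z.string().describe("Human-readable summary of the work"),
    priority: z.number().int().min(1).max(5).optional()
      .describe("1 (highest) to 5 (lowest); defaults to 3"),
    dependencies: z.array(z.string()).optional()
      .describe("IDs of tasks that must complete before this one"),
  },
  async ({ description }) => ({
    content: [{ type: "text", text: `Created task: ${description}` }],
  })
);
```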
get_task_details
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a 'Get' operation, implying it's likely read-only, but doesn't confirm this or describe any other traits like authentication needs, rate limits, error conditions, or what 'details' encompass (e.g., full metadata vs. limited fields). For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with zero waste: 'Get details of a specific task'. It's appropriately sized and front-loaded, directly stating the tool's purpose without unnecessary elaboration. Every word earns its place, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a read operation with one parameter) and the absence of annotations and output schema, the description is incomplete. It doesn't explain what 'details' include, potential return values, or behavioral aspects like idempotency. For a tool in a set with multiple task-related siblings, more context is needed to distinguish it and guide proper use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'task_id' documented as 'ID of the task to get details for'. The description adds no additional meaning beyond this, such as format examples or constraints. With high schema coverage, the baseline score of 3 is appropriate, as the schema does the heavy lifting for parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get details of a specific task' clearly states the verb ('Get details') and resource ('a specific task'), making the purpose understandable. However, it doesn't differentiate this from sibling tools like 'get_task_status' or 'get_next_task', which likely also retrieve task information but with different scopes or filters. The description is adequate but lacks sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_task_status' (which might return only status info) and 'get_next_task' (which might retrieve the next pending task), there's no indication that this tool is for retrieving comprehensive details of a specified task ID. Usage is implied by the name but not explicitly stated in the description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_task_status
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. 'Get status of all tasks' implies a read-only operation that returns status information, but it doesn't specify what 'status' entails (e.g., pending, completed, error states), whether it includes metadata or just state, or if there are limitations like pagination or rate limits. For a tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence: 'Get status of all tasks'. It's front-loaded and wastes no words, making it easy to parse. However, it could be slightly more specific (e.g., 'Retrieve the current status for all tasks in the system') to enhance clarity without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of task management (with siblings for creation, completion, deletion, and other get operations), the description is incomplete. No annotations exist to clarify behavior, and there's no output schema to describe return values. The description doesn't explain what 'status' includes or how it differs from other get tools, leaving the agent with insufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema declares no parameters, so there is nothing to document. The description accordingly mentions none, which is appropriate. With zero parameters there are no gaps to compensate for, so the baseline of 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get status of all tasks' clearly states the verb ('Get') and resource ('status of all tasks'), making the purpose understandable. However, it doesn't distinguish this tool from sibling tools like 'get_task_details' or 'get_next_task', leaving ambiguity about what specific status information is provided versus other get operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_task_details' (likely for individual tasks) and 'get_next_task' (likely for queued tasks), there's no indication whether this tool returns aggregated statuses, summary information, or a list of all tasks without filtering. No explicit when/when-not or alternative recommendations are included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
complete_task
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but only states the basic action without behavioral details. It doesn't disclose whether this is a mutation (implied but not explicit), what permissions are needed, if it's irreversible, or how it affects task state (e.g., sets status to 'completed'). This leaves significant gaps for a tool that likely modifies data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action, making it easy to parse quickly, though this brevity contributes to gaps in other dimensions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is incomplete. It lacks details on behavior, side effects, return values, and differentiation from siblings. Given the complexity of task management and rich sibling tools, this minimal description leaves the agent under-informed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds no additional meaning beyond implying 'task_id' identifies the task to complete, which is already clear from the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Mark as completed') and resource ('a task'), making the purpose immediately understandable. However, it doesn't differentiate from siblings like 'update_task' which might also handle completion status, leaving room for ambiguity in a task management system.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'update_task' or 'delete_task'. The description lacks context about prerequisites (e.g., task must be in progress) or exclusions (e.g., cannot complete already completed tasks), offering minimal usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_next_task
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It implies a read operation but doesn't disclose behavioral traits like whether this marks tasks as in-progress, affects task state, has rate limits, or requires specific permissions. The phrase 'next available' hints at queue behavior but lacks detail on concurrency or locking mechanisms.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's appropriately sized and front-loaded, directly stating the tool's purpose without unnecessary elaboration, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of task management and lack of annotations or output schema, the description is incomplete. It doesn't explain what 'next available' entails, how tasks are prioritized, what data is returned, or error conditions, leaving significant gaps for an AI agent to understand the tool's behavior in context with its siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single parameter 'instance_id'. The description doesn't add meaning beyond the schema, but with no parameter left needing extra explanation, a baseline of 4 is appropriate as no compensation is required for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get the next available task' clearly states the action (get) and resource (task), but it's vague about what 'next available' means in context. It doesn't distinguish this tool from siblings like 'get_task_details' or 'get_task_status', leaving ambiguity about whether this fetches a specific task or the next one in a queue.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With siblings like 'get_task_details' and 'get_task_status', the description doesn't clarify if this is for polling workflows, task assignment, or other contexts, nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
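A recurring theme across the three get_* tools is sibling ambiguity. One low-cost fix is to write each description against its neighbors. These rewrites are purely illustrative; the actual semantics of each tool are guesses, just as the reviews above flag them.

```typescript
// Hypothetical sibling-aware descriptions; wording and semantics invented.
const improvedDescriptions = {
  get_task_details:
    "Get the full record (fields, dependencies, state) for one task by ID. " +
    "For a summary of every task, use get_task_status instead.",
  get_task_status:
    "Get the current status of all tasks in one read-only call. " +
    "For one task's full record, use get_task_details instead.",
  get_next_task:
    "Get the next available (pending, unblocked) task for a worker " +
    "instance. Intended for polling workflows, not ad-hoc lookups.",
} as const;
```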
update_task
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states this updates an existing task but doesn't mention permissions needed, whether changes are reversible, error conditions, or what happens to unspecified fields. This is inadequate for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's appropriately sized and front-loaded, clearly stating the tool's purpose without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't address behavioral aspects like side effects, error handling, or return values, leaving significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters. The description adds no additional meaning about parameters beyond what's in the schema, such as format constraints or examples, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and target ('an existing pending task'), providing specific verb+resource. However, it doesn't distinguish this from sibling tools like 'complete_task' or 'create_task' in terms of when to use each, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'complete_task' or 'create_task'. It mentions 'pending task' but doesn't clarify if this is a prerequisite or exclusion criterion, leaving the agent with no usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_task
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the dependency check behavior, which is valuable. However, it lacks details on permissions needed, whether deletion is reversible, error handling for invalid IDs, or confirmation prompts. For a destructive operation with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key action and condition. Every word earns its place with no redundancy or fluff, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a destructive mutation tool with no annotations and no output schema, the description is incomplete. It covers the dependency condition but misses critical context like permissions, reversibility, error responses, or what happens post-deletion. For its complexity level, more behavioral disclosure is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'task_id', so the schema already documents it adequately. The description doesn't add parameter-specific details beyond implying how task_id is used, but with the lone parameter already well covered and needing no extra semantics, baseline 4 is appropriate as no compensation is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and resource ('a task'), specifying the condition ('if it has no dependents'). It distinguishes from siblings like 'complete_task' or 'update_task' by focusing on removal rather than modification. However, it doesn't explicitly contrast with all siblings like 'get_task_status'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('if it has no dependents'), implying it should not be used when tasks have dependencies. It doesn't explicitly name alternatives or state when-not scenarios beyond the dependency condition, but the conditional guidance is strong.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
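Several Behavior scores above hinge on missing annotations. MCP tool annotations can carry exactly the hints the reviews ask for; a sketch for delete_task follows. The registration API shown matches recent versions of the TypeScript SDK, the description wording is invented, and the idempotency claim in particular is an assumption rather than documented behavior.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "orchestrator-server", version: "1.0.0" });

server.registerTool(
  "delete_task",
  {
    description:
      "Delete a task if it has no dependents. Fails, rather than cascading, " +
      "when other tasks depend on it. Deletion is permanent.",
    inputSchema: {
      task_id: z.string().describe("ID of the task to delete"),
    },
    annotations: {
      readOnlyHint: false,   // this tool mutates state
      destructiveHint: true, // a successful delete cannot be undone
      idempotentHint: true,  // assumption: re-deleting a missing ID is a no-op
    },
  },
  async ({ task_id }) => ({
    content: [{ type: "text", text: `Deleted task ${task_id}` }],
  })
);
```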
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/mokafari/orchestrator-server'
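For programmatic consumers, the same endpoint works from any HTTP client. Here is a minimal TypeScript sketch (Node 18+ or a browser); the response shape is not documented on this page, so it stays untyped:

```typescript
// Fetch this server's directory entry from the Glama MCP API.
async function fetchServerEntry(): Promise<unknown> {
  const res = await fetch(
    "https://glama.ai/api/mcp/v1/servers/mokafari/orchestrator-server",
  );
  if (!res.ok) throw new Error(`Glama API returned ${res.status}`);
  return res.json(); // shape undocumented here; inspect before relying on it
}

fetchServerEntry().then(console.log);
```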
If you have feedback or need assistance with the MCP directory API, please join our Discord server.