Glama

Server Quality Checklist

Profile completion: 50%
A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.3.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 10 tools.
  • No known security issues or vulnerabilities reported.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully indicates the tool performs a state-changing completion action and returns a summary. However, it lacks details on side effects (e.g., whether the plan becomes read-only), idempotency, or error conditions.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
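The "annotations" referred to throughout this report are MCP tool annotations: structured behavioral hints alongside the description. A minimal sketch of declaring them for a state-changing close tool; the field names come from the MCP specification, while the hint values are assumptions about this server, not taken from it:

```python
# Illustrative MCP tool annotations for a state-changing "close" tool.
# Field names follow the MCP spec; the values are assumed for this sketch.
annotations = {
    "title": "Close plan",
    "readOnlyHint": False,      # the tool mutates state
    "destructiveHint": False,   # completion is additive, not destructive
    "idempotentHint": True,     # assumed: closing twice yields the same state
    "openWorldHint": False,     # operates only on the server's own store
}

# With hints present, an agent can check safety before calling the tool.
assert annotations["readOnlyHint"] is False
```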

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is exceptionally concise with two front-loaded sentences. The first states purpose and return value; the second documents the parameter. There is no redundant or wasted text.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (1 parameter) and the existence of an output schema, the description is reasonably complete. It appropriately mentions the return value ('final summary') without needing to detail the schema structure. However, with 9 sibling tools, slightly more lifecycle positioning would strengthen completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage. The description compensates by documenting the single parameter in the Args section ('The plan ID to close'), providing basic semantic meaning. However, it omits format details, constraints, or where to obtain the plan_id.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
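The "0% description coverage" finding means no property in the input schema carries a description field. A sketch of a self-documenting schema for this single-parameter tool; the wording, and the claim about where plan_id comes from, are invented for illustration:

```python
# Hypothetical input schema with a per-parameter "description" field,
# so agents get parameter semantics from the schema, not only the prose.
input_schema = {
    "type": "object",
    "properties": {
        "plan_id": {
            "type": "string",
            "description": (
                "ID of the plan to close, as returned by morpheus_init "
                "or listed by morpheus_status."
            ),
        },
    },
    "required": ["plan_id"],
}

# Coverage metric of the kind this report computes: fraction of
# properties that carry a description.
coverage = sum(
    1 for prop in input_schema["properties"].values() if "description" in prop
) / len(input_schema["properties"])
```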

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool marks a plan as 'completed' and returns a 'final summary', using specific verbs and identifying the resource. The 'completed' and 'final' language effectively signals this is a terminal lifecycle operation, distinguishing it from siblings like morpheus_init or morpheus_advance without needing to explicitly name them.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage context through 'completed' and 'final summary', suggesting this is the last step in a workflow. However, it lacks explicit guidance on when to prefer this over morpheus_advance or prerequisites for closing.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
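A sketch of how the description could supply the missing explicit guidance; the wording is invented here, and only the tool names appear in this report:

```python
# Illustrative rewrite of the close tool's description with explicit
# "use X instead of Y when Z" guidance. Wording is hypothetical.
DESCRIPTION = (
    "Mark a plan as completed and return its final summary. "
    "Use this only after every task has cleared its last phase gate; "
    "to move individual tasks forward, use morpheus_advance instead.\n"
    "\n"
    "Args:\n"
    "    plan_id: The plan ID to close.\n"
)

# The alternative tool is now named explicitly, not merely implied.
assert "morpheus_advance" in DESCRIPTION
```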

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. While 'Get' implies a read-only operation, the description does not explicitly confirm safety, idempotency, or side effects. It explains what data is returned but lacks behavioral traits beyond the operation type.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is well-structured with the purpose front-loaded, followed by default behavior and an Args section. There is slight redundancy between the prose explanation of plan_id and the Args section, but every sentence adds value.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema (not shown but indicated), the description appropriately omits return value details. It covers the essential operational context for a single-parameter status tool, though sibling differentiation would improve completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description compensates effectively by explaining that plan_id defaults to the most recent plan when omitted. This semantic explanation of the null/default value is crucial and not inferable from the schema alone.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves 'current plan progress, task states, and active phase' using specific verbs and resources. However, it does not differentiate from sibling tools like 'morpheus_progress' or 'morpheus_gate_summary' which may return similar information.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear guidance on parameter usage (when plan_id is omitted), but offers no guidance on when to select this tool versus siblings like morpheus_progress or morpheus_gate_summary. Usage is implied by the name but not explicitly contextualized.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses key behaviors: parsing markdown, persisting to a store ('saves plan and tasks'), and returning a summary with IDs. However, it omits critical mutation details like idempotency (whether calling twice overwrites state), file existence requirements, or error conditions.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with a clear one-sentence purpose, a second sentence detailing the processing steps, and an Args section documenting the single parameter. No extraneous information is present.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool has only one required parameter and an output schema exists, the description provides sufficient context. It explains the input parameter and gives a high-level summary of the return value without needing to replicate the full output schema structure.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage (only title and type). The description compensates by specifying that plan_file is a 'Filesystem path to the plan markdown file,' clarifying both the format (markdown) and nature (filesystem path). It could be improved by noting absolute vs. relative path handling or file encoding requirements.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Load a plan file'), the target system ('into Morpheus'), and the outcome ('begin tracking state'). It effectively distinguishes itself as the initialization entry point among siblings like morpheus_advance and morpheus_close.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The phrase 'begin tracking state' implies this is the first step in the workflow, but the description lacks explicit guidance on when to use this versus other tools (e.g., 'Call this before morpheus_advance') or prerequisites (e.g., whether Morpheus must be in a specific state).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It discloses the long-term behavioral purpose ('reveals which gates produce value and which burn tokens'), implying data persistence and analysis. However, it omits idempotency guarantees, side effects, or what the output schema contains despite having one.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with purpose front-loaded, followed by timing, value proposition, then parameter documentation. The Args block is lengthy but necessary given zero schema coverage. No wasted words, though the inline parameter documentation could be considered redundant with proper schema descriptions.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a 6-parameter logging tool: documents all inputs with examples. However, given no annotations and an existing output schema, the description should ideally mention error conditions (e.g., invalid plan_id) or prerequisites. It leaves operational behavior (idempotency, overwrites) undocumented.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description fully compensates by documenting all 6 parameters in the Args block, including specific semantics ('True if the gate found something actionable') and concrete examples for 'gate' and 'detail' parameters. This is exemplary compensation for schema deficiencies.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
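An Args block of the kind called exemplary here typically follows the Google docstring convention that Python MCP servers commonly use to generate tool descriptions. A generic sketch; the signature and wording are illustrative, with only the parameter hints ('gate', 'detail', the caught-issue semantics) taken from this report:

```python
# Hypothetical reflection tool with a fully documented Args block.
# Names and wording are illustrative, not the server's actual code.
def morpheus_reflect(
    plan_id: str,
    task_id: str,
    gate: str,
    fired: bool,
    caught_issue: bool,
    detail: str,
) -> dict:
    """Record whether a gate caught a real issue or was ceremony.

    Args:
        plan_id: Plan the reflection belongs to.
        task_id: Task the gate fired on.
        gate: Gate name, e.g. 'diff review'.
        fired: Whether the gate actually fired for this task.
        caught_issue: True if the gate found something actionable.
        detail: Short free-text note, e.g. 'flagged missing null check'.
    """
    return {"plan_id": plan_id, "gate": gate, "caught_issue": caught_issue}
```

Because servers often surface the docstring verbatim as the tool description, a complete Args block like this is what lets a description fully compensate for a bare schema.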

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'Record[s] whether a gate caught a real issue or was ceremony' — specific verb, resource (gate reflection), and scope. It distinguishes itself from sibling tools by focusing on dataset building for value analysis rather than gate execution or status checking.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states when to use: 'Call this after each gate fires to build the Reflect dataset.' Provides clear temporal context. Lacks explicit 'when not to use' or named alternatives, though the dataset-building purpose implies this is specifically for logging reflection events.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses scoping behavior (lifetime stats across all plans vs scoped) and implies read-only analytics, but omits performance characteristics, rate limits, or computational cost of aggregating lifetime data. Adequate but missing operational safety details.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with the core function, followed by default behavior, use case justification, and parameter documentation. Every sentence earns its place with zero redundancy. The Args section efficiently documents the single parameter without boilerplate.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given this is a simple single-parameter tool with an output schema (which absolves the description from detailing return values), the description is complete. It covers domain context (gates, plans), scoping behavior, and purpose. Minor gap in not defining 'gates' for unfamiliar agents, but sufficient given the sibling tool namespace.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0% (title only), so the description compensates by explaining plan_id is 'Optional' and used to 'scope the summary', with the additional context that omitting it returns cross-plan stats. It does not explain the format or constraints of plan_id values, preventing a 5.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Summarize') and resource ('gate outcomes'), detailing exactly what metrics are returned (fired frequency, caught issues, changed code). It clearly distinguishes this as an analytics/reporting tool among operational siblings like morpheus_advance and morpheus_init.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear contextual guidance: it explains the default behavior when plan_id is omitted (returns lifetime stats across all plans) and explicitly states the value proposition ('identify which gates produce value and which are ceremony'). It lacks explicit 'use instead of X' alternatives, but the use case is distinct enough.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It implies read-only behavior via 'Return,' but does not explicitly confirm safety or idempotency, or disclose rate limits. It mentions three specific version components being returned.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two highly efficient sentences with zero waste. Front-loaded with the return value specification, followed by usage guidance. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for a low-complexity tool with 0 parameters and an existing output schema. Covers return values at a high level and usage context. Could be strengthened by explicitly confirming read-only safety given the lack of annotations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The tool has zero parameters, establishing a baseline of 4. The description adds value by explicitly confirming 'No arguments required,' which aligns with the empty input schema and prevents parameter hallucination.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Return') and identifies exact resources (Morpheus server version, plan schema version, Python version). It clearly distinguishes this introspection tool from operational siblings like morpheus_advance and morpheus_init.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states 'No arguments required' and provides clear usage context ('introspection and compatibility checks'). Lacks explicit 'when not to use' guidance, though this is less critical for a non-destructive metadata tool.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full behavioral disclosure burden. It successfully declares the tool as 'Purely observational' (safety profile), explains that it 'records a timestamped message' (side effect), and notes visibility in morpheus_status (retrieval mechanism). It does not contradict any annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is appropriately front-loaded with the core action and contains zero filler. Each of the three prose sentences plus the two parameter lines conveys essential information. Minor deduction for the informal 'Args:' block structure rather than prose integration, but overall highly efficient.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (2 primitive parameters) and the presence of an output schema (which absolves the description from detailing return values), the description is complete. It covers purpose, behavioral constraints, parameter semantics, and visibility side effects sufficiently for correct agent invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to compensate. It fully documents both parameters under the 'Args:' section, explaining that task_id is 'The task ID to log progress for' and message is the 'Progress message to record.' While effective, the inline Args format is less structured than ideal schema documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with 'Log progress for a task without advancing phases,' providing a specific verb (Log), resource (progress), and critical scope constraint that distinguishes it from siblings morpheus_advance and morpheus_advance_batch. The 'Purely observational' phrase further differentiates it from state-mutating tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The phrase 'without advancing phases' effectively signals when NOT to use this tool (vs. morpheus_advance), and mentioning that output is 'Visible in morpheus_status' indicates the verification path. However, it stops short of explicitly naming the alternative tools or stating 'use this when you need to log without phase changes.'

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden. It successfully explains the validation logic, dual outcome paths (success with instructions vs. rejection with missing details), and auditability side effects ('Recorded in evidence for auditability'). Could be improved by mentioning idempotency or error states.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Efficiently structured with the core action front-loaded, followed by behavioral details, then the Args block. Every sentence conveys necessary information; the example in skip_reason is illustrative without being verbose. No redundant or filler content.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the workflow complexity (phase gates, validation, evidence collection) and lack of annotations, the description provides sufficient context for correct invocation. Since an output schema exists, the brief mention of return behavior ('Returns success...or rejection') is appropriate. Could explicitly note this is for single-task advancement.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Compensates perfectly for 0% schema description coverage by documenting all 4 parameters in the Args section. Provides critical enum values for 'phase' (CHECK, CODE, TEST, GRADE, COMMIT, ADVANCE), clarifies the JSON string format for 'evidence', and explains the bypass logic for 'skip_reason' with usage context.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
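The enum values the description supplies can be enforced with a simple membership check; a sketch, with the validation logic assumed rather than taken from the server:

```python
# Phase names come from the tool description quoted above;
# the validation helper itself is illustrative.
PHASES = ("CHECK", "CODE", "TEST", "GRADE", "COMMIT", "ADVANCE")

def validate_phase(phase: str) -> str:
    if phase not in PHASES:
        raise ValueError(f"unknown phase {phase!r}; expected one of {PHASES}")
    return phase
```

Listing the enum in the description matters precisely because a bare string-typed schema gives the agent no way to discover these values.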

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific action ('Advance a task'), resource ('phase gate'), and mechanism ('with evidence'). The singular 'task' implicitly distinguishes it from sibling 'morpheus_advance_batch', while 'phase gate' differentiates it from tools like morpheus_close or morpheus_status.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit guidance on when to use the skip_reason parameter ('Use for intentional skips') with a concrete example ('greenfield — no diff for Seraph'). Explains the validation prerequisite (evidence must satisfy gate requirements). Lacks explicit comparison to the batch variant or when to prefer morpheus_gate_summary.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden. It reveals critical execution semantics: sequential processing and stop-on-first-failure behavior. It also documents the expected object structure (task_id, phase, evidence). It lacks rate limits or auth details, but the execution model is transparent.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured in four sentences: purpose statement, execution semantics, input format specification, and Args section. Every sentence earns its place; there is no redundancy or generic filler. Information is front-loaded with the core purpose.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given that an output schema exists (per context signals), the description appropriately omits return value details. It covers the essential complexity of a batch operation—specifically the partial failure behavior—and compensates for the barren input schema. It could mention rate limits or prerequisites, but it is sufficient for tool invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%—the 'advances' parameter has no schema description and is typed only as 'string'. The description provides essential compensation by specifying that this string must contain a 'JSON array of objects with keys: task_id, phase, evidence', without which the tool would be unusable.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
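The documented contract ('advances' is a JSON string encoding an array of objects with keys task_id, phase, evidence), together with the stop-at-first-failure semantics noted under Behavior, can be sketched as follows; advance_one is a hypothetical stand-in for the real gate check:

```python
import json

# advance_one stands in for the real single-task gate validation;
# the toy rule here just requires non-empty evidence.
def advance_one(item: dict) -> bool:
    return bool(item.get("evidence"))

def advance_batch(advances: str) -> list[dict]:
    """Process a JSON-string batch sequentially, stopping at first failure."""
    results = []
    for item in json.loads(advances):
        ok = advance_one(item)
        results.append({"task_id": item["task_id"], "ok": ok})
        if not ok:  # later tasks are left untouched
            break
    return results

payload = json.dumps([
    {"task_id": "t1", "phase": "TEST", "evidence": "{\"passed\": true}"},
    {"task_id": "t2", "phase": "TEST", "evidence": ""},
    {"task_id": "t3", "phase": "TEST", "evidence": "{\"passed\": true}"},
])
```

Without the description's note that the string must decode to an array of such objects, the schema's bare 'string' type would make the tool unusable.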

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Advance') and clearly identifies the resource ('multiple tasks through phase gates') and batch nature ('in a single call'). It effectively distinguishes itself from the sibling tool 'morpheus_advance' by emphasizing the batch processing capability.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear behavioral context ('Stops at the first failure') which implicitly guides usage—warning that partial completion is possible. While it doesn't explicitly name 'morpheus_advance' as the single-task alternative, the phrase 'multiple tasks... in a single call' makes the batch use case clear relative to the singular sibling tool name.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It discloses the mutation effect ('Clears any oil_change_due advisory') and the prerequisite relationship with sentinel_health_check. It lacks explicit statements on idempotency or error states, but covers the primary behavioral traits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three tight sentences followed by a structured Args block. No filler text. Information is front-loaded (action, workflow, effect) before parameter details. Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Since an output schema exists (per context signals), the description appropriately omits return-value details. It covers the action, workflow context, side effects, and all otherwise-undocumented parameters. Minor gap: it could mention what happens if health_check_id is invalid, but it is sufficient for tool selection.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0% (titles only), but the description compensates perfectly via the 'Args:' block. It documents all 3 parameters with specific semantics: health_check_id is linked to sentinel_health_check results, and commits_since_last clarifies the temporal reference ('since last health check').

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action ('Record an oil change') and resource ('plan'), with the metaphor explained as a 'macro-lens health check'. Describes the side effect (clears advisory). However, it does not explicitly differentiate from sibling morpheus tools (e.g., how this differs from morpheus_status or morpheus_reflect).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance: 'Call this after running sentinel_health_check and reviewing the results.' This establishes clear prerequisites and temporal ordering, telling the agent exactly when to invoke the tool vs. its dependency.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

morpheus-mcp MCP server

Copy to your README.md:

Score Badge

morpheus-mcp MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
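The weighting above can be sketched in a few lines of Python. This is an illustrative reconstruction of the formula as described, not Glama's actual code: the dimension keys, function names, and example inputs are assumptions, while the weights, blend ratios, and tier cutoffs come from the text.

```python
# Hypothetical sketch of the scoring formula described above.
# Weights, 60/40 blend, 70/30 split, and tier cutoffs are from the text;
# names and example values are illustrative.

TDQS_WEIGHTS = {
    "purpose": 0.25,
    "usage": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(dims):
    """Weighted 1-5 score across a single tool's six dimensions."""
    return sum(TDQS_WEIGHTS[k] * dims[k] for k in TDQS_WEIGHTS)

def server_quality(tool_scores, coherence):
    """Overall score: 70% definition quality, 30% coherence.

    Definition quality blends 60% mean TDQS with 40% minimum TDQS,
    so one poorly described tool drags the whole server down.
    """
    definition_quality = (0.6 * (sum(tool_scores) / len(tool_scores))
                          + 0.4 * min(tool_scores))
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """Map an overall score to its letter tier."""
    if score >= 3.5:
        return "A"
    if score >= 3.0:
        return "B"
    if score >= 2.0:
        return "C"
    if score >= 1.0:
        return "D"
    return "F"
```

For example, a tool rated 4/5/4/5/5/4 on purpose, usage, behavior, parameters, conciseness, and completeness works out to a TDQS of 4.45, which is why a single low-scoring sibling tool can still pull a server below the B cutoff through the 40% minimum term.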

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/evo-hydra/morpheus-mcp'
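The same request can be made from Python with only the standard library. This is a minimal sketch: the endpoint path is taken from the curl example above, but the function names and the shape of the returned JSON are assumptions.

```python
# Illustrative client for the MCP directory API using only the stdlib.
# The endpoint path mirrors the curl example; response shape is assumed.
import json
from urllib.request import urlopen

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner, repo):
    """Build the metadata URL for a given server slug."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_server(owner, repo):
    """Fetch and decode a server's JSON metadata (performs a network call)."""
    with urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# Usage (requires network access):
#   metadata = fetch_server("evo-hydra", "morpheus-mcp")
```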

If you have feedback or need assistance with the MCP directory API, please join our Discord server.