Glama

Mooring

by seayniclabs

Server Quality Checklist

Profile completion: 67%. A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Clear separation between GitHub API (gh_*) and local git (repo_*) operations. Although gh_issues combines list/create/update into one tool while PR operations are split into separate tools, the descriptions clearly distinguish their purposes, with no overlapping functionality between tools.

    Naming Consistency 3/5

    Uses consistent prefixes (gh_ and repo_) but mixes naming patterns: gh_issues and gh_actions use nouns while PR tools use verb_noun (gh_pr_create, gh_pr_list). Local git tools follow git command names (mostly nouns) but lack verb consistency with the GitHub action-oriented names.

    Tool Count 5/5

    Twelve tools is well-scoped for a server bridging GitHub API and local git operations. The count covers essential inspection and management tasks without bloat, with each tool earning its place in the set.

    Completeness 3/5

    Strong read-only coverage (status, log, diff, blame, list operations) but notable gaps in write operations: no PR update/merge/close, no local git commit/push/pull/checkout, and limited Actions support (list only). The surface supports inspection workflows well but leaves agents unable to complete full git/PR lifecycle operations.

  • Average 3.4/5 across 12 of 12 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.1

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 12 tools.
  • No known security issues or vulnerabilities reported.

  • Add related servers to improve discoverability.
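The glama.json item in the checklist above can be sketched as follows. This is a minimal example only; the schema URL and field names are assumptions drawn from Glama's published examples, not verified against this repository:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["seayniclabs"]
}
```

The file lives at the repository root, and the maintainers list ties the server listing to GitHub accounts.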

Tool Scores

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden for behavioral disclosure. While it notes default values (base=main), it omits critical mutation-related context: error conditions (e.g., duplicate PRs), idempotency characteristics, authentication requirements, and side effects of the creation operation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
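    The "annotations" these scores refer to are MCP tool annotations, which a server can attach to each tool alongside its description. A sketch of what a mutating tool such as gh_pr_create could declare, with hint values that are illustrative rather than taken from this server:

```json
{
  "name": "gh_pr_create",
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": false,
    "idempotentHint": false,
    "openWorldHint": true
  }
}
```

    Clients treat these as hints, so the description should still state the same facts in prose.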

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is appropriately sized with a front-loaded purpose statement followed by a structured Args section. There is minimal waste, though the Args formatting is slightly unconventional for natural language processing.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While all parameters are documented and an output schema exists (reducing the need to describe return values), the description lacks behavioral context expected for a mutation tool with no safety annotations. Prerequisites and side effects remain undocumented.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Given 0% schema description coverage, the description compensates effectively by documenting all 7 parameters in the Args block, adding crucial semantics like 'owner/name format' for repo and clarifying that head/base refer to branch names. This provides the meaning entirely missing from the schema titles.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with 'Create a pull request,' providing a clear, specific verb and resource. However, it fails to distinguish from siblings like gh_pr_detail or gh_pr_list, which could confuse an agent about whether this tool retrieves or creates PRs.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description offers no guidance on when to use this tool versus alternatives (e.g., gh_pr_list for finding existing PRs), nor does it mention prerequisites such as requiring the head branch to exist on the remote repository before invocation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
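    A hypothetical revision of the gh_pr_create description that folds in the Behavior and Usage Guidelines gaps noted above. The parameter list is abbreviated, and the auth, failure, and sibling-tool sentences are illustrative assumptions, not taken from the actual server:

```python
# Hypothetical, abbreviated rewrite of the gh_pr_create description.
# The auth requirement, failure mode, and cross-references are examples
# of the kind of disclosure the review asks for, not verified behavior.
GH_PR_CREATE_DESCRIPTION = """\
Create a pull request on GitHub (mutating; requires an authenticated token).
Fails if an open PR already exists for the same head/base pair. Use
gh_pr_list to find existing PRs and gh_pr_detail to inspect one; push the
head branch to the remote before calling this tool.

Args:
    repo: Repository in 'owner/name' format.
    head: Source branch name (must exist on the remote).
    base: Target branch name. Defaults to 'main'.
"""

assert GH_PR_CREATE_DESCRIPTION.startswith("Create a pull request")
```

    The first sentence still leads with verb and resource; the behavioral and usage sentences come before the Args block so they are not lost to truncation.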

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations are absent, so the description carries full behavioral burden, yet it omits critical details: it does not state this is read-only (implied only by 'Get'), does not mention authentication requirements, rate limits, or error behavior (e.g., PR not found).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The docstring-style structure (summary sentence followed by Args block) is efficiently organized and front-loaded. No redundant text, though the parameter documentation largely restates the parameter names.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the simple, flat input schema (2 primitives) and existence of an output schema, the description is minimally complete. However, the lack of behavioral transparency and sibling differentiation leaves gaps that could cause invocation errors.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the Args section compensates effectively: it documents 'repo' with essential format constraints ('owner/name format') and clarifies that 'number' refers to the PR number. However, it does not describe validation rules or constraints.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states a clear action ('Get') and resource ('detailed information about a specific pull request'). The use of 'specific' implicitly distinguishes it from sibling gh_pr_list, though it does not explicitly name alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this versus gh_pr_list (which likely searches and filters) or gh_pr_create. No prerequisites, conditions, or exclusions are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but provides minimal context. It doesn't mention pagination behavior, result limits, rate limiting, or authentication requirements that are relevant for GitHub API operations. The presence of an output schema is noted in context but the description doesn't hint at the return structure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with the purpose statement front-loaded followed by parameter documentation. The Args format is slightly informal but perfectly readable. No redundant or wasted text - every line adds specific information not present in the schema.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a read-only list operation with an output schema present. However, given it's a list operation potentially returning many results, the description should mention pagination behavior or result limits. Safety/scope guidance is also missing given the lack of annotations, though 'List' implies read-only behavior.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Excellent compensation for 0% schema description coverage. The Args section documents all 4 parameters with meaningful semantics: 'repo' includes the owner/name format requirement, 'state' enumerates valid values (open/closed/all) and default, and filter parameters 'author' and 'label' clarify their filtering purpose.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states 'List pull requests from a GitHub repository' with a specific verb and resource. However, it doesn't explicitly differentiate from the sibling tool 'gh_pr_detail' - it should clarify that this returns a collection/summary versus detailed information about a specific PR.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this versus 'gh_pr_detail' (which likely returns more comprehensive information about a single PR) or versus 'gh_issues' (for issues instead of PRs). The agent must infer based on tool names alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description fails to disclose behavioral specifics: it doesn't explain that 'pop' removes the stash after applying while 'apply' preserves it, nor that 'message' is only relevant for 'push'. It also omits mutation/safety warnings (e.g., potential for merge conflicts during pop/apply).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Uses a clear header sentence followed by an Args block. While the Args format is slightly informal/docstring-like, it is well-structured and every line conveys necessary information without redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Covers basic operation listing but lacks crucial details for a multi-modal tool: the specific effects of each action (especially pop vs apply) and parameter constraints (message only used for push). Output schema exists so return values don't need description, but operational semantics are under-specified.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Compensates effectively for 0% schema description coverage by documenting all three parameters in the Args block, including valid values for 'action' and the optional nature of 'message'. It correctly notes the default 'list' behavior.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clearly identifies the tool manages git stash operations and lists the four specific supported actions (list, push, pop, apply). Distinguishes sufficiently from sibling repo_* tools by focusing on the stash-specific workflow, though it could explicitly mention 'Git' for absolute clarity.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use stash versus commit or other workflows, nor does it explain the critical difference between destructive (pop) and non-destructive (apply) operations. No 'when not to use' or prerequisite context is given.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
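    The pop-vs-apply distinction the review flags can be demonstrated with plain git in a throwaway repository. This shows standard git semantics, which the server presumably wraps, not the server's own implementation:

```python
# Demonstrate that `git stash apply` keeps the stash entry while
# `git stash pop` removes it on success. Standard git behavior only.
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    return subprocess.run(("git",) + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "dev", cwd=repo)
f = pathlib.Path(repo, "notes.txt")
f.write_text("base\n")
git("add", "notes.txt", cwd=repo)
git("commit", "-m", "base", cwd=repo)

f.write_text("work in progress\n")
git("stash", "push", "-m", "wip", cwd=repo)

git("stash", "apply", cwd=repo)               # non-destructive: entry kept
after_apply = git("stash", "list", cwd=repo)

git("checkout", "--", "notes.txt", cwd=repo)  # reset so pop applies cleanly
git("stash", "pop", cwd=repo)                 # destructive: entry removed
after_pop = git("stash", "list", cwd=repo)

assert "wip" in after_apply
assert after_pop.strip() == ""
```

    A description that spelled out this asymmetry (and that pop/apply can raise merge conflicts on a dirty tree) would close most of the Behavior gap above.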

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description must carry the full disclosure burden. It adds value by specifying the output format ('Unified diff'), but lacks disclosure on error behavior (invalid refs, non-repo paths), whether the operation is read-only (implied but not stated), or performance characteristics for large repositories.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with a one-sentence purpose followed by an Args block. No filler text is present. The Args block is necessary given the lack of schema descriptions, so it earns its place despite the parameter listing.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema (so return values need not be described) and the parameter documentation in the description text, the definition is minimally complete. However, it lacks a title (null) and omits behavioral details like error conditions or parameter interdependencies that would help an agent invoke it correctly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, so the description compensates by documenting all four parameters in the 'Args' block. It adds semantic meaning (e.g., 'Starting reference' for from_ref) and notes the default for 'staged', which is not inferable from the schema titles alone.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly identifies the tool generates a 'Unified diff' (specific format) and identifies the comparison targets (refs, staged, working tree). However, it uses a noun phrase rather than an action verb and does not explicitly differentiate from 'repo_status' (which shows change status but not content).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this versus 'repo_status' or 'repo_log', nor does it explain the interaction between 'staged' and ref parameters (e.g., are they mutually exclusive?). The description only lists parameters without usage context.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
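    The staged/ref interaction questioned above has a well-defined answer in plain git: `git diff` compares the working tree to the index, while `git diff --staged` compares the index to HEAD. Whether this server maps its parameters the same way is an assumption; the sketch below shows only standard git behavior:

```python
# Show that an edit appears in `git diff` before staging and in
# `git diff --staged` after staging. Standard git behavior only.
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    return subprocess.run(("git",) + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "dev", cwd=repo)
f = pathlib.Path(repo, "app.py")
f.write_text("print('v1')\n")
git("add", "app.py", cwd=repo)
git("commit", "-m", "v1", cwd=repo)

f.write_text("print('v2')\n")
unstaged = git("diff", cwd=repo)            # change visible here...
git("add", "app.py", cwd=repo)
staged = git("diff", "--staged", cwd=repo)  # ...and here after staging
now_unstaged = git("diff", cwd=repo)        # but no longer here

assert "v2" in unstaged
assert "v2" in staged
assert now_unstaged == ""
```

    Stating this mapping explicitly in the description would resolve the "mutually exclusive?" ambiguity for agents.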

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It adds 'Formatted' indicating output style and notes the default max_count (20), but omits safety characteristics (read-only nature), error behaviors (invalid repo_path handling), or output schema details (though an output schema exists separately).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with purpose front-loaded in first sentence followed by parameter documentation. 'Args:' section is necessary given schema coverage gap, though slightly informal. No redundant sentences.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Covers core functionality and parameters adequately given complexity. Missing sibling differentiation and behavioral edge cases, but acceptable since output schema exists to define return values and annotations are absent.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Given 0% schema description coverage, the description compensates by documenting all 6 parameters with clear semantics and constraints (e.g., default 20 for max_count, date example for since). Falls short of 5 by not detailing type constraints or format validation beyond single examples.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (formatted commit log) and resource (git repository). Clearly distinguishes from GitHub API siblings (gh_*) by focusing on local repository operations, though could better differentiate from other local git tools like repo_diff or repo_blame.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no explicit guidance on when to use versus alternatives (e.g., repo_diff for changes, repo_blame for line attribution) or prerequisites. Only implies usage through filter descriptions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
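    The max_count and since filters discussed above correspond to standard git log flags (`-n` and `--since`). Assuming the server maps them straightforwardly, their effect looks like this in plain git:

```python
# Show `git log -n` capping commit count and `--since` filtering by date.
# Standard git flags; the server's exact mapping is an assumption.
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    return subprocess.run(("git",) + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "dev", cwd=repo)
f = pathlib.Path(repo, "notes.txt")
for i in range(3):
    f.write_text(f"revision {i}\n")
    git("add", "notes.txt", cwd=repo)
    git("commit", "-m", f"commit {i}", cwd=repo)

capped = git("log", "-n", "2", "--oneline", cwd=repo)            # max_count analogue
dated = git("log", "--since=1970-01-01", "--oneline", cwd=repo)  # since analogue

assert len(capped.strip().splitlines()) == 2
assert len(dated.strip().splitlines()) == 3
```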

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. States 'recent' but lacks pagination details, rate limits, authentication requirements, or cache behavior. 'List' implies read-only but safety profile isn't explicit.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Compact Args block format efficiently documents parameters. No wasted words. Slightly informal structure (Python docstring style) but appropriate for developer tooling.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Has output schema so return values don't need description. However, lacks critical operational context like GitHub token requirements and pagination behavior for 'recent' runs given the parameter complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Excellent compensation for 0% schema coverage. Describes repo format constraint (owner/name) and marks workflow/status as optional. Minor gap: doesn't enumerate valid status values (success, failure, etc.).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb 'List' and resource 'GitHub Actions workflow runs'. Distinct from siblings like gh_issues (issues), gh_pr_list (PRs), and repo_log (git history).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit when-to-use or when-not-to-use guidance. No comparison with alternatives or prerequisites mentioned (e.g., GitHub authentication requirements).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the burden of disclosure. It lists what data is retrieved (branch, stash count, file states), implying a read-only inspection, but does not explicitly state safety guarantees, failure modes (e.g., non-git directory), or that it requires local git installation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently sized with two distinct sections. While the 'Args:' section is slightly unconventional for MCP (parameters are usually documented solely in the schema), it earns its place by compensating for the schema's lack of descriptions. No redundant or filler text.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the low complexity (single parameter, read-only operation) and the existence of an output schema (per context signals), the description adequately covers the tool's purpose and parameter semantics without needing to detail return values.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 0% description coverage (only a title 'Repo Path'). The description compensates by documenting the single parameter in the Args section: 'repo_path: Path to local git repository', adding crucial context that this path must point to a git repository specifically.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states this retrieves 'Enhanced git status' and lists specific data points returned (branch, ahead/behind, stash count, staged/unstaged/untracked files). However, it does not explicitly differentiate from sibling tools like 'repo_stash' (which likely manages stashes vs. just counting them) or 'repo_diff'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'repo_diff' (for changes) or 'repo_log' (for history). There are no 'when-not-to-use' exclusions or prerequisites mentioned, such as requiring the path to be a valid git repository.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It implies read-only behavior through 'Return,' but omits auth requirements, caching behavior, or what constitutes 'status' (binary up/down vs detailed metrics). Minimal but adequate for a simple health endpoint.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely efficient: two words convey the complete operation. No redundancy, no wasted sentences. Appropriate for a parameterless health check endpoint.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (no inputs, output schema exists), the description is sufficient. The output schema handles return value documentation. Minor gap: 'status' could be clarified (health state vs general metadata).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters exist, warranting the baseline score of 4 per rubric guidelines. The description correctly implies no inputs are needed, matching the empty schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Return') and identifies the resource ('server version and status'). While it doesn't explicitly contrast with siblings, the scope is clearly distinct from the git-centric tools (gh_actions, repo_blame, etc.), making its purpose obvious in context.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to invoke this tool versus alternatives. For a health check, explicit guidance about using it for connectivity verification or pre-flight checks would be helpful, but is absent.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description must carry the full behavioral disclosure burden. It mentions line range support, which is useful, but omits details about output format (text vs structured), what specific information git blame returns (author, commit hash, timestamp), or error conditions when the file doesn't exist.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with the core purpose in the first sentence, constraints in the second, and a clear Args section documenting parameters. No words are wasted; every sentence earns its place by conveying essential information not present in the structured schema fields.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given that an output schema exists (per context signals), the description appropriately does not need to explain return values. It adequately covers the 4 parameters and basic operation for a git blame tool. A minor gap is the lack of mention regarding whether the tool requires git to be installed or specific repository states.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description fully compensates by documenting all 4 parameters in the Args section. It adds meaningful semantics such as 'relative to repo root' for file_path and notes the optional nature of line numbers, providing sufficient context for the agent to use the tool correctly.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly identifies the tool as performing 'Git blame' with optional line range support, which is a specific verb+resource combination. However, it does not explicitly differentiate from siblings like 'repo_diff' (which shows changes) or 'repo_log' (which shows history), leaving the agent to infer that blame shows line-level authorship.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides one important constraint ('File path must be within the repository'), but lacks explicit guidance on when to use blame versus alternatives like repo_diff or repo_log. The guideline is implied by the function name rather than explicitly stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses what data is returned (tracking info, ahead/behind counts), but lacks information about side effects, performance characteristics, or error conditions. The 'List' verb implies read-only behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with the primary action upfront, followed by an Args section. The information is dense with minimal waste, though the 'Args:' formatting is slightly informal for an MCP description.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (single parameter) and the existence of an output schema, the description is sufficiently complete. It appropriately previews the key output concepts (tracking info, ahead/behind) without needing to fully describe the schema structure.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema coverage (only 'title' present, no 'description'), the description compensates by documenting the repo_path parameter as 'Path to local git repository.' This provides necessary semantic meaning, though it could be richer (e.g., specifying absolute vs relative paths or validation requirements).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('List all branches') and distinguishes itself from siblings like repo_log, repo_diff, and repo_status by specifying the unique output content (tracking info, last commit, ahead/behind counts). The scope is precisely defined.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives (e.g., when to use repo_status instead) or any prerequisites. There is no 'when to use' or 'when not to use' information present.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It documents valid values for action (list/create/update) and state (open/closed/all), but fails to disclose behavioral traits of the mutating operations—such as whether updates are destructive, if create requires specific permissions, or error conditions when updating non-existent issues.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with a clear purpose statement followed by an Args section. Every line provides necessary constraint information. The docstring format is functional and front-loaded, though slightly less polished than natural language prose.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema, the description appropriately focuses on input parameters and their relationships. It adequately covers the multi-modal complexity (list/create/update) by documenting parameter requirements for each mode, though it could mention the existence of the output schema or typical response structures.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description fully compensates by documenting all 6 parameters: repo format ('owner/name'), state options, action values, and conditional requirements ('required for create/update'). It adds critical semantic meaning entirely absent from the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the three supported operations (list, create, update) and the resource (GitHub issues). However, it does not explicitly differentiate from sibling PR tools (gh_pr_create, gh_pr_list) or explain why issues are bundled into one tool while PRs are split across multiple tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The Args section provides implied usage guidance by indicating which parameters are required for specific actions (e.g., 'required for create', 'for create or update'). However, it lacks explicit guidance on when to prefer listing vs. creating, or warnings about when NOT to use this tool (e.g., for PRs).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.


How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
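The weighting described above can be sketched as a short calculation. This is an illustrative reconstruction of the published formula, not Glama's actual implementation; the function and dimension names are assumptions:

```python
def tool_definition_quality(dimension_scores):
    """Weighted TDQS for one tool. Each dimension is scored 1-5."""
    weights = {
        "purpose": 0.25,        # Purpose Clarity
        "usage": 0.20,          # Usage Guidelines
        "behavior": 0.20,       # Behavioral Transparency
        "parameters": 0.15,     # Parameter Semantics
        "conciseness": 0.10,    # Conciseness & Structure
        "completeness": 0.10,   # Contextual Completeness
    }
    return sum(w * dimension_scores[d] for d, w in weights.items())

def overall_score(per_tool_tdqs, coherence_dims):
    """Combine Tool Definition Quality (70%) with Server Coherence (30%).

    The server-level definition quality mixes the mean and the minimum
    TDQS, so one poorly described tool drags the whole score down.
    """
    mean_tdqs = sum(per_tool_tdqs) / len(per_tool_tdqs)
    definition_quality = 0.6 * mean_tdqs + 0.4 * min(per_tool_tdqs)
    coherence = sum(coherence_dims) / len(coherence_dims)  # four equal dims
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """Map an overall score to a letter tier."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, a server whose tools all score a uniform 4.0 and whose four coherence dimensions are all 4 would land at 4.0 overall, tier A; because the minimum TDQS carries 40% of the definition-quality weight, a single tool at 2.0 among tools at 4.0 pulls the result down much faster than the mean alone would.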


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/seayniclabs/mooring'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.