Glama

Server Quality Checklist

Profile completion: 50%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    Each tool has a distinct purpose with clear boundaries. Comment operations (add/edit/get) are well differentiated, issue retrieval is separate from workflow transitions, and connection management stands apart from Jira operations.

    Naming Consistency: 4/5

    All tools use consistent snake_case formatting. Jira operations follow a clear jira_verb_noun pattern, while connection management tools use simpler verbs (login/logout/list_connections). The distinction between prefixed and unprefixed tools is logical but creates a minor inconsistency in the naming scheme.

    Tool Count: 5/5

    Ten tools represents an appropriate scope for a Jira integration—sufficient to cover comment management, issue retrieval, workflow transitions, and authentication without bloat or fragmentation.

    Completeness: 3/5

    The surface covers reading, commenting, and workflow transitions well, but lacks essential write operations for issues themselves—specifically create_issue and update_issue (for fields). This creates a significant functional gap where agents can comment on and move issues but cannot create or modify issue content.

  • Average tool score: 2.9/5 across all 10 tools.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.
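    As a hedged example (the `$schema` URL and field names reflect the commonly used shape of this file; verify against Glama's current documentation), a minimal glama.json might look like:

    ```json
    {
      "$schema": "https://glama.ai/mcp/schemas/server.json",
      "maintainers": ["your-github-username"]
    }
    ```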

  • This server provides 10 tools.
  • No known security issues or vulnerabilities reported.


  • Add related servers to improve discoverability.

Tool Scores

  • jira_get_comments

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description fails to disclose critical behavioral traits. It does not explain what 'all comments' entails (pagination limits?), the valid values for the 'format' parameter (the default is 'toon' but alternatives are unexplained), or error conditions (e.g., private issues). The output schema relieves the description of detailing return values, but operational behavior remains undocumented.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The six-word sentence is grammatically efficient and front-loaded, but suffers from under-specification rather than true conciseness. Every word earns its place, yet critical information is missing, making it too terse for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given three parameters with zero schema documentation and no annotations, the description is insufficient. It omits explanation of the 'format' enum values, the scope of 'all' (pagination behavior), and how 'project' relates to 'issue_key'. The presence of an output schema reduces the burden for return value documentation, but parameter and behavioral documentation remain critically incomplete.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to compensate, but it fails to do so. It does not explain the 'project' parameter's relationship to 'issue_key', nor does it clarify the mysterious 'format' parameter (defaulting to 'toon') or what valid formatting options are available. Only 'issue' is implicitly referenced.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
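    As a hedged sketch of the fix (the parameter names come from the tool, but the descriptions, enum values, and default semantics are illustrative assumptions), an input schema that documents intent as well as structure could look like:

    ```json
    {
      "type": "object",
      "properties": {
        "issue_key": {
          "type": "string",
          "description": "Jira issue key, e.g. PROJ-123."
        },
        "project": {
          "type": "string",
          "description": "Project key used to select the saved connection."
        },
        "format": {
          "type": "string",
          "enum": ["toon", "json"],
          "default": "toon",
          "description": "Output encoding for the returned comments."
        }
      },
      "required": ["issue_key"]
    }
    ```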

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states a clear verb ('Get') and resource ('comments') with scope ('for a Jira issue'). It implicitly distinguishes from sibling tools like jira_add_comment and jira_edit_comment through the operation type, though it could explicitly clarify when to use this versus jira_get_issue.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus alternatives. For example, it does not clarify whether to use this when you specifically need comment history versus the full issue details available from jira_get_issue, nor does it mention any prerequisites like issue visibility permissions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
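    To make the point concrete, a description with explicit usage guidance might read as follows (the wording is invented, not the server's actual text):

    ```
    Get all comments for a Jira issue. Use this when you need comment history
    only; use jira_get_issue when you also need fields and status. Requires
    read access to the issue. Returns results in 'toon' format by default;
    pass format="json" for raw JSON.
    ```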

  • logout

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden but fails to disclose critical behavioral traits: it doesn't confirm the operation is destructive (e.g., via a destructiveHint annotation, though 'Remove' implies it), doesn't specify error behavior if the connection doesn't exist, and doesn't mention whether removal affects in-flight operations or session validity.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely terse at four words with no redundancy or filler. However, given the complete lack of schema documentation and annotations, this brevity crosses into under-specification rather than efficient communication.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate for a destructive operation with undocumented parameters. Despite having an output schema (which could clarify success/failure), the description doesn't explain return values, and leaves the critical 'project' parameter unexplained while failing to provide safety warnings expected for state-mutating operations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage for the 'project' parameter, and the description completely fails to compensate—it doesn't mention the parameter at all, nor explain the semantic relationship between the 'project' parameter and the 'saved connection' resource being removed (e.g., whether project is an ID, name, or connection identifier).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States a specific action (Remove) and resource (saved connection), distinguishing it from sibling 'login' (create) and 'list_connections' (read). However, it doesn't specify that this targets Jira/Atlassian connections specifically, leaving slight ambiguity given the mix of Jira-specific and connection-management sibling tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no explicit guidance on when to use versus alternatives, prerequisites (e.g., must be logged in first), or side effects. While it is semantically the inverse of 'login', the description doesn't state this relationship or warn that removal invalidates future operations requiring authentication.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • jira_edit_comment

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It only notes the wiki markup format requirement; it fails to disclose that this operation overwrites existing content, whether it is destructive/reversible, or error conditions (e.g., invalid comment_id).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The two-sentence structure is efficient with no wasted words. However, given the lack of schema documentation and annotations, the extreme brevity becomes a liability rather than a virtue, as critical context is omitted.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite having an output schema (reducing the need for a return value description), the tool requires four inputs (project, issue_key, comment_id, body) with zero schema coverage. The description inadequately explains these inputs and the mutation's side effects, leaving significant gaps for an agent attempting invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to compensate. It only partially addresses the 'body' parameter by noting wiki markup. It fails to explain the other four parameters: the relationship between 'project' and 'issue_key', what constitutes a valid 'comment_id', or the purpose of the 'format' parameter.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Edit') and resource ('existing comment on a Jira issue'), distinguishing it from the sibling 'jira_add_comment'. The mention of 'wiki markup (API v2)' adds necessary technical context about the implementation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While 'Edit an existing' implies the tool is for modification rather than creation, there is no explicit guidance on when to use this versus 'jira_add_comment', nor any mention of prerequisites like permissions or comment ownership.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • jira_get_transitions

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden for behavioral disclosure but provides minimal information. While 'Get' implies a read-only operation, the description does not confirm lack of side effects, authentication requirements, rate limits, or whether it checks user permissions for the transitions it lists.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single-sentence description is efficiently structured with no redundant words. However, extreme brevity becomes a liability given the lack of schema documentation and annotations, leaving critical information gaps.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While an output schema exists (reducing the need for return value description), the tool remains under-documented. With zero schema coverage, no annotations, and three parameters including an opaque 'format' option, the description fails to provide sufficient context for correct invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, yet the description adds no parameter context. The 'format' parameter (with cryptic default 'toon') and the relationship between 'project' and 'issue_key' are left unexplained, failing to compensate for the undocumented schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Get') and resource ('available transitions') with clear scope ('for a Jira issue'). However, it does not explicitly differentiate from the sibling tool 'jira_transition_issue', which likely performs state changes while this tool retrieves options.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus alternatives. It fails to mention that this retrieves transition metadata while 'jira_transition_issue' executes transitions, or that this should be called before attempting to transition an issue to validate available states.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • jira_search

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, yet the description discloses minimal behavioral information. It does not explain the pagination behavior (limit/offset defaults), the mysterious 'format' default value ('toon'), rate limits, or authentication requirements implied by the login/logout siblings.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise at two sentences with no filler. However, given the complexity (5 undocumented parameters) and lack of annotations, this brevity results in under-specification rather than efficient communication.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate for a tool with 5 parameters and zero schema documentation. While an output schema exists (reducing the need to describe return values), the description provides insufficient context for the input parameters, pagination behavior, or JQL syntax requirements.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description fails to compensate adequately. While 'using JQL' implies the syntax for the 'jql' parameter, it leaves four other parameters unexplained: 'project' (key vs name?), 'limit'/'offset' (pagination behavior), and 'format' (allowed values?).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
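    For illustration, a fully specified call to this tool might pass arguments like these (the parameter names come from the schema; the values and pagination semantics are assumptions):

    ```json
    {
      "project": "PROJ",
      "jql": "status = 'In Progress' ORDER BY updated DESC",
      "limit": 25,
      "offset": 0,
      "format": "json"
    }
    ```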

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly identifies the action (search), resource (Jira issues), and method (JQL). However, it does not explicitly distinguish this from sibling 'jira_get_issue', which also retrieves issues but by specific identifier rather than query.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use JQL search versus retrieving a specific issue directly via jira_get_issue. No mention of prerequisites, required permissions, or when to prefer this over other discovery methods.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_connections

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It successfully discloses that secrets/credentials are not exposed in the output ('no secrets shown'), which is valuable security context. However, it omits other behavioral details like pagination, caching, or the structure of the returned connection objects.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with the action front-loaded and a useful parenthetical. However, given the complete lack of parameter documentation, it is inappropriately concise—suffering from under-specification rather than efficient communication.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While the tool has an output schema (reducing the need to describe return values), the description is incomplete because it fails to address the single input parameter at all. For a tool with zero annotations and zero schema coverage, the description should provide more context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage (the 'project' parameter has only a title, no description), the description must compensate but fails entirely. It does not mention the 'project' parameter, explain its purpose (filtering?), its format, or the behavior when null (default).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('List') and resource ('saved connections'). The parenthetical '(no secrets shown)' adds specific behavioral context about the output. It doesn't explicitly distinguish from siblings (e.g., clarifying these are Jira/Database connections vs. the login/logout tools), but the core purpose is clear.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'login' or 'logout', or when to filter by the 'project' parameter. There are no explicit when-to-use or when-not-to-use conditions provided.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • jira_get_issue

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It adds value by specifying the return content ('full issue data including fields and status'), which complements the existing output schema. However, it omits behavioral details like error handling (invalid key scenarios), authentication requirements, or rate limiting.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences with no fluff. The first states the operation and target, the second describes the return value. However, given the lack of schema documentation and annotations, this brevity comes at the cost of completeness.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite having an output schema (reducing the need for detailed return value explanation), the description is incomplete due to 0% schema coverage and lack of annotations. With three parameters requiring documentation, the description only implicitly covers one ('issue_key'), leaving significant gaps for an agent trying to construct valid arguments.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to compensate. While 'by key' implies the 'issue_key' parameter, the description fails to explain the 'project' parameter (whether it expects a key or ID) or the 'format' parameter (what valid values exist beyond the default 'json'). This leaves two of three parameters undocumented.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the core action ('Get') and resource ('Jira issue') with the specific identifier ('by key'). It implicitly distinguishes from sibling 'jira_search' (which would retrieve multiple issues) and write operations like 'jira_add_comment'. However, it could explicitly differentiate from related read operations like 'jira_get_transitions'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It fails to mention that 'jira_search' should be used for listing or querying multiple issues, or that 'jira_get_transitions' is needed for workflow status options. No prerequisites or error conditions are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • jira_add_comment

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It adds valuable context that the comment 'body' uses wiki markup and references API v2, but omits mutation side effects, idempotency, or error conditions.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely efficient two-sentence structure. Information is front-loaded with the action, and the second sentence adds critical formatting context without verbosity. No redundant or filler text.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool has an output schema (relieving the description of return value documentation), the description meets minimum viability by explaining the core action and input format. However, with zero schema descriptions, it lacks sufficient detail for an agent to correctly populate all parameters without guessing.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to compensate. It partially compensates by indicating wiki markup applies to the body content, but leaves 'project', 'issue_key', and 'format' parameters completely undocumented with no type hints or examples.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
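    For instance (the values are invented; the markup follows Jira's wiki syntax), an add-comment call with a wiki-markup body might look like:

    ```json
    {
      "project": "PROJ",
      "issue_key": "PROJ-123",
      "body": "*Deployed* to staging. See [build log|https://ci.example.com/123]."
    }
    ```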

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb ('Add') and resource ('comment to a Jira issue'). Implicitly distinguishes from siblings like 'jira_edit_comment' and 'jira_get_comments' through the action verb, though it does not explicitly name alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus alternatives (e.g., when to add vs. edit a comment) or prerequisites (e.g., issue must exist). Agents must infer usage solely from the verb.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • jira_transition_issue

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It fails to disclose that this is a destructive write operation, that it requires specific Jira permissions (workflow transition rights), or that it may trigger workflow post-functions. 'Transition' implies change but lacks explicit behavioral context.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences, zero waste. Front-loaded with the core action, followed immediately by the critical prerequisite. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Minimum viable for a workflow tool with output schema present. Core function and prerequisite stated, but gaps remain: no mention of permission requirements, workflow constraints, or parameter semantics given the completely undocumented schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage, and the description compensates only minimally: it implicitly hints at the transition_id source via the usage guideline. The project, issue_key, and format parameters remain completely undocumented, with no types, formats, or examples provided.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb ('Transition') + resource ('Jira issue') + outcome ('new status'). Distinguishes from jira_get_transitions by implying execution vs. discovery, though it doesn't explicitly characterize this as a state mutation or workflow operation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Excellent prerequisite guidance: explicitly directs users to 'Use jira_get_transitions to discover available transitions' before invoking this tool, establishing clear sequencing with the sibling tool.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
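    The discover-then-execute sequencing this description prescribes can be sketched in Python (the `call_tool` stub and its responses are invented; only the tool names and the two-step order come from the descriptions):

    ```python
    def call_tool(name, arguments):
        # Stub standing in for a real MCP client call; responses are invented.
        if name == "jira_get_transitions":
            return {"transitions": [{"id": "31", "name": "In Progress"},
                                    {"id": "41", "name": "Done"}]}
        if name == "jira_transition_issue":
            return {"ok": True, "issue_key": arguments["issue_key"]}
        raise ValueError(f"unknown tool: {name}")

    def move_issue(issue_key, target_status):
        # Step 1: discover which transitions the workflow currently allows.
        options = call_tool("jira_get_transitions", {"issue_key": issue_key})
        match = next((t for t in options["transitions"]
                      if t["name"] == target_status), None)
        if match is None:
            raise ValueError(f"{target_status!r} is not reachable from the current status")
        # Step 2: execute the chosen transition by id.
        return call_tool("jira_transition_issue",
                         {"issue_key": issue_key, "transition_id": match["id"]})

    result = move_issue("PROJ-123", "Done")
    ```

    Skipping step 1 and guessing a transition_id is exactly the failure mode the prerequisite sentence prevents.
    
    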

  • login

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It discloses validation behavior (calls /myself) and critical security traits (variable expansion at runtime, never stored resolved). However, it omits details on failure behavior, idempotency, and whether saving overwrites an existing connection.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste. Purpose and validation front-loaded in first sentence; security guidance follows logically. Every sentence earns its place with specific technical value.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For an authentication tool with 5 parameters and 0% schema coverage, the description covers the critical security aspect well but leaves significant gaps in parameter documentation. Output schema exists so return values need not be explained, but the lack of parameter semantics for 'project' and 'read_only' limits completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0%, requiring description compensation. It adds essential semantics for 'token' (ENV_VAR pattern and security model) and implies url/email/token are used for auth via the /myself mention. However, 'project' and 'read_only' parameters remain completely undocumented, and the description doesn't explain parameter relationships or formats.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description explicitly states 'Save an Atlassian connection' with specific verb and resource, and mentions validation via '/myself'. Clearly distinguishes from sibling 'logout' (removes connections) and 'list_connections' (reads connections) by specifying the 'save' action.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides critical security guidance on token passing (ENV_VAR vs literal), but lacks explicit guidance on when to use this tool versus siblings or prerequisites (e.g., 'call this before other jira_* tools'). Usage is implied by the name but not stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

(badge preview: a2atlassian MCP server)

Copy the card badge snippet to your README.md.

Score Badge

(badge preview: a2atlassian MCP server)

Copy the score badge snippet to your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
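The only fields shown above are `$schema` and a `maintainers` array of GitHub usernames. A quick sanity check before committing the file might look like this (the field names come from the example above; any further schema constraints are an assumption, so validating against the published schema URL is still the authoritative check):

```python
import json

def check_glama_json(text: str) -> list[str]:
    """Return a list of problems found in a glama.json document."""
    problems = []
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if "$schema" not in data:
        problems.append("missing $schema field")
    maintainers = data.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    elif not all(isinstance(m, str) and m for m in maintainers):
        problems.append("maintainers entries must be non-empty strings")
    return problems

sample = """
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["your-github-username"]
}
"""
print(check_glama_json(sample))  # → []
```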

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
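Putting those weights together, the arithmetic can be sketched as follows. This is a minimal sketch of the published formula: the dimension weights, blend ratios, and tier cutoffs are taken from the text above, while the example per-tool scores are made up.

```python
# Dimension weights for a per-tool Tool Definition Quality Score (TDQS).
WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for a single tool."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def server_quality(tool_scores: list, coherence: float) -> float:
    """Overall score: 70% definition quality + 30% coherence.

    Definition quality blends 60% mean TDQS with 40% minimum TDQS,
    so a single poorly described tool pulls the score down.
    """
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    for grade, cutoff in [("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)]:
        if score >= cutoff:
            return grade
    return "F"

# Hypothetical two-tool server: one well-documented tool, one weak one.
tools = [
    {"purpose": 5, "usage_guidelines": 5, "behavior": 4,
     "parameters": 3, "conciseness": 5, "completeness": 3},
    {"purpose": 4, "usage_guidelines": 3, "behavior": 2,
     "parameters": 2, "conciseness": 4, "completeness": 2},
]
overall = server_quality(tools, coherence=4.25)
print(round(overall, 2), tier(overall))  # → 3.6 A
```

Note how the weak second tool (TDQS 2.9) drags the blended definition quality below the mean of 3.6, which is exactly the penalty the 40%-minimum term is designed to apply.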


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/agentic-eng/a2atlassian'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.