
mcp-discord

by hanweg

Server Quality Checklist

Profile completion: 50%

A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 15 tools.
  • No known security issues or vulnerabilities reported.



  • Add related servers to improve discoverability.

Tool Scores

  • add_multiple_reactions

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'add' implies a mutation, it doesn't specify permissions required, rate limits, whether reactions are reversible, or what happens on failure (e.g., duplicate reactions). For a mutation tool with zero annotation coverage, this is a significant gap in transparency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is appropriately sized and front-loaded, with every word contributing to understanding the core purpose.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity of a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't address behavioral aspects like error handling, side effects, or response format, which are crucial for an AI agent to use the tool correctly in a Discord context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with clear descriptions for all three parameters (channel_id, message_id, emojis). The description adds no additional parameter semantics beyond what the schema provides, such as format examples or constraints, so it meets the baseline score of 3 for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('add') and resource ('multiple reactions to a message'), making the purpose specific and understandable. However, it doesn't explicitly distinguish this tool from its sibling 'add_reaction' (which presumably adds a single reaction), leaving some ambiguity about when to choose one over the other.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'add_reaction' or other message-related tools. It lacks context about prerequisites (e.g., needing message and channel IDs) or exclusions, leaving usage decisions entirely to inference from the tool name and parameters.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
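The Behavior and Usage Guidelines gaps above could be closed with the optional tool annotations the MCP spec defines (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) plus a slightly fuller description. A minimal sketch follows; the annotation values, permission name, and no-op claim are assumptions about how add_multiple_reactions might behave, not confirmed behavior of this server.

```python
# Hypothetical MCP tool definition for add_multiple_reactions, showing
# how annotations plus one extra sentence could disclose the behavior
# the review flags as missing. All behavioral claims are assumptions.
tool = {
    "name": "add_multiple_reactions",
    "description": (
        "Add multiple reactions to a message. For a single reaction, "
        "prefer add_reaction. Requires the Add Reactions permission; "
        "re-adding an existing reaction is a silent no-op."
    ),
    "annotations": {
        "readOnlyHint": False,     # mutates message state
        "destructiveHint": False,  # reversible via remove_reaction
        "idempotentHint": True,    # assumed: duplicates do not error
        "openWorldHint": True,     # calls the external Discord API
    },
}
```

Even with annotations present, the description still carries the "use X instead of Y" guidance, since annotations are structured hints rather than prose.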

  • add_reaction

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Add a reaction' implies a mutation operation, it doesn't specify permission requirements, rate limits, whether reactions are reversible, or what happens on success/failure. For a mutation tool with zero annotation coverage, this leaves significant behavioral gaps.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a simple tool and front-loads the core purpose immediately.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given this is a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't explain what happens after adding a reaction, potential error conditions, or important behavioral constraints. The agent would need to guess about the tool's effects and limitations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The description adds no parameter information beyond what's already in the input schema, which has 100% coverage with clear descriptions for all three parameters. According to scoring rules, when schema_description_coverage is high (>80%), the baseline is 3 even with no param info in the description.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Add a reaction') and target ('to a message'), providing a specific verb+resource combination. However, it doesn't distinguish this tool from its sibling 'add_multiple_reactions' or 'remove_reaction', which would require explicit differentiation for a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'add_multiple_reactions' or 'remove_reaction'. It also doesn't mention prerequisites, context, or exclusions, leaving the agent with minimal usage direction.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • add_role

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden for behavioral disclosure. 'Add a role to a user' implies a write/mutation operation but doesn't specify required permissions, whether this is reversible (though 'remove_role' sibling suggests it is), rate limits, or what happens on success/failure. For a Discord API mutation tool with zero annotation coverage, this leaves significant behavioral gaps.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is maximally concise - a single clear sentence that states exactly what the tool does. There's zero wasted language, no unnecessary elaboration, and the core purpose is immediately apparent. This is an excellent example of efficient communication.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a Discord role assignment tool with no annotations and no output schema, the description is insufficient. It doesn't cover important contextual elements like required permissions, error conditions, what the tool returns, or how this operation fits within Discord's role hierarchy. The agent would need to guess about many operational aspects.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with all three parameters clearly documented in the schema itself. The description adds no additional parameter context beyond what's already in the schema (role_id, server_id, user_id). This meets the baseline expectation when schema coverage is complete, but provides no extra semantic value.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Add') and target ('a role to a user'), making the purpose immediately understandable. It distinguishes from siblings like 'remove_role' by specifying the opposite operation, though it doesn't explicitly contrast with other user/role management tools. The description avoids tautology by not just restating the tool name.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (like needing admin permissions), when this operation is appropriate versus other role management approaches, or what happens if the role is already assigned. The agent must infer usage from context alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
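The Parameters note above observes that the description adds nothing beyond the schema. A sketch of what "beyond the schema" could look like for add_role's input schema; the example ID, sourcing tips, and the role-hierarchy constraint are invented for illustration, not taken from this server.

```python
# Hypothetical enriched input schema for add_role: same three fields,
# but the descriptions add format examples, sourcing hints, and
# constraints. All specifics here are illustrative assumptions.
input_schema = {
    "type": "object",
    "properties": {
        "server_id": {
            "type": "string",
            "description": "Server (guild) ID: a numeric snowflake string, "
                           "e.g. '123456789012345678'.",
        },
        "user_id": {
            "type": "string",
            "description": "Target user's snowflake ID; the user must "
                           "already be a member of the server.",
        },
        "role_id": {
            "type": "string",
            "description": "Role snowflake ID; the bot's highest role must "
                           "sit above this role in the server's hierarchy.",
        },
    },
    "required": ["server_id", "user_id", "role_id"],
}
```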

  • create_text_channel

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden but offers minimal behavioral context. It states the tool creates something but doesn't disclose permissions required, rate limits, whether creation is reversible, what happens on duplicate names, or what the response contains. This leaves significant gaps for a mutation tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, clear sentence with zero wasted words. It's front-loaded with the core purpose and appropriately sized for a straightforward creation tool.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain what happens after creation, error conditions, or behavioral aspects like permissions. Given the complexity of creating a Discord channel, more context is needed for safe and effective use.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so all parameters are documented in the schema itself. The description adds no additional parameter information beyond what the schema provides, such as format examples or constraints. This meets the baseline for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('create') and resource ('new text channel'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'delete_channel' or 'get_channels' beyond the obvious verb difference, nor does it specify platform context beyond what's implied by the tool name.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided about when to use this tool versus alternatives. The description doesn't mention prerequisites (e.g., needing server permissions), when not to use it (e.g., for voice channels), or relationships with sibling tools like 'get_channels' for checking existing channels first.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • delete_channel

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. 'Delete a channel' implies a destructive, irreversible mutation, but it doesn't specify consequences (e.g., whether messages are lost, if it's permanent, or if it affects server structure). It also omits information about permissions, rate limits, or error conditions. For a destructive tool with zero annotation coverage, this leaves significant behavioral gaps.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, direct sentence with zero wasted words. It's front-loaded with the core action and resource, making it highly efficient. Every word earns its place, and there's no unnecessary elaboration or redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (destructive mutation with 2 parameters) and lack of annotations or output schema, the description is incomplete. It fails to address critical aspects like the irreversible nature of deletion, permission requirements, or what happens post-deletion. For a tool with significant behavioral implications, the description should provide more context to ensure safe and correct usage.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, with clear documentation for both parameters ('channel_id' and 'reason'). The description adds no additional meaning beyond what the schema provides—it doesn't explain parameter interactions, format specifics, or usage nuances. With high schema coverage, the baseline score of 3 is appropriate as the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Delete a channel' clearly states the verb ('Delete') and resource ('a channel'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'remove_role' or 'moderate_message', but the specificity of 'channel' as the target resource provides reasonable distinction. This is clear but lacks explicit sibling differentiation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., permissions needed), when deletion is appropriate versus archiving or other actions, or how it relates to sibling tools like 'create_text_channel' or 'get_channels'. There's no explicit or implied context for usage decisions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
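For a destructive tool like this, the fixes for Behavior, Completeness, and Usage Guidelines can live in one rewritten description. A sketch under stated assumptions: the permission name and the archiving advice are illustrative, not confirmed details of this server.

```python
# Hypothetical rewrite of the delete_channel description, adding the
# irreversibility warning and usage guidance the review asks for.
# Permission name and guidance are assumptions, not server-confirmed.
description = (
    "Permanently delete a channel and all messages in it. Irreversible; "
    "requires the Manage Channels permission. Confirm the target with "
    "get_channels first, and prefer leaving a channel in place when its "
    "history must be preserved."
)

# Still compact: behavioral disclosure need not cost many tokens.
word_count = len(description.split())
```

Note that this keeps the Conciseness strengths noted above: disclosure is added as two short clauses, not a paragraph of caveats.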

  • get_channels

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action but lacks details on permissions required, rate limits, pagination, or what the output looks like (e.g., format, fields). This is a significant gap for a tool that likely interacts with an external API like Discord.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is appropriately sized and front-loaded, making it easy for an agent to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of annotations and output schema, the description is incomplete. It doesn't address behavioral aspects like authentication needs, error handling, or return values, which are crucial for an AI agent to use the tool effectively in a real-world context like Discord API interactions.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema description coverage is 100%, with the single parameter 'server_id' clearly documented in the schema. The description adds no additional meaning beyond implying the parameter is needed, so it meets the baseline score of 3 where the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('Get') and resource ('list of all channels in a Discord server'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'get_server_info' or 'list_servers', which might also provide channel-related information in different contexts, so it doesn't reach the highest score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. For example, it doesn't mention if this is the primary method for listing channels compared to other tools or if there are specific scenarios where it's preferred, leaving the agent to infer usage from context alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_server_info

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It states it 'gets information' but doesn't clarify what information is returned, whether it requires permissions, if it's read-only, or any rate limits. This leaves significant gaps for a tool that interacts with an external service like Discord.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it easy to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity of interacting with Discord (an external API with potential auth and rate limits), no annotations, and no output schema, the description is insufficient. It doesn't explain what information is returned, error conditions, or behavioral traits, leaving the agent with significant uncertainty about how to use this tool effectively.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema description coverage is 100%, with the single parameter 'server_id' clearly documented in the schema as 'Discord server (guild) ID'. The description doesn't add any meaning beyond this, such as format examples or sourcing tips, but the baseline score of 3 is appropriate since the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('Get') and resource ('information about a Discord server'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'get_channels' or 'get_user_info' that also retrieve Discord data, missing the specificity needed for a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With siblings like 'list_servers' (which might list multiple servers) and 'get_user_info' (for user data), there's no indication of context, prerequisites, or exclusions for selecting this specific tool.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_user_info

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action ('Get information') but fails to describe key traits such as whether this is a read-only operation, potential rate limits, authentication requirements, or what happens if the user ID is invalid. For a tool with zero annotation coverage, this is a significant gap in transparency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded with the core action and resource, making it easy to parse. Every part of the sentence earns its place by clearly conveying the essential function.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of annotations and output schema, the description is incomplete for a tool that likely returns user data. It doesn't specify what information is retrieved (e.g., username, avatar, roles), the response format, or error handling. For a read operation with no structured output documentation, more detail is needed to guide the agent effectively.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, with the 'user_id' parameter clearly documented as 'Discord user ID'. The description does not add any meaning beyond this, as it doesn't elaborate on parameter usage, format, or examples. According to the rules, with high schema coverage, the baseline is 3, which is appropriate here.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('Get') and resource ('information about a Discord user'), making the purpose immediately understandable. It distinguishes this from siblings like 'list_members' or 'get_server_info' by focusing on individual user data rather than collections or server metadata. However, it doesn't specify what information is retrieved (e.g., profile, roles, status), keeping it from a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance is provided on when to use this tool versus alternatives like 'list_members' or 'get_server_info'. The description implies usage for retrieving data about a specific user, but it lacks context on prerequisites (e.g., needing a user ID) or exclusions (e.g., not for bulk queries). This leaves the agent to infer usage from the tool name alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_members

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden for behavioral disclosure. It states the tool fetches a list but doesn't describe what the list contains (e.g., member IDs, names, roles), whether it's paginated, rate limits, authentication requirements, or error conditions. This leaves significant gaps for a tool that likely interacts with external APIs.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized for a simple list operation and front-loaded with the core functionality.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the returned list contains, how members are ordered, whether all members are fetched or just a subset, or any behavioral traits like rate limits. For a tool with external API interactions and no structured output documentation, this is inadequate.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with clear documentation for both parameters ('server_id' and 'limit'), so the baseline is 3. The description adds no additional parameter semantics beyond what the schema provides, such as explaining Discord server ID format or typical limit values.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get a list') and resource ('members in a server'), making the purpose immediately understandable. However, it doesn't differentiate this tool from potential siblings like 'get_user_info' or 'get_server_info' that might also retrieve member-related data, preventing a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_user_info' (which might retrieve individual member details) and 'get_server_info' (which might include member counts), there's no indication of when this list-focused tool is preferred or what prerequisites exist (e.g., needing server access).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the action ('read') but doesn't cover critical aspects like permissions needed, rate limits, pagination behavior, or what 'recent' means (e.g., time-based or count-based). This is inadequate for a read operation in a collaborative environment like Discord.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste. It's front-loaded with the core action and resource, making it easy to parse quickly. Every word earns its place without redundancy or fluff.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what 'recent' entails, the return format, error handling, or how it fits with sibling tools. For a read operation in a complex system like Discord, this leaves significant gaps for an AI agent to operate effectively.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents both parameters ('channel_id' and 'limit'). The description adds nothing beyond the word 'recent', which hints at ordering but does not clarify parameter usage or constraints past what the schema provides. Baseline 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('read') and resource ('recent messages from a channel'), making the tool's purpose understandable. However, it doesn't differentiate from potential siblings like 'get_channels' or 'list_members' that might also involve reading data, leaving room for ambiguity in a crowded toolset.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_channels' and 'list_members' that might overlap in reading data, there's no indication of context, prerequisites, or exclusions, leaving the agent to guess based on tool names alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action but doesn't mention required permissions, whether it's destructive, rate limits, or what happens on success/failure. For a mutation tool, this leaves significant gaps in understanding its behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, clear sentence with zero wasted words. It's front-loaded with the core action and target, making it highly efficient and easy to parse.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with no annotations and no output schema, the description is insufficient. It lacks information about behavioral traits, error conditions, or what the tool returns, leaving the agent with incomplete context for proper invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds no additional meaning about parameters beyond what's in the schema, meeting the baseline for high coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Remove') and target ('a reaction from a message'), making the purpose immediately understandable. It doesn't differentiate from siblings like 'add_reaction' or 'add_multiple_reactions' beyond the obvious verb difference, but the purpose is unambiguous.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives or in what context it should be invoked. The description merely states what it does without indicating prerequisites, constraints, or relationships to sibling tools like 'add_reaction'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Remove' implies a destructive mutation, the description doesn't specify whether this action is reversible, what permissions are required, or any side effects (e.g., role removal notifications). For a mutation tool with zero annotation coverage, this is a significant gap in transparency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and target, making it easy to parse quickly. Every word earns its place, achieving optimal conciseness for this simple tool.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (a destructive mutation with 3 required parameters) and the lack of annotations and output schema, the description is incomplete. It doesn't cover behavioral aspects like permissions, reversibility, or error conditions, nor does it explain the result of the operation. For a mutation tool, this leaves critical gaps for an AI agent.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema description coverage is 100%, with all three parameters clearly documented in the input schema (role_id, server_id, user_id). The description adds no additional meaning beyond what the schema provides, such as explaining relationships between parameters or usage examples. Baseline 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Remove') and target ('a role from a user'), which is specific and unambiguous. It distinguishes itself from sibling tools like 'add_role' by specifying removal rather than addition. However, it leaves the Discord server context implicit rather than stating it outright.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing proper permissions), when not to use it, or how it relates to sibling tools like 'add_role' or 'list_members'. This leaves the agent with insufficient context for decision-making.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'send' implies a write operation, it doesn't address permissions needed, rate limits, whether messages can be edited or deleted after sending, or what happens on failure. For a mutation tool with zero annotation coverage, this leaves significant gaps.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a simple tool and front-loads the core functionality immediately.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't explain what happens after sending (e.g., success/failure response, message ID returned), behavioral constraints, or error conditions. Given the complexity of sending messages in Discord, more context is needed.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so both parameters are fully documented in the schema itself. The description adds no additional meaning about parameters beyond what the schema already provides, such as format constraints or examples. Baseline 3 is appropriate when schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('send') and target resource ('message to a specific channel'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'moderate_message' or 'read_messages' which also involve messages, leaving some ambiguity about scope.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided about when to use this tool versus alternatives. With siblings like 'moderate_message' and 'read_messages' that also handle messages, the description offers no context about appropriate use cases, prerequisites, or exclusions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions deletion and optional timeout but lacks critical details: whether this action is reversible, what permissions are required, if there are rate limits, or what happens to the user during timeout. For a mutation tool with zero annotation coverage, this is a significant gap.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste—it directly states the core action and optional feature. Every word earns its place, making it highly concise and well-structured.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (a mutation with no annotations and no output schema), the description is insufficient. It does not cover behavioral aspects like permissions, reversibility, or response format, nor does it explain the optional timeout's effects. For a moderation tool, this leaves critical gaps in understanding.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all four parameters (channel_id, message_id, reason, timeout_minutes) with their types and constraints. The description adds no additional meaning beyond what the schema provides, such as explaining the 'reason' parameter's format or the implications of 'timeout_minutes'. Baseline 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Delete a message') and optional additional action ('optionally timeout the user'), which distinguishes it from sibling tools like 'delete_channel' (which deletes channels) or 'remove_role' (which removes roles). The verb+resource combination is precise and unambiguous.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'delete_channel' or 'remove_role', nor does it mention prerequisites (e.g., moderation permissions) or exclusions. It merely states what the tool does without contextual usage information.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes a read-only operation ('Get a list') but lacks details on permissions needed, rate limits, pagination, or error handling. For a tool with zero annotation coverage, this is a significant gap in transparency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, well-structured sentence that efficiently conveys the tool's purpose, scope, and returned details without any redundant or vague language. It is front-loaded with the core action and includes only essential information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is adequate for basic understanding but incomplete for operational use. It lacks behavioral context like rate limits or permissions, which is important even for simple tools, especially with no annotations to compensate.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description appropriately does not discuss parameters, earning a baseline score of 4 for not adding unnecessary information beyond the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('Get a list') and resource ('Discord servers the bot has access to'), specifying the scope ('all') and listing key details returned ('name, id, member count, and creation date'). It distinguishes from siblings like 'get_server_info' (which likely fetches details for a single server) by emphasizing it returns a list of all servers.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage context by stating it lists 'all Discord servers the bot has access to,' suggesting it should be used for broad overviews rather than specific server queries. However, it does not explicitly mention when not to use it or name alternatives like 'get_server_info' for single-server details, leaving some ambiguity.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

mcp-discord MCP server

Copy to your README.md:

Score Badge

mcp-discord MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
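The arithmetic above can be sketched in a few lines of Python. This is an illustrative reconstruction from the percentages stated in the text, not Glama's actual implementation; the function and dimension names, and the example inputs in the usage note, are made up for the sketch.

```python
# Sketch of the quality-score arithmetic described above.
# Weights, the 60/40 mean-min blend, the 70/30 combination, and the
# tier thresholds come from the text; everything else is illustrative.

DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for a single tool across the six dimensions."""
    return sum(DIMENSION_WEIGHTS[d] * s for d, s in scores.items())

def definition_quality(per_tool_scores: list) -> float:
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS."""
    tdqs = [tool_tdqs(s) for s in per_tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(per_tool_scores: list, coherence: float) -> float:
    """70% tool definition quality + 30% server coherence."""
    return 0.7 * definition_quality(per_tool_scores) + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score onto the A-F tiers from the text."""
    if score >= 3.5:
        return "A"
    if score >= 3.0:
        return "B"
    if score >= 2.0:
        return "C"
    if score >= 1.0:
        return "D"
    return "F"
```

With two hypothetical tools scoring a flat 3 and a flat 5 across all dimensions, the TDQS values are 3.0 and 5.0, giving a definition quality of 0.6 × 4.0 + 0.4 × 3.0 = 3.6; note how the 40% minimum term is what lets a single poorly described tool pull the server score down.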


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/hanweg/mcp-discord'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.