smithery-ai-slack
Server Details
Enable interaction with Slack workspaces. Supports subscribing to Slack events through Resources.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: smithery-ai/mcp-servers
- GitHub Stars: 95
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
7 tools

slack_add_reaction (Grade: B)
Add a reaction emoji to a message
| Name | Required | Description | Default |
|---|---|---|---|
| reaction | Yes | The name of the emoji reaction (without ::) | |
| timestamp | Yes | The timestamp of the message to react to | |
| channel_id | Yes | The ID of the channel containing the message | |
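For illustration, a minimal sketch of an MCP `tools/call` request for this tool. The channel ID and timestamp are hypothetical placeholders, and the JSON-RPC envelope is the standard MCP shape rather than anything specific to this server.

```python
import json

# Hypothetical MCP tools/call payload for slack_add_reaction.
# channel_id and timestamp below are placeholders, not real IDs.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "slack_add_reaction",
        "arguments": {
            "channel_id": "C0123456789",       # ID of the channel containing the message
            "timestamp": "1234567890.123456",  # timestamp of the message to react to
            "reaction": "thumbsup",            # emoji name without colons
        },
    },
}

print(json.dumps(request, indent=2))
```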
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It fails to disclose that this is a state-mutating operation, whether it is idempotent (adding the same reaction twice), what the error conditions are (e.g., an invalid timestamp), or which permissions (Slack scopes) are required.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single concise sentence with zero waste and clear front-loading. However, whether it is 'appropriately sized' is debatable given the behavioral details and usage guidance it omits.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a state-modifying tool with no annotations and no output schema, the description is insufficient. It omits success indicators, error scenarios, side effects, and permission requirements necessary for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, establishing a baseline score of 3. The description mentions 'reaction emoji' and 'message' but adds no semantic context beyond what the schema already provides (e.g., no examples of emoji format, no explanation of the timestamp format).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Add') with clear resource ('reaction emoji') and target ('message'). It clearly distinguishes from siblings like slack_post_message (creating new messages) and the various slack_get_* tools (read-only operations).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus posting a message with emoji, or prerequisites like obtaining the message timestamp first. No mention of rate limits or permission requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
slack_get_channel_history (Grade: C)
Get recent messages from a channel
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of messages to retrieve (default 10) | |
| channel_id | Yes | The ID of the channel | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure burden but offers minimal context. It implies a read-only operation but fails to define 'recent' (time window), mention pagination behavior, rate limits, or required authentication scopes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no redundancy or filler. However, the extreme brevity borders on under-specification given the lack of annotations and output schema—it could benefit from one additional sentence on scope or pagination without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 2-parameter read operation with complete input schema coverage, but lacks any description of return values or message format. Given the absence of an output schema, the description should at least indicate what data structure or fields are returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both channel_id and limit parameters. The description adds no semantic details beyond the schema (e.g., no format guidance for channel_id, no max limit warnings), warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves messages from a channel using specific verbs ('Get') and resource ('messages'). While it doesn't explicitly contrast with slack_get_thread_replies, the phrase 'from a channel' provides implicit scope differentiation from thread-specific operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus alternatives like slack_get_thread_replies (for threaded conversations) or slack_list_channels (to find channel_id first). No prerequisites or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
slack_get_thread_replies (Grade: C)
Get all replies in a message thread
| Name | Required | Description | Default |
|---|---|---|---|
| thread_ts | Yes | The timestamp of the parent message in the format '1234567890.123456'. Timestamps in the format without the period can be converted by adding the period such that 6 numbers come after it. | |
| channel_id | Yes | The ID of the channel containing the thread | |
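The thread_ts conversion rule quoted in the schema (insert the period so that six digits follow it) can be sketched as a small helper. This is an illustration, not code shipped with the server:

```python
def to_thread_ts(raw: str) -> str:
    """Insert the period so that 6 digits follow it, per the thread_ts format note."""
    if "." in raw:
        return raw  # already in '1234567890.123456' form
    return raw[:-6] + "." + raw[-6:]

print(to_thread_ts("1234567890123456"))  # 1234567890.123456
```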
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies read-only, the description fails to clarify pagination behavior (despite claiming 'all' replies), rate limits, whether the parent message is included, or error handling for invalid thread timestamps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that immediately states the tool's function. While appropriately front-loaded, it borders on underspecification—lacking the contextual padding that would make it truly complete without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with 100% schema coverage, the description is minimally viable but incomplete. Given the lack of output schema or annotations, it should ideally describe the return structure (e.g., list of messages) or note that thread_ts must reference a parent message, not just any message.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both channel_id and thread_ts well-documented in the schema (including the timestamp format). The description adds no additional parameter semantics, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('replies in a message thread'), clearly distinguishing this from sibling tools like slack_get_channel_history by specifying 'thread'. However, it misses a perfect score by not explicitly contrasting with the channel history tool or mentioning the Slack context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., when to use this instead of get_channel_history), nor does it mention prerequisites like obtaining the thread_ts from a parent message. It states what it does but not when to do it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
slack_get_user_profile (Grade: B)
Get detailed profile information for a specific user
| Name | Required | Description | Default |
|---|---|---|---|
| user_id | Yes | The ID of the user | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description does not explicitly confirm safety, mention error handling (e.g., user not found), rate limits, or what 'detailed' profile information encompasses.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the action verb. There is no redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool without an output schema, the description is minimally adequate. It states the core function but could benefit from mentioning what profile fields are returned or that this requires a valid user identifier.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage with the 'user_id' parameter already described as 'The ID of the user'. The description aligns with this by mentioning 'for a specific user', but does not add additional semantic context such as where to obtain the ID or its format, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Get') and resource ('detailed profile information'), and implicitly distinguishes from sibling 'slack_get_users' by specifying 'specific user' versus a list. However, it does not explicitly reference sibling tools to clarify when to use this over the list variant.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'slack_get_users', nor does it mention prerequisites such as obtaining the user_id from other operations first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
slack_get_users (Grade: B)
Get a list of all users in the workspace with their basic profile information
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of users to return (default 100, max 200) | |
| cursor | No | Pagination cursor for next page of results | |
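A sketch of the cursor-driven pagination loop implied by the limit and cursor parameters. The fetch_page stub and the members/next_cursor field names are assumptions modeled on Slack's typical pagination responses; the server's actual output shape is not documented above.

```python
# Stand-in for an actual slack_get_users tool call; the pages dict fakes
# a two-page response so the loop below is runnable as-is.
def fetch_page(cursor=None, limit=100):
    pages = {
        None: {"members": ["U001", "U002"], "next_cursor": "abc"},
        "abc": {"members": ["U003"], "next_cursor": ""},
    }
    return pages[cursor]

def list_all_users():
    users, cursor = [], None
    while True:
        page = fetch_page(cursor=cursor, limit=100)
        users.extend(page["members"])
        cursor = page["next_cursor"] or None  # empty cursor means no more pages
        if cursor is None:
            return users

print(list_all_users())  # ['U001', 'U002', 'U003']
```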
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'basic profile information' hinting at data scope, but fails to disclose pagination behavior (despite the cursor parameter), rate limits, whether the operation is read-only, or if the list includes app/bot users. This leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the action ('Get a list'). There is no redundant or wasted text, making it appropriately concise for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list retrieval tool with 100% schema coverage and no output schema, the description is minimally adequate. It partially compensates for the missing output schema by mentioning 'basic profile information', but should explicitly note pagination behavior given the cursor parameter exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no specific parameter guidance, but with 100% schema description coverage ('Maximum number of users...', 'Pagination cursor...'), the baseline score of 3 is appropriate. The description doesn't need to repeat the schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Get') and specifies the resource ('all users in the workspace') and scope ('basic profile information'). It implicitly distinguishes from sibling 'slack_get_user_profile' by emphasizing 'all users' and 'basic' info, though it could be more explicit about the distinction between bulk/basic retrieval versus single-user detailed lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus 'slack_get_user_profile' or other user-related operations. It lacks explicit when-to-use criteria, prerequisites (like required scopes), or exclusions (e.g., whether it includes deactivated users or bots).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
slack_list_channels (Grade: B)
List public or pre-defined channels in the workspace with pagination
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of channels to return (default 100, max 200) | |
| cursor | No | Pagination cursor for next page of results | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full disclosure burden. It successfully discloses pagination behavior and scope limitations ('public or pre-defined' channels only). However, it omits auth requirements, rate limits, and what occurs when no cursor is provided (initial page behavior).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single short sentence, efficiently front-loaded with the action and resource. No redundant or filler text, though such extreme brevity limits how much guidance can be conveyed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimally viable for a 2-parameter list operation. Given zero annotations and no output schema, the description adequately covers the core function but lacks a return-value description, error behaviors, or pagination workflow details that would help an agent handle the response correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline score of 3. The description mentions 'pagination', which conceptually maps to the cursor parameter, but adds no syntax details, examples, or constraints beyond what the schema already documents for limit and cursor.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List') and resource ('public or pre-defined channels') with scope ('in the workspace'). The 'public or pre-defined' qualifier helps distinguish scope, though it could explicitly differentiate from slack_get_channel_history (which retrieves messages, not channel lists).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance provided. No mention of alternatives like slack_get_channel_history for message retrieval or slack_get_users for user discovery. Agent must infer usage solely from the verb and resource name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
slack_post_message (Grade: A)
Post a new message to a Slack channel or reply to a thread
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The message text to post | |
| thread_ts | No | Optional. The timestamp of the parent message to reply to in the format '1234567890.123456'. When provided, the message will be posted as a reply to the thread. | |
| channel_id | Yes | The ID of the channel to post to | |
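A minimal sketch of the two argument shapes this tool accepts; the IDs and text are hypothetical placeholders. Omitting thread_ts posts a new channel message, while including it turns the call into a thread reply.

```python
# Illustrative argument sets for slack_post_message; IDs are placeholders.
new_message = {
    "channel_id": "C0123456789",
    "text": "Deploy finished.",
}

# Adding the optional thread_ts makes the post a reply in that thread.
thread_reply = dict(new_message, thread_ts="1234567890.123456")

print(thread_reply["thread_ts"])  # 1234567890.123456
```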
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it states the core action, it fails to disclose mutation characteristics: no mention of rate limits, visibility permissions, error behavior, or whether posts are irreversible. For a write operation, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single short sentence in which every word earns its place: action verb first, resource identified, dual modes covered (channel vs. thread). No redundancy or structural waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimally viable for the tool's complexity. The input schema is comprehensive (100% coverage), but the description omits operational context expected for a mutation tool lacking annotations, specifically auth requirements, success/failure behavior, and side effects. Just adequate given the rich schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with detailed descriptions for all parameters including thread_ts format. The description reinforces the thread reply use case but adds no semantic details (e.g., validation rules, text length limits) beyond what the schema already provides. Baseline 3 appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Post', 'reply') and clearly identifies the resource ('message') and targets ('Slack channel', 'thread'). It effectively distinguishes from siblings like slack_get_channel_history (read vs write) and slack_add_reaction (message vs reaction).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning the dual capability (new message vs thread reply), which hints at when to populate thread_ts. However, it lacks explicit guidance on when NOT to use this (e.g., vs slack_get_thread_replies for reading) or prerequisites like required bot permissions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.