Bluesky
Server Details
Bluesky MCP — wraps the AT Protocol API
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-bluesky
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 12 of 12 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, with clear separation between Bluesky API operations (e.g., get_feed, get_posts, search_posts) and memory management (remember, recall, forget). However, get_feed and get_posts could be confused, since both retrieve post content; their different parameters (a feed generator versus a user's timeline) provide some differentiation.
The naming follows a consistent verb_noun pattern throughout (e.g., get_feed, search_posts, resolve_handle), with all tools using snake_case. The only minor deviation is discover_tools, which uses 'discover' rather than 'get' or 'search', but it still follows the same structural pattern.
With 12 tools, the count is well-scoped for a Bluesky API server. It provides comprehensive coverage of core Bluesky operations (profiles, posts, follows, feeds) plus memory management utilities, without feeling bloated or incomplete.
The toolset covers most essential Bluesky operations, including profile lookup, post retrieval, feed access, and handle resolution. Gaps include the inability to create posts or interact with them (liking, reposting, replying) and the absence of authentication management tools, but the available tools support robust read-only workflows.
Available Tools
13 tools
ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
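A minimal sketch of calling this tool from an MCP client, assuming the official `mcp` Python SDK and its Streamable HTTP transport (matching the transport listed above). `SERVER_URL` is a placeholder, since this page does not show the connector's endpoint.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder endpoint

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "ask_pipeworx",
                {"question": "What is the US trade deficit with China?"},
            )
            # Results arrive as a list of content blocks; text blocks carry the answer.
            for block in result.content:
                if getattr(block, "text", None):
                    print(block.text)

asyncio.run(main())
```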
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a query tool that interprets natural language, selects data sources automatically, and returns results. However, it doesn't mention potential limitations like rate limits, authentication needs, or error handling, leaving some behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core functionality, followed by practical guidance and examples. Every sentence earns its place: the first explains the purpose, the second describes the mechanism, the third provides usage guidance, and the examples illustrate application. No wasted words, and the structure flows logically from general to specific.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (natural language processing to select data sources) and lack of annotations or output schema, the description is reasonably complete. It covers the purpose, usage, and behavioral approach, though it could benefit from mentioning response formats or error cases. The examples help contextualize, but some operational details remain implicit.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'question' well-documented in the schema as 'Your question or request in natural language.' The description adds minimal value beyond this, only reinforcing that questions should be in 'plain English' without providing additional syntax or format details. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'), distinguishing it from sibling tools like search_posts or get_profile that require specific parameters and schemas.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It provides clear alternatives by implication (use other tools when you know specific schemas) and includes examples like 'What is the US trade deficit with China?' to illustrate appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
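A hedged sketch of the discovery-first flow the description recommends: search the catalog in natural language, then read back the matching tool names and descriptions. Same session boilerplate and placeholder `SERVER_URL` as the ask_pipeworx sketch above.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder endpoint

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Describe the task; the tool returns the most relevant catalog entries.
            found = await session.call_tool(
                "discover_tools",
                {"query": "find trade data between countries", "limit": 5},
            )
            for block in found.content:
                if getattr(block, "text", None):
                    print(block.text)

asyncio.run(main())
```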
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns 'the most relevant tools with names and descriptions' and has a search function, but lacks details on behavioral traits like rate limits, authentication needs, error handling, or pagination. The description adds some context but doesn't fully compensate for the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core function, and the second provides crucial usage guidance. Every sentence earns its place with no wasted words, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search function with 2 parameters) and no output schema, the description is mostly complete. It explains the purpose, usage context, and return format ('tools with names and descriptions'), but could benefit from mentioning output structure details or limitations. However, it adequately covers the essentials for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description mentions 'by describing what you need' which aligns with the 'query' parameter, but adds no additional semantic context beyond what the schema provides. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resources ('by describing what you need'), distinguishing it from sibling tools which focus on social media operations (get_feed, get_followers, etc.). It explicitly identifies the target resource as the tool catalog, making its function unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear when-to-use criteria (large tool catalog, initial discovery) and distinguishes it from alternatives by positioning it as a preliminary search tool rather than a direct data retrieval tool like the siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a deletion operation, implying mutation/destructive behavior, but doesn't clarify the permissions needed, whether deletion is permanent or reversible, error handling (e.g., if the key doesn't exist), or side effects. The description adds minimal context beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action ('Delete') and resource ('stored memory'), making it immediately scannable and appropriately sized for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is inadequate. It doesn't address critical context like what 'delete' entails (permanent removal?), authentication requirements, error responses, or return values. Given the mutation nature and lack of structured coverage, more behavioral detail is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'key' documented as 'Memory key to delete'. The description adds no additional meaning beyond this—it doesn't explain key format, constraints, or examples. Baseline 3 is appropriate since the schema already fully describes the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and resource ('stored memory by key'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the verb 'delete' strongly implies destructive removal versus retrieval or storage operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing memory key), exclusions, or relationships to sibling tools like 'recall' (which likely retrieves memories) or 'remember' (which likely stores them).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_feed (B)
Get posts from a Bluesky feed (e.g., "discover", "what's-hot"). Returns recent posts with authors, timestamps, and engagement counts.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of posts (1-100, default 20) | |
| feed_uri | No | AT URI of the feed generator (default: whats-hot) | |
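A sketch of both invocation modes, using the same client boilerplate as the earlier examples. The feed_uri value below is hypothetical; real feed generators are addressed by at:// URIs pointing at app.bsky.feed.generator records.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder endpoint

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Omitting feed_uri falls back to the whats-hot default.
            hot = await session.call_tool("get_feed", {"limit": 10})
            # A specific feed generator, addressed by a hypothetical AT URI.
            custom = await session.call_tool(
                "get_feed",
                {
                    "limit": 10,
                    "feed_uri": "at://did:plc:example/app.bsky.feed.generator/aaaa",
                },
            )

asyncio.run(main())
```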
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds minimal context: the '[Public]' tag hints at accessibility, and the default feed is specified. However, it lacks details on critical behaviors such as rate limits, authentication needs, error handling, or the structure of returned posts, which are essential for a tool that retrieves data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, efficiently conveying the core purpose and default behavior in a single sentence. It could be slightly more structured by separating the public tag from the functional description, but overall it avoids unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete for a tool that retrieves feed data. It fails to explain what the output looks like (e.g., post format, pagination), any dependencies, or error cases, leaving significant gaps in understanding how to effectively use the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting both parameters (limit and feed_uri) with defaults and constraints. The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline score of 3 for adequate but not enhanced coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get posts') and resource ('from a Bluesky feed'), making the purpose understandable. However, it doesn't explicitly differentiate this tool from sibling tools like 'get_posts' or 'search_posts', which appear to retrieve similar content, so it misses the highest score for sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning the default feed ('discover/whats-hot'), suggesting this tool is for fetching feed posts rather than other types of content. However, it provides no explicit guidance on when to use this versus alternatives like 'get_posts' or 'search_posts', leaving the context somewhat vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_followers (C)
Get a user's followers on Bluesky by handle. Returns follower profiles including handles, display names, bios, and follower counts.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of followers (1-100, default 50) | |
| handle | Yes | Bluesky handle | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only states the action ('Get a user's followers') without mentioning permissions, rate limits, pagination, or the format of returned data. The '[Public]' prefix hints at accessibility but is vague and insufficient for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single, front-loaded sentence that states the core purpose without any wasted words. It efficiently communicates the essential action, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., list of followers, metadata), error conditions, or behavioral traits like rate limits. For a tool with two parameters and no structured output, more context is needed to guide effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting both parameters ('handle' and 'limit') with details like data types, constraints, and defaults. The description adds no additional parameter information beyond what the schema provides, so it meets the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('a user's followers'), making it immediately understandable. However, it doesn't differentiate this from sibling tools like 'get_follows' or 'get_profile', which likely retrieve related but different user data, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_follows' or 'get_profile'. It lacks any context about prerequisites, exclusions, or specific use cases, leaving the agent to infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_follows (C)
Get accounts a Bluesky user follows by handle. Returns followed profiles with handles, display names, bios, and descriptions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of follows (1-100, default 50) | |
| handle | Yes | Bluesky handle | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds minimal context: '[Public]' hints at accessibility but doesn't clarify rate limits, authentication needs, pagination, or response format. For a read operation with no annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond its basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single, front-loaded sentence with no wasted words. Every element ('[Public]', 'Get accounts that a user follows') contributes directly to understanding the tool's purpose and scope, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete for effective tool use. It misses critical details like return format (e.g., list structure, fields included), error handling, and how it differs from sibling tools. While concise, it does not compensate for the absence of structured behavioral or output information, leaving the agent under-informed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema fully documents both parameters ('handle' and 'limit') with descriptions and constraints. The description does not add any meaning beyond what the schema provides, such as explaining parameter interactions or usage examples. This meets the baseline score when the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('accounts that a user follows'), making the purpose unambiguous. However, it does not explicitly differentiate this tool from its sibling 'get_followers', which retrieves the inverse relationship. The '[Public]' prefix adds context about access but doesn't fully distinguish functionality from similar tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'get_followers' (for followers instead of follows) or 'get_profile' (which might include follow data). The description implies usage for retrieving follow relationships but offers no explicit when/when-not instructions or prerequisites, leaving the agent to infer context from tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_posts (C)
Fetch recent posts from a Bluesky user's timeline. Returns post text, timestamps, likes, reposts, reply counts, and threaded replies.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of posts (1-100, default 20) | |
| handle | Yes | Bluesky handle | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation ('Get') and public access ('[Public]'), but lacks details on rate limits, authentication needs, pagination, error handling, or what 'recent' means (e.g., time frame). This is inadequate for a tool with potential API constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It is front-loaded with key information ('[Public] Get recent posts') and appropriately sized for its purpose, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It fails to explain behavioral traits like rate limits or authentication, and doesn't describe return values (e.g., post format, pagination). For a tool fetching user data with siblings, more context is needed to guide effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema fully documents both parameters ('handle' and 'limit'). The description adds no additional parameter semantics beyond implying the 'handle' is for a Bluesky user, which is already clear from the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get recent posts') and target resource ('from a Bluesky user's feed'), with the '[Public]' prefix indicating access scope. However, it doesn't differentiate this tool from sibling tools like 'get_feed' or 'search_posts', which likely serve similar purposes but with different filtering or scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as 'get_feed' or 'search_posts'. It mentions a specific context ('Bluesky user's feed') but lacks explicit when/when-not instructions or prerequisites, leaving the agent to infer usage based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_profile (A)
Look up a Bluesky user's profile by handle (e.g., "alice.bsky.social"). Returns display name, bio, follower/following counts, avatar, and verification status.
| Name | Required | Description | Default |
|---|---|---|---|
| handle | Yes | Bluesky handle (e.g., alice.bsky.social) | |
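The handle-based read tools chain naturally: one handle drives profile lookup, follower lists, and post retrieval. A sketch under the same assumptions as the earlier examples (placeholder `SERVER_URL`, `mcp` Python SDK).

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder endpoint

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            handle = "alice.bsky.social"  # the handle format the schema shows
            profile = await session.call_tool("get_profile", {"handle": handle})
            followers = await session.call_tool(
                "get_followers", {"handle": handle, "limit": 50}
            )
            posts = await session.call_tool(
                "get_posts", {"handle": handle, "limit": 20}
            )

asyncio.run(main())
```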
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the public access nature ('[Public]') and specifies the input format (handle with example), but doesn't describe behavioral traits like rate limits, error conditions, authentication needs, or what data the profile contains. It adds some context but leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads key information (public access, action, resource, constraint) with zero wasted words. Every element earns its place, making it highly scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with 1 parameter and no output schema, the description is adequate but incomplete. It doesn't explain what a 'profile' contains or the response format, which would help the agent understand the tool's output. Given the lack of annotations and output schema, more context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single 'handle' parameter with its type and example. The description adds no additional parameter semantics beyond what's in the schema, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get'), resource ('Bluesky user profile'), and key constraint ('by handle'), distinguishing it from siblings like get_feed or get_posts which target different resources. The inclusion of '[Public]' further clarifies access scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to retrieve a user profile by handle) but doesn't explicitly mention when not to use it or name alternatives like resolve_handle (which might convert handles to DIDs). The context is sufficient but lacks explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_thread (C)
Fetch a post thread by URI. Returns the parent post and all replies in conversation order with timestamps, authors, and engagement data.
| Name | Required | Description | Default |
|---|---|---|---|
| post_uri | Yes | AT URI of the post (at://did/app.bsky.feed.post/rkey) | |
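The post_uri format from the schema can be composed from a DID and a record key; both values below are hypothetical illustrations, and the DID would typically come from resolve_handle.

```python
# Composing the at://did/app.bsky.feed.post/rkey URI the schema describes.
did = "did:plc:abc123example"   # hypothetical; e.g. obtained via resolve_handle
rkey = "3kexamplerkey"          # hypothetical record key from a post's URL
post_uri = f"at://{did}/app.bsky.feed.post/{rkey}"

# Then, inside an initialized session (boilerplate as in the earlier sketches):
#   thread = await session.call_tool("get_thread", {"post_uri": post_uri})
print(post_uri)
```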
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool is '[Public]', implying accessibility, but doesn't explain what this means operationally (e.g., authentication requirements, rate limits, or data sensitivity). It mentions retrieving a 'post thread' but doesn't describe the return format, error handling, or any side effects. For a tool with no annotations, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single sentence that directly states the tool's purpose. It is front-loaded with the key information ('[Public] Get a post thread by AT URI') and contains no unnecessary words or redundancy. Every part of the sentence serves a clear purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a retrieval tool with no annotations and no output schema), the description is incomplete. It lacks details on what a 'post thread' entails, the return format, error conditions, or usage context relative to siblings. The '[Public]' hint is vague without further explanation. For a tool that likely returns structured data, more guidance is needed to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'post_uri' fully documented in the schema. The description adds no additional meaning beyond what the schema provides (e.g., it doesn't clarify thread-specific aspects of the URI or provide examples). With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and the resource ('a post thread by AT URI'), making the purpose understandable. However, it doesn't explicitly differentiate this from sibling tools like 'get_posts' or 'get_feed', which appear to retrieve similar content. The description is specific but lacks sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_posts' or 'search_posts'. It mentions retrieving a 'post thread' but doesn't clarify what constitutes a thread or when this is preferred over other retrieval methods. No exclusions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the dual functionality (retrieve by key vs. list all) and persistence across sessions, which is valuable. However, it doesn't mention error handling (e.g., what happens if the key doesn't exist), performance characteristics, or the format of returned data, leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve distinct purposes: the first explains the dual functionality, and the second provides usage context. There's zero wasted language, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 1 parameter, 100% schema coverage, and no output schema, the description provides good contextual completeness. It explains the tool's purpose, usage scenarios, and parameter semantics effectively. The main gap is the lack of output format details, but given the tool's relative simplicity, this is a minor omission.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, so the baseline is 3. The description adds meaningful context by explaining the semantic effect of omitting the key parameter ('omit to list all keys'), which clarifies the tool's dual behavior beyond what the schema's technical description provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from sibling tools like 'remember' (which stores) and 'forget' (which deletes) by focusing on retrieval operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool: 'to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter ('omit key to list all keys'), giving clear operational instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
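A sketch of the full memory lifecycle across the three memory tools (remember, recall, forget), under the same client assumptions as the earlier examples.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder endpoint

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Store a finding under a key.
            await session.call_tool(
                "remember", {"key": "target_ticker", "value": "AAPL"}
            )
            # Retrieve it later; omitting "key" would list all stored keys.
            stored = await session.call_tool("recall", {"key": "target_ticker"})
            print(stored.content)
            # Delete it once the context is no longer needed.
            await session.call_tool("forget", {"key": "target_ticker"})

asyncio.run(main())
```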
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it explains persistence differences (authenticated users get persistent memory vs. anonymous sessions lasting 24 hours) and the cross-tool context capability. However, it doesn't mention potential limitations like storage size constraints or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two focused sentences that each earn their place: the first states the core functionality, the second adds important behavioral context about persistence. No wasted words, and it's front-loaded with the primary purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with no annotations and no output schema, the description provides good context about what the tool does and its persistence behavior. However, it doesn't explain what happens on success/failure or return values, which would be helpful given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents both parameters. The description doesn't add meaningful parameter semantics beyond what's in the schema (which provides examples and clear descriptions), so it meets the baseline but doesn't enhance understanding of the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Store') and resource ('key-value pair in your session memory'), and distinguishes it from sibling tools like 'recall' (which presumably retrieves stored data). It explicitly mentions what gets stored and where.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but doesn't explicitly state when not to use it or name alternatives among siblings (though 'recall' is implied as complementary). It gives practical examples but lacks explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_handle (B)
Convert a Bluesky handle to its DID (decentralized identifier). Returns the DID for programmatic account lookups.
| Name | Required | Description | Default |
|---|---|---|---|
| handle | Yes | Bluesky handle to resolve | |
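As a sanity check on what this tool should return, the same lookup is available directly from Bluesky's public XRPC method com.atproto.identity.resolveHandle, which responds with a JSON object containing the DID.

```python
import json
import urllib.parse
import urllib.request

handle = "alice.bsky.social"
url = (
    "https://public.api.bsky.app/xrpc/com.atproto.identity.resolveHandle?"
    + urllib.parse.urlencode({"handle": handle})
)
with urllib.request.urlopen(url) as resp:
    print(json.load(resp)["did"])  # a DID string such as "did:plc:..."
```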
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool is '[Public]', hinting at accessibility, but lacks details on rate limits, error conditions, response format, or whether it's read-only or has side effects. This leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single sentence that front-loads the key information ('[Public]' and the core function). There is no wasted language, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what a DID is, the return format, potential errors, or usage constraints. For a tool with no structured behavioral data, this leaves the agent under-informed about how to effectively use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'handle' parameter well-documented. The description adds no additional parameter semantics beyond what the schema provides, such as handle format examples or validation rules, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Resolve') and target resource ('Bluesky handle to a DID'), distinguishing it from sibling tools that fetch feeds, followers, posts, profiles, or search content. It precisely defines the tool's function without redundancy.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While it implies usage for handle resolution, it doesn't specify contexts like user lookup or authentication, nor does it mention any sibling tools as alternatives for related tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_posts (A)
Search Bluesky posts by keyword or phrase. Returns matching posts with author handles, timestamps, engagement metrics, and content. Requires bsky_handle and bsky_app_password in the gateway URL query params.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results (1-100, default 25) | |
| query | Yes | Search query | |
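Since the description says credentials travel as query params on the gateway URL, a client would compose the URL before opening the transport. The parameter names come from the description; the base URL is a placeholder, and the app password shown is a dummy value (use a Bluesky app password, never the account password).

```python
import urllib.parse

BASE_URL = "https://example.com/mcp"  # placeholder gateway endpoint

params = {
    "bsky_handle": "alice.bsky.social",
    "bsky_app_password": "xxxx-xxxx-xxxx-xxxx",  # dummy app password
}
server_url = f"{BASE_URL}?{urllib.parse.urlencode(params)}"
# Pass server_url to the Streamable HTTP client from the earlier sketches, then:
#   await session.call_tool("search_posts", {"query": "atproto", "limit": 25})
print(server_url)
```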
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context by specifying authentication requirements ('[Auth required]' and details about bsky_handle and bsky_app_password) and implies it's a read operation (searching), though it doesn't mention rate limits, error handling, or response format. This provides useful behavioral information beyond basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with two sentences that efficiently convey key information: authentication requirements and the tool's purpose. Every sentence earns its place by providing essential details without unnecessary elaboration or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search operation with authentication), no annotations, and no output schema, the description is somewhat complete but has gaps. It covers authentication and purpose well but lacks details on response format, error cases, or behavioral constraints like rate limits, which would be helpful for an AI agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents both parameters ('limit' and 'query') fully. The description does not add any additional meaning or details about the parameters beyond what the schema provides, such as search syntax or examples. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search Bluesky posts by keyword') and identifies the resource ('Bluesky posts'), making the purpose explicit. It distinguishes this tool from siblings like 'get_posts' or 'get_feed' by specifying it's for keyword-based searching rather than retrieval by other criteria.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Search Bluesky posts by keyword') and mentions authentication requirements, but it does not explicitly state when not to use it or name specific alternatives among the sibling tools. This gives good guidance but lacks explicit exclusions or comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
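After publishing, a quick self-check is to fetch the file from your own domain and confirm the schema URL and maintainer email; `domain` below is a placeholder for your server's domain.

```python
import json
import urllib.request

domain = "example.com"  # placeholder: your server's domain

with urllib.request.urlopen(f"https://{domain}/.well-known/glama.json") as resp:
    claim = json.load(resp)

assert claim["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
# Each maintainer email must match the email on your Glama account.
print([m["email"] for m in claim["maintainers"]])
```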
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.