Glama

bluesky

Server Details

Bluesky MCP — wraps the AT Protocol API

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-bluesky
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.2/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes targeting different Bluesky resources (feeds, followers, posts, profiles, threads, handles), but 'get_feed' and 'get_posts' could cause some confusion since both retrieve posts—one from algorithmic feeds and one from user feeds. The descriptions help clarify, but the overlap exists.

Naming Consistency: 5/5

All tools follow a consistent 'verb_noun' pattern with 'get_' or 'resolve_' prefixes, using snake_case uniformly. This predictable naming makes it easy for agents to understand and select tools without confusion.

Tool Count: 5/5

With 8 tools, this server is well-scoped for interacting with Bluesky's social features. Each tool serves a clear purpose (e.g., retrieving data on feeds, followers, posts, profiles), and none seem redundant or excessive for the domain.

Completeness: 3/5

The toolset covers read operations well for Bluesky's public data (feeds, followers, posts, profiles, threads, handles), but lacks write or interaction capabilities (e.g., posting, liking, following). This is a notable gap for a social media domain, though agents can still perform many retrieval tasks.

Available Tools

8 tools
get_feed (Grade: B)

[Public] Get posts from a Bluesky feed (default: discover/whats-hot)

Parameters (JSON Schema)

Name     | Required | Description
limit    | No       | Number of posts (1-100, default 20)
feed_uri | No       | AT URI of the feed generator (default: whats-hot)
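As a sketch of what a get_feed call likely reduces to under the hood, the snippet below builds a request URL for the standard app.bsky.feed.getFeed XRPC method. The public AppView host and the whats-hot feed URI are assumptions based on the public Bluesky API, not details confirmed by this server's description:

```python
from urllib.parse import urlencode

# Assumed public AppView host; the feed URI is the commonly cited generator
# behind Bluesky's "What's Hot" discover feed (an assumption, not confirmed
# by the tool description).
PUBLIC_API = "https://public.api.bsky.app/xrpc"
DEFAULT_FEED = ("at://did:plc:z72i7hdynmk6r22z27h6tui2"
                "/app.bsky.feed.generator/whats-hot")

def build_get_feed_url(feed_uri: str = DEFAULT_FEED, limit: int = 20) -> str:
    """Build an app.bsky.feed.getFeed URL, clamping limit to the documented 1-100."""
    limit = max(1, min(100, limit))
    query = urlencode({"feed": feed_uri, "limit": limit})
    return f"{PUBLIC_API}/app.bsky.feed.getFeed?{query}"
```

Clamping the limit client-side mirrors the 1-100 range documented in the schema, so out-of-range values fail soft instead of triggering an API error.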
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds minimal context: the '[Public]' tag hints at accessibility, and the default feed is specified. However, it lacks details on critical behaviors such as rate limits, authentication needs, error handling, or the structure of returned posts, which are essential for a tool that retrieves data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded, efficiently conveying the core purpose and default behavior in a single sentence. It could be slightly more structured by separating the public tag from the functional description, but overall it avoids unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete for a tool that retrieves feed data. It fails to explain what the output looks like (e.g., post format, pagination), any dependencies, or error cases, leaving significant gaps in understanding how to effectively use the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting both parameters (limit and feed_uri) with defaults and constraints. The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline score of 3 for adequate but not enhanced coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get posts') and resource ('from a Bluesky feed'), making the purpose understandable. However, it doesn't explicitly differentiate this tool from sibling tools like 'get_posts' or 'search_posts', which appear to retrieve similar content, so it misses the highest score for sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning the default feed ('discover/whats-hot'), suggesting this tool is for fetching feed posts rather than other types of content. However, it provides no explicit guidance on when to use this versus alternatives like 'get_posts' or 'search_posts', leaving the context somewhat vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_followers (Grade: C)

[Public] Get a user's followers

Parameters (JSON Schema)

Name   | Required | Description
limit  | No       | Number of followers (1-100, default 50)
handle | Yes      | Bluesky handle
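AT Protocol list endpoints such as app.bsky.graph.getFollowers paginate with an opaque cursor field that is echoed back until it disappears. A minimal sketch of that loop, with a hypothetical injected fetch callable standing in for the actual HTTP transport:

```python
from typing import Callable, Iterator

def iter_followers(fetch: Callable[[str, dict], dict],
                   handle: str, page_size: int = 50) -> Iterator[dict]:
    """Page through getFollowers-style responses via an injected fetch callable.

    AT Protocol list endpoints return an opaque `cursor` alongside each page;
    passing it back retrieves the next page, and its absence ends the list.
    """
    params = {"actor": handle, "limit": page_size}
    while True:
        page = fetch("app.bsky.graph.getFollowers", params)
        yield from page.get("followers", [])
        cursor = page.get("cursor")
        if not cursor:
            return
        params = {**params, "cursor": cursor}
```

Because the transport is injected, the pagination logic can be exercised against canned pages without any network access.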
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It only states the action ('Get a user's followers') without mentioning permissions, rate limits, pagination, or the format of returned data. The '[Public]' prefix hints at accessibility but is vague and insufficient for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single, front-loaded sentence that states the core purpose without any wasted words. It efficiently communicates the essential action, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., list of followers, metadata), error conditions, or behavioral traits like rate limits. For a tool with two parameters and no structured output, more context is needed to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting both parameters ('handle' and 'limit') with details like data types, constraints, and defaults. The description adds no additional parameter information beyond what the schema provides, so it meets the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('a user's followers'), making it immediately understandable. However, it doesn't differentiate this from sibling tools like 'get_follows' or 'get_profile', which likely retrieve related but different user data, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_follows' or 'get_profile'. It lacks any context about prerequisites, exclusions, or specific use cases, leaving the agent to infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_follows (Grade: C)

[Public] Get accounts that a user follows

Parameters (JSON Schema)

Name   | Required | Description
limit  | No       | Number of follows (1-100, default 50)
handle | Yes      | Bluesky handle
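Both get_follows and get_followers key off the required handle parameter. A rough client-side guard, loosely following the AT Protocol handle syntax (two or more dot-separated DNS labels), might look like this; it is an approximation, not the normative grammar:

```python
import re

# A single DNS label: letters/digits, optional interior hyphens, 1-63 chars.
_LABEL = re.compile(r"^[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$")

def looks_like_handle(handle: str) -> bool:
    """Approximate check that a string resembles a Bluesky handle
    such as "alice.bsky.social"."""
    if len(handle) > 253:
        return False
    labels = handle.split(".")
    if len(labels) < 2:
        return False
    return all(_LABEL.match(label) for label in labels)
```

A check like this catches obvious mistakes (bare usernames, leading hyphens) before spending an API call, while deferring real validation to the server.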
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds minimal context: '[Public]' hints at accessibility but doesn't clarify rate limits, authentication needs, pagination, or response format. For a read operation with no annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond its basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single, front-loaded sentence with no wasted words. Every element ('[Public]', 'Get accounts that a user follows') contributes directly to understanding the tool's purpose and scope, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete for effective tool use. It misses critical details like return format (e.g., list structure, fields included), error handling, and how it differs from sibling tools. While concise, it does not compensate for the absence of structured behavioral or output information, leaving the agent under-informed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema fully documents both parameters ('handle' and 'limit') with descriptions and constraints. The description does not add any meaning beyond what the schema provides, such as explaining parameter interactions or usage examples. This meets the baseline score when the schema handles parameter documentation effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('accounts that a user follows'), making the purpose unambiguous. However, it does not explicitly differentiate this tool from its sibling 'get_followers', which retrieves the inverse relationship. The '[Public]' prefix adds context about access but doesn't fully distinguish functionality from similar tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'get_followers' (for followers instead of follows) or 'get_profile' (which might include follow data). The description implies usage for retrieving follow relationships but offers no explicit when/when-not instructions or prerequisites, leaving the agent to infer context from tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_posts (Grade: C)

[Public] Get recent posts from a Bluesky user's feed

Parameters (JSON Schema)

Name   | Required | Description
limit  | No       | Number of posts (1-100, default 20)
handle | Yes      | Bluesky handle
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation ('Get') and public access ('[Public]'), but lacks details on rate limits, authentication needs, pagination, error handling, or what 'recent' means (e.g., time frame). This is inadequate for a tool with potential API constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It is front-loaded with key information ('[Public] Get recent posts') and appropriately sized for its purpose, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It fails to explain behavioral traits like rate limits or authentication, and doesn't describe return values (e.g., post format, pagination). For a tool fetching user data with siblings, more context is needed to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema fully documents both parameters ('handle' and 'limit'). The description adds no additional parameter semantics beyond implying the 'handle' is for a Bluesky user, which is already clear from the schema. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get recent posts') and target resource ('from a Bluesky user's feed'), with the '[Public]' prefix indicating access scope. However, it doesn't differentiate this tool from sibling tools like 'get_feed' or 'search_posts', which likely serve similar purposes but with different filtering or scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'get_feed' or 'search_posts'. It mentions a specific context ('Bluesky user's feed') but lacks explicit when/when-not instructions or prerequisites, leaving the agent to infer usage based on tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_profile (Grade: A)

[Public] Get a Bluesky user profile by handle (e.g., "alice.bsky.social")

Parameters (JSON Schema)

Name   | Required | Description
handle | Yes      | Bluesky handle (e.g., alice.bsky.social)
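A get_profile call likely reduces to a single unauthenticated GET against the public Bluesky AppView. The sketch below assumes the standard app.bsky.actor.getProfile XRPC method; the response fields are not guaranteed by this server's one-line description:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Assumed public AppView host (not stated in the tool description).
PUBLIC_API = "https://public.api.bsky.app/xrpc"

def profile_url(handle: str) -> str:
    """URL for the standard app.bsky.actor.getProfile XRPC method."""
    return f"{PUBLIC_API}/app.bsky.actor.getProfile?actor={quote(handle)}"

def get_profile(handle: str) -> dict:
    """Fetch a profile as a decoded JSON dict (performs a network call)."""
    with urlopen(profile_url(handle)) as resp:
        return json.load(resp)
```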
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It discloses the public access nature ('[Public]') and specifies the input format (handle with example), but doesn't describe behavioral traits like rate limits, error conditions, authentication needs, or what data the profile contains. It adds some context but leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads key information (public access, action, resource, constraint) with zero wasted words. Every element earns its place, making it highly scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with 1 parameter and no output schema, the description is adequate but incomplete. It doesn't explain what a 'profile' contains or the response format, which would help the agent understand the tool's output. Given the lack of annotations and output schema, more context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single 'handle' parameter with its type and example. The description adds no additional parameter semantics beyond what's in the schema, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'), resource ('Bluesky user profile'), and key constraint ('by handle'), distinguishing it from siblings like get_feed or get_posts which target different resources. The inclusion of '[Public]' further clarifies access scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to retrieve a user profile by handle) but doesn't explicitly mention when not to use it or name alternatives like resolve_handle (which might convert handles to DIDs). The context is sufficient but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_thread (Grade: C)

[Public] Get a post thread by AT URI

Parameters (JSON Schema)

Name     | Required | Description
post_uri | Yes      | AT URI of the post (at://did/app.bsky.feed.post/rkey)
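The schema documents the post_uri shape as at://did/app.bsky.feed.post/rkey. A minimal parser for exactly that documented case (real AT URIs also allow handles as the authority and optional fragments, which this sketch deliberately ignores):

```python
import re
from typing import NamedTuple

class AtUri(NamedTuple):
    did: str
    collection: str
    rkey: str

# Matches only the three-segment form shown in the schema.
_AT_URI = re.compile(r"^at://([^/]+)/([^/]+)/([^/]+)$")

def parse_post_uri(uri: str) -> AtUri:
    """Split a post AT URI into (did, collection, rkey), rejecting non-posts."""
    m = _AT_URI.match(uri)
    if not m or m.group(2) != "app.bsky.feed.post":
        raise ValueError(f"not a post AT URI: {uri!r}")
    return AtUri(*m.groups())
```

Rejecting non-post collections up front gives a clearer error than passing an arbitrary record URI through to the thread endpoint.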
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool is '[Public]', implying accessibility, but doesn't explain what this means operationally (e.g., authentication requirements, rate limits, or data sensitivity). It mentions retrieving a 'post thread' but doesn't describe the return format, error handling, or any side effects. For a tool with no annotations, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single sentence that directly states the tool's purpose. It is front-loaded with the key information ('[Public] Get a post thread by AT URI') and contains no unnecessary words or redundancy. Every part of the sentence serves a clear purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a retrieval tool with no annotations and no output schema), the description is incomplete. It lacks details on what a 'post thread' entails, the return format, error conditions, or usage context relative to siblings. The '[Public]' hint is vague without further explanation. For a tool that likely returns structured data, more guidance is needed to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'post_uri' fully documented in the schema. The description adds no additional meaning beyond what the schema provides (e.g., it doesn't clarify thread-specific aspects of the URI or provide examples). With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and the resource ('a post thread by AT URI'), making the purpose understandable. However, it doesn't explicitly differentiate this from sibling tools like 'get_posts' or 'get_feed', which appear to retrieve similar content. The description is specific but lacks sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_posts' or 'search_posts'. It mentions retrieving a 'post thread' but doesn't clarify what constitutes a thread or when this is preferred over other retrieval methods. No exclusions or prerequisites are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_handle (Grade: B)

[Public] Resolve a Bluesky handle to a DID

Parameters (JSON Schema)

Name   | Required | Description
handle | Yes      | Bluesky handle to resolve
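Handle resolution presumably maps onto the standard com.atproto.identity.resolveHandle XRPC method, which returns a JSON object containing the DID. The sketch below builds that URL and adds a loose shape check on the result; the public host is an assumption, and is_did checks only the generic did:method:identifier form without resolving anything:

```python
from urllib.parse import quote

# Assumed public AppView host.
PUBLIC_API = "https://public.api.bsky.app/xrpc"

def resolve_handle_url(handle: str) -> str:
    """URL for com.atproto.identity.resolveHandle (response carries a "did" field)."""
    return (f"{PUBLIC_API}/com.atproto.identity.resolveHandle"
            f"?handle={quote(handle)}")

def is_did(value: str) -> bool:
    """Loose check for the did:method:identifier shape; does not resolve the DID."""
    return value.startswith("did:") and value.count(":") >= 2
```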
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool is '[Public]', hinting at accessibility, but lacks details on rate limits, error conditions, response format, or whether it's read-only or has side effects. This leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single sentence that front-loads the key information ('[Public]' and the core function). There is no wasted language, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what a DID is, the return format, potential errors, or usage constraints. For a tool with no structured behavioral data, this leaves the agent under-informed about how to effectively use it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'handle' parameter well-documented. The description adds no additional parameter semantics beyond what the schema provides, such as handle format examples or validation rules, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Resolve') and target resource ('Bluesky handle to a DID'), distinguishing it from sibling tools that fetch feeds, followers, posts, profiles, or search content. It precisely defines the tool's function without redundancy.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it implies usage for handle resolution, it doesn't specify contexts like user lookup or authentication, nor does it mention any sibling tools as alternatives for related tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_posts (Grade: A)

[Auth required] Search Bluesky posts by keyword. Requires bsky_handle and bsky_app_password in the gateway URL query params.

Parameters (JSON Schema)

Name  | Required | Description
limit | No       | Number of results (1-100, default 25)
query | Yes      | Search query
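Per the tool description, credentials are supplied as bsky_handle and bsky_app_password query parameters on the gateway URL rather than as per-call arguments. A small helper that attaches them, assuming a placeholder gateway URL (the param names are the documented ones; everything else is illustrative):

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def with_bluesky_auth(gateway_url: str, handle: str, app_password: str) -> str:
    """Return the gateway URL with bsky_handle / bsky_app_password query params,
    preserving any query parameters already present."""
    parts = urlparse(gateway_url)
    query = dict(parse_qsl(parts.query))
    query.update({"bsky_handle": handle, "bsky_app_password": app_password})
    return urlunparse(parts._replace(query=urlencode(query)))
```

Note that credentials in a URL can end up in logs and browser history; an app password (revocable in Bluesky settings) is used here precisely so the main account password never travels this way.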
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context by specifying authentication requirements ('[Auth required]' and details about bsky_handle and bsky_app_password) and implies it's a read operation (searching), though it doesn't mention rate limits, error handling, or response format. This provides useful behavioral information beyond basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with two sentences that efficiently convey key information: authentication requirements and the tool's purpose. Every sentence earns its place by providing essential details without unnecessary elaboration or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search operation with authentication), no annotations, and no output schema, the description is somewhat complete but has gaps. It covers authentication and purpose well but lacks details on response format, error cases, or behavioral constraints like rate limits, which would be helpful for an AI agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents both parameters ('limit' and 'query') fully. The description does not add any additional meaning or details about the parameters beyond what the schema provides, such as search syntax or examples. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search Bluesky posts by keyword') and identifies the resource ('Bluesky posts'), making the purpose explicit. It distinguishes this tool from siblings like 'get_posts' or 'get_feed' by specifying it's for keyword-based searching rather than retrieval by other criteria.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Search Bluesky posts by keyword') and mentions authentication requirements, but it does not explicitly state when not to use it or name specific alternatives among the sibling tools. This gives good guidance but lacks explicit exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
