Glama
Ownership verified

Server Details

Manage Threads from your AI assistant

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

23 tools
create_post (Grade: A)

Create a new post or thread. A thread is multiple posts linked together. Supports text content, topic tags, images (via public URL), auto-repost configuration, and scheduling. If scheduleAt is not provided, the post is saved as a draft. IMPORTANT: Always send scheduleAt as a local time WITHOUT a timezone offset (e.g. '2025-03-15T09:00:00', never with Z or +00:00). The server automatically converts it to UTC using the user's configured timezone.

Parameters

- posts (required): Array of posts. Single item for a post, multiple for a thread.
- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- autoRepost (optional): Auto-repost configuration. If omitted, the user's default auto-repost setting is applied.
- providerId (required): The provider ID (from list_providers) to post to.
- scheduleAt (optional): Local datetime WITHOUT timezone offset (e.g. '2025-03-15T09:00:00'). Do NOT append Z or any offset; the server converts to UTC using the user's timezone. If omitted, saves as draft.
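
The scheduleAt rule is the easiest thing to get wrong here. A minimal sketch of building a create_post payload in Python, assuming only the field names documented above (the providerId value and post contents are invented for illustration):

```python
from datetime import datetime

def to_schedule_at(dt: datetime) -> str:
    """Format a naive local datetime for scheduleAt: local wall-clock time
    with NO timezone offset. The server converts it to UTC itself using
    the user's configured timezone."""
    if dt.tzinfo is not None:
        raise ValueError("pass a naive local datetime, not an offset-aware one")
    return dt.strftime("%Y-%m-%dT%H:%M:%S")

payload = {
    "providerId": "prov_123",  # hypothetical; obtain real IDs from list_providers
    "posts": [
        {"content": "First post of the thread"},
        {"content": "Second post, linked into the same thread"},
    ],
    "scheduleAt": to_schedule_at(datetime(2025, 3, 15, 9, 0)),
}
# payload["scheduleAt"] is '2025-03-15T09:00:00' (no Z, no +00:00)
```

Omitting scheduleAt from this payload would save the thread as a draft instead of scheduling it.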
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds substantial operational context beyond annotations: explains server-side timezone conversion logic (local time → UTC), draft persistence behavior, and media processing workflow (server downloads/validates URLs). Annotations only indicate write/non-destructive nature; description explains what actually happens during creation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five tightly constructed sentences with zero redundancy. Front-loaded with core purpose, followed by feature summary, draft logic, and critical timezone warning. Every sentence conveys essential information not implied by the tool name alone.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex nested schema (media arrays, auto-repost objects, scheduling logic) and 100% schema coverage, the description appropriately focuses on workflow implications (draft vs. scheduled) and critical constraints (timezone format) rather than repeating parameter documentation. No output schema exists, but the description need not explain return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds value by emphasizing the critical timezone constraint ('IMPORTANT: Always send scheduleAt as a local time WITHOUT a timezone offset'), reinforcing the schema requirement with urgency. Also maps high-level features (topic tags, auto-repost) to parameter groups.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb+resource ('Create a new post or thread') and immediately clarifies the thread/post distinction ('A thread is multiple posts linked together'). Effectively differentiates from siblings like edit_post, delete_thread, and list_drafts by defining its creation-specific scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explains key usage states: omitting scheduleAt saves as draft, while providing it schedules the post. Clarifies thread construction via the posts array. Lacks explicit naming of alternatives (e.g., 'use edit_post to modify'), but provides clear behavioral context for when to expect draft vs. published outcomes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_thread (Grade: A, Destructive)

Delete a thread and all its posts. This action cannot be undone.

Parameters

- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- threadId (required): The thread ID to delete.
Behavior: 4/5

While annotations declare destructiveHint=true, the description adds valuable behavioral context: the cascading effect ('all its posts') and explicit irreversibility ('cannot be undone'). This complements the structured metadata effectively.

Conciseness: 5/5

The description consists of two efficient sentences with zero waste. Critical information (action, scope, irreversibility) is front-loaded and immediately accessible.

Completeness: 4/5

For a destructive operation with two well-documented parameters and clear annotations, the description is appropriately complete. It covers the scope of deletion and safety warnings without needing to describe return values (no output schema present).

Parameters: 3/5

With 100% schema description coverage, the input schema fully documents both threadId and teamId. The description does not add parameter-specific semantics beyond what the schema provides, meeting the baseline expectation.

Purpose: 5/5

The description states the specific action ('Delete') and resource ('a thread and all its posts'), clearly distinguishing this from sibling tools like edit_thread or get_thread.

Usage Guidelines: 3/5

The description implies usage constraints by warning 'This action cannot be undone,' alerting the agent to the permanent nature of the operation. However, it lacks explicit guidance on when to prefer this over alternatives or specific prerequisites.

edit_post (Grade: A)

Edit the text content, topic, or media of a single post by its ID. Only posts with status READY (drafts or scheduled) can be edited. Use get_thread first to retrieve post IDs.

Parameters

- media (optional): Replace all media on this post. Provide public URLs; the server downloads, validates, and uploads them. Omit to keep existing media unchanged.
- topic (optional): New topic tag for the post. Pass null to remove the topic.
- postId (required): The post ID to edit (from get_thread).
- content (optional): New text content for the post.
- spoilerRanges (optional): Replace text spoiler ranges (Threads only). Max 10 per post. Omit to keep existing spoilers unchanged. Pass an empty array to remove all spoilers.
- isSpoilerMedia (optional): Set media spoiler flag (Threads only). true = all media blurred, false = remove spoiler. Omit to keep unchanged.
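
The omit-versus-explicit distinctions above are subtle, so here are a few hypothetical edit_post argument sets side by side (the postId value is invented; field names come from the schema above):

```python
# Only content changes; media, topic, and spoiler settings are left untouched.
update_text = {"postId": "post_1", "content": "Updated wording"}

# Explicit empty array removes all text spoilers; omitting the key would keep them.
clear_spoilers = {"postId": "post_1", "spoilerRanges": []}

# None serializes to JSON null, which removes the topic tag entirely.
drop_topic = {"postId": "post_1", "topic": None}

# Blur all media on the post (Threads only).
blur_media = {"postId": "post_1", "isSpoilerMedia": True}
```

The pattern throughout: an omitted key means "leave unchanged", while an explicit empty or null value means "remove".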
Behavior: 4/5

Annotations indicate this is a non-destructive write operation (readOnlyHint: false, destructiveHint: false). The description adds valuable behavioral context not in annotations: the READY status requirement and the dependency on get_thread for ID retrieval. It does not mention rate limits, auth requirements, or error states if editing non-READY posts, which keeps it from a perfect score.

Conciseness: 5/5

Two sentences, zero waste. First sentence establishes purpose and scope; second sentence provides critical constraints and prerequisites. Information density is high with no filler or redundant statements.

Completeness: 4/5

Given the lack of output schema, the description sufficiently covers operational constraints and workflow prerequisites for a mutation tool. It omits description of return values or error handling (e.g., what happens if an invalid postId is provided), which would be necessary for a perfect completeness score, but the existing context is robust.

Parameters: 3/5

With 100% schema description coverage, the schema fully documents all six parameters (postId, content, topic, media, spoilerRanges, isSpoilerMedia), including format constraints and behavioral notes (e.g., media replacement behavior). The description mentions these fields exist but adds no semantic information beyond what the schema already provides, warranting the baseline score.

Purpose: 5/5

The description provides a specific verb (Edit), resource (post), and exact fields modifiable (text content, topic, media), along with the identification method (by its ID). The scope 'single post' effectively distinguishes this from sibling tools 'edit_thread' and 'create_post' without being verbose.

Usage Guidelines: 5/5

Excellent guidance: explicitly states the status constraint ('Only posts with status READY') qualifying which posts are eligible, and provides a clear prerequisite workflow ('Use get_thread first to retrieve post IDs'). This prevents misuse and establishes the correct sequence of operations.

edit_thread (Grade: A)

Edit a thread by adding, removing, or reordering its posts. Provide the full desired list of posts — existing posts (with id) are updated, new posts (without id) are created, and existing posts not in the list are removed. Only threads where all posts have status READY can be edited.

Parameters

- posts (required): The full ordered list of posts for the thread. Order determines postOrder (0-indexed).
- threadId (required): The thread ID to edit.
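
The full-list reconciliation rule can be previewed client-side. This is a sketch of the semantics as described (posts with an id update, posts without an id are created, existing posts missing from the list are removed), not the server's actual implementation:

```python
def preview_reconcile(existing_ids, desired_posts):
    """Classify a full desired post list the way edit_thread is documented
    to treat it: id present = update, id absent = create, and any existing
    post left out of the list = remove."""
    desired_ids = {p["id"] for p in desired_posts if "id" in p}
    return {
        "update": [p["id"] for p in desired_posts if "id" in p],
        "create": [p["content"] for p in desired_posts if "id" not in p],
        "remove": [pid for pid in existing_ids if pid not in desired_ids],
    }

plan = preview_reconcile(
    existing_ids=["p1", "p2", "p3"],
    desired_posts=[
        {"id": "p1", "content": "kept and updated"},
        {"content": "brand new post"},  # no id: will be created
        # p2 and p3 are omitted, so they will be removed
    ],
)
# plan == {'update': ['p1'], 'create': ['brand new post'], 'remove': ['p2', 'p3']}
```

Because omission means deletion, callers should always fetch the current thread (e.g. via get_thread) and send the complete desired list, never a partial update.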
Behavior: 4/5

Adds crucial behavioral details beyond annotations: the reconciliation logic (id present = update, id absent = create, post left out of the list = remove) and the READY status prerequisite. Annotations only indicate it's a write operation (readOnlyHint: false), while the description explains the complex update semantics.

Conciseness: 5/5

Three sentences with zero waste: sentence 1 states purpose, sentence 2 explains the critical reconciliation pattern, and sentence 3 states the status constraint. Information density is high and front-loaded.

Completeness: 4/5

Given the complexity of thread reconciliation and 100% schema coverage, the description adequately covers the input contract and behavioral expectations. Minor gap: no output schema exists and the description doesn't mention return values or success indicators.

Parameters: 4/5

While schema coverage is 100%, the description adds essential semantic meaning for the 'posts' parameter: it clarifies that this must be the complete desired list (not a partial update) and explains the id field's dual purpose as a discriminator between update and create operations.

Purpose: 5/5

The description clearly states the specific action ('Edit a thread by adding, removing, or reordering its posts') and distinguishes from siblings like edit_post (single post) and delete_thread by emphasizing the bulk full-list replacement pattern.

Usage Guidelines: 4/5

Provides clear constraints on when the tool can be used ('Only threads where all posts have status READY can be edited') and explains the full-list replacement requirement, though it does not explicitly name sibling alternatives like edit_post for single-post updates.

get_consistency (Grade: A, Read-only)

Get posting consistency data for the past 365 days. Returns a heatmap-style array of daily post counts, total posts, and number of days with posts.

Parameters

- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- providerUserId (required): The provider user ID (from list_providers).
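
No output schema is published, but the description implies the summary figures are derivable from the daily array. A sketch under that assumption (the field names here are invented, not taken from the server):

```python
def summarize_heatmap(daily_counts):
    """Derive the two summary numbers get_consistency is described as
    returning from a heatmap-style array of daily post counts."""
    return {
        "totalPosts": sum(daily_counts),
        "daysWithPosts": sum(1 for c in daily_counts if c > 0),
    }

summary = summarize_heatmap([0, 2, 1, 0, 3])
# summary == {'totalPosts': 6, 'daysWithPosts': 3}
```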
Behavior: 4/5

Annotations declare readOnlyHint=true. Description adds valuable behavioral context: fixed 365-day lookback window, specific return structure (heatmap array with total posts and active days count), and read-only nature ('Get'). No contradiction with annotations.

Conciseness: 5/5

Two sentences with zero waste. First sentence establishes operation and temporal scope; second sentence details return format. Every word earns its place. No redundancy with structured fields.

Completeness: 4/5

No output schema exists, so description compensates by detailing return structure (heatmap array, specific metrics). Input requirements are fully covered by schema. Minor gap: could explicitly state that providerUserId requires prior list_providers call, though this is documented in schema.

Parameters: 3/5

Schema coverage is 100%, with complete descriptions for teamId (optional workspace logic) and providerUserId (source from list_providers). Description does not add parameter-specific details beyond schema, which is appropriate given complete schema coverage. Baseline 3.

Purpose: 5/5

Description uses specific verb 'Get' with resource 'posting consistency data' and includes scope modifier 'past 365 days'. It distinguishes from sibling analytics tools (get_post_analytics, get_follower_growth) by specifying the heatmap-style temporal aggregation format.

Usage Guidelines: 3/5

Provides implied usage context by describing the return format (heatmap array of daily counts), which helps agents recognize when they need granular temporal consistency data. However, lacks explicit when-to-use guidance versus sibling metric tools like get_metric_timeseries or get_post_analytics.

get_daily_recap (Grade: A, Read-only)

Get yesterday's daily recap for a social account. Returns new followers gained and number of posts published.

Parameters

- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- providerUserId (required): The provider user ID (from list_providers).
Behavior: 4/5

Annotations declare readOnlyHint=true, and the description adds valuable behavioral context: the temporal constraint ('yesterday' only, not configurable), and specific return payload details (new followers count and posts published count). No contradiction with annotations.

Conciseness: 5/5

Two efficient sentences with zero waste. First sentence establishes the action and scope, second details the return values. Perfectly front-loaded and appropriately sized for the tool's complexity.

Completeness: 4/5

No output schema exists, but the description compensates effectively by specifying the two key metrics returned (followers and posts). Combined with the 100% schema coverage and read-only annotation, this provides sufficient context for invocation, though error conditions or empty result behavior could be noted.

Parameters: 3/5

Schema description coverage is 100%, with teamId and providerUserId fully documented in the schema. The description does not add parameter-specific semantics, but with complete schema coverage, the baseline score of 3 is appropriate.

Purpose: 5/5

Description uses specific verb 'Get' with clear resource 'daily recap' and temporal scope 'yesterday' for a 'social account'. It distinguishes from siblings like get_follower_growth or get_metric_timeseries by specifying the daily summary nature and specific metrics returned (followers, posts).

Usage Guidelines: 3/5

Provides implied usage through the 'yesterday' constraint and specific return values (followers gained, posts published), suggesting when to use this versus time-series or real-time analytics tools. However, it lacks explicit guidance on when to prefer this over sibling tools like get_follower_growth or get_metric_timeseries.

get_follower_growth (Grade: B, Read-only)

Get follower count over time for a social account. Returns daily follower count data points.

Parameters

- to (required): End date in ISO 8601 format.
- from (required): Start date in ISO 8601 format.
- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- providerUserId (required): The provider user ID (from list_providers).
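
Both dates are required and ISO 8601. A small helper for building the range (the seven-day window below is just an example, not a documented limit):

```python
from datetime import date, timedelta

def follower_growth_range(days_back: int, today: date) -> dict:
    """Build the required from/to arguments as ISO 8601 date strings."""
    return {
        "from": (today - timedelta(days=days_back)).isoformat(),
        "to": today.isoformat(),
    }

args = follower_growth_range(7, date(2025, 3, 15))
# args == {'from': '2025-03-08', 'to': '2025-03-15'}
```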
Behavior: 3/5

The annotation declares readOnlyHint=true, and the description aligns with this by using 'Get.' The description adds valuable behavioral context beyond the annotation by specifying the data granularity ('daily') and structure ('data points'), which helps the agent understand what to expect. However, it lacks details on rate limits, data retention windows, or error conditions.

Conciseness: 5/5

The description consists of two efficient sentences with zero waste: the first establishes the operation and resource, the second describes the return value. It is appropriately front-loaded with the most critical information (what it gets) first.

Completeness: 4/5

Given 100% schema coverage and the readOnlyHint annotation, the description adequately covers the tool's purpose. It compensates for the missing output schema by describing the return value as 'daily follower count data points,' giving the agent sufficient context to understand the tool's utility without needing explicit output schema documentation.

Parameters: 3/5

With 100% schema description coverage, the baseline is 3. The description mentions 'social account' (mapping to providerUserId) and implies the temporal range (from/to) via 'over time' and 'daily,' but does not add semantic meaning beyond what the schema already provides for teamId or clarify parameter interactions.

Purpose: 4/5

The description clearly states the tool retrieves 'follower count over time' and specifies the return format as 'daily follower count data points.' It distinguishes itself from siblings like get_live_metrics (historical vs. live) and get_metric_timeseries (specific follower metric vs. general metrics), though it doesn't explicitly name these alternatives.

Usage Guidelines: 2/5

The description provides no explicit guidance on when to use this tool versus alternatives like get_metric_timeseries or get_live_metrics. While 'over time' implies historical analysis, there are no when-not-to-use conditions, prerequisites (e.g., needing a valid providerUserId from list_providers), or recommendations for date range limits.

get_follow_up_templates (Grade: A, Read-only)

Get all follow-up (auto-plug) templates for a specific provider. These are reusable templates that define what reply to attach to a post after it reaches engagement thresholds.

Parameters

- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- userProviderId (required): The provider ID (from list_providers, field 'id') to get templates for.
Behavior: 4/5

Annotations declare readOnlyHint=true. Description adds crucial behavioral context: these templates trigger based on 'engagement thresholds' and define reply attachment logic. No contradictions with annotations.

Conciseness: 5/5

Two sentences with zero waste. First sentence defines the operation and resource; second explains the business logic (reusable templates for engagement-based replies). Well front-loaded.

Completeness: 4/5

Sufficient for a read-only list tool. Explains the domain concept (auto-plug/engagement thresholds) that would otherwise be opaque. No output schema exists, but the description adequately characterizes the returned resource type.

Parameters: 3/5

Schema coverage is 100% with detailed descriptions (including the list_providers reference for userProviderId). The description mentions 'specific provider,' aligning with the required parameter, but adds no parameter syntax beyond the schema. Baseline 3 is appropriate for high schema coverage.

Purpose: 5/5

Specific verb 'Get' + resource 'follow-up templates' + scope 'for a specific provider'. Uniquely identifies the resource among siblings (distinct from get_thread_follow_up or get_user_settings) and clarifies the domain terminology '(auto-plug)'.

Usage Guidelines: 3/5

Implies usage context by explaining what templates do (attach replies after engagement thresholds), helping agents understand when auto-plug functionality is needed. However, lacks explicit 'when not to use' guidance or comparison to sibling get_thread_follow_up.

get_live_metrics (Grade: A, Read-only)

Get live engagement metrics for a social account over a date range. Returns views, likes, replies, reposts, and quotes with percentage change vs. the previous period. Works with both Threads and Bluesky.

Parameters

- to (optional): End date in ISO 8601 format. Defaults to now.
- from (optional): Start date in ISO 8601 format. Defaults to 7 days ago.
- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- providerUserId (required): The provider user ID (from list_providers, field providerUserId).
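
The description mentions percentage change versus the previous period but not the formula. A plain interpretation, as an assumption; the server's exact calculation and its handling of a zero baseline are not documented:

```python
def pct_change(current: float, previous: float):
    """Percentage change vs. the previous period. Returns None when the
    previous period is zero, since the documented behavior is unknown."""
    if previous == 0:
        return None
    return round((current - previous) / previous * 100, 1)

up = pct_change(150, 100)   # views grew from 100 to 150
down = pct_change(80, 100)  # likes fell from 100 to 80
```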
Behavior: 4/5

Beyond the readOnlyHint annotation, the description adds valuable behavioral context: it specifies the exact metrics returned, notes the percentage change calculation versus the previous period, and clarifies platform support (Threads/Bluesky). It does not contradict the read-only annotation.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three dense sentences with zero waste. It is front-loaded with the action ('Get live engagement metrics...'), follows with return value specifics, and ends with platform constraints. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description appropriately details the return values (specific metrics plus percentage change). With 100% parameter coverage in the schema and read-only annotations present, the description provides sufficient context for invocation, though it could mention pagination or rate limiting behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured schema already documents all four parameters (including the source of providerUserId from list_providers and date formats). The description mentions 'date range' but adds no semantic meaning beyond what the schema already provides, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'live engagement metrics' with specific metrics listed (views, likes, replies, reposts, quotes) and supported platforms (Threads/Bluesky). However, it does not explicitly differentiate from similar analytics siblings like 'get_metric_timeseries' or 'get_post_analytics'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'get_post_analytics' or 'get_metric_timeseries'. It does not specify prerequisites (e.g., needing to use list_providers first) or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_metric_timeseries (Grade: A)
Read-only

Get time-series data points for a specific metric. Available metrics: VIEWS, LIKES, REPLIES, REPOSTS, QUOTES, ENGAGEMENT_RATE, FOLLOWERS_COUNT. Returns daily data points.

Parameters (JSON Schema):
- to (required): End date in ISO 8601 format
- from (required): Start date in ISO 8601 format
- metric (required): The metric to retrieve
- teamId (optional): Optional workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- providerUserId (required): The provider user ID (from list_providers)
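A minimal client-side sketch of the metric constraint: the set below simply restates the enumeration from the tool description, and `timeseries_args` is a hypothetical guard function, not part of the server.

```python
# The seven metric names restate the enumeration in the tool description;
# timeseries_args is a hypothetical client-side guard that rejects unknown
# metrics before the call ever reaches the server.
VALID_METRICS = {
    "VIEWS", "LIKES", "REPLIES", "REPOSTS",
    "QUOTES", "ENGAGEMENT_RATE", "FOLLOWERS_COUNT",
}

def timeseries_args(provider_user_id: str, metric: str,
                    date_from: str, date_to: str) -> dict:
    if metric not in VALID_METRICS:
        raise ValueError(f"unknown metric {metric!r}")
    return {
        "providerUserId": provider_user_id,
        "metric": metric,
        "from": date_from,  # required, ISO 8601
        "to": date_to,      # required, ISO 8601
    }
```

Validating the enum locally gives the agent a clear error instead of a failed tool call.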
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true. The description adds 'Returns daily data points' which clarifies temporal granularity/aggregation level not evident in annotations or parameter names. Does not disclose pagination, rate limits, or max date ranges.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. Front-loaded with the core action ('Get time-series data points'), followed by available options and return format. Efficient and scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description partially compensates by stating 'Returns daily data points'. All 5 input parameters are well-documented in schema. Adequate for the tool's complexity, though could specify data structure (array vs object).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description reiterates the metric enum values which aids quick scanning but adds no semantic depth beyond schema descriptions (e.g., no examples of ISO date formats or cross-parameter constraints).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb+resource ('Get time-series data points for a specific metric') and enumerates available metrics (VIEWS, LIKES, etc.) to define scope. However, it does not differentiate from sibling get_follower_growth despite FOLLOWERS_COUNT being an available metric here.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists available metrics which implies when to select this tool (when needing these specific metrics), but provides no explicit guidance on when to use this vs siblings like get_post_analytics (per-post) or get_follower_growth (dedicated follower trends).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_post_analytics (Grade: A)
Read-only

Get analytics for individual posts published within a date range. Returns engagement metrics (views, likes, replies, reposts, quotes, engagement rate) for each post.

Parameters (JSON Schema):
- to (required): End date in ISO 8601 format
- from (required): Start date in ISO 8601 format
- teamId (optional): Optional workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- providerUserId (required): The provider user ID (from list_providers)
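The tool returns an engagement rate per post, but neither the description nor a schema states its formula. The sketch below uses one common definition, total interactions divided by views, purely as an illustration of what such a rate typically means; the server may compute it differently.

```python
# Illustrative only: the server's actual engagement-rate formula is not
# documented here. A common convention divides total interactions by views.
def engagement_rate(views: int, likes: int, replies: int,
                    reposts: int, quotes: int) -> float:
    if views == 0:
        return 0.0  # avoid division by zero for posts with no views
    return (likes + replies + reposts + quotes) / views
```

With 1,000 views and 85 total interactions this yields 0.085; an agent should treat the returned rate as authoritative rather than recomputing it.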
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Aligns with readOnlyHint annotation by using 'Get' and describing safe data retrieval. Adds valuable behavioral context by enumerating specific returned metrics (views, likes, replies, reposts, quotes, engagement rate) since no output schema exists. Does not mention rate limits or pagination behavior for accounts with many posts.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero redundancy. First sentence establishes scope (individual posts, date range), second specifies return values. Every word earns its place; metrics list is specific and actionable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates well for missing output schema by detailing the six engagement metrics returned. Covers the essential contract for a read-only analytics fetch. Minor gap regarding pagination behavior if querying large date ranges with many posts.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, parameters are already well-documented (ISO 8601 dates, providerUserId source, optional teamId). Description mentions 'date range' which maps to from/to parameters but does not add significant semantic context beyond what the schema already provides. Baseline score appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves analytics for individual posts with specific date range scoping. Lists exact engagement metrics returned. Lacks explicit differentiation from sibling analytics tools like get_live_metrics or get_daily_recap, which would help agents choose between the multiple analytics endpoints available.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context via 'published within a date range,' suggesting historical analysis use cases. However, it provides no explicit guidance on when to prefer this over siblings like get_metric_timeseries (aggregated data) or get_live_metrics (real-time data), nor does it mention prerequisites like obtaining providerUserId from list_providers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_recommendations (Grade: A)
Read-only

Get AI-generated posting recommendations based on your analytics and performance patterns. Returns best posting times and content suggestions.

Parameters (JSON Schema):
- teamId (optional): Optional workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- providerUserId (required): The provider user ID (from list_providers)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Consistent with readOnlyHint: true annotation (uses 'Get' and 'Returns'). Adds valuable behavioral context not in annotations: specifies recommendations are 'AI-generated' (indicating probabilistic/machine learning nature) and reveals data sources ('analytics and performance patterns'). Discloses return types (posting times, content suggestions) which compensates for missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence establishes the core function and data basis; second sentence specifies return values. Front-loaded with essential information and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately complete for a 2-parameter read-only tool. Since no output schema exists, the description helpfully specifies what the tool returns (best times, content suggestions). Could improve by noting that recommendations are personalized to the specific providerUserId, though this is implied.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both parameters including the 'personal' string option for teamId and the list_providers reference for providerUserId. The description adds no parameter-specific guidance, meeting the baseline expectation for well-documented schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Get') and resource ('AI-generated posting recommendations') with specific scope ('based on your analytics'). Implicitly distinguishes from sibling analytics tools like get_post_analytics by emphasizing 'AI-generated' recommendations and actionable outputs ('best posting times') rather than raw data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context by specifying it returns 'best posting times and content suggestions,' hinting this is for content planning optimization. However, lacks explicit guidance on when to use this versus get_post_analytics or get_consistency, and doesn't mention prerequisites like requiring a valid providerUserId from list_providers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_subscription (Grade: A)
Read-only

Get the current user's subscription and plan information. Returns plan type, remaining posts, account limits, and team limits.

Parameters (JSON Schema):
- teamId (optional): Optional workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, so the description appropriately focuses on adding return value semantics. It discloses exactly what data structures are returned (plan type, remaining posts, account/team limits) since no output schema exists, adding valuable behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes the operation, second enumerates return values. No filler words or redundant explanations. Perfectly sized for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter read operation without output schema, the description is complete. It compensates for missing output schema by detailing return fields, and the readOnly annotation covers the safety profile. No additional context needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the optional teamId parameter, the schema fully documents the parameter behavior including the 'personal' workspace option. The description adds no parameter-specific information, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' with clear resource 'current user's subscription and plan information'. It distinguishes from siblings like get_user_settings or list_teams by specifying subscription-specific return fields (plan type, remaining posts, limits).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While it lacks explicit 'when-not' statements, the description provides clear context by enumerating return values (plan type, remaining posts, account limits), making it obvious this is for billing/quota checks versus general settings or content retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_thread (Grade: A)
Read-only

Get a specific thread by its thread ID. Returns all posts in the thread with their content, media, scheduling info, and analytics.

Parameters (JSON Schema):
- teamId (optional): Optional workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- threadId (required): The thread ID to retrieve
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true, the description adds valuable behavioral context about the response payload: it specifies that all posts are returned with content, media, scheduling info, and analytics. This compensates for the missing output schema. It does not mention rate limits or error conditions, preventing a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes the retrieval action and key identifier, second details the return payload structure. Every word earns its place; no redundant phrases or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description appropriately details the return structure (posts with content, media, analytics). However, it omits any mention of the workspace/team scoping behavior described in the teamId parameter, which is relevant contextual information for multi-tenant usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'thread ID' confirming the required parameter's role, but adds no semantic details beyond the schema regarding the optional teamId parameter or the special 'personal' workspace string value. It meets but does not exceed the schema's documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' with resource 'thread' and clarifies the lookup method 'by its thread ID'. It clearly distinguishes from sibling list operations (list_posts, list_drafts) by emphasizing retrieval of a specific entity rather than a collection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'specific thread by its thread ID', suggesting this is for targeted retrieval when an ID is known. However, it lacks explicit when-to-use guidance or named alternatives (e.g., when to use list_posts vs. this tool) and omits prerequisites like needing the thread ID beforehand.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_thread_follow_up (Grade: B)
Read-only

Get the follow-up (auto-plug) configuration for a specific thread.

Parameters (JSON Schema):
- threadId (required): The thread ID to get follow-up for
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, so safety is covered. The description adds valuable domain context by clarifying that 'follow-up' refers to 'auto-plug' functionality, which helps the agent understand the business logic. However, it omits error handling (e.g., invalid threadId) and response structure details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, efficient sentence with zero waste. Front-loaded with verb 'Get'. Parenthetical '(auto-plug)' packs domain meaning into minimal space without disrupting flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple read-only tool with one required parameter. Given the readOnlyHint annotation and clear schema, the description covers the essential intent. Could be improved by noting the relationship to set_thread_follow_up, but sufficient for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage ('The thread ID to get follow-up for'), establishing baseline 3. The description mentions 'for a specific thread' which aligns with the parameter but doesn't add syntax details, validation rules, or format examples beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Get') and resource ('follow-up (auto-plug) configuration') clearly. The parenthetical '(auto-plug)' provides helpful domain context. Distinguishes from sibling set_thread_follow_up implicitly via 'Get' vs 'Set', though it doesn't explicitly name the alternative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus alternatives like get_thread (general thread data) or set_thread_follow_up (to modify settings). No prerequisites or exclusion criteria mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_user_settings (Grade: A)
Read-only

Get the current user's settings including timezone, date format, auto-repost configuration, and reminder preferences.

Parameters (JSON Schema):

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description complements the readOnlyHint annotation by detailing exactly what data is returned (timezone, date format, etc.), adding valuable context about the payload structure. It does not contradict the read-only annotation and correctly implies a safe read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the action and follows with specific examples. No words are wasted, and the structure effectively communicates both the operation and its scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (no parameters, simple getter), the description adequately compensates for the missing output schema by listing the specific settings fields returned. It appropriately omits error handling details given the straightforward nature of the operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters in the input schema, there are no parameter semantics to explain beyond the baseline. The description appropriately focuses on the return value semantics rather than inventing parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('current user's settings'), and explicitly enumerates the setting types (timezone, date format, auto-repost, reminders) to distinguish it clearly from sibling analytics and content management tools like get_post_analytics or create_post.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying it retrieves the 'current' user's settings, suggesting authentication requirements, but it provides no explicit guidance on when to use this versus similar configuration tools like get_subscription or list_teams, nor does it mention any prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_drafts (Grade: A)
Read-only

List all draft posts (posts that have not been scheduled yet). Returns threads with their posts and content.

Parameters (JSON Schema):
- teamId (optional): Optional workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the readOnlyHint annotation, the description adds valuable behavioral context by specifying the return structure ('Returns threads with their posts and content'), indicating that drafts are organized as threads rather than flat post lists. No contradictions with the read-only annotation exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero redundancy. It is front-loaded with the core purpose (listing drafts) and immediately follows with the return value description. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single optional parameter, read-only list operation) and absence of an output schema, the description appropriately compensates by describing the return structure ('threads with their posts and content'). It sufficiently covers the essential behavior for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'teamId' parameter, the schema adequately documents the input requirements. The description does not add additional parameter semantics, which is acceptable given the complete schema documentation, meeting the baseline score for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List') and resource ('draft posts'), with a precise scope definition ('posts that have not been scheduled yet'). This effectively distinguishes the tool from sibling 'list_posts' by defining the specific state of posts being retrieved.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description defines what constitutes a draft post ('not been scheduled yet'), providing implicit context for when to use the tool, it lacks explicit guidance on when to choose this over 'list_posts' or other sibling tools. No alternative tools or exclusion criteria are named.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_posts (Grade: A)
Read-only

List scheduled and published posts within a date range. Returns threads with their posts, scheduled time, provider info, and analytics. IMPORTANT: Always send from/to as local times WITHOUT a timezone offset (e.g. '2025-03-01T00:00:00'). The server converts to UTC using the user's configured timezone.

Parameters (JSON Schema)

- to (required): End date as local datetime WITHOUT timezone offset (e.g. '2025-03-31T23:59:59'). Do NOT append Z or any offset.
- from (required): Start date as local datetime WITHOUT timezone offset (e.g. '2025-03-01T00:00:00'). Do NOT append Z or any offset.
- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- providerId (optional): Provider ID (from list_providers, field 'id'). If provided, only returns posts for this provider.
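The from/to contract can be sketched with Python's zoneinfo. This mirrors the conversion the description attributes to the server; it is an assumption about that behavior, not the server's actual code:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def local_to_utc(local_str: str, user_timezone: str) -> datetime:
    """Interpret a naive local datetime string in the user's configured
    timezone and convert it to UTC, as the tool description states."""
    naive = datetime.fromisoformat(local_str)
    if naive.tzinfo is not None:
        # the tool contract: never send 'Z' or an explicit offset
        raise ValueError("send a local time without a timezone offset")
    return naive.replace(tzinfo=ZoneInfo(user_timezone)).astimezone(timezone.utc)
```

For example, '2025-03-01T00:00:00' for a user configured to Europe/Berlin (UTC+1 at that date) lands at 2025-02-28T23:00:00 UTC.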
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations indicate readOnlyHint=true, the description adds crucial behavioral details: the return structure ('threads with their posts, scheduled time, provider info, and analytics') and server-side timezone conversion logic ('server converts to UTC using the user's configured timezone') not present in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: purpose/scope first, return values second, critical formatting instruction third. The 'IMPORTANT' flag appropriately emphasizes a non-obvious parameter requirement without bloating the text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter read-only listing tool with no output schema, the description adequately covers return value structure and critical input formatting. It could improve by mentioning pagination behavior if applicable, but otherwise covers the essential operational contract.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by consolidating the critical timezone formatting requirement ('WITHOUT a timezone offset') and explaining the server-side UTC conversion behavior, which is not detailed in the individual parameter schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('List') and clear resource scope ('scheduled and published posts within a date range'), distinguishing it from sibling tools like get_thread (specific retrieval) and list_drafts (unpublished content).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context that this tool operates 'within a date range' for batch retrieval, implying its use case vs. single-entity fetch tools. However, it lacks explicit 'when not to use' guidance or named alternatives like get_thread for specific post retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_providers (Grade: A)
Read-only

List all connected social media accounts (Threads and Bluesky) for the authenticated user. Returns provider ID, platform, username, display name, and profile picture for each account.

Parameters (JSON Schema)

- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
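Since no output schema is published, the shape below is inferred from the description's field list; the field names and account values are assumptions, not real data:

```python
# Result shape assumed from the description (provider ID, platform, username,
# display name, profile picture); all sample values are invented placeholders.
providers = [
    {"id": "prov_123", "platform": "threads", "username": "alice",
     "displayName": "Alice", "profilePicture": "https://example.com/a.png"},
    {"id": "prov_456", "platform": "bluesky", "username": "alice.bsky.social",
     "displayName": "Alice", "profilePicture": "https://example.com/a.png"},
]
# Pick the ID to pass as providerId to tools such as list_posts or create_post.
threads_ids = [p["id"] for p in providers if p["platform"] == "threads"]
```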
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, but the description adds critical behavioral context: supported platforms (Threads and Bluesky) and compensates for the missing output schema by detailing return fields (provider ID, username, display name, profile picture).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences: first establishes action and scope, second details return values. Zero redundancy, front-loaded with the verb, every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple listing tool with one optional parameter, the description is complete. It compensates effectively for the lack of output schema by enumerating returned fields, providing sufficient information for agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the teamId parameter, the schema carries the full burden. The description does not mention the parameter, but given the high schema coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'List' with the clear resource 'connected social media accounts' and names the platforms 'Threads and Bluesky'. It effectively distinguishes itself from siblings like list_posts or list_teams by focusing on provider/account connections rather than content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While it lacks explicit 'when-not-to-use' or named alternatives, the description provides clear context by specifying it returns 'provider ID, platform, username' etc., signaling to use this when account connection metadata is needed versus content (list_posts) or analytics (get_post_analytics).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_teams (Grade: A)
Read-only

List all teams the authenticated user belongs to, including their role and which team is currently active. Use the team ID as the teamId parameter in other tools. To access the personal workspace (no team), pass teamId "personal".

Parameters (JSON Schema)

No parameters
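A hedged sketch of consuming the result, assuming the response marks the currently active team with a boolean field (the `isActive` name is an assumption, since no output schema is published):

```python
def resolve_team_id(teams: list[dict], use_personal: bool = False) -> str:
    """Choose the teamId to pass to other tools; 'personal' is the documented
    sentinel for the personal workspace (no team)."""
    if use_personal:
        return "personal"
    # 'isActive' is an assumed field name for the currently active team
    active = next((t for t in teams if t.get("isActive")), None)
    return active["id"] if active else "personal"
```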

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The readOnlyHint annotation confirms the safe read operation, while the description adds valuable behavioral context about the 'personal' workspace string convention and the specific data returned (roles, active status). It does not contradict the read-only annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences deliver maximum information density: the first defines scope and return values, while the second provides critical integration details about the personal workspace. No extraneous content is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter read-only listing tool without output schema, the description is complete. It documents the return structure (teams with roles and active flags), explains the personal workspace edge case, and describes how to use results with sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero input parameters, the baseline score applies. The description appropriately focuses on output semantics rather than inventing parameter documentation, mentioning teamId only as context for consuming the results in other tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'List all teams the authenticated user belongs to' with specific details about returned data (role, active status). It clearly distinguishes this team-listing function from content-oriented siblings like create_post or list_posts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear integration guidance by explaining to 'Use the team ID as the teamId parameter in other tools' and documents the special 'personal' workspace convention. While it lacks explicit 'when not to use' alternatives, it effectively contextualizes how to consume the output.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_time_slots (Grade: A)
Read-only

List all configured posting time slots for a provider. Time slots define the preferred times when posts should be scheduled. Each slot has an hour, minute, and array of weekdays (0=Sunday, 6=Saturday).

Parameters (JSON Schema)

- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- providerId (required): The provider ID (from list_providers, field 'id')
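The 0=Sunday convention differs from Python's 0=Monday `weekday()`, which is easy to trip over. A sketch of computing the next occurrence of a slot, assuming the `{hour, minute, weekdays}` shape the description gives:

```python
from datetime import datetime, timedelta

def next_slot_occurrence(slot: dict, now: datetime) -> datetime:
    """Find the next datetime matching a slot of the described shape:
    {'hour': int, 'minute': int, 'weekdays': [0=Sunday .. 6=Saturday]}."""
    for offset in range(8):
        candidate = (now + timedelta(days=offset)).replace(
            hour=slot["hour"], minute=slot["minute"], second=0, microsecond=0)
        # Python's weekday() is 0=Monday; shift to the slot's 0=Sunday convention.
        sunday_based = (candidate.weekday() + 1) % 7
        if sunday_based in slot["weekdays"] and candidate > now:
            return candidate
    raise ValueError("slot has no weekdays configured")
```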
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish readOnlyHint=true, and the description adds valuable domain context: time slots define preferred scheduling times and the return structure includes hour, minute, and weekday arrays with the 0=Sunday mapping. This compensates for the missing output schema by previewing the data structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficiently structured sentences: purpose first, domain definition second, and data structure third. No redundant words or filler. The critical information (what it retrieves and what the data looks like) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately compensates by describing the time slot object structure (hour, minute, weekdays). However, it doesn't clarify if the return is an array or wrapped object, and doesn't mention pagination or empty result handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both providerId (referencing list_providers) and teamId (personal workspace option). The description mentions 'for a provider' aligning with providerId but adds no syntax, format, or usage details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb 'List' and resource 'posting time slots' with scope 'for a provider'. It effectively distinguishes this from sibling list operations (list_posts, list_providers, list_drafts) by specifying it retrieves configuration data rather than content or analytics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description explains what time slots are (preferred posting times), it lacks explicit guidance on when to invoke this tool versus alternatives, or prerequisites for using the data. The relationship to posting workflows (e.g., 'use before scheduling posts') is implied but not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reschedule_thread (Grade: A)

Reschedule a thread to a new date/time. Updates all posts in the thread. IMPORTANT: Always send scheduledAt as a local time WITHOUT a timezone offset. The server converts to UTC using the user's configured timezone.

Parameters (JSON Schema)

- teamId (optional): Workspace/team ID. If not provided, uses the active workspace from user settings. Pass "personal" to use the personal workspace (no team).
- threadId (required): The thread ID to reschedule
- scheduledAt (required): Local datetime WITHOUT timezone offset (e.g. '2025-03-15T09:00:00'). Do NOT append Z or any offset; the server converts to UTC using the user's timezone.
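A hedged example of the call arguments; `thr_42` is a made-up placeholder ID, and the format checks mirror the tool's IMPORTANT note rather than any documented client API:

```python
# Illustrative reschedule_thread arguments; threadId is a placeholder.
arguments = {
    "threadId": "thr_42",
    "scheduledAt": "2025-03-15T09:00:00",  # local time: no 'Z', no '+00:00'
}
# Guard against the documented pitfall of appending an offset.
assert not arguments["scheduledAt"].endswith("Z")
assert "+" not in arguments["scheduledAt"]
```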
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable behavioral context beyond annotations: discloses cascading effect ('Updates all posts in the thread') and critical server-side timezone conversion logic ('server converts to UTC using the user's configured timezone'). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, perfectly front-loaded. First states purpose, second clarifies scope/impact, third highlights critical technical requirement. Zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a mutation tool without output schema. Covers the essential behavioral gotcha (timezone handling) and cascading effects. Lacks return value documentation, but per rules this is acceptable when no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds emphasis on the critical timezone constraint ('IMPORTANT: Always send...') which reinforces the schema description for scheduledAt, helping prevent invocation errors. Does not mention teamId semantics, but schema covers this adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Reschedule') + resource ('thread') + scope ('Updates all posts in the thread'). Distinguishes from sibling edit_thread (content vs. timing) and get_thread (read-only).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through 'Reschedule' but offers no explicit when-to-use guidance against alternatives (e.g., this vs. edit_thread for timing changes) and names no prerequisites or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_thread_follow_up (Grade: A)

Set or update the follow-up (auto-plug) for a thread. The follow-up is a reply that gets posted after the original post reaches engagement thresholds. IMPORTANT: This creates an AutoPlug record (not a DefaultAutoPlug template). The autoPlugId on posts references this record.

Parameters (JSON Schema)

- text (optional): The follow-up reply text content
- media (optional): Media for the follow-up. Provide public URLs; the server downloads, validates, and uploads them.
- images (optional): Image URLs for the follow-up
- threadId (required): The thread ID to set the follow-up for
- isEnabled (required): Whether the follow-up is enabled
- numberOfLikes (optional): Number of likes threshold
- numberOfReplies (optional): Number of replies threshold
- numberOfReposts (optional): Number of reposts threshold
- delayTimeMinutes (optional): Delay in minutes before posting the follow-up
- numberOfLikesEnabled (optional): Whether the likes threshold trigger is enabled
- numberOfRepliesEnabled (optional): Whether the replies threshold trigger is enabled
- numberOfRepostsEnabled (optional): Whether the reposts threshold trigger is enabled
- delayTimeMinutesEnabled (optional): Whether the time delay trigger is enabled
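A hedged example pairing each threshold with its enable flag, as the parameter list suggests; the thread ID and numeric values are illustrative only:

```python
# Illustrative set_thread_follow_up arguments; thr_42 is a placeholder ID.
arguments = {
    "threadId": "thr_42",
    "isEnabled": True,
    "text": "Enjoyed this thread? Reply and let me know.",
    # Each threshold appears to take effect only when its *Enabled flag is on.
    "numberOfLikes": 50,
    "numberOfLikesEnabled": True,
    "delayTimeMinutes": 30,
    "delayTimeMinutesEnabled": True,
}
```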
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnlyHint: false, destructiveHint: false), the description adds valuable behavioral context: it explains the trigger mechanism ('engagement thresholds'), the data model impact ('creates an AutoPlug record'), and the relationship to posts ('autoPlugId on posts references this'). It does not contradict the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences with zero waste. The first states the action, the second explains the business logic, and the third provides critical data model clarification. Information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (13 parameters, multiple trigger types with enable flags) and lack of output schema, the description provides adequate conceptual completeness by explaining the auto-plug mechanism and record type. It could optionally note that at least one trigger should be enabled, but the schema coverage handles individual parameter semantics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds semantic value by explaining that parameters configure a 'reply' (text/media) that triggers after 'engagement thresholds' (numberOfLikes/Replies/Reposts) and can have 'delay' (delayTimeMinutes), providing conceptual glue for the 13 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Set or update') and resource ('follow-up/auto-plug for a thread'), and defines what a follow-up is ('reply that gets posted after... engagement thresholds'). It explicitly distinguishes itself from sibling template tools by clarifying that it creates an 'AutoPlug record (not a DefaultAutoPlug template)'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description effectively differentiates this from template management by stating it creates an AutoPlug record, not a DefaultAutoPlug template. However, it misses explicit guidance on when to use the sibling 'get_thread_follow_up' for retrieval versus this tool for modification, though the distinction is somewhat implied by the naming.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
