QiQ Social
Server Details
Remote MCP server implementing the Streamable HTTP transport with 25 tools for AI assistants. Enables programmatic management of multi-platform content publishing — create posts, run automations, manage RSS feeds, generate hashtags, search images, and publish to 13+ platforms (Instagram, LinkedIn, X, WordPress, etc.). Authenticated via Bearer token.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.3/5 across 25 of 25 tools scored.
Most tools have distinct purposes, but some potential confusion exists between 'create_automation' and 'run_automation' (both involve automation execution), and 'text_assist' and 'generate_hashtags' (both involve content generation). The descriptions help clarify these distinctions, but the overlap could cause occasional misselection.
All tools follow a consistent verb_noun naming pattern (e.g., create_automation, list_posts, update_post). The verbs are clear and predictable, with no mixing of conventions like camelCase or inconsistent styles, making the tool set highly readable and organized.
With 25 tools, the count feels heavy for a social media automation server. While the domain is broad, some tools could be consolidated (e.g., the list operations could be grouped), and the high count risks overwhelming agents, though it is not extreme.
The tool set provides comprehensive coverage for social media automation, including CRUD operations for automations, posts, and sources, along with publishing, content assistance, and workspace management. There are no obvious gaps; agents can handle the full lifecycle from creation to publishing and monitoring.
Available Tools
25 tools

create_automation (Grade: B)
Create a new automation. Types: write_social, write_blog, rss_social, rss_blog, rss_digest_blog. Schedule uses cron syntax (e.g. '0 9 * * 1' for Mondays at 9am).
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Automation type | |
| title | No | Automation title | |
| topic | No | Topic for write-type automations (required for write_social, write_blog) | |
| prompt | No | Custom prompt/instructions for content generation | |
| schedule | No | Cron schedule expression (e.g. '0 9 * * 1') | |
| channel_ids | No | Channel IDs to publish to | |
| auto_publish | No | Auto-publish generated content | false |
| workspace_id | Yes | Workspace ID | |
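Putting the table together, a hypothetical tools/call payload for a write-type automation might look like the following sketch. The title, topic, and workspace ID are placeholders; the cron string follows the "Mondays at 9am" example from the tool description:

```python
def build_create_automation_call(workspace_id: str) -> dict:
    # JSON-RPC tools/call envelope for create_automation. A topic is supplied
    # here because the type is write_social (per the parameter table, topic
    # is required for write_social and write_blog).
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "create_automation",
            "arguments": {
                "type": "write_social",
                "title": "Monday tips",
                "topic": "productivity",      # required for write-type automations
                "schedule": "0 9 * * 1",      # cron: every Monday at 09:00
                "auto_publish": False,        # generated content stays a draft
                "workspace_id": workspace_id,
            },
        },
    }

call = build_create_automation_call("ws_placeholder")
```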
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. While it explains cron syntax interpretation ('Mondays at 9am'), it fails to disclose critical behavioral traits: whether creation triggers immediate execution, what persistent side effects occur, error conditions (e.g., invalid topic combinations), or the relationship to manual run_automation triggers.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient at two sentences plus one fragment. Front-loaded with the core action ('Create a new automation'), followed immediately by type enumeration and schedule syntax guidance. Zero redundancy or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% schema coverage for 8 parameters, the description meets minimum viability but leaves significant gaps for a complex scheduling tool. Missing: execution model (does it run immediately on creation?), interaction patterns with channel_ids, and guidance on the content generation lifecycle (auto_publish implications).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds interpretive value by explaining the cron example ('Mondays at 9am'), but merely enumerates the type enum values which are already explicit in the schema. Does not clarify conditional requirements (e.g., topic being required only for write-type automations) beyond what the schema states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Create') and resource ('automation') clearly. Listing the five specific automation types (write_social, rss_digest_blog, etc.) helps distinguish this from sibling tools like create_post or create_source, though it doesn't explicitly differentiate from update_automation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists valid automation types but provides no explicit guidance on when to use this tool versus alternatives like run_automation (immediate execution) or update_automation (modifying existing). No mention of prerequisites such as requiring configured channels or RSS sources before invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_post (Grade: B)
Create a draft post for a specific channel. Content structure depends on the channel placement.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Post content | |
| channel_id | Yes | Channel ID to publish to | |
| workspace_id | Yes | Workspace ID | |
| automation_id | No | Link the post to an automation | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that this creates a 'draft' (not immediately published) and that 'content structure depends on the channel placement' (important behavioral trait). However, it lacks details on side effects, persistence, authentication requirements, or return values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of exactly two sentences with zero waste. The first establishes the core purpose and the second provides essential context about the variable content structure. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex nested schema with channel-dependent fields, the description adequately flags the content variability. However, with no output schema provided, the description fails to mention what the tool returns (e.g., post ID, confirmation) or success behavior, leaving a gap in the agent's ability to handle the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema description coverage (baseline 3), the description adds crucial context that 'content structure depends on the channel placement'. This explains the heterogeneous nature of the content object (which contains Discord-specific fields like 'embed' and blog-specific fields like 'tags'), helping the agent understand why certain fields exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a 'draft post' (specific verb + resource) and mentions it targets a 'specific channel'. The inclusion of 'draft' helps distinguish this from the sibling 'publish_post' tool, though it doesn't explicitly contrast the two.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like 'publish_post' or 'update_post'. While 'draft' implies it's for pre-publication saving, there are no explicit when/when-not conditions or prerequisites stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_source (Grade: B)
Add an RSS feed source to a workspace for use with rss-type automations
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | RSS feed URL | |
| workspace_id | Yes | Workspace ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only states the basic creation action. It omits critical behavioral details: whether the operation is idempotent, if the RSS feed is validated during creation, immediate side effects (fetching?), or failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single front-loaded sentence efficiently conveys verb, resource, destination, and purpose without redundancy. Every clause earns its place in guiding tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 2-parameter creation tool with complete schema coverage. However, given zero annotations and no output schema, it lacks necessary behavioral context about what 'add' entails (verification, immediate sync, error states) that would make it fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% ('Workspace ID', 'RSS feed URL'), establishing baseline 3. The description reinforces these semantics by mentioning 'workspace' and 'RSS feed source' but adds no additional constraints, format examples, or validation rules beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Add' with clear resource 'RSS feed source' and scope 'to a workspace'. It effectively distinguishes from sibling tools like create_post or create_automation by specifying the RSS context, though 'source' could be more explicitly defined against list_sources or delete_source.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides implied context ('for use with rss-type automations') suggesting when to use it, but lacks explicit guidance on prerequisites (e.g., whether to call validate_rss first), error conditions, or when NOT to use it versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_automation (Grade: B)
Delete an automation and its outputs
| Name | Required | Description | Default |
|---|---|---|---|
| automation_id | Yes | Automation ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It adds valuable behavioral detail that 'outputs' are also deleted (cascading behavior), but fails to disclose irreversibility, permission requirements, or side effects on active executions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse (six words) but efficient. No wasted text, though arguably too brief for a destructive operation. The single sentence is front-loaded with the critical action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the tool has only one parameter (low complexity), the lack of annotations and output schema combined with the destructive nature means the description should include safety warnings or return value details, which are absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (automation_id is documented), establishing baseline 3. The description adds no parameter semantics beyond the schema, nor does it explain ID format or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Delete) and resource (automation), and distinguishes from sibling delete tools (delete_post, delete_source) by targeting 'automation'. The addition of 'and its outputs' clarifies scope beyond just the configuration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus update_automation, run_automation, or alternatives. No mention of prerequisites (e.g., automation state) or warnings about data loss.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_post (Grade: A)
Delete a post. Only works for draft posts (status NONE).
| Name | Required | Description | Default |
|---|---|---|---|
| post_id | Yes | Post ID | |
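Since delete_post only succeeds on drafts, an agent can guard the call by fetching the post first with get_post. A sketch of such a guard, assuming the post data exposes a `status` field matching the "status NONE" wording in the description (no output schema is published, so the field name is an assumption):

```python
def can_delete_post(post: dict) -> bool:
    # Per the delete_post description, only drafts (status NONE) are
    # deletable. 'status' as the field name is assumed, not documented.
    return post.get("status") == "NONE"
```

An agent would call this on the get_post result and fall back to other handling (or surface an error) for published posts.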
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully documents the draft-status precondition, but fails to disclose critical traits for a destructive operation: whether deletion is permanent (hard delete vs. soft delete), error behavior when called on non-draft posts, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first states the action, the second states the critical constraint. Every word earns its place and the constraint is appropriately front-loaded immediately after the action verb.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a single-parameter destructive operation with no output schema and no annotations, the description provides the essential draft-only constraint but is missing permanence warnings and error handling details expected for delete operations. Adequate but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'post_id' parameter, the schema already fully documents the input. The description does not add additional semantic details (e.g., where to find the ID, format requirements), which aligns with the baseline score of 3 for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (delete) and resource (post). The constraint 'Only works for draft posts' effectively distinguishes this from general deletion expectations and clarifies scope limitations, though it could explicitly differentiate from sibling delete_automation/delete_source tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage constraints stating it 'Only works for draft posts (status NONE),' which clearly defines when to use the tool. However, it lacks guidance on what to do if the user needs to remove a published post (e.g., whether to use update_post first or if deletion is impossible).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_source (Grade: C)
Remove an RSS feed source
| Name | Required | Description | Default |
|---|---|---|---|
| source_id | Yes | Source ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only states the action ('Remove') but fails to specify if the deletion is permanent, what happens to associated posts/data, or error conditions (e.g., if source_id doesn't exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (five words) and front-loaded with the action. While efficient, it borders on under-specification for a destructive operation; an additional clause regarding permanence or side effects would improve appropriateness without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a destructive operation with no output schema and no safety annotations, the description is insufficient. It omits critical context such as whether deletion is irreversible, impacts on existing posts created from the source, or success/failure indicators.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with the 'source_id' parameter already described as 'Source ID'. The description adds no additional semantic context about the parameter format or how to obtain valid source IDs, meriting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Remove') and specific resource ('RSS feed source'), which distinguishes it from sibling deletion tools like delete_automation or delete_post. However, it lacks the exemplary specificity of noting scope or constraints that would earn a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites (e.g., whether dependent automations must be deleted first) or warnings about the deletion scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_hashtags (Grade: C)
Generate platform-optimized hashtags for content
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of hashtags | 10 |
| content | Yes | Post content to generate hashtags for | |
| platform | Yes | Target platform | |
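As with the other tools, a hashtag request is just a tools/call payload. A sketch showing the documented default of 10 hashtags (the platform value is a placeholder, since the listing does not enumerate the accepted platform names):

```python
def build_generate_hashtags_call(content: str, platform: str, count: int = 10) -> dict:
    # count mirrors the server-side default of 10 from the parameter table
    return {
        "jsonrpc": "2.0",
        "id": 3,
        "method": "tools/call",
        "params": {
            "name": "generate_hashtags",
            "arguments": {"content": content, "platform": platform, "count": count},
        },
    }

call = build_generate_hashtags_call("New blog post is live!", "instagram")
```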
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to explain what 'platform-optimized' means (e.g., different strategies/counts per platform), whether the operation is deterministic, rate limits, or what format the results take (array vs object).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficient with no wasted words, front-loading the action and resource. However, given the lack of annotations and output schema, it is arguably undersized—missing critical behavioral context that would require a second sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the schema is fully documented, the description lacks necessary context given the absence of annotations and output schema. It fails to disclose the return format (hashtag strings vs metadata objects), mutation safety, or whether results are cached/regeneratable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents all parameters (content, platform, count). The description maps loosely to these ('content', 'platform-optimized') but adds no semantic depth beyond the schema descriptions, such as explaining platform-specific behavior or default values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action ('Generate') and resource ('hashtags'), with qualifying context ('platform-optimized', 'for content'). It distinguishes from siblings like text_assist or create_post by specifying the hashtag generation use case, though it doesn't elaborate on what 'platform-optimized' entails.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., text_assist for general caption help), nor does it mention prerequisites like having content drafted first or constraints on platform availability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_automation (Grade: C)
Get detailed information about a specific automation
| Name | Required | Description | Default |
|---|---|---|---|
| automation_id | Yes | Automation ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description fails to clarify whether this operation triggers the automation execution or merely retrieves metadata (critical given the presence of run_automation). It does not disclose safety characteristics, rate limits, or what constitutes 'detailed information'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence of seven words with no redundancy. However, it may be overly terse given the lack of annotations and output schema, sacrificing necessary context for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description identifies the resource but fails to characterize the return value or response structure (no output schema exists to compensate). For a retrieval tool among many sibling automation operations, it lacks differentiation regarding what specific data is retrieved versus what other tools provide.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage for automation_id, documenting it as 'Automation ID'. The description adds no additional semantic context about the parameter's format or source, meeting the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get detailed information about a specific automation' clearly identifies the verb (Get), resource (automation), and scope (specific/detailed). It implicitly distinguishes from list_automations by specifying 'specific' rather than plural, though it does not explicitly name the sibling alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus list_automations or run_automation. It omits prerequisites such as needing to obtain the automation_id from a list operation first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_post (Grade: B)
Get detailed information about a specific post including its content and metrics
| Name | Required | Description | Default |
|---|---|---|---|
| post_id | Yes | Post ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return value categories (content and metrics) which compensates partially for missing output schema, but omits operational details like error handling, caching behavior, or idempotency characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently packs purpose, resource, and return value hints. No filler words. Front-loaded with action verb. Appropriate length for tool complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 1-parameter retrieval tool without output schema, the description is adequate. It hints at return structure (content, metrics) and covers the core purpose. Could be improved with error condition notes, but sufficient for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (post_id: 'Post ID'), establishing baseline 3. Description implies the parameter identifies a specific resource but adds no semantic detail about the parameter format, validation rules, or sourcing of the ID beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Get) and resource (post) with scope (detailed information, content, metrics). Implies singular retrieval versus list_posts, but does not explicitly differentiate from sibling operations like update_post or publish_post.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives (e.g., list_posts for searching, get_automation for workflow status). No mention of prerequisites or conditions where this call would fail (e.g., invalid post_id).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_subscription_status (Grade: B)
Check the current subscription plan and usage limits
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Check' implies a read-only operation, the description doesn't explicitly confirm it is safe/non-destructive, doesn't mention rate limiting, caching behavior, or what format the usage limits are returned in (JSON structure, units, etc.).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single, efficient sentence that immediately states the function without filler words. Every word earns its place in describing the tool's scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (zero parameters, no nested objects, simple conceptual domain), the description provides sufficient context for an agent to understand what information will be returned. While an output schema would be ideal, the terms 'subscription plan' and 'usage limits' are sufficiently self-explanatory for this domain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per evaluation guidelines, tools with zero parameters receive a baseline score of 4, as there are no parameter semantics to clarify beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource being accessed (subscription plan and usage limits) using the verb 'Check'. It effectively distinguishes itself from sibling content-management tools (create_post, list_automations, etc.) by indicating this is an account/billing status operation rather than a content operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to invoke this tool versus alternatives, nor are prerequisites mentioned. For example, it doesn't indicate whether this should be called before write operations to validate limits, or if it's purely for display purposes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_assets (Grade: C)
List media assets in a workspace's library
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 50) | |
| label_id | No | Filter by label | |
| workspace_id | Yes | Workspace ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, yet it mentions nothing about pagination behavior, return format, rate limits, or authorization requirements. The term 'List' implies a read-only operation, but its safety properties are never explicitly confirmed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficiently worded without redundancy, but given the absence of annotations and output schema, the description is inappropriately brief. It front-loads the action but fails to earn a higher score due to insufficient information density for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema or annotations to provide behavioral context, the description should explain what constitutes a media asset, what fields are returned, or how pagination behaves. As written, it leaves critical gaps for an agent attempting to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, establishing a baseline of 3. The description mentions 'workspace's library', reinforcing the workspace_id parameter, but adds no clarifying context about the label_id filter or the limit parameter's pagination behavior beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a clear verb ('List') and resource ('media assets'), and scopes it to 'a workspace's library'. However, it does not explicitly differentiate from sibling list tools like list_posts or list_sources, leaving ambiguity about what constitutes a 'media asset' in this system.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., list_posts for published content), nor are there prerequisites mentioned beyond the implicit workspace_id requirement. The agent must infer usage from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
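To show how the optional filters compose, a small helper (a sketch; the function name and argument values are invented) that assembles the `arguments` object for `list_assets`:

```python
def list_assets_args(workspace_id, label_id=None, limit=None):
    """Build arguments for a list_assets call, omitting unused filters."""
    args = {"workspace_id": workspace_id}  # the only required field
    if label_id is not None:
        args["label_id"] = label_id        # narrow results to one label
    if limit is not None:
        args["limit"] = limit              # server default is 50 when omitted
    return args

minimal = list_assets_args("ws_1")
filtered = list_assets_args("ws_1", label_id="label_9", limit=10)
```

Omitting `limit` relies on the documented server-side default of 50; whether results beyond the limit are reachable at all is exactly the pagination gap the evaluation notes.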
list_automations (Grade: B)
List automations in a workspace. Returns automation configs with their output channels.
| Name | Required | Description | Default |
|---|---|---|---|
| workspace_id | Yes | Workspace ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It partially satisfies this by describing the return payload ('automation configs with their output channels'), but fails to mention safety characteristics (read-only nature), pagination behavior, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first states the operation and scope; the second states the return value. Information is front-loaded and appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter input and lack of output schema, the description is reasonably complete. It compensates for the missing output schema by describing what the tool returns. Could be improved with pagination or filtering limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with 'workspace_id' fully documented. The description maps the parameter to the operation ('in a workspace'), confirming the relationship, but adds no additional semantic detail (e.g., format requirements, where to find the ID) beyond the schema baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the specific action (List) and resource (automations) clearly, including scope (in a workspace). The mention of 'automation configs with their output channels' adds valuable specificity about the resource structure. However, it does not explicitly differentiate from sibling get_automation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like get_automation (single retrieval) or run_automation (execution). No prerequisites or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_channels (Grade: B)
List available publishing channels in a workspace. Channels represent specific destinations like a Facebook page, Telegram group, or WordPress blog.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | No | Filter by platform (e.g. 'instagram', 'linkedin', 'x') | |
| workspace_id | Yes | Workspace ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the conceptual model (what channels represent) but provides no operational details such as pagination behavior, rate limits, permission requirements, or error conditions. The agent cannot determine if this is read-only or what the return structure looks like.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero redundancy. The first front-loads the action and resource, while the second provides essential domain context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter listing tool with complete schema documentation, the description adequately covers the domain concept by clarifying that 'channels' means publishing destinations. However, given the lack of annotations and output schema, it omits operational context like pagination, filtering behavior, or return value structure that would help the agent handle responses.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting both workspace_id and platform parameters. The description mentions 'in a workspace' which loosely references the required parameter, but adds no syntax details, format constraints, or usage examples beyond what the schema already provides. Baseline 3 is appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (List) and resource (publishing channels) with scope (in a workspace). The second sentence provides concrete examples (Facebook page, Telegram group) that distinguish 'channels' from similar concepts like 'connections' or 'workspaces'. However, it doesn't explicitly differentiate from the sibling 'list_connections' tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like 'list_connections' or 'list_workspaces'. While the term 'publishing channels' implies use in content distribution workflows (supported by siblings like 'publish_post'), there are no explicit when-to-use or when-not-to-use instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_connections (Grade: B)
List platform connections (OAuth integrations) in a workspace
| Name | Required | Description | Default |
|---|---|---|---|
| workspace_id | Yes | Workspace ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions 'OAuth integrations' to contextualize the resource type, it fails to disclose safety properties (read-only vs destructive), permission requirements, rate limits, pagination behavior, or return format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that is front-loaded with the action verb ('List') and contains zero redundant words. Every word serves to specify the resource or scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 required parameter, no nested objects, no output schema), the description is minimally adequate. It successfully clarifies that 'connections' means OAuth integrations, which is essential context, but lacks details about what the response contains or pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'workspace_id' parameter, the baseline is 3. The description maps to this parameter with 'in a workspace' but adds no additional semantic detail, validation rules, or format guidance beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('List'), clarifies that 'connections' refers to 'platform connections (OAuth integrations)', and scopes the operation to 'a workspace'. This distinguishes it from sibling list tools (list_automations, list_posts, etc.) by specifying the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states what the tool does but provides no guidance on when to use it versus alternatives (e.g., when to use list_connections vs list_sources), nor does it mention prerequisites or conditions for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_labels (Grade: B)
List labels in a workspace for organizing content
| Name | Required | Description | Default |
|---|---|---|---|
| workspace_id | Yes | Workspace ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation, whether results are paginated, what the return format contains, or any rate limiting concerns. The agent has no signal about side effects or safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the verb ('List') and wastes no words. Every component serves a purpose: the action, the resource, the scope, and the functional context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, 100% schema coverage) and lack of output schema, the description is minimally viable—it explains what the tool does at a basic level. However, without annotations or an output schema, it should disclose the read-only nature and return structure to be truly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents the workspace_id parameter as 'Workspace ID'. The description mentions 'in a workspace' which aligns with the parameter but adds no additional semantic detail about valid ID formats, how to obtain workspace IDs, or constraints. This meets the baseline expectation when the schema is fully self-documenting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List labels') and scope ('in a workspace'), distinguishing it from sibling tools like list_posts or list_assets. The phrase 'for organizing content' provides helpful context about the label function, though it could be more specific about what content types are organized.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites (beyond implying a workspace context) or common use cases. There is no mention of related tools like create_post or list_workspaces that might be used in conjunction with this.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_posts (Grade: A)
List posts in a workspace with optional filters. Status values: NONE (draft), PENDING, RUNNING, SUCCESS, FAILURE.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 50) | |
| status | No | Filter by post status | |
| workspace_id | Yes | Workspace ID | |
| automation_id | No | Filter by automation | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable enum semantics (mapping NONE to 'draft'), but fails to disclose safety properties (read-only vs. mutation), pagination behavior beyond the limit parameter, or auth requirements implied by workspace scoping.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficiently structured sentences with zero waste. The first front-loads the core purpose, while the second provides critical parameter semantics. Every sentence earns its place with high information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameter structure (4 flat parameters, 100% schema coverage) and absence of output schema, the description adequately covers the tool's function. However, it could improve by mentioning default sorting behavior or confirming the read-only nature of the operation given the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema description coverage (baseline 3), the description adds meaningful value by explaining business logic: specifically that status value 'NONE' corresponds to 'draft' status. This semantic mapping aids agent comprehension beyond raw enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists posts within a workspace scope and mentions optional filters. It uses specific verbs ('List') and resource ('posts'), distinguishing it from sibling 'get_post' through pluralization and filter mention, though it could explicitly contrast with single-item retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_post' or 'list_automations'. It lacks prerequisites (e.g., workspace_id requirements) and exclusion criteria for when not to use this endpoint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
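The status enum, including the NONE-means-draft mapping the description supplies, can be captured directly in a caller. A sketch with invented IDs; the validation logic is illustrative, not server behavior:

```python
# Status values from the tool description; NONE is the draft state.
POST_STATUSES = {"NONE", "PENDING", "RUNNING", "SUCCESS", "FAILURE"}
DRAFT = "NONE"

def list_posts_args(workspace_id, status=None, automation_id=None, limit=50):
    """Build arguments for list_posts; unknown status values fail fast."""
    if status is not None and status not in POST_STATUSES:
        raise ValueError(f"unknown status: {status}")
    args = {"workspace_id": workspace_id, "limit": limit}
    if status is not None:
        args["status"] = status
    if automation_id is not None:
        args["automation_id"] = automation_id
    return args

# Fetch drafts: note the agent must know to pass "NONE", not "DRAFT".
drafts = list_posts_args("ws_1", status=DRAFT)
```

The NONE/draft aliasing is precisely the kind of semantic mapping the description earns credit for; without it, an agent would plausibly send a nonexistent "DRAFT" value.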
list_sources (Grade: B)
List RSS feed sources in a workspace
| Name | Required | Description | Default |
|---|---|---|---|
| workspace_id | Yes | Workspace ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies a read-only operation but fails to disclose pagination behavior, return format, rate limits, or whether deleted/archived sources are included, lacking critical behavioral context for a data retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Front-loaded with action verb and immediately comprehensible. Appropriate length for the information conveyed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple single-parameter list operation with full schema coverage, but minimal given lack of annotations and output schema. Does not describe what fields/attributes are returned for each source, which would help compensate for missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with 'Workspace ID' clearly defined. The description's phrase 'in a workspace' aligns with the workspace_id parameter but adds no semantic value beyond the schema description (no format details, examples, or validation rules). The baseline of 3 is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('List'), resource ('RSS feed sources'), and scope ('in a workspace'). It effectively distinguishes from siblings like list_posts, list_automations, and list_channels by specifying 'RSS feed sources' rather than generic 'sources'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., validate_rss or get_subscription_status). No mention of prerequisites beyond the implicit workspace context. Usage must be inferred from the resource name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_workspaces (Grade: A)
List all workspaces the authenticated user has access to
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable auth context ('authenticated user') defining the scope boundary, but fails to disclose read-only nature, pagination behavior, or return structure that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action verb 'List', zero redundancy. Every word serves to define scope (all workspaces) and authorization boundary (authenticated user).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter tool but lacks description of return values (workspace IDs, names, etc.) which would be helpful given no output schema exists. Missing explicit mention that this is typically an entry point for discovering workspaces before using other tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. According to scoring rules, 0 params establishes a baseline of 4. The description appropriately requires no additional parameter explanation since there are none to describe.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'List' with clear resource 'workspaces' and scope 'the authenticated user has access to'. It clearly distinguishes from sibling list_* tools (list_posts, list_automations, etc.) by specifying the workspace domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by referencing 'authenticated user', suggesting it's a discovery tool for available workspaces. However, it lacks explicit guidance on when to use this versus operating directly with workspace IDs, or how it relates to the workflow with other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
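Because it takes no parameters, this tool has the simplest call envelope in the set, which is one reason it works as a discovery entry point before any workspace-scoped call. A sketch (the envelope follows the standard MCP `tools/call` shape; the workspace ID below is a placeholder, not a real value):

```python
def tools_call(name, arguments=None, request_id=1):
    """Generic MCP tools/call envelope (sketch)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments or {}},
    }

# Discovery first: no arguments needed.
discover = tools_call("list_workspaces")

# A follow-up call would then use a workspace ID taken from the response.
follow_up = tools_call("list_posts", {"workspace_id": "ws_from_response"})
```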
media_assist (Grade: C)
Search for stock images or generate AI images for posts
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query or image description | |
| action | Yes | Search stock photos or generate with AI | |
| workspace_id | Yes | Workspace ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description omits critical behavioral details: return format (URLs vs asset IDs), whether generated images persist to list_assets, rate limits, or the cost implications of AI generation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is efficient and front-loaded with key verbs. However, extreme brevity contributes to informational gaps given the lack of supporting annotations or output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Absent output schema and annotations, the description fails to disclose what the tool returns (image metadata, binary data, IDs) or side effects (workspace asset creation). Insufficient for safe invocation of a generation-capable tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing the baseline of 3. The description reinforces the action parameter's dual modes but adds no syntax guidance, example queries, or semantic constraints beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific actions (search/generate) and resource (images) with context (for posts). Clearly distinguishes from text_assist (text generation) and create_post (post creation) siblings by focusing on visual media acquisition.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists the two available modes (search vs generate) but provides no guidance on selection criteria between them, or when to use this tool versus list_assets for retrieving existing workspace media.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
publish_post (Grade: A)
Publish a draft post to its platform. The post will be sent to the channel it was created for.
| Name | Required | Description | Default |
|---|---|---|---|
| post_id | Yes | Post ID to publish | |
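The parameter table maps directly onto an MCP `tools/call` request over the Streamable HTTP transport. A minimal sketch, where the `id` and the post ID are illustrative placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "publish_post",
    "arguments": { "post_id": "post_123" }
  }
}
```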
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable routing context ('sent to the channel it was created for') but omits mutation details like whether the draft is consumed, if the operation is reversible, or error conditions for invalid post states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the core action, and the second adds essential routing context without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema and simple structure, the description is nearly complete. It could be improved by clarifying the state transition (draft to published), but adequately covers the essential invocation context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage describing 'post_id' as 'Post ID to publish,' the description adds semantic value by contextualizing that this should be a draft post ID, not just any post ID.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb (publish) and resource (draft post), clearly distinguishing this from sibling tools like create_post (which creates drafts) and update_post (which modifies existing posts).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying 'draft post,' suggesting this tool is for publishing existing drafts rather than creating new content. However, it lacks explicit when-to-use guidance or named alternatives (e.g., 'use create_post first to create a draft').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
run_automation (Grade: A)
Trigger content generation for an automation immediately. Creates posts based on the automation config.
| Name | Required | Description | Default |
|---|---|---|---|
| automation_id | Yes | Automation ID to run | |
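Sketched as an MCP `tools/call` request (the automation ID is a placeholder; a real ID would typically come from a prior list_automations call):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "run_automation",
    "arguments": { "automation_id": "auto_456" }
  }
}
```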
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the mutation side effect ('Creates posts'), but lacks details on return values, error conditions, idempotency, authentication requirements, or what happens if the automation is already running.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero redundancy. Front-loaded with the action verb 'Trigger' and immediately clarifies the outcome ('Creates posts'). Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter mutation tool without output schema, the description adequately covers core functionality. However, it could be improved by indicating what the tool returns (e.g., created post IDs, success status) or common error scenarios, particularly since the 'additionalProperties: false' constraint will reject any extra arguments.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (the single automation_id parameter is documented in the schema). The description implies the parameter by referencing 'an automation' but adds no additional semantics like ID format, validation rules, or where to obtain valid IDs. Baseline 3 is appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Trigger', 'Creates') and clearly identifies the resource (automation/content generation). It distinguishes from siblings like create_automation (configures vs executes) and create_post (manual vs automation-driven) by specifying 'based on the automation config'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The word 'immediately' implies urgency or manual execution versus scheduled runs, but there are no explicit when-to-use guidelines, prerequisites (e.g., automation must exist), or comparisons to alternatives like create_post for manual content creation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
text_assist (Grade: A)
AI-powered text enhancement. Actions: proofread (fix grammar/spelling), rephrase (rewrite), generate (create new text).
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Input text to process | |
| style | No | Writing style (default Auto) | |
| action | Yes | Type of text assistance | |
| context | No | Additional context for generation | |
| workspace_id | Yes | Workspace ID | |
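A sketch of a proofread invocation as an MCP `tools/call` request. The 'proofread' action and 'Auto' style come from the tool's own documentation above; the text and workspace ID are placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "text_assist",
    "arguments": {
      "text": "heres our new product lanch post",
      "action": "proofread",
      "style": "Auto",
      "workspace_id": "ws_123"
    }
  }
}
```

The optional 'context' parameter is omitted here; per the schema it is only relevant for the 'generate' action.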
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, placing full burden on the description. While it notes the 'AI-powered' nature, it fails to disclose critical behavioral traits: whether the tool is read-only or creates persistent resources (relevant given the required workspace_id), if it consumes credits/rate limits, or what format the response takes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is optimally concise with two front-loaded sentences. The first establishes the tool's domain (AI text enhancement), and the second enumerates the specific actions. No redundancy or unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description adequately covers the tool's functional purpose but leaves gaps regarding behavioral context (persistence, safety, return structure). For a 5-parameter tool in a workspace context, it meets minimum viability but could disclose whether results are temporary or saved.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by explaining the semantics of each 'action' enum value (proofread fixes grammar, rephrase rewrites, etc.), which helps the agent select the correct action parameter beyond the raw enum names in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'AI-powered text enhancement' and explicitly lists the three specific actions (proofread, rephrase, generate) with parenthetical explanations. It effectively distinguishes from sibling 'media_assist' by specifying 'text' and from content creation tools like 'create_post' by focusing on enhancement rather than publishing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what each action does (e.g., 'fix grammar/spelling' for proofread), implying when to use each mode. However, it lacks explicit guidance on when to choose this tool versus siblings like 'media_assist' or 'create_post', or prerequisites like workspace selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_automation (Grade: C)
Update an existing automation's settings
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | New title | |
| topic | No | New topic | |
| prompt | No | New prompt/instructions | |
| schedule | No | New cron schedule | |
| auto_publish | No | Auto-publish setting | |
| automation_id | Yes | Automation ID | |
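A sketch of a partial update as an MCP `tools/call` request, changing only the schedule. Note that whether omitted fields are preserved (PATCH semantics) is exactly the undocumented behavior the review below flags; the IDs and cron expression are placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "update_automation",
    "arguments": {
      "automation_id": "auto_456",
      "schedule": "0 9 * * 1"
    }
  }
}
```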
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It fails to specify whether omitted fields are preserved or cleared, whether the automation triggers immediately upon update, or if this operation is idempotent. 'Update' implies mutation but lacks critical operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence with no redundancy. However, it is arguably too minimal for a mutation tool with 6 parameters—one additional sentence covering behavioral expectations or prerequisites would improve utility without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 6-parameter mutation operation with no output schema and no annotations, the description is insufficient. It omits: partial update behavior (PATCH semantics), validation rules (e.g., cron format validation), side effects (automation restart), and success/failure indicators.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 6 parameters (title, topic, prompt, schedule, auto_publish, automation_id). The description adds no additional semantic meaning beyond the schema's 'New X' pattern, warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Update) and resource (existing automation's settings), and the word 'existing' distinguishes it from create_automation. However, it lacks the specificity of listing which settings can be updated (unlike the schema which details title, topic, prompt, etc.), keeping it from a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus run_automation (execution vs configuration) or prerequisites like needing to obtain automation_id via list_automations first. No mention of partial vs full update semantics.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_post (Grade: B)
Update a draft post's content before publishing
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Updated content | |
| post_id | Yes | Post ID | |
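Sketched as an MCP `tools/call` request. The listing does not document the shape of 'content' (the review notes it is a complex nested object with media, embeds, and tags), so a flat string is assumed here purely for illustration, and the post ID is a placeholder:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "update_post",
    "arguments": {
      "post_id": "post_123",
      "content": "Revised draft copy for the launch announcement."
    }
  }
}
```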
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. While it mentions 'draft' status, it fails to disclose mutation safety, idempotency, what happens if post_id is invalid, or whether updates are reversible. For a write operation, this is insufficient behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. The constraint 'before publishing' is front-loaded and immediately clarifies the tool's position in the workflow. No redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex nested content object (media, embeds, tags) and the lack of an output schema or annotations, the description is minimally adequate but leaves gaps: it does not explain return values, error states, or how the draft-only constraint is enforced.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (post_id: 'Post ID', content: 'Updated content'), establishing a baseline of 3. The description mentions 'content' but adds no semantic detail about the complex nested structure (media arrays, embeds, tags) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Update), resource (draft post), and scope (content before publishing). It effectively distinguishes this from publish_post and implies it's for drafts only, though it could explicitly mention it doesn't work on published posts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'before publishing' implies a workflow sequence (use this before publish_post), but lacks explicit guidance on when NOT to use it (e.g., 'do not use on published posts') and doesn't name alternatives like create_post or specify prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_rss (Grade: A)
Validate an RSS feed URL and get feed metadata before adding it as a source
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | RSS feed URL to validate | |
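Sketched as an MCP `tools/call` request; the feed URL is an illustrative placeholder:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "validate_rss",
    "arguments": { "url": "https://example.com/feed.xml" }
  }
}
```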
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but discloses minimal behavioral specifics. It mentions metadata retrieval but fails to describe what constitutes a validation failure, whether this performs a live HTTP request, or what format the metadata takes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, tightly focused sentence front-loaded with the action verb. Every clause earns its place by conveying both the operation and the workflow context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema), the description is sufficiently complete for tool selection. It mentions the return of 'feed metadata', though specifying example metadata fields would further improve completeness given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single 'url' parameter, establishing a baseline of 3. The description confirms this should be an 'RSS feed URL' but adds no additional semantic constraints (e.g., protocol requirements, common feed paths) beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Validate') + resource ('RSS feed URL') and clearly distinguishes this tool from the sibling 'create_source' by positioning it as a pre-check ('before adding it as a source').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'before adding it as a source' provides clear workflow context, implicitly directing users to call this prior to 'create_source'. However, it lacks explicit 'when-not-to-use' guidance or mention of alternatives like direct source creation without validation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.