WoopSocial Social Media MCP
Server Details
Post to social media (Facebook, Instagram, LinkedIn, Pinterest, YouTube, TikTok, X/Twitter and more)
- Status: Healthy
- Transport: Streamable HTTP
- Repository: WoopSocial/mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 21 of 21 tools scored. Lowest: 2.4/5.
Each tool targets a distinct resource and action, with clearly differentiated purposes. For example, media_list retrieves existing media, while media_uploads_* handles upload sessions, and posts_validate validates without creating. No two tools are ambiguous.
All tool names follow a consistent verb_noun pattern in snake_case, such as posts_create, media_delete, and webhooks_list_endpoints. Compound resources like media_uploads are treated uniformly, and verb choices are consistent across the set.
With 21 tools, the server is slightly above the typical 3-15 range, but the tool count is justified by the breadth of functionality: health, media, OAuth, posts, projects, social accounts, and webhooks. Each tool has a clear role, and none feel superfluous.
The tool set covers core operations for posts, projects, media, social accounts, and webhooks, but notable gaps exist: there are no update tools for posts or projects, and no direct tool to disconnect a social account. These omissions could require agents to use workarounds.
Available Tools
21 tools

health_get: Health check (Grade A, Read-only)
Returns a minimal response proving the WoopSocial API is reachable.
No parameters.
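Since the server is reached over Streamable HTTP, a `health_get` invocation can be sketched as a standard MCP JSON-RPC `tools/call` request. This is a minimal sketch of the request payload only; transport details and the endpoint URL are not shown here.

```python
import json

def make_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# health_get takes no parameters, so arguments is an empty object.
request = make_tool_call("health_get", {})
print(json.dumps(request, indent=2))
```

The same helper applies to every other tool on this page; only `name` and `arguments` change.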
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and open-world behavior. The description adds that the response is minimal, reinforcing the non-destructive nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, zero waste. Front-loads purpose effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple with no parameters, no output schema. Description fully covers what it does and when it's used.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist; schema coverage is 100%. Description does not need to add param info. Baseline 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns a minimal response proving API reachability. Specific verb+resource, distinct from all sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied use is to verify API connectivity. No explicit when-not or alternatives, but with zero parameters and a clear goal, guidance is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
media_delete: Delete media (Grade A, Destructive, Idempotent)
Deletes one media item by ID.
| Name | Required | Description | Default |
|---|---|---|---|
| mediaId | Yes | Media identifier. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide destructiveHint and idempotentHint. The description adds no extra behavioral details beyond reaffirming deletion. It neither contradicts nor enriches the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence conveys the entire purpose efficiently. No extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a one-parameter destructive tool, the description is minimal but adequate. It omits details about response or error handling, but given the simplicity, it meets minimal completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a basic description for mediaId. The tool description does not add additional semantic meaning beyond the schema, meeting baseline expectations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'deletes', the resource 'media item', and specifies the identifier. It distinguishes from sibling tools like posts_delete by focusing on media.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidelines provided on when to use this tool versus alternatives, nor any prerequisites or exclusions. The agent gets no context about selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
media_list: List media (Grade A, Read-only)
Returns media library items that belong to the API key's organization.
When projectId is provided, only media in that project is returned.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of items to return. | |
| cursor | No | Opaque cursor for the next page of results. | |
| mediaId | No | Filter to one or more media items. Provide a comma-separated list of media IDs. | |
| projectId | No | Filter to resources that belong to a specific project. |
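Because `media_list` pages with an opaque `cursor`, collecting a full library means looping until no cursor is returned. A minimal sketch, assuming the result exposes `items` and `nextCursor` fields (the server documents no output schema, so those names are assumptions):

```python
def list_all_media(call_tool, project_id=None, limit=50):
    """Page through media_list by following the opaque cursor.

    `call_tool` is any function that invokes an MCP tool and returns its
    parsed result. The `items`/`nextCursor` field names are assumptions.
    """
    items, cursor = [], None
    while True:
        args = {"limit": limit}
        if project_id:
            args["projectId"] = project_id
        if cursor:
            args["cursor"] = cursor
        page = call_tool("media_list", args)
        items.extend(page.get("items", []))
        cursor = page.get("nextCursor")
        if not cursor:
            return items
```

The `projectId` filter is passed through unchanged, matching the description's scoping behavior.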
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare 'readOnlyHint=true' and 'openWorldHint=true', so no contradiction. The description adds that results are scoped to the organization and can be filtered by projectId, but does not disclose other behavioral traits like pagination details or error conditions beyond schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. The main purpose is front-loaded, and conditional logic is clearly separated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given annotations and a fully described schema, the description outlines the basic behavior. However, it omits details about pagination (cursor usage, default limit), ordering, or error handling. It is adequate but not comprehensive for a list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the input schema already describes all parameters. The description redundantly explains the 'projectId' filter ('When projectId is provided…'), adding no new meaning beyond the schema's description. Thus baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns media library items belonging to the organization, with a specific verb ('Returns') and resource ('media library items'). It distinguishes from siblings like 'media_delete' and 'media_uploads_*' which perform different operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives such as 'health_get', 'media_delete', or other list tools. It only mentions a conditional behavior for 'projectId' but does not advise on when to call this tool or when not to.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
media_uploads_complete_session: Complete media upload session (Grade A)
Call this to finalize the upload started by calling /media/upload-sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| uploadSessionId | Yes | Media upload session identifier. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate non-destructive and open-world. Description adds little beyond stating it finalizes the upload, but no behavior conflicts.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence with a link to API reference. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple with one parameter and no output schema. Description is sufficient, and the link provides additional context if needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage for the single parameter. Description does not add extra meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action ('finalize the upload') and the resource ('upload session'), and distinguishes from sibling tools like create and get session.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly indicates when to call: after starting a session via the referenced start session tool. No when-not guidance, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
media_uploads_create_session: Start media upload session (Grade A)
This endpoint can be used to upload both smaller and larger files (up to 5GB) in a chunked manner. Calling this creates an upload session and returns presigned URLs for uploading the file in parts.
Upload the file in `partCount` parts, using the matching `parts[n].uploadUrl` for each part number. Every part except the last must be exactly `partSizeInBytes` bytes; the last part may be smaller. After all parts have been uploaded, call `/media/upload-sessions/{uploadSessionId}/complete` to finalize the upload and create the media item.
| Name | Required | Description | Default |
|---|---|---|---|
| projectId | Yes | Project that will own the uploaded media. | |
| fileSizeInBytes | Yes | Total size of the file that will be uploaded. |
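The part-size rule above (every part exactly `partSizeInBytes` bytes except a possibly smaller last part) pins down the chunk plan for any file. A sketch of that arithmetic, assuming `partSizeInBytes` comes back in the create-session response:

```python
import math

def plan_parts(file_size: int, part_size: int) -> list[int]:
    """Split a file of `file_size` bytes into chunk sizes matching the
    session rule: every part except the last is exactly `part_size`
    bytes, and the last part holds the remainder."""
    if file_size <= 0 or part_size <= 0:
        raise ValueError("sizes must be positive")
    part_count = math.ceil(file_size / part_size)
    sizes = [part_size] * (part_count - 1)
    sizes.append(file_size - part_size * (part_count - 1))
    return sizes

# e.g. a 25-byte file with 10-byte parts -> [10, 10, 5]
```

Each computed size is then uploaded to the matching `parts[n].uploadUrl` before calling the complete-session tool.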
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide openWorldHint and destructiveHint. The description adds valuable detail about the chunked upload process, file size limit, and presigned URLs, but could mention rate limits or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with front-loaded purpose, but contains some procedural detail that could be streamlined without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the workflow, including the follow-up step, but it could explicitly mention that the response includes an uploadSessionId and parts array for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context about how fileSizeInBytes relates to part sizes but does not significantly enhance parameter understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it creates an upload session and returns presigned URLs for chunked upload, distinguishing it from sibling tools like complete_session and get_session.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to start an upload) and references the follow-up step, but lacks explicit exclusions or comparisons with other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
media_uploads_get_session: Get media upload session status (Grade C, Read-only)
Get media upload session status
| Name | Required | Description | Default |
|---|---|---|---|
| uploadSessionId | Yes | Media upload session identifier. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true, indicating this is a safe read operation. The description adds no further behavioral context beyond what annotations provide, so it neither helps nor contradicts.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at one sentence, which is efficient but could benefit from slightly more detail about the status information returned.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with a single parameter, the description is adequate but missing information about what the status response includes, given no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description of the single parameter 'uploadSessionId'. The tool description does not add any additional meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tautological: description restates name/title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, such as when to check status vs completing or creating a session.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
oauth_create_authorization: Generate OAuth URL (Grade A)
Generates a browser authorization URL for connecting a new social account to a project.
This endpoint is useful for multi-user integrations where your application lets your own users, clients, or brands connect their social accounts to WoopSocial without giving them access to your WoopSocial account.
A common flow is:

1. Create or select a WoopSocial project for your user, client, or brand.
2. Call this endpoint from your backend with that `projectId`, the target `platform`, and a `redirectUrl` in your application.
3. Open the returned `url` in your user's browser.
4. After OAuth completes, WoopSocial redirects the browser back to `redirectUrl` with result query parameters.
5. Use `projectId` and `socialAccountIds` from the redirect, or call `GET /social-accounts?projectId=...`, to store or confirm the connected account in your application.

When `redirectUrl` is provided, the browser is redirected back to that URL after the OAuth callback is handled.

For Facebook, WoopSocial shows a page-selection screen after authorization, because Facebook may return more pages than the user appeared to select in the Facebook dialog when the user has authorized with WoopSocial previously. The selected pages are connected to the single `projectId` from this request, then WoopSocial redirects back to `redirectUrl` when one was provided.

When `redirectUrl` is provided, WoopSocial appends these query parameters on success:

- `status=success`
- `projectId`: the project identifier from the request
- `platform`: the connected social platform
- `socialAccountIds`: comma-separated connected social account identifiers. This may contain one or more IDs depending on the platform OAuth flow.

When `redirectUrl` is provided, WoopSocial appends these query parameters on failure:

- `status=error`
- `projectId`: the project identifier from the request
- `platform`: the requested social platform
- `error`: an OAuth callback error code

If the OAuth callback state is missing or expired, WoopSocial cannot safely determine the original `redirectUrl`, so the callback returns an HTTP error instead of redirecting.
The redirect never includes OAuth tokens or credentials.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | Identifies which social media platform this data structure targets. | |
| projectId | Yes | Project identifier. | |
| redirectUrl | No | Optional URL in your application to return the browser to after OAuth completes. Use this for multi-user integrations where your users connect their own social accounts through your app. WoopSocial appends OAuth result query parameters to this URL so your app can finish the connection flow. The redirect does not include tokens or credentials. |
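The documented success and failure query parameters can be unpacked in the application handling `redirectUrl`. A sketch using only the parameter names listed above (`status`, `projectId`, `platform`, `socialAccountIds`, `error`):

```python
from urllib.parse import urlsplit, parse_qs

def parse_oauth_redirect(url: str) -> dict:
    """Extract the OAuth result parameters WoopSocial appends to redirectUrl."""
    params = {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}
    result = {
        "ok": params.get("status") == "success",
        "projectId": params.get("projectId"),
        "platform": params.get("platform"),
    }
    if result["ok"]:
        # socialAccountIds is documented as comma-separated.
        result["socialAccountIds"] = params.get("socialAccountIds", "").split(",")
    else:
        result["error"] = params.get("error")
    return result
```

As the description notes, the redirect never carries tokens or credentials, so this handler only confirms which accounts were connected.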
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the `openWorldHint: true` and `destructiveHint: false` annotations, the description details redirect behavior, success/failure parameters, state expiry, and no token exposure. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear purpose sentence followed by detailed steps. It is slightly long but every sentence adds value, and it is front-loaded with the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters and no output schema, the description fully explains the OAuth flow, redirect handling, and what to do with results, making it complete for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds meaningful context for `redirectUrl` (usage in flow) and clarifies the role of `platform` and `projectId` in the OAuth process, raising the score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates a browser authorization URL for connecting social accounts, using specific verbs and identifying the resource. It stands out among siblings as the only OAuth-related tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear common flow (steps 1-5) and indicates the tool is for multi-user integrations. While it doesn't explicitly state when not to use it, the context is sufficient for an agent to decide to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
posts_create: Create post (Grade A)
Creates a (parent) post with one or more social account (child) posts.
All referenced social accounts must belong to the same project.
The request is validated atomically. If any social account fails validation, the post is not created.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Post content expressed as thread items. The array exists for future thread support. Currently exactly one item is supported. | |
| schedule | Yes | When the post should be published. | |
| socialAccounts | Yes | Social account targets for this post. All referenced social accounts must belong to the same project. Duplicate `socialAccountId` values are invalid. | |
| autoDeleteMediaAfterPublish | No | When true, all media referenced by this post is automatically deleted from the media library after all social account deliveries for the post have published successfully. |
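Since validation is atomic and duplicate `socialAccountId` values are rejected, a client can pre-check its arguments before calling `posts_create`. A minimal sketch; the inner shape of `content` and `socialAccounts` items beyond the documented field names is an assumption:

```python
def build_post_arguments(text, schedule, social_account_ids, auto_delete=False):
    """Assemble posts_create arguments.

    The `{"text": ...}` thread-item shape and `{"socialAccountId": ...}`
    target shape are assumptions beyond the documented parameter names.
    """
    dupes = {i for i in social_account_ids if social_account_ids.count(i) > 1}
    if dupes:
        # Duplicate socialAccountId values are documented as invalid.
        raise ValueError(f"duplicate socialAccountId values: {sorted(dupes)}")
    args = {
        "content": [{"text": text}],  # exactly one thread item is supported today
        "schedule": schedule,
        "socialAccounts": [{"socialAccountId": i} for i in social_account_ids],
    }
    if auto_delete:
        args["autoDeleteMediaAfterPublish"] = True
    return args
```

Failing fast on duplicates saves a round trip, since the server would reject the whole request anyway under its all-or-nothing validation.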
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds atomic validation behavior beyond the annotations (destructiveHint=false, openWorldHint=true). It does not, however, disclose return values, side effects, or prerequisites like media uploads, leaving some behavioral gaps. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences, each serving a distinct purpose: what the tool does, a key constraint, and a behavioral note. No unnecessary words, well-structured for quick parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the tool's complexity (many platforms, required media uploads, detailed schedule options), the description omits return values, prerequisites (e.g., social accounts must exist, media must be uploaded), and error handling beyond validation failure. The openWorldHint suggests broad applicability, but the description leaves the agent underinformed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes all parameters in detail. The tool description repeats the same-project constraint and adds atomic validation—a small but useful addition. This warrants a baseline of 3, as it adds limited new meaning beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a post with social-account targets, distinguishing it from siblings like posts_get (retrieval) or posts_delete (deletion). It also mentions the constraint of same project and atomic validation, leaving no ambiguity about the core function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context by mentioning atomic validation and the constraint that all social accounts must belong to the same project, but it does not explicitly guide when to use this tool over siblings like posts_validate (which validates without creating) or other creation tools. The agent must infer usage from context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
posts_delete: Delete post (Grade A, Destructive, Idempotent)
Deletes one scheduled post by post ID.
A post can only be deleted when all of its social account deliveries are still `NOT_STARTED`.
| Name | Required | Description | Default |
|---|---|---|---|
| postId | Yes | Post identifier. |
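The `NOT_STARTED` precondition can be checked client-side (e.g. from a prior `posts_get` result) before issuing the destructive call. A tiny sketch; the exact delivery-status field layout in the response is an assumption:

```python
def can_delete_post(delivery_statuses: list[str]) -> bool:
    """A post is deletable only while every social account delivery is
    still NOT_STARTED. The list of status strings would come from a
    posts_get response; its field name is an assumption."""
    return all(s == "NOT_STARTED" for s in delivery_statuses)
```

If any delivery has progressed past `NOT_STARTED`, the server rejects the delete, so this guard simply avoids a doomed request.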
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds value beyond the destructiveHint and idempotentHint annotations by specifying the constraint that all deliveries must be NOT_STARTED. This clarifies a behavioral boundary not captured by annotations alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at two sentences, with the main action front-loaded. Every sentence adds necessary context, and there is no wasted wording.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameter set and presence of annotations, the description covers the essential condition. However, it does not explain behavior when the condition fails (e.g., error message) or mention the return value, leaving some gaps for a complete understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the description adds no extra meaning to the postId parameter beyond what the schema already provides ('Post identifier.'). Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool deletes one scheduled post by ID, using a specific verb and resource. It distinguishes from siblings like posts_create and posts_get by focusing on deletion and mentioning 'scheduled post', which implies a conditional context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a key condition (deliveries must be NOT_STARTED) for when deletion is allowed, but does not explicitly state when to use this tool over alternatives or when not to use it. No comparison with siblings is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
posts_get: Get post (Grade B, Read-only)
Returns one post with its social account posts inline.
| Name | Required | Description | Default |
|---|---|---|---|
| postId | Yes | Post identifier. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description adds the behavioral detail of returning social account posts inline. No contradiction; however, beyond that, no additional traits are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that conveys core functionality with no extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, and the description offers minimal return value detail ('one post with its social account posts inline'). For a simple get-by-ID operation, this is acceptable but leaves the agent without structure expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter (postId) fully described. The description does not add meaning beyond the schema, so baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns one post with its inline social account posts, using a specific verb and resource, and distinguishes from sibling tools like posts_create or posts_delete.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives such as social_account_posts_list or posts_validate. The description implies single-post retrieval by ID but does not document exclusions or context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
posts_validate: Validate post (A, Read-only)
Validates a post without creating one.
This endpoint applies the same validation rules as POST /posts,
including social account resolution, platform-specific validation, and
media validation for referenced media library items.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Post content expressed as thread items. The array exists for future thread support. Currently exactly one item is supported. | |
| schedule | Yes | When the post should be published. | |
| socialAccounts | Yes | Social account targets for this post. All referenced social accounts must belong to the same project. Duplicate `socialAccountId` values are invalid. | |
| autoDeleteMediaAfterPublish | No | When true, all media referenced by this post is automatically deleted from the media library after all social account deliveries for the post have published successfully. | |
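The parameter table above implies two rules an agent can check before calling the tool: exactly one content item, and no duplicate `socialAccountId` values. A minimal sketch, assuming illustrative inner field shapes (`text`, `publishAt`) that are not part of the documented schema:

```python
# Hypothetical payload sketch for posts_validate, based only on the
# parameter table above. Inner field shapes are assumptions.
def check_post_payload(payload: dict) -> list[str]:
    """Pre-flight checks mirroring two documented rules:
    exactly one content item, and no duplicate socialAccountId values."""
    errors = []
    if len(payload.get("content", [])) != 1:
        errors.append("content must contain exactly one thread item")
    ids = [t.get("socialAccountId") for t in payload.get("socialAccounts", [])]
    if len(ids) != len(set(ids)):
        errors.append("duplicate socialAccountId values are invalid")
    return errors

payload = {
    "content": [{"text": "Hello world"}],               # exactly one item supported
    "schedule": {"publishAt": "2025-01-01T12:00:00Z"},  # assumed shape
    "socialAccounts": [
        {"socialAccountId": "sa_1"},
        {"socialAccountId": "sa_1"},                    # duplicate -> invalid
    ],
}
print(check_post_payload(payload))
# -> ['duplicate socialAccountId values are invalid']
```

Running posts_validate itself remains the authoritative check; this only catches the two constraints the schema spells out.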
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context beyond annotations, mentioning social-account resolution, platform-specific validation, and media validation. It aligns with the `readOnlyHint` annotation, confirming no side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loads the core purpose, and contains no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description omits return format or error behavior, which is important for a validation tool with no output schema. It is adequate but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds no additional parameter-level meaning beyond the high-level validation rules mentioned. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool validates a post without creating one, and specifies it applies the same rules as `POST /posts`, distinguishing it from `posts_create`.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for pre-validation but does not explicitly state when to use it versus `posts_create` or when not to use it. No alternatives or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
projects_create: Create project (A)
Creates a project in the API key's organization.
Use projects to isolate social accounts, media, and posts for a specific user, client, or brand in your application.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Human-readable project name. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explains the purpose and usage context beyond the annotations. The annotations already indicate non-destructiveness (destructiveHint: false) and open world hint, and the description adds value by clarifying the isolation scenario.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences: the first states the purpose, and the second provides usage context. There is no unnecessary text or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (1 required parameter, no output schema), the description is complete. It explains what the tool does and why you would use it, and it distinguishes from sibling tools effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single parameter 'name'. The description does not add additional semantic information beyond what the schema already provides, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Creates a project') and the resource ('in the API key's organization'). It also distinguishes from siblings by explaining the use case of isolating social accounts, media, and posts for a user, client, or brand.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context for when to use this tool (to isolate social accounts, media, and posts). However, it does not explicitly state when not to use it or mention alternative tools, which would improve guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
projects_delete: Delete project (A, Destructive, Idempotent)
Permanently deletes one project and its related data, including connected social accounts, OAuth grants, media references, and posts.
| Name | Required | Description | Default |
|---|---|---|---|
| projectId | Yes | Project identifier. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide destructiveHint and idempotentHint. The description adds context that deletion is permanent and cascades to social accounts, OAuth grants, media, and posts. This goes beyond annotations, though it could clarify the idempotent behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the core action and its effects, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a low-complexity tool with one parameter and no output schema, the description covers the main effects and consequences. It could mention the response or success indication, but the provided information is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a description for projectId. The description does not add any additional meaning or formatting details for the parameter, so it meets the baseline without enhancement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'deletes' and the resource 'project', specifying it is permanent and includes related data. It distinguishes this tool from siblings like posts_delete, which deletes only posts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for deleting a project but does not explicitly state when to use or avoid this tool, nor does it mention alternatives. The destructiveHint annotation hints at caution, but explicit guidance is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
projects_list: List projects (A, Read-only)
Returns projects that belong to the same organization as the API key. A project corresponds to a "Business Profile" in the UI.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the readOnlyHint annotation, the description adds that results are scoped to the API key's organization and clarifies the UI equivalent, providing useful behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two succinct sentences, front-loaded with essential information, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description fully covers what the tool does and its context, making it complete for selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With no parameters, the baseline is 4. The description adds value by explaining the scope and UI mapping, which goes beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool returns projects belonging to the same organization as the API key, and maps projects to 'Business Profile' in the UI, making its function unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing projects but does not explicitly state when to use versus sibling tools like projects_create or projects_delete. However, it is clear in context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
social_account_posts_list: List social account posts (B, Read-only)
Returns standalone social account posts for the API key's organization.
Each item includes the content, schedule and delivery status for a given social account target.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of items to return. | |
| cursor | No | Opaque cursor for the next page of results. | |
| postId | No | Filter to one or more parent posts. Provide a comma-separated list of post IDs. | |
| platform | No | Filter to one or more social platforms. Provide a comma-separated list of platform values. | |
| projectId | No | Filter to one or more projects. Provide a comma-separated list of project IDs. | |
| deliveryStatus | No | Filter to one or more delivery statuses. Provide a comma-separated list of delivery status values. | |
| socialAccountId | No | Filter to one or more connected social accounts. Provide a comma-separated list of social account IDs. | |
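Several filters above accept comma-separated lists. A small sketch of assembling such a filter set, assuming parameter names from the table and treating the helper itself as illustrative:

```python
# Build a filter dict for social_account_posts_list. Parameter names
# come from the table above; joining with commas reflects the documented
# "comma-separated list" convention for multi-value filters.
def build_filters(platforms=None, delivery_statuses=None, limit=None):
    params = {}
    if platforms:
        params["platform"] = ",".join(platforms)
    if delivery_statuses:
        params["deliveryStatus"] = ",".join(delivery_statuses)
    if limit is not None:
        params["limit"] = limit
    return params

print(build_filters(platforms=["facebook", "instagram"],
                    delivery_statuses=["NOT_STARTED"], limit=20))
# -> {'platform': 'facebook,instagram', 'deliveryStatus': 'NOT_STARTED', 'limit': 20}
```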
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint and openWorldHint. The description adds that each item includes the content, schedule, and delivery status, but it does not disclose other behaviors such as pagination or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Directly conveys the purpose and content of results.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with no output schema, the description covers key result contents. Minor omission: no explanation of 'standalone' or pagination details, but schema covers parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-documented. Description does not add extra meaning beyond the schema, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns 'standalone social account posts' and item contents, distinguishing it from general post tools. However, it does not explicitly differentiate from sibling tools like posts_get.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, or when not to use it. It only states that it returns posts for the API key's organization.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
social_accounts_get_platform_inputs: Get platform-specific input options (A, Read-only)
Returns platform-specific input options for a connected social account.
Use this endpoint to discover valid values for flattened post-target
fields such as pinterestBoardId and privacyLevel.
| Name | Required | Description | Default |
|---|---|---|---|
| socialAccountId | Yes | Connected social account identifier. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and openWorldHint=true. The description adds value by explaining the exact purpose (returning valid values for fields like pinterestBoardId and privacyLevel), which goes beyond the annotations without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences, front-loaded with the main purpose and then a usage hint. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input (one required parameter) and no output schema, the description sufficiently explains what the tool does and why it might be used. It could mention that options depend on the platform of the account, but current coverage is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already provides 100% coverage with a clear description for the single parameter 'socialAccountId'. The description does not add additional parameter semantics but adequately compensates by explaining the output context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The title and description clearly state the tool returns platform-specific input options for a social account, using specific verb ('get') and resource ('platform input options'). It distinguishes itself from siblings like 'social_accounts_list' and 'posts_create' by focusing on discovering valid field values.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a direct use case: 'Use this endpoint to discover valid values for flattened post-target fields'. It does not explicitly exclude cases or mention alternatives, but the context is clear enough for an AI agent to understand when to invoke it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
social_accounts_list: List social accounts (A, Read-only)
Returns social accounts that can currently be used for publishing.
When projectId is provided, only social accounts usable within that project are returned.
| Name | Required | Description | Default |
|---|---|---|---|
| projectId | No | Filter to resources that belong to a specific project. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint and openWorldHint. The description adds useful behavioral context: it returns only accounts that can currently be used for publishing, and only those usable within a project when projectId is given. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the main purpose and include a conditional usage note. No unnecessary words or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Description is adequate for a simple list tool with one optional parameter, but it does not describe the return structure (e.g., array of objects). Without an output schema, this omission could leave an agent uncertain about the response format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, but the tool description adds value by clarifying that projectId filters to accounts usable within that project, not just those belonging to it. This extra context improves semantic understanding beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Returns social accounts that can currently be used for publishing,' providing a specific verb and resource. It implies a filtering condition (usable for publishing) that distinguishes it from other social account tools, but does not explicitly contrast with sibling list tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context on when to use the projectId parameter, stating that it filters to accounts usable in that project. However, it does not give guidance on when to choose this tool over alternatives like social_accounts_get_platform_inputs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
webhooks_create_endpoint: Register webhook endpoint (A)
Registers a URL to receive webhook events for the specified event types.
The response includes a signingSecret (base64-encoded, 32 bytes) that is
returned once only. Use it to verify the X-Woop-Signature header on
incoming webhook payloads.
Signature format: t=<unix>,v1=<hmac-sha256-hex>
Signed payload: <unix>.<json-body>
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The HTTPS URL to POST webhook events to. | |
| eventTypes | Yes | Webhook event types this endpoint should receive. | |
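The signing scheme described above can be sketched as follows. This is a hedged illustration of the documented format (`t=<unix>,v1=<hmac-sha256-hex>` computed over `<unix>.<json-body>`); the sample secret and body are made up, and any header details beyond that format are assumptions.

```python
import base64
import hashlib
import hmac

def verify_woop_signature(signing_secret_b64: str, header: str, body: str) -> bool:
    """Verify an X-Woop-Signature header of the form t=<unix>,v1=<hex>,
    where the signed payload is '<unix>.<json-body>' per the description above."""
    parts = dict(p.split("=", 1) for p in header.split(","))
    secret = base64.b64decode(signing_secret_b64)  # signingSecret is base64, 32 bytes
    signed_payload = f"{parts['t']}.{body}".encode()
    expected = hmac.new(secret, signed_payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, parts["v1"])

# Round-trip demo with a made-up secret and payload
secret = base64.b64encode(b"0" * 32).decode()
body = '{"event":"post.published"}'
sig = hmac.new(base64.b64decode(secret), f"1700000000.{body}".encode(),
               hashlib.sha256).hexdigest()
print(verify_woop_signature(secret, f"t=1700000000,v1={sig}", body))
# -> True
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels when comparing signatures.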
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (destructiveHint: false) indicate safety, and the description adds critical behavioral traits: signingSecret returned once, signature format, and signed payload structure. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two paragraphs, front-loaded with main action, and efficient use of bullet-like formatting for signature details. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description explains the response includes a signingSecret and its usage. Satisfactory for a simple creation tool, though it could mention any validation or immediate activation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so the schema already documents the parameters. The description does not add new parameter-level information beyond what is in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'registers' and the resource 'URL to receive webhook events', distinguishing it from sibling tools like webhooks_delete_endpoint and webhooks_list_endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage context, notably that the signingSecret is returned once only and how to use it for verification, but does not explicitly mention when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
webhooks_delete_endpoint: Delete webhook endpoint (C, Destructive, Idempotent)
Delete webhook endpoint
| Name | Required | Description | Default |
|---|---|---|---|
| webhookId | Yes | Webhook endpoint identifier. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide destructiveHint=true, idempotentHint=true, openWorldHint=true. The description adds no further behavioral context, such as what gets destroyed, reversibility, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. However, it could include more context while remaining concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple delete operation with one parameter and no output schema, the description is minimally adequate. It lacks behavioral context and usage guidance, but the purpose is clear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for webhookId. The description adds no extra meaning beyond the schema, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tautological: the description merely restates the tool name and title ('Delete webhook endpoint') without naming the resource's effects or differentiating it from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use or not use this tool. No mention of prerequisites, alternatives, or conditions. The destructive hint hints at caution, but the description does not elaborate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
webhooks_list_endpoints: List webhook endpoints (A, Read-only)
Returns all registered webhook endpoints for the organization. The signingSecret field is omitted.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true, making the read-only behavior clear. The description adds value by disclosing that the signingSecret field is omitted from the response, which is a behavioral trait not captured by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single clear sentence that is front-loaded with the main action. No unnecessary words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there are no parameters, annotations are present, and there is no output schema, the description adequately explains the tool's purpose and a key detail about the output. However, it does not describe the format of the returned data or any pagination, which could be added for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has no parameters and schema coverage is 100%, so the baseline is 4. The description does not need to add parameter meaning, and it provides no additional parameter information beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns all registered webhook endpoints for the organization and explicitly mentions the omission of the signingSecret field. It distinguishes itself from sibling tools like webhooks_create_endpoint and webhooks_delete_endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is used to list all endpoints, but does not provide explicit guidance on when to use it versus other tools, nor under what conditions. No exclusions or contextual hints are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
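The omission of the signingSecret field noted above can be sketched as a simple redaction step on the server side. This is an illustrative sketch, not the server's actual implementation; field names other than signingSecret are assumptions.

```python
# Sketch of the behavior described above: the server returns webhook
# endpoint records but strips the signingSecret field from each one.
# Only "signingSecret" is taken from the tool description; the other
# field names ("id", "url") are hypothetical.

def list_endpoints(endpoints):
    """Return endpoint records with the signingSecret omitted."""
    return [
        {k: v for k, v in ep.items() if k != "signingSecret"}
        for ep in endpoints
    ]

stored = [
    {"id": "wh_1", "url": "https://example.com/hook", "signingSecret": "whsec_abc"},
]
public = list_endpoints(stored)
# The secret never appears in the public response.
```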
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
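Before publishing, the payload can be sanity-checked locally. The sketch below mirrors the structure shown above; it is a minimal check, not an official validator, and the rules it enforces (a non-empty maintainers list with email addresses) are assumptions based on the snippet.

```python
# Minimal local check of a glama.json payload before publishing it at
# /.well-known/glama.json. A sketch, not an official validator.
import json

def check_glama_json(text):
    """Parse the payload and verify it has at least one maintainer email."""
    data = json.loads(text)
    maintainers = data.get("maintainers", [])
    assert maintainers, "at least one maintainer entry is required"
    for m in maintainers:
        assert "@" in m.get("email", ""), "each maintainer needs an email"
    return data

payload = '''{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}'''
data = check_glama_json(payload)
```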
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
social_account_posts_delete
Delete social account post
Deletes one social account post by ID.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true and idempotentHint=true. The description adds 'Deletes one... by ID', which merely repeats the obvious purpose. It fails to disclose any additional behavioral traits, such as whether the deletion is permanent, whether it cascades, or what permissions are required. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no superfluous language. Every word is necessary and the description is appropriately sized for a simple deletion operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with one parameter and no output schema. The description covers the basic purpose but lacks context on effects (e.g., permanent deletion), error handling (e.g., what if ID not found), or any required permissions. Adequate for a minimal definition.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100% with a clear description for the single required parameter. The description does not add any semantic value beyond what the schema already provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Deletes') and the resource ('one social account post by ID'), but does not differentiate it from the sibling tool 'posts_delete', which deletes a general post. This lack of distinction may cause confusion about which tool to use for social account posts versus regular posts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'posts_delete' or 'social_account_posts_list'. There is no mention of prerequisites, error conditions, or when not to use it. The agent has no context for appropriate invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
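For reference, invoking a tool like social_account_posts_delete goes over MCP's JSON-RPC 2.0 transport via the tools/call method. The sketch below builds such a request; the argument name "id" is a hypothetical placeholder, so check the tool's input schema for the actual parameter name.

```python
# Sketch of the JSON-RPC 2.0 "tools/call" request an MCP client would
# send to invoke social_account_posts_delete. The argument key "id" is
# a hypothetical placeholder; the real name comes from the tool's
# input schema.
import json

def build_tool_call(request_id, tool_name, arguments):
    """Construct an MCP tools/call request envelope."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

req = build_tool_call(1, "social_account_posts_delete", {"id": "post_123"})
wire = json.dumps(req)  # serialized form sent to the server
```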