postcopilot-mcp
Server Details
Threads tools for AI — generate viral posts, download videos, export profiles
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
4 tools

postcopilot_download_video (Download Threads Video)
Download a video from a Threads post URL. Returns direct video download URLs (no watermark). The URL must be a Threads post containing a video (e.g. https://www.threads.com/@user/post/ABC123).
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The Threads post URL containing the video | |
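For orientation, a call to this tool might be framed as below. This is a sketch only: the `tools/call` method and the `name`/`arguments` request shape come from the MCP specification, not from this listing, and the post URL is the placeholder from the description.

```python
import json

def build_download_video_call(post_url: str, request_id: int = 1) -> dict:
    """Build an MCP tools/call request for postcopilot_download_video."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "postcopilot_download_video",
            # 'url' is the tool's only (required) parameter.
            "arguments": {"url": post_url},
        },
    }

# Placeholder post URL from the tool description, not a real post.
payload = build_download_video_call("https://www.threads.com/@user/post/ABC123")
print(json.dumps(payload, indent=2))
```

How the request reaches the server (Streamable HTTP transport, session handshake) is left out here, since those details are not part of this listing.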
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable behavioral context beyond schema: 'Returns direct video download URLs (no watermark)' reveals output format and quality attributes. Does not mention rate limits, auth requirements, or error handling for invalid URLs, but covers core behavior well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each earning its place: (1) core action, (2) return value/quality, (3) input requirements with example. No redundancy or filler. Well front-loaded with action first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool without output schema, description adequately compensates by explaining return values ('direct video download URLs'). No annotations to reference, but description covers essential behavior. Minor gap regarding error handling or authentication requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with 'url' parameter documented. Description adds semantic value by providing example format 'https://www.threads.com/@user/post/ABC123', which helps agents understand expected URL structure beyond the generic schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Download' with clear resource 'video from a Threads post URL'. Distinct from siblings: export_profile (exports data), generate_post (creates content), and read_guide (retrieves documentation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear input constraint: 'The URL must be a Threads post containing a video'. Implicitly defines when to use (when you have a Threads video URL). Lacks explicit 'when not to use' or named alternatives, but the constraint is specific enough to prevent misuse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
postcopilot_export_profile (Export Threads Profile Posts)
Export all posts from a Threads user profile. Provide the profile URL (e.g. https://www.threads.com/@username) and get structured post data including text, likes, replies, reposts, media, and timestamps.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The Threads profile URL (e.g. https://www.threads.com/@username) | |
| mode | No | Export mode: 'fast' (HTTP only, ~5 posts), 'full' (browser, more posts), 'auto' (tries fast, falls back to full) | auto |
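A client-side sketch of calling this tool, assuming the standard MCP `tools/call` framing (the profile URL is the placeholder from the schema). Note that the 'auto' fallback from fast to full happens server-side; the client only selects the mode:

```python
import json

# Mirrors the schema enum for 'mode'.
VALID_MODES = {"fast", "full", "auto"}

def build_export_profile_call(profile_url: str, mode: str = "auto",
                              request_id: int = 1) -> dict:
    """Build an MCP tools/call request for postcopilot_export_profile.

    mode: 'fast' (~5 posts via HTTP), 'full' (browser-based, more posts),
    or 'auto' (tries fast, falls back to full). Defaults to 'auto',
    matching the schema default.
    """
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "postcopilot_export_profile",
            "arguments": {"url": profile_url, "mode": mode},
        },
    }

payload = build_export_profile_call("https://www.threads.com/@username")
print(json.dumps(payload, indent=2))
```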
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the data structure returned (text, likes, replies, etc.) which compensates for the missing output schema, but fails to mention behavioral traits like the export mode limitations (~5 posts in 'fast' mode) or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste: the first states the action and resource, while the second details the input requirement and output structure. Information is front-loaded and every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description appropriately details the returned data fields (text, likes, media, timestamps). With 100% schema coverage and only two parameters, this is sufficient, though mentioning the trade-offs between export modes would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description mentions the URL parameter with an example (redundant with the schema) but does not add semantic meaning beyond the schema for the 'mode' parameter or explain parameter interactions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Export') with a clear resource ('posts from a Threads user profile') and distinguishes itself from siblings like download_video (media-focused) and generate_post (content creation) by emphasizing data extraction from existing profiles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by detailing the required input (profile URL) and output (structured post data), but lacks explicit guidance on when to use this versus siblings or when not to use it (e.g., for private profiles).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
postcopilot_generate_post (Generate Threads Post)
Generate a viral Threads post using a fine-tuned AI model. Provide a topic or idea and get a ready-to-post caption. Returns the generated text.
| Name | Required | Description | Default |
|---|---|---|---|
| model | No | AI model to use: 'gpt' (fine-tuned GPT, default) or 'llama' (Together AI Llama) | gpt |
| message | Yes | The topic, idea, or prompt for the Threads post | |
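As with the other tools, an invocation can be sketched as an MCP `tools/call` request. The framing is from the MCP specification rather than this listing, and the example topic string is invented:

```python
import json

# Mirrors the schema enum for 'model'.
VALID_MODELS = {"gpt", "llama"}

def build_generate_post_call(message: str, model: str = "gpt",
                             request_id: int = 1) -> dict:
    """Build an MCP tools/call request for postcopilot_generate_post.

    model: 'gpt' (fine-tuned GPT, the schema default) or
    'llama' (Together AI Llama).
    """
    if model not in VALID_MODELS:
        raise ValueError(f"model must be one of {sorted(VALID_MODELS)}")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "postcopilot_generate_post",
            "arguments": {"message": message, "model": model},
        },
    }

# Invented example topic; the tool returns a ready-to-post caption.
payload = build_generate_post_call("5 tips for growing on Threads")
print(json.dumps(payload, indent=2))
```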
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes the use of a 'fine-tuned AI model' and states it 'Returns the generated text', compensating for the lack of output schema. However, it omits important details like rate limits, costs, idempotency, or whether the operation has side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three efficient sentences with zero waste: the first defines the action and method, the second explains the input/output flow, and the third clarifies the return value. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 simple parameters, 100% schema coverage) and lack of output schema, the description adequately compensates by stating what the tool returns ('generated text'). It covers the essential information needed for invocation, though it could note whether the generation is deterministic or has associated costs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds slight semantic context by referring to the 'message' parameter as a 'topic or idea', reinforcing its purpose, but does not elaborate on the implications of choosing between 'gpt' and 'llama' models beyond the schema's enum descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Generate[s] a viral Threads post using a fine-tuned AI model', providing a specific verb, resource, and method. It clearly distinguishes from siblings (download_video, export_profile, read_guide) through the unique action of content generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance ('Provide a topic or idea and get a ready-to-post caption') indicating required input, but lacks explicit when-to-use guidance or comparison to alternatives. It does not clarify when to choose between the available AI models (gpt vs llama).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
postcopilot_read_guide (Read PostCopilot Guide)
Read a PostCopilot blog post / guide about Threads. Returns the full text content. Use postcopilot://blog/catalog resource first to see available guides, or provide a topic to search.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | Yes | Topic to search for (e.g. "viral", "video download", "export followers", "analytics") or a blog slug | |
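The description's two-step workflow (read the catalog resource first, then fetch a guide) maps onto two MCP requests: a `resources/read` for the catalog URI and a `tools/call` for the guide. The method names come from the MCP specification; the topic value is one of the schema's examples:

```python
import json

def build_catalog_read(request_id: int = 1) -> dict:
    """MCP resources/read request for the guide catalog resource."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "resources/read",
        "params": {"uri": "postcopilot://blog/catalog"},
    }

def build_read_guide_call(topic: str, request_id: int = 2) -> dict:
    """MCP tools/call request for postcopilot_read_guide."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "postcopilot_read_guide",
            # 'topic' accepts a search term or a blog slug.
            "arguments": {"topic": topic},
        },
    }

# Step 1: list available guides; step 2: fetch one by topic or slug.
catalog_req = build_catalog_read()
guide_req = build_read_guide_call("viral")
print(json.dumps([catalog_req, guide_req], indent=2))
```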
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses 'Returns the full text content' which is valuable given no output schema exists. However, missing other behavioral traits like error handling (what if topic not found?), idempotency, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences. First sentence front-loads purpose and return value; second provides workflow prerequisites. No redundancy or extraneous text. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no annotations and no output schema, the description is reasonably complete. It compensates for missing output schema by stating 'Returns the full text content' and references the related catalog resource URI. Could improve by describing error cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('Topic to search for... or a blog slug'). Description mentions 'provide a topic to search' which aligns with schema but doesn't add semantic detail beyond schema definitions. Baseline 3 appropriate when schema is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Read') and resource ('PostCopilot blog post / guide about Threads'). Explicitly distinguishes from siblings (download_video, export_profile, generate_post) by focusing on content retrieval rather than media downloading, data exporting, or content generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'Use postcopilot://blog/catalog resource first to see available guides, or provide a topic to search.' This establishes the prerequisite resource and the alternative input method. Lacks explicit 'when not to use' guidance comparing to siblings, but the workflow is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
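One way to produce that claim file is to serialize it programmatically, which also guards against JSON typos. A minimal sketch, using the schema URL and placeholder email from the snippet above:

```python
import json

GLAMA_SCHEMA = "https://glama.ai/mcp/schemas/connector.json"

def make_claim_file(email: str) -> str:
    """Serialize the /.well-known/glama.json claim document."""
    doc = {
        "$schema": GLAMA_SCHEMA,
        "maintainers": [{"email": email}],
    }
    return json.dumps(doc, indent=2)

# Placeholder address from the listing; use your Glama account email.
content = make_claim_file("your-email@example.com")
parsed = json.loads(content)  # round-trip check: the output is valid JSON
print(content)
```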
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.