recurpost-mcp MCP server
by dinwal

Server Quality Checklist

Profile completion: 42%

A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v1.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 9 tools.
  • No known security issues or vulnerabilities reported.

  • Add related servers to improve discoverability.

Tool Scores

  • generate_image_with_ai

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so the description carries the full disclosure burden. Missing critical behavioral details: output format (URL, base64, file ID?), synchronous vs. async behavior, API limits, content policy restrictions, and whether results persist in the RecurPost ecosystem. (A hypothetical annotated sketch addressing these gaps appears after the scores below.)

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, front-loaded with action verb, no redundant words. Efficiently communicates core purpose, though minimalism contributes to lack of necessary behavioral context.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For an image generation tool with no output schema and no annotations, the description should disclose return behavior, side effects, and storage implications. Currently insufficient for an agent to predict outcomes, handle errors, or integrate the result into subsequent workflow steps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with prompt_text adequately described. Description mentions 'text description' which aligns with the parameter, but adds no additional syntax constraints, prompt engineering guidance, or formatting examples beyond what the schema provides. Baseline 3 appropriate.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clearly states the verb (Generate), resource (image), and method (using RecurPost AI). Implicitly distinguishes from sibling generate_content_with_ai by specifying 'image' versus generic 'content', though lacks explicit scope constraints or differentiating guidance.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus generate_content_with_ai, prerequisites for image generation, or how to handle results. Fails to mention if generated images are automatically stored in the library or require separate ingestion.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • user_login

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full disclosure burden. It fails to mention what the tool returns (likely auth tokens or session identifiers), failure modes for invalid credentials, rate limiting, or whether this establishes state for subsequent calls. For an authentication-critical tool, this is a significant gap.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is front-loaded with the action verb and contains no redundant or wasteful language. However, given the tool's importance and lack of supporting annotations, the description could be appropriately longer without losing conciseness.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Without an output schema or annotations for an authentication tool, the description should explain the return values (tokens, expiration) and side effects (session establishment). It omits these critical details, leaving agents uncertain about how to handle the output or subsequent authentication state.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With zero parameters in the input schema (schema coverage 100%), the baseline score applies. The description does not need to supply parameter details, and the schema is trivially complete.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a clear verb ('Verify') and resource ('RecurPost API credentials'), and combined with the tool name 'user_login', distinguishes this authentication tool from content management siblings like 'add_content_in_library' or 'social_account_list'. However, it lacks specificity about what verification entails (e.g., establishing a session vs. just checking validity).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    There is no guidance on when to use this tool versus sibling authentication tools like 'connect_social_account_urls', nor does it state that this should be called first before other operations or what prerequisites are needed. The description implies functionality but not usage context.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • post_content

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Zero annotations provided, so description carries full burden of behavioral disclosure. It fails to mention side effects (e.g., content becomes public), reversibility, error conditions, timezone handling for scheduling, or rate limits. 'Post' implies publication but lacks critical safety/behavioral context for a publishing tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence is efficient and front-loaded, with no wasted words. However, given the tool's high complexity (39 parameters), it may be excessively brief rather than appropriately concise, though technically well-structured.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate for a 39-parameter multi-platform tool with no output schema. The description omits the extensive platform-specific capabilities (Facebook/Instagram/TikTok/YouTube variations), media handling complexities, and library vs. publishing distinction that are critical for successful invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. The description hints at the schedule_date_time parameter's optionality via 'immediately or schedule', but adds no syntax details, validation rules, or guidance on platform-specific overrides (fb_message, in_post_type, etc.) beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States clear verbs (Post, schedule) and resource (content on social account) and distinguishes temporal modes. However, it does not explicitly differentiate from sibling 'add_content_in_library' or mention the multi-platform nature evident in the 39 parameters.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implied usage guidance by contrasting immediate posting versus scheduling. However, it lacks explicit when-not-to-use conditions, prerequisites (e.g., obtaining account ID first), or guidance on choosing between this and the library-related sibling tool.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • connect_social_account_urls

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It fails to disclose whether this creates server-side state (pending connections), URL expiration behavior, or specific platform requirements. Only states it returns URLs.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, front-loaded with verb, zero waste. Appropriate length for the tool's simplicity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    No output schema exists, so description should ideally elaborate on the URL format, quantity, or usage instructions. It mentions URLs but lacks critical details expected when return structure is undefined.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters present. Per scoring rules, this establishes a baseline of 4 with no deductions needed. The description does not need to compensate for missing parameter documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb 'Get' and resource 'URLs to connect new social media accounts', but lacks specificity about what type of URLs (OAuth, authentication) and doesn't explicitly differentiate from the sibling 'social_account_list'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use this tool versus 'social_account_list' or prerequisites for account connection. The 'new' qualifier provides minimal implicit guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • library_list

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Beyond implying read-only via 'List', it omits pagination behavior, caching policies, rate limits, or return structure details needed for safe invocation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence with zero waste. Verb, resource, and key output field (IDs) are front-loaded and dense with information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a zero-parameter tool, but lacks output specification (no output schema exists) or context on what constitutes a 'content library'. Should indicate what data structures are returned beyond just IDs.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has zero parameters, triggering baseline score of 4 per rubric. Description appropriately makes no parameter claims that would require documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb 'List' and resource 'content libraries with their IDs'. Distinguishes from sibling 'social_account_list' via resource naming, though could explicitly clarify relationship to 'add_content_in_library'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use versus siblings, prerequisites (e.g., authentication requirements), or that results are needed for 'add_content_in_library'. Lacks 'when-not' or alternative suggestions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • add_content_in_library

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full disclosure burden for this 47-parameter write operation. It fails to mention idempotency, validation behavior, side effects, or what occurs after adding content (e.g., immediate scheduling vs. draft state).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Perfectly concise at 9 words. Every term serves a purpose: action ('Add'), object ('post'), destination ('content library'), and use-case qualifier ('recurring/evergreen posting'). No redundancy or fluff.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the high schema richness (100% coverage, 11 enums) and 47 parameters, the description is minimally adequate. However, for a complex multi-platform content tool with no output schema and no annotations, it lacks explanation of the library queuing system and platform override logic.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with comprehensive platform-specific field descriptions, so the description doesn't need to repeat parameter details. However, it adds no semantic context about how the platform-specific overrides (fb_message, tw_message, etc.) interact with the default message, keeping it at the baseline.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb ('Add'), resource ('post'), and destination ('content library'). The phrase 'recurring/evergreen posting' effectively distinguishes this from the sibling tool 'post_content' (which implies immediate posting) though it doesn't explicitly name alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage context ('recurring/evergreen') suggesting when to use this versus one-time posting, but lacks explicit guidance on prerequisites (e.g., needing a library ID from library_list) or when NOT to use it.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • social_account_list

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It adds valuable behavioral context by specifying 'connected' (implying prior authorization is required) and 'with their IDs' (indicating the return payload includes identifiers). However, it lacks details on pagination, error conditions, or output structure that annotations would typically cover.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with no redundant words. It front-loads the action verb and immediately follows with the resource and return value details.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (zero parameters, no nested objects) and absence of an output schema, the description adequately covers the basic operation by mentioning the returned IDs. However, it could improve by describing the return structure (array vs object) or what constitutes a 'connected' account state.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With zero parameters, the baseline score is appropriately set to 4. The description does not need to compensate for missing schema documentation since the empty input schema is self-explanatory.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a clear verb ('List') and resource ('connected social media accounts'), and specifies the key returned field ('IDs'). It implicitly distinguishes from sibling 'connect_social_account_urls' (which adds connections) and 'library_list' (which lists different content), though it could explicitly contrast these.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives, prerequisites (e.g., requiring prior authentication), or when not to use it (e.g., if looking for unconnected accounts).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Posting history tool

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full disclosure burden. It successfully communicates the 100-post limit constraint (critical for pagination planning). However, lacks safety profile (read-only implied but not stated), error conditions, and return value structure given no output schema exists.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely efficient single sentence of 11 words. Front-loaded with action 'Get posting history', followed by target scope, ending with critical constraint. No repetition, tautology, or filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a 4-parameter read tool: mentions key constraint (100 posts). However, with no output schema provided, description should ideally characterize return data structure (post objects, metrics, etc.) which is absent. Missing behavioral details like date range defaults when optional params omitted.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100% (all 4 parameters well-documented). Description does not explicitly document parameters but mentions the 100-post constraint which contextualizes the id parameter's purpose. Baseline 3 appropriate since schema does heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb 'Get' + resource 'posting history' + scope 'social account'. The parenthetical '(max 100 posts per request)' clarifies volume constraints. Distinguishes from siblings like post_content (creates), social_account_list (lists accounts), and library_list (lists library items) by targeting historical post data.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implied usage is clear (retrieve historical posts), but lacks explicit 'when to use' guidance, prerequisites, or named alternatives. The max 100 posts constraint provides implicit usage guidance regarding pagination needs, preventing a lower score.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • generate_content_with_ai

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden. It successfully explains the conversational state management (multi-turn via ai_id), but omits safety-critical details such as rate limits, whether generation is billable, or if content is automatically persisted versus just returned.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficiently constructed sentences. The first front-loads the core purpose, while the second adds the essential multi-turn context. Zero duplication of schema details or redundant filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the schema's rich parameter documentation (100% coverage) and the tool's moderate complexity, the description covers the essential behavioral context (AI generation, conversation continuity). It effectively compensates for the missing output schema by implying the conversational cycle through parameter descriptions, though it could explicitly mention that the response contains new ai_id/chat_progress values for subsequent calls.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the schema has 100% description coverage (baseline 3), the description adds valuable semantic context by framing ai_id and chat_progress within the 'multi-turn conversation' workflow, helping the agent understand these parameters form a conversation state protocol.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action (Generate), resource (social media content text), and mechanism (using RecurPost AI). It distinguishes from the sibling 'generate_image_with_ai' by explicitly specifying 'text' content.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides implicit usage guidance by explaining that ai_id enables 'multi-turn conversations,' indicating when to populate that parameter. However, it lacks explicit guidance on when to use this tool versus siblings like 'generate_image_with_ai' or prerequisites like authentication.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
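
Several dimensions above dock points because no annotations are provided and the descriptions stay silent on behavior. As an illustration only, the sketch below shows how the image tool's listing could carry MCP's standard behavioral hints (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) plus a fuller description. The description text and every annotation value are assumptions about RecurPost's behavior, not documented facts; the sketch is a Python dict mirroring the JSON an MCP server returns from tools/list.

# Hypothetical, annotated listing for generate_image_with_ai.
# All behavioral claims below are illustrative assumptions.
generate_image_with_ai = {
    "name": "generate_image_with_ai",
    "description": (
        "Generate an image from a text description using RecurPost AI. "
        "Assumed behavior for illustration: returns a hosted image URL, "
        "runs synchronously, and saves the image to your media library. "
        "Use generate_content_with_ai when you need text, not an image."
    ),
    "annotations": {
        "readOnlyHint": False,     # creates a new media asset
        "destructiveHint": False,  # never overwrites existing content
        "idempotentHint": False,   # each call produces a new image
        "openWorldHint": True,     # calls an external AI service
    },
}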

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[Badge image: recurpost-mcp MCP server]

Copy to your README.md:

Score Badge

[Badge image: recurpost-mcp MCP server]

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a tool definition quality score (TDQS) from 1–5, computed as a weighted average of six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the whole server down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
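
As a minimal sketch of that formula in Python (function and variable names are illustrative, not Glama's implementation):

# Per-tool dimension weights from the paragraph above.
TDQ_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores):
    """Weighted 1-5 definition quality score for one tool."""
    return sum(TDQ_WEIGHTS[dim] * scores[dim] for dim in TDQ_WEIGHTS)

def overall_score(tool_scores, coherence):
    tdqs = [tool_tdqs(s) for s in tool_scores]
    # 60% mean + 40% minimum: one weak tool drags the whole server down.
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    # Overall: 70% tool definition quality, 30% server coherence.
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    for cutoff, grade in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= cutoff:
            return grade
    return "F"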

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/dinwal/recurpost-mcp'
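
The same call from Python, as a minimal sketch; the response shape is not documented here, so the example simply pretty-prints whatever JSON comes back:

import json
import urllib.request

URL = "https://glama.ai/api/mcp/v1/servers/dinwal/recurpost-mcp"

with urllib.request.urlopen(URL) as resp:
    server = json.load(resp)

# Inspect the raw payload before relying on any particular key.
print(json.dumps(server, indent=2))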

If you have feedback or need assistance with the MCP directory API, please join our Discord server.