VibeMarketing
Server Details
VibeMarketing (https://vibemarketing.ninja/mcp) is a directory service that catalogs and provides information about various MCP (Model Context Protocol) servers. It serves as a centralized resource where users can discover different MCP servers and their capabilities. Examples of servers listed in the directory include Sequential Thinking MCP (for dynamic problem-solving through structured thought sequences) and Memory MCP (a knowledge graph-based persistent memory system).
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL: https://vibemarketing.ninja/mcp
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.2/5 across 7 of 7 tools scored. Lowest: 2.6/5.
Each tool has a clearly distinct purpose with no overlap: bulk_schedule_posts handles batch operations, schedule_post handles single posts, delete_post and update_post manage scheduled posts, get_accounts and get_all_posts retrieve different data types, and get_subscription_status provides account information. An agent can easily distinguish between these tools.
All tool names follow a consistent verb_noun pattern with snake_case (e.g., bulk_schedule_posts, delete_post, get_accounts). The verbs are appropriate and predictable (get, schedule, update, delete), making the set highly readable and uniform.
With 7 tools, this server is well-scoped for social media marketing management. Each tool serves a clear purpose (scheduling, retrieval, deletion, updates, account management, and subscription checks), and none feel redundant or unnecessary for the domain.
The tool set covers core social media marketing workflows: creating, updating, deleting, and retrieving posts, along with account and subscription management. A minor gap is the inability to manage published posts (only scheduled ones), but agents can work around this limitation, and overall coverage is strong for the domain.
Available Tools
7 tools

bulk_schedule_posts (Grade: A)
Schedule multiple social media posts at once (up to 50 posts per batch). More efficient than scheduling posts individually.
| Name | Required | Description | Default |
|---|---|---|---|
| posts | Yes | Array of posts to schedule | |
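The 50-post batch cap is the one constraint the description does disclose. A minimal client-side sketch of respecting it — the post fields (content, platform, accountId, scheduledFor) are an assumption mirrored from schedule_post's parameters, not confirmed for the batch schema:

```python
# Sketch: splitting a large queue into batches bulk_schedule_posts will accept.
# MAX_BATCH_SIZE comes from the tool description ("up to 50 posts per batch");
# the post field names are assumed to mirror schedule_post.
MAX_BATCH_SIZE = 50

def chunk_posts(posts, batch_size=MAX_BATCH_SIZE):
    """Split a list of post dicts into batches of at most batch_size."""
    return [posts[i:i + batch_size] for i in range(0, len(posts), batch_size)]

posts = [
    {"content": f"Post {n}", "platform": "x",
     "accountId": "acct-123", "scheduledFor": "2025-01-01T09:00:00Z"}
    for n in range(120)
]

batches = chunk_posts(posts)
# 120 posts -> three batches: 50, 50, 20
```

Because the description leaves atomicity and partial-failure behavior undisclosed, an agent should treat each batch call as independently fallible rather than assume all-or-nothing semantics.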
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions the batch size limit (50) and efficiency claim, but lacks critical behavioral disclosure for a batch mutation operation: failure modes (partial vs. total failure), atomicity guarantees, side effects, or persistence semantics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are tightly constructed with zero waste. The first sentence front-loads the core functionality and primary constraint; the second provides comparative context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage, the description adequately covers the basic operation. However, with no annotations, no output schema, and a complex batch mutation behavior, it lacks completeness around operational semantics (error handling, validation behavior) that would be necessary for robust agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'up to 50 posts' which aligns with the schema's maxItems constraint, but adds no additional semantic meaning, syntax guidance, or examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states specific verb 'Schedule', resource 'social media posts', and scope 'multiple... at once (up to 50 posts per batch)'. The phrase 'More efficient than scheduling posts individually' effectively distinguishes this from the sibling tool 'schedule_post'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The efficiency comparison implies this should be used for batch operations versus individual scheduling, but this guidance is implicit rather than explicit. It does not state explicit prerequisites (e.g., valid accountIds from get_accounts) or when to avoid this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_post (Grade: A)
Delete a scheduled post (cannot delete published posts)
| Name | Required | Description | Default |
|---|---|---|---|
| postId | Yes | ID of the post to delete | |
| platform | Yes | Platform the post is on | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the critical scope limitation (scheduled vs published) but omits other important behavioral traits like error handling, irreversibility warnings, or authentication requirements for destructive operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. The core action is front-loaded ('Delete a scheduled post') and the constraint follows immediately, earning its place as critical behavioral context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 parameters, no nesting, 100% schema coverage) and absence of output schema, the description provides adequate coverage for a simple delete operation. However, for a destructive mutation tool with no annotations, it should ideally include behavioral context like irreversibility or error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing complete documentation for postId and platform parameters. The description adds no additional parameter semantics (syntax details, format examples, or enum clarification), warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (delete) and specific resource (scheduled post), and distinguishes scope from general deletion via the parenthetical constraint '(cannot delete published posts)'. However, it does not explicitly differentiate from sibling update_post or clarify when to prefer delete over update.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides an explicit when-not-to-use constraint ('cannot delete published posts'), but lacks guidance on alternatives or what action to take if the post is already published. No explicit comparison to sibling tools is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_accounts (Grade: B)
Get all connected social media accounts across all platforms
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of accounts to return per platform | |
| platform | No | Filter by platform or get all platforms | all |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. 'Get' implies read-only and 'connected' implies filtered by authorization status, but lacks details on return format, pagination behavior, error states if no accounts exist, or whether credentials are returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence (9 words) with no redundant content. Front-loaded with verb, every word earns its place—'connected' signals auth requirement, 'across all platforms' signals scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for the simple 2-parameter input structure with complete schema coverage, but lacks description of return values since no output schema exists. Does not explain what constitutes 'connected' or what account data fields are returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both limit and platform have descriptions), establishing baseline 3. Description adds 'across all platforms' reinforcing the platform parameter's default behavior, but adds no syntax details, examples, or validation rules beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (Get) and resource (connected social media accounts) with scope (across all platforms). Distinguishes from sibling post-management tools (schedule_post, delete_post, etc.) by focusing on account connections rather than content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use versus siblings like get_all_posts or prerequisites for retrieving accounts. While implied this should be called before posting, the description provides no when-to-use or when-not-to-use clauses.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_all_posts (Grade: C)
Get all posts across all platforms
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of posts to return | |
| status | No | Filter by post status (use "any" to get all statuses) | |
| platform | No | Filter by platform or get all platforms | all |
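Since the description itself ignores the filtering parameters, the schema is the only guide to combining them. A hedged sketch of building an arguments object — the parameter names come from the table above, and the treatment of an omitted limit as "use the server default" is an assumption:

```python
# Sketch: composing get_all_posts arguments from the schema above.
# "any" (status) and "all" (platform) are the documented catch-all values;
# omitting limit to fall back to a server-side default is an assumption.
def build_get_all_posts_args(status="any", platform="all", limit=None):
    args = {"status": status, "platform": platform}
    if limit is not None:
        args["limit"] = limit
    return args

args = build_get_all_posts_args(status="scheduled", platform="x", limit=10)
# -> {"status": "scheduled", "platform": "x", "limit": 10}
```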
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full disclosure burden. Fails to mention: safe read-only nature (implied by 'Get' but not explicit), default sorting order, pagination behavior (despite limit parameter), response payload structure, or rate limits. Does not clarify what 'all' encompasses when no filters applied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at five words. Information density per sentence is high, with a front-loaded verb-target structure. However, brevity comes at the cost of usefulness: the description is so minimal it functions as a label rather than guidance. No waste, but also no substantive content beyond an expansion of the tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Severely incomplete given tool complexity: lacks output schema, lacks annotations, has multiple filtering parameters (status, platform) that description ignores, and implements pagination (limit) without describing result set characteristics. For a cross-platform content retrieval tool, omits critical context about data scope and retrieval behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description adds no semantic value beyond schema—does not explain relationships between parameters (e.g., combining status='draft' with platform='all'), enum value implications, or why limit defaults to 20. Relies entirely on structured schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States basic verb and resource ('Get all posts') but remains vague about what constitutes 'posts' in this context. The phrase 'across all platforms' is imprecise since the platform parameter allows filtering to specific platforms (x/linkedin), suggesting the tool always returns cross-platform data when it actually supports scoped queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus sibling tools like get_accounts (which presumably retrieves account metadata rather than post content) or how it relates to update_post/delete_post. Lacks prerequisites, pagination guidance, or workflow positioning despite being a data retrieval tool with filtering capabilities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_subscription_status (Grade: B)
Get current subscription status and usage
No parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden yet discloses no behavioral traits beyond the tautological 'Get'. Fails to specify what 'usage' encompasses (API calls, storage, seats), return format, error cases for inactive subscriptions, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely brief (6 words) and front-loaded with no redundancy. Efficiently conveys core intent without waste, though terseness contributes to under-specification in other dimensions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for low complexity (zero-input read operation) but lacks output description critical given absence of output_schema and annotations. Does not clarify what data structure represents 'status and usage', leaving agent unprepared for response handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters with 100% schema coverage (vacuously true). Baseline 4 applies per rules for zero-parameter tools; no parameter description is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Get' and resource 'subscription status and usage'. Distinct from post-management siblings (schedule_post, delete_post, etc.) and get_accounts, though 'usage' remains vague regarding what metrics are returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use versus alternatives, prerequisites, or conditions. While distinct from post-related siblings by domain, agent receives no signal about when subscription checks are appropriate (e.g., before billing operations, when quota errors occur).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
schedule_post (Grade: C)
Schedule a new social media post
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Post content | |
| platform | Yes | Social media platform | |
| accountId | Yes | Account ID to post from | |
| mediaUrls | No | Optional media URLs | |
| scheduledFor | Yes | When to publish the post | |
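Over the Streamable HTTP transport, an agent invokes this tool via an MCP tools/call request. A sketch of what that envelope might look like for schedule_post — the JSON-RPC shape follows the MCP specification, while the argument values (and the accountId) are purely illustrative:

```python
import json

# Sketch: a hypothetical MCP tools/call request for schedule_post.
# The jsonrpc/method/params envelope is per the MCP spec; the argument
# values are illustrative and accountId would come from get_accounts.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "schedule_post",
        "arguments": {
            "content": "Launch day!",
            "platform": "x",
            "accountId": "acct-123",              # obtain via get_accounts first
            "scheduledFor": "2025-06-01T15:00:00Z",
        },
    },
}
body = json.dumps(request)  # serialized request body
```

As the assessment notes, nothing in the description confirms timezone handling or past-date validation, so an agent should not assume the server normalizes or rejects ambiguous scheduledFor values.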
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full behavioral disclosure burden. Fails to mention side effects (creates a scheduled job), timezone handling for scheduledFor, idempotency, or error conditions (e.g., past dates, invalid accountIds).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse at 5 words. While front-loaded with the verb, it underserves the tool's complexity (5 parameters, scheduling logic) and lacks annotations to compensate. Too brief to be informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Insufficient given tool complexity. No output schema, no annotations, and minimal description leaves gaps regarding return values, error handling, and scheduling confirmation behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, satisfying baseline. Description adds no parameter-specific context (e.g., doesn't mention platform constraints 'x'/'linkedin', media URL requirements, or UUID format for accountId).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Schedule' and resource 'social media post', and 'new' distinguishes from sibling update_post. However, lacks explicit differentiation from bulk_schedule_posts regarding single vs. bulk operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this single-post scheduler versus bulk_schedule_posts. No mention of prerequisites like obtaining valid accountId from get_accounts first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_post (Grade: A)
Update a scheduled post (cannot update published posts)
| Name | Required | Description | Default |
|---|---|---|---|
| postId | Yes | ID of the post to update | |
| content | No | New content for the post (optional) | |
| platform | Yes | Platform the post is on | |
| mediaUrls | No | New media URLs (optional) | |
| scheduledFor | No | New scheduled time in ISO 8601 format (optional) | |
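The scheduledFor parameter is the only one with an explicit format requirement (ISO 8601). Since the description leaves timezone handling unspecified, emitting an explicit UTC offset is the safest assumption; a minimal sketch:

```python
from datetime import datetime, timedelta, timezone

# Sketch: producing an ISO 8601 scheduledFor value. The schema requires
# ISO 8601 but says nothing about timezones, so we assume an explicit
# UTC offset is safest.
new_time = (datetime.now(timezone.utc) + timedelta(hours=2)).isoformat(timespec="seconds")
# e.g. "2025-06-01T17:00:00+00:00"
```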
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Zero annotations provided, so description carries full burden. Reveals critical state constraint (scheduled-only) and implies mutation behavior. However, lacks disclosure of partial update semantics (PATCH-style), idempotency, error behavior when targeting published posts, or what determines success vs failure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with resource and constraint front-loaded. Parenthetical constraint is compact and high-value. Zero redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With comprehensive schema coverage (100%) and simple flat structure, description appropriately focuses on domain constraint (scheduled-only restriction) rather than parameter docs. Could improve by noting partial update behavior or response nature, but adequately complete for invocation given schema richness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in structured form. Description adds no syntax or format details beyond schema, earning baseline score. Required-vs-optional logic is clear in schema, not repeated in description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Update') + resource ('scheduled post'). The parenthetical constraint explicitly distinguishes this from sibling operations like delete_post and implicitly from schedule_post by specifying the target state (scheduled vs published).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit constraint on when NOT to use ('cannot update published posts'). Clear implication that tool only works on scheduled posts, though doesn't explicitly name the alternative workflow for published posts (likely requires recreate or specific publish-focused tool).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
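Before publishing, a quick local sanity check of the manifest can catch malformed JSON or a missing maintainers entry. This sketch only checks the fields shown above; Glama's actual server-side validation is assumed to be stricter:

```python
import json

# Sketch: minimal client-side validation of a /.well-known/glama.json
# manifest. Field names follow the structure shown above; Glama's real
# validation rules are not documented here and may differ.
def validate_manifest(raw: str) -> list:
    errors = []
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        errors.append("maintainers must be a non-empty array")
    else:
        for i, m in enumerate(maintainers):
            if not isinstance(m, dict) or "@" not in str(m.get("email", "")):
                errors.append(f"maintainers[{i}] needs a valid email")
    return errors

ok = ('{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
      ' "maintainers": [{"email": "me@example.com"}]}')
# validate_manifest(ok) -> []
```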
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.