EzBiz Social Media Analytics
Server Details
AI-powered social media intelligence: profile analysis, engagement scoring, and trend detection.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
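Because the transport is Streamable HTTP, a client opens a session by POSTing an MCP initialize request to the server URL (not shown in this listing). A minimal sketch, with illustrative protocol version and client name:

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```

Subsequent requests, including the per-tool `tools/call` sketches below, reuse the session established by this handshake.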
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3.6/5, with 6 of 6 tools scored.
Each tool targets a distinct aspect of social media analytics: profile analysis, competitor comparison, content planning, trend detection, hashtag research, and engagement scoring. No overlap in purpose.
All tool names follow a consistent verb_noun pattern with snake_case (e.g., analyze_profile, detect_trends). No mixing of conventions or styles.
Six tools cover the core functions of social media analytics without being too few or too many. Each tool serves a clear, non-redundant purpose.
The tool set covers major analysis areas but lacks a dedicated reporting or export tool. Minor gap; core workflows are well-supported.
Available Tools
6 tools

analyze_profile (Grade A)
Analyze a social media profile or brand presence — posting patterns, content themes, audience indicators, and growth recommendations.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | No | Social media platform to analyze | |
| username | Yes | Social media username or handle (e.g., '@hubspot') | |
| business_name | No | Business name for broader cross-platform search | |
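The listing does not include request examples, so here is a minimal sketch of a `tools/call` request for this tool over MCP's JSON-RPC framing. Only `username` is required; the platform value is an assumption, since accepted platforms are not enumerated.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_profile",
    "arguments": {
      "username": "@hubspot",
      "platform": "instagram"
    }
  }
}
```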
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of behavioral disclosure. It mentions analytical outputs (posting patterns, themes, audience indicators, recommendations) but omits details on side effects, auth requirements, rate limits, or whether the tool modifies data. The description is partially transparent but could be improved.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that effectively communicates the tool's primary function and outputs. It is concise but could be better structured with bullet points or separate sentences for clarity. Every word serves a purpose, though the listing of aspects feels slightly packed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description provides a reasonable overview of outputs (posting patterns, content themes, audience indicators, growth recommendations). However, it lacks details on the structure or format of these outputs, leaving some ambiguity about what the agent can expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all three parameters. The description does not elaborate on parameters, but the schema already provides sufficient meaning. The baseline score of 3 is appropriate as the description adds no additional semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: analyzing a social media profile or brand presence, specifying distinct aspects like posting patterns, content themes, audience indicators, and growth recommendations. It distinguishes itself from sibling tools such as competitor_benchmarks (comparison) and content_calendar (planning).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for general profile analysis but does not explicitly state when to use this tool versus alternatives, nor does it provide exclusionary criteria or context prerequisites. This leaves the agent to infer based on the tool name and sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
competitor_benchmarks (Grade A)
🔒 [Pro] Benchmark your social media against competitors — side-by-side comparison of engagement, content strategy, audience growth, and competitive gaps.
| Name | Required | Description | Default |
|---|---|---|---|
| brand | Yes | Your brand name | |
| platform | No | Platform to focus on (analyzes all if omitted) | |
| competitors | Yes | Comma-separated competitor names (e.g., 'Nike,Adidas,Puma') | |
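A minimal call sketch with illustrative values; note that `competitors` is a single comma-separated string, not an array, per the schema. As a '[Pro]' tool, the call may be rejected without the appropriate plan, though the listing does not document the failure mode.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "competitor_benchmarks",
    "arguments": {
      "brand": "Nike",
      "competitors": "Adidas,Puma"
    }
  }
}
```

Omitting the optional `platform` analyzes all platforms, per the schema description.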
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds behavioral context: the output is a side-by-side comparison covering specific metrics, and the '[Pro]' prefix indicates it's a paid feature (important for agent awareness). It does not detail any potential side effects or data freshness, but for a read-only analytics tool, this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys both the lock icon (pro feature) and the tool's function with no wasted words. It is appropriately front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% schema coverage and no output schema, the description explains the tool's output (comparison of specific metrics) but does not detail return format or data interpretation. For a benchmark tool, this is reasonably complete; a 5 would require explicit output details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add significant new meaning beyond the schema; it mentions 'side-by-side comparison' but does not elaborate on parameter formatting or relationships. Thus, score remains at 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Benchmark your social media against competitors — side-by-side comparison of engagement, content strategy, audience growth, and competitive gaps.' It uses a specific verb ('Benchmark') and resource (social media vs competitors), and the sibling tools (e.g., analyze_profile, score_engagement) have distinct purposes, so differentiation is clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for competitive analysis but does not explicitly state when to use this tool versus alternatives. However, the sibling tool names suggest each has a unique focus, and the description provides clear context for the tool's purpose, earning a score of 4.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
content_calendar (Grade B)
🔒 [Pro] Generate a detailed social media content calendar — specific posts with captions, hashtags, optimal timing, and content templates for 1-4 weeks.
| Name | Required | Description | Default |
|---|---|---|---|
| duration | No | Calendar duration | 2_weeks |
| platforms | No | Comma-separated platforms | 'instagram,twitter,linkedin' |
| business_or_niche | Yes | Business name or niche (e.g., 'fitness brand', 'Acme Plumbing') | |
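A minimal call sketch, assuming `duration` values follow the `N_weeks` token format implied by the default; the listing does not enumerate accepted durations, and the platform values here are illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "content_calendar",
    "arguments": {
      "business_or_niche": "Acme Plumbing",
      "duration": "2_weeks",
      "platforms": "instagram,linkedin"
    }
  }
}
```

As with `competitors` above, `platforms` is a single comma-separated string rather than an array.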
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only states what the tool does (generates a calendar) but omits behavioral traits like access requirements, rate limits, whether it calls external APIs, or any side effects. The '[Pro]' tag is ambiguous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is direct and informative. It includes key details (Pro, specific outputs, duration range) without unnecessary words, earning its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains the high-level output but does not specify the return format or structure. With three parameters and no output schema, it leaves ambiguity about how the calendar is returned (e.g., JSON, markdown), which is a moderate gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already has 100% description coverage, so the baseline is 3. The description adds context about what the generated calendar includes (captions, hashtags, etc.) but does not add new meaning to the parameters themselves beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates a detailed social media content calendar with specific outputs like posts, captions, hashtags, timing, and templates for a duration of 1-4 weeks. This sets it apart from siblings like 'detect_trends' or 'analyze_profile', which focus on analysis rather than generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The '[Pro]' prefix hints at a restriction, but there is no explanation of prerequisites, scenarios, or when not to use it. The description does not help the agent decide between this and sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
detect_trends (Grade B)
Detect trending topics and conversations in a niche — viral content patterns, emerging topics, sentiment shifts, and opportunity alerts.
| Name | Required | Description | Default |
|---|---|---|---|
| niche | Yes | Industry or niche to monitor (e.g., 'AI marketing', 'fitness') | |
| timeframe | No | Timeframe for trend analysis | this_week |
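For illustration, a minimal call sketch; the `timeframe` token format is inferred from the stated default and may admit other values the listing does not enumerate.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "detect_trends",
    "arguments": {
      "niche": "AI marketing",
      "timeframe": "this_week"
    }
  }
}
```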
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must bear the full burden. It reveals output types (trends, sentiment shifts) but omits how data is sourced or aggregated and any limitations. There is no disclosure of auth, rate limits, or behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence covering multiple output aspects without redundancy. Efficient, though it could benefit from structured bullet points for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple two-parameter tool with no output schema. However, it lacks any mention of return format or typical response structure, leaving some ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds minimal extra meaning beyond the schema. It mentions 'in a niche' but does not elaborate on parameter formats or usage nuances. The baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool detects trending topics and conversations in a niche, listing specific outputs like viral content patterns and sentiment shifts. It is a specific verb-resource pair that distinguishes itself from siblings like analyze_profile and competitor_benchmarks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It does not mention prerequisites, exclusions, or typical use cases relative to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
research_hashtags (Grade A)
Research effective hashtags for a topic — popularity estimates, related hashtags, niche vs broad classification, and recommended hashtag sets.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of hashtags to return (max: 50) | 20 |
| topic | Yes | Topic or keyword for hashtag research (e.g., 'real estate', 'fitness') | |
| platform | No | Target platform for hashtag optimization | |
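A minimal call sketch with illustrative values (the platform value is an assumption, since accepted platforms are not enumerated); `count` must stay at or below the schema's stated maximum of 50.

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "research_hashtags",
    "arguments": {
      "topic": "real estate",
      "count": 30,
      "platform": "instagram"
    }
  }
}
```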
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It lists high-level outputs but does not disclose any behavioral traits such as side effects, rate limits, or external dependencies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that efficiently conveys the tool's purpose and outputs without any superfluous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description needs to explain return values. It does mention output types, but the terms are vague (e.g., 'popularity estimates' lacks detail on format or scale).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers 100% of parameter descriptions, so baseline is 3. The description adds no additional meaning beyond the schema, merely reiterating the tool's purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'research' and the resource 'hashtags', and lists specific outputs (popularity estimates, related hashtags, classification, recommended sets). It distinguishes from siblings like 'analyze_profile' and 'detect_trends'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for hashtag research on a topic, but does not explicitly state when to use this tool versus alternatives or provide any context on when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
score_engagement (Grade A)
Score social media engagement for a brand or topic — engagement rate estimates, content type effectiveness, posting time analysis, and benchmarks.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | No | Platform to focus on (analyzes all if omitted) | |
| brand_or_topic | Yes | Brand name or topic to analyze (e.g., 'Nike', 'AI marketing') | |
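A minimal call sketch; omitting the optional `platform` analyzes all platforms, per the schema description.

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "score_engagement",
    "arguments": {
      "brand_or_topic": "Nike"
    }
  }
}
```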
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden of behavioral disclosure. It does not mention whether the tool is read-only, destructive, or requires authentication. The name 'score' suggests analysis, but no explicit safety or side-effect information is given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, well-structured sentence that front-loads the core action and expected outputs. Every part adds value, with no redundant details. Ideal conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 2 parameters, no output schema, and no annotations, the description adequately explains what the tool does but omits return format or data structure. For a scoring tool, the description is minimally viable but could benefit from explaining how results are presented (e.g., scores, benchmarks).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100%, with each parameter having a clear description (e.g., platform enum, brand_or_topic string). The description adds context about outputs but does not further clarify parameter usage or constraints. Baseline 3 is appropriate since schema already does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: scoring social media engagement for a brand or topic. It lists specific outputs (engagement rate estimates, content type effectiveness, posting time analysis, benchmarks) and the resource (brand/topic). This distinguishes it from sibling tools like analyze_profile (focuses on profile) or competitor_benchmarks (focuses on competitors).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (for engagement analysis) but does not explicitly state when not to use or provide alternatives among siblings. No guidance on prerequisites or context beyond the tool's function.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.