
EzBiz Social Media Analytics

Server Details

AI-powered social media intelligence: profile analysis, engagement scoring, and trend detection.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 6 of 6 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool targets a distinct aspect of social media analytics: profile analysis, competitor comparison, content planning, trend detection, hashtag research, and engagement scoring. No overlap in purpose.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case (e.g., analyze_profile, detect_trends). No mixing of conventions or styles.
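A convention like this can be verified mechanically. The sketch below is a hypothetical lint check, not part of the server: the tool list is taken from this page, and the regex is an assumed encoding of the snake_case multi-word rule.

```python
import re

# Assumed rule: lowercase words separated by underscores, at least two
# words (e.g., verb_noun). This regex is an illustration, not the
# server's own validation.
SNAKE_CASE = re.compile(r"^[a-z]+(?:_[a-z]+)+$")

# The six tool names scored on this page.
TOOL_NAMES = [
    "analyze_profile", "competitor_benchmarks", "content_calendar",
    "research_hashtags", "score_engagement", "detect_trends",
]

def naming_violations(names):
    """Return the names that break the snake_case multi-word convention."""
    return [n for n in names if not SNAKE_CASE.match(n)]

print(naming_violations(TOOL_NAMES))  # [] — every name conforms
```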

Tool Count: 5/5

Six tools cover the core functions of social media analytics without being too few or too many. Each tool serves a clear, non-redundant purpose.

Completeness: 4/5

The tool set covers major analysis areas but lacks a dedicated reporting or export tool. Minor gap; core workflows are well-supported.

Available Tools

6 tools
analyze_profile: A

Analyze a social media profile or brand presence — posting patterns, content themes, audience indicators, and growth recommendations.

Parameters (JSON Schema)

Name | Required | Description | Default
platform | No | Social media platform to analyze |
username | Yes | Social media username or handle (e.g., '@hubspot') |
business_name | No | Business name for broader cross-platform search |
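The table above can be read as a JSON-Schema-style input schema. The sketch below is inferred from the table, not the server's published schema, and shows the kind of required-field check a client could run before issuing a tools/call.

```python
# Inferred from the parameter table — an assumption about what the
# server's input schema likely declares, not the actual schema.
ANALYZE_PROFILE_SCHEMA = {
    "type": "object",
    "properties": {
        "platform": {"type": "string", "description": "Social media platform to analyze"},
        "username": {"type": "string", "description": "Social media username or handle (e.g., '@hubspot')"},
        "business_name": {"type": "string", "description": "Business name for broader cross-platform search"},
    },
    "required": ["username"],
}

def missing_required(schema, arguments):
    """List required keys absent from a tools/call arguments payload."""
    return [k for k in schema["required"] if k not in arguments]

print(missing_required(ANALYZE_PROFILE_SCHEMA, {"platform": "instagram"}))  # ['username']
```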
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It mentions analytical outputs (posting patterns, themes, audience indicators, recommendations) but omits details on side effects, auth requirements, rate limits, or whether the tool modifies data. The description is partially transparent but could be improved.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that effectively communicates the tool's primary function and outputs. It is concise but could be better structured with bullet points or separate sentences for clarity. Every word serves a purpose, though the listing of aspects feels slightly packed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description provides a reasonable overview of outputs (posting patterns, content themes, audience indicators, growth recommendations). However, it lacks details on the structure or format of these outputs, leaving some ambiguity about what the agent can expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all three parameters. The description does not elaborate on parameters, but the schema already provides sufficient meaning. The baseline score of 3 is appropriate as the description adds no additional semantic value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: analyzing a social media profile or brand presence, specifying distinct aspects like posting patterns, content themes, audience indicators, and growth recommendations. It distinguishes itself from sibling tools such as competitor_benchmarks (comparison) and content_calendar (planning).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for general profile analysis but does not explicitly state when to use this tool versus alternatives, nor does it provide exclusionary criteria or context prerequisites. This leaves the agent to infer based on the tool name and sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

competitor_benchmarks: A

🔒 [Pro] Benchmark your social media against competitors — side-by-side comparison of engagement, content strategy, audience growth, and competitive gaps.

Parameters (JSON Schema)

Name | Required | Description | Default
brand | Yes | Your brand name |
platform | No | Platform to focus on (analyzes all if omitted) |
competitors | Yes | Comma-separated competitor names (e.g., 'Nike,Adidas,Puma') |
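Since competitors is a comma-separated string rather than an array, a client has to split it itself. A minimal sketch, assuming the server tolerates whitespace and trailing commas (the helper name is mine, not the server's):

```python
def parse_competitors(raw: str) -> list[str]:
    """Split the comma-separated competitors string, trimming whitespace
    and dropping empty entries left by trailing commas."""
    return [name.strip() for name in raw.split(",") if name.strip()]

print(parse_competitors("Nike, Adidas,Puma,"))  # ['Nike', 'Adidas', 'Puma']
```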
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It adds behavioral context: the output is a side-by-side comparison covering specific metrics, and the '[Pro]' prefix indicates it's a paid feature (important for agent awareness). It does not detail any potential side effects or data freshness, but for a read-only analytics tool, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that conveys both the Pro gating (the 🔒 [Pro] prefix) and the tool's function with no wasted words. It is appropriately front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 100% schema coverage and no output schema, the description explains the tool's output (comparison of specific metrics) but does not detail return format or data interpretation. For a benchmark tool, this is reasonably complete; a 5 would require explicit output details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description does not add significant new meaning beyond the schema; it mentions 'side-by-side comparison' but does not elaborate on parameter formatting or relationships. Thus, score remains at 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: 'Benchmark your social media against competitors — side-by-side comparison of engagement, content strategy, audience growth, and competitive gaps.' It uses a specific verb ('Benchmark') and resource (social media vs competitors), and the sibling tools (e.g., analyze_profile, score_engagement) have distinct purposes, so differentiation is clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for competitive analysis but does not explicitly state when to use this tool versus alternatives. However, the sibling tool names suggest each has a unique focus, and the description provides clear context for the tool's purpose, earning a score of 4.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

content_calendar: B

🔒 [Pro] Generate a detailed social media content calendar — specific posts with captions, hashtags, optimal timing, and content templates for 1-4 weeks.

Parameters (JSON Schema)

Name | Required | Description | Default
duration | No | Calendar duration | 2_weeks
platforms | No | Comma-separated platforms | 'instagram,twitter,linkedin'
business_or_niche | Yes | Business name or niche (e.g., 'fitness brand', 'Acme Plumbing') |
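Two of the three parameters carry documented defaults. A small sketch of how a client might fill them before calling content_calendar; the default values come from the table above, and the merge helper is a hypothetical client-side convenience:

```python
# Documented defaults from the parameter table above.
DEFAULTS = {"duration": "2_weeks", "platforms": "instagram,twitter,linkedin"}

def with_defaults(arguments: dict) -> dict:
    """Merge caller-supplied arguments over the documented defaults;
    explicit values win over defaults."""
    return {**DEFAULTS, **arguments}

print(with_defaults({"business_or_niche": "fitness brand"}))
# {'duration': '2_weeks', 'platforms': 'instagram,twitter,linkedin', 'business_or_niche': 'fitness brand'}
```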
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It only states what the tool does (generates calendar) but omits behavioral traits like access requirements, rate limits, whether it calls external APIs, or any side effects. The '[Pro]' tag is ambiguous.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is direct and informative. It includes key details (Pro, specific outputs, duration range) without unnecessary words, earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the high-level output but does not specify the return format or structure. With three parameters and no output schema, it leaves ambiguity about how the calendar is returned (e.g., JSON, markdown), which is a moderate gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already has 100% description coverage, so the baseline is 3. The description adds context about what the generated calendar includes (captions, hashtags, etc.) but does not add new meaning to the parameters themselves beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates a detailed social media content calendar with specific outputs like posts, captions, hashtags, timing, and templates for a duration of 1-4 weeks. This sets it apart from siblings like 'detect_trends' or 'analyze_profile', which focus on analysis rather than generation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The '[Pro]' prefix hints at a restriction, but there is no explanation of prerequisites, scenarios, or when not to use it. The description does not help the agent decide between this and sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

research_hashtags: A

Research effective hashtags for a topic — popularity estimates, related hashtags, niche vs broad classification, and recommended hashtag sets.

Parameters (JSON Schema)

Name | Required | Description | Default
count | No | Number of hashtags to return (max: 50) | 20
topic | Yes | Topic or keyword for hashtag research (e.g., 'real estate', 'fitness') |
platform | No | Target platform for hashtag optimization |
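The count parameter has both a default (20) and a maximum (50). A hedged sketch of client-side normalization; the lower bound of 1 is my assumption, since the schema only documents the default and the max:

```python
def normalize_count(count=None, default=20, maximum=50):
    """Apply the documented default and max for research_hashtags' count.
    The floor of 1 is an assumption, not documented by the server."""
    if count is None:
        return default
    return max(1, min(count, maximum))

print(normalize_count())     # 20 (documented default)
print(normalize_count(120))  # 50 (clamped to documented max)
```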
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It lists high-level outputs but does not disclose any behavioral traits such as side effects, rate limits, or external dependencies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the tool's purpose and outputs without any superfluous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description needs to explain return values. It does mention types of output, but the descriptions are vague (e.g., 'popularity estimates' lacks detail on format or scale).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers 100% of parameter descriptions, so baseline is 3. The description adds no additional meaning beyond the schema, merely reiterating the tool's purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'research' and the resource 'hashtags', and lists specific outputs (popularity estimates, related hashtags, classification, recommended sets). It distinguishes from siblings like 'analyze_profile' and 'detect_trends'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for hashtag research on a topic, but does not explicitly state when to use this tool versus alternatives or provide any context on when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

score_engagement: A

Score social media engagement for a brand or topic — engagement rate estimates, content type effectiveness, posting time analysis, and benchmarks.

Parameters (JSON Schema)

Name | Required | Description | Default
platform | No | Platform to focus on (analyzes all if omitted) |
brand_or_topic | Yes | Brand name or topic to analyze (e.g., 'Nike', 'AI marketing') |
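The "analyzes all if omitted" behavior of platform can be sketched as a simple fan-out. The platform list below is an assumption for illustration; this page does not publish the server's supported set:

```python
# Assumed platform set — not published by the server on this page.
ALL_PLATFORMS = ["instagram", "twitter", "linkedin", "facebook", "tiktok"]

def platforms_to_analyze(platform=None):
    """Return the single requested platform, or every platform when the
    optional argument is omitted ('analyzes all if omitted')."""
    return [platform] if platform else list(ALL_PLATFORMS)

print(platforms_to_analyze("instagram"))  # ['instagram']
```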
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden of behavioral disclosure. It does not mention whether the tool is read-only, destructive, or requires authentication. The name 'score' suggests analysis, but no explicit safety or side-effect information is given.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, well-structured sentence that front-loads the core action and expected outputs. Every part adds value, with no redundant details. Ideal conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 2 parameters, no output schema, and no annotations, the description adequately explains what the tool does but omits return format or data structure. For a scoring tool, the description is minimally viable but could benefit from explaining how results are presented (e.g., scores, benchmarks).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100%, with each parameter having a clear description (e.g., platform enum, brand_or_topic string). The description adds context about outputs but does not further clarify parameter usage or constraints. Baseline 3 is appropriate since schema already does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: scoring social media engagement for a brand or topic. It lists specific outputs (engagement rate estimates, content type effectiveness, posting time analysis, benchmarks) and the resource (brand/topic). This distinguishes it from sibling tools like analyze_profile (focuses on profile) or competitor_benchmarks (focuses on competitors).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (for engagement analysis) but does not explicitly state when not to use or provide alternatives among siblings. No guidance on prerequisites or context beyond the tool's function.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
