
EzBiz SEO & Marketing Analysis

Server Details

AI-powered SEO and marketing: keyword research, SERP analysis, and content optimization tools.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.8/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct SEO function (SERP, backlinks, keywords, content brief, content optimization, technical audit) with no overlap in purpose.

Naming Consistency: 4/5

Most tools follow a verb_noun snake_case pattern (analyze_serp, check_backlinks, optimize_content, site_audit). content_brief deviates slightly as a noun_noun but remains clear and consistent in style.

Tool Count: 5/5

Six tools cover the major areas of SEO and marketing analysis without being too few or too many, fitting the scope well.

Completeness: 4/5

The toolset covers core SEO functionalities including SERP analysis, backlinks, keywords, content optimization, technical audit, and content briefs. Minor gaps like rank tracking or competitor analysis exist but are not critical.

Available Tools

6 tools
analyze_serp: A

Analyze search engine results for a query — top ranking pages, content patterns, SERP features, and ranking opportunity assessment.

Parameters (JSON Schema):
- query (required): Search query to analyze
- num_results (optional): Number of results to analyze (max 10)
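As a concrete illustration, the two parameters above could be declared as a JSON Schema in the tool definition. The following Python sketch is an assumption about how the server might express it (the schema dict and the minimal validator are illustrative, not the server's actual code):

```python
# Hypothetical input schema for analyze_serp, reconstructed from the
# parameter table above; the "maximum" constraint mirrors "(max 10)".
ANALYZE_SERP_SCHEMA = {
    "type": "object",
    "properties": {
        "query": {"type": "string", "description": "Search query to analyze"},
        "num_results": {
            "type": "integer",
            "description": "Number of results to analyze (max 10)",
            "maximum": 10,
        },
    },
    "required": ["query"],
}

def validate_args(schema, args):
    """Minimal required/type/maximum check (not a full JSON Schema validator)."""
    missing = [k for k in schema["required"] if k not in args]
    if missing:
        return False, f"missing required: {missing}"
    for key, value in args.items():
        spec = schema["properties"].get(key)
        if spec is None:
            return False, f"unknown parameter: {key}"
        if spec["type"] == "integer" and not isinstance(value, int):
            return False, f"{key} must be an integer"
        if "maximum" in spec and value > spec["maximum"]:
            return False, f"{key} exceeds maximum {spec['maximum']}"
    return True, "ok"

ok, msg = validate_args(ANALYZE_SERP_SCHEMA, {"query": "best crm", "num_results": 5})
```

A client that pre-validates arguments this way can surface the "max 10" constraint before a call fails server-side.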
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries full burden. It discloses what the tool returns (SERP features, opportunity assessment) but does not mention behavioral traits like data freshness, permissions, or rate limits. The description is accurate but not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
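One way to close that annotations gap is MCP's optional tool annotations (behavioral hints the client can read before calling). Here is a hedged sketch of hints that might fit a read-only SERP lookup; the hint values are assumptions about this server's behavior, not confirmed:

```python
# Hypothetical MCP-style annotations for analyze_serp, assuming it only
# reads live search results and never modifies state.
analyze_serp_annotations = {
    "readOnlyHint": True,      # assumption: pure lookup, no writes
    "destructiveHint": False,  # assumption: nothing is deleted or overwritten
    "idempotentHint": False,   # SERP data is live, so repeat calls may differ
    "openWorldHint": True,     # interacts with external search engines
}

def needs_confirmation(annotations: dict) -> bool:
    """Client-side policy sketch: ask the user before any call that is
    not read-only and may be destructive (missing hints treated cautiously)."""
    read_only = annotations.get("readOnlyHint", False)
    destructive = annotations.get("destructiveHint", True)
    return (not read_only) and destructive
```

With hints like these present, a client never has to infer the safety profile from the prose description alone.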

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the primary purpose, and no extraneous information. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no nested objects, no output schema), the description covers the essential purpose and outputs. It lacks explicit usage guidance but is otherwise adequate for an agent to understand the tool's role.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The description adds context about the overall analysis but does not significantly enhance understanding of individual parameters beyond what the schema already provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: 'Analyze search engine results for a query' and lists specific outputs like 'top ranking pages, content patterns, SERP features, and ranking opportunity assessment'. It distinguishes from siblings such as 'keyword_research' and 'content_brief'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is used for analyzing SERPs, but it does not provide explicit guidance on when to use it versus alternatives like 'check_backlinks' or 'keyword_research'. No when-not or context exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

content_brief: A

🔒 [Pro] Generate a production-ready content brief — analyzes top-ranking pages, provides title options, full outline with word counts, keyword targets, and differentiation strategy.

Parameters (JSON Schema):
- topic (required): Content topic (e.g., 'best CRM for small businesses')
- content_type (optional): Content format (default: 'blog post', options: 'landing page', 'pillar page', 'comparison')
- target_keyword (required): Primary keyword to rank for
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description describes the tool's outputs but does not disclose behavioral traits such as side effects, rate limits, or authentication requirements; the Pro indicator is noted but not expanded.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the tool's purpose and outputs, with no extraneous information; the emoji and Pro tag are meaningful and compact.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains the return value (a brief with title options, outline, etc.) fairly well, though it omits format (e.g., text or JSON) and prerequisites; overall sufficient for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds context about what the parameters are used for (analyzing top pages, generating brief) but does not provide additional format or constraint details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates a production-ready content brief and lists specific outputs (title options, outline, etc.), differentiating it from sibling tools like keyword_research or optimize_content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for content creation but lacks explicit guidance on when to use this over alternatives (e.g., keyword_research) or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

keyword_research: A

Research keyword opportunities for a business — search volume indicators, difficulty estimates, related terms, and content suggestions.

Parameters (JSON Schema):
- industry (optional): Business industry or niche
- location (optional): Target geographic location
- seed_keyword (required): Primary keyword or topic to research
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must disclose behavioral details. It only lists high-level outputs without revealing data sources, accuracy, or any constraints. This is minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that immediately conveys the tool's purpose and outputs. No extra words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with no output schema, the description outlines the general output types but omits details on result structure or interpretation. It is adequate but leaves some gaps for effective agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage for all three parameters, providing descriptions for each. The description adds no parameter-specific detail beyond schema, but the baseline of 3 is appropriate since the schema already documents them adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it researches keyword opportunities, listing specific outputs like search volume, difficulty, related terms, and content suggestions. It easily distinguishes from sibling tools such as analyze_serp or site_audit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for keyword research but does not explicitly specify when to use it versus alternatives. No exclusions or conditions are provided, leaving the agent to infer context from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

optimize_content: A

Analyze and optimize content for SEO — keyword density, readability, structure, meta tags, and actionable improvement suggestions.

Parameters (JSON Schema):
- url (required): URL of the page to optimize
- target_keyword (required): Primary keyword to optimize for
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description alone must disclose behavioral traits. It mentions 'actionable improvement suggestions' which hints at read-only analysis, but the verb 'optimize' could imply modification. The tool's side effects or safety profile are not clarified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose. Every part is relevant and there is no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with two straightforward parameters and no output schema, the description is fairly complete — it explains what inputs are used for and what analysis is performed. However, it could mention the output format (e.g., a report) to be fully self-contained.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for both params. The tool description adds contextual value by listing what the tool does with the URL and keyword (analyze density, readability, etc.), but does not enhance understanding of the parameters beyond their existing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool analyzes and optimizes content for SEO, listing specific elements like keyword density, readability, structure, and meta tags. It distinctly differs from sibling tools such as analyze_serp (SERP analysis) or keyword_research (keyword discovery).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for optimizing a specific page for a target keyword, but does not explicitly state when to use it versus siblings like content_brief or site_audit. No alternatives or exclusions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

site_audit: A

🔒 [Pro] Full technical SEO audit of a website — crawls multiple pages, checks SSL, speed, schema, headings, linking structure, and provides a prioritized fix plan.

Parameters (JSON Schema):
- url (required): Website URL to audit (e.g., 'https://example.com')
- focus (optional): Specific audit focus (e.g., 'page speed', 'schema markup', 'mobile')
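For reference, invoking site_audit over MCP is a JSON-RPC 2.0 tools/call request along these lines; the request id and the argument values below are illustrative, not taken from the server:

```python
import json

# Illustrative JSON-RPC 2.0 payload for calling site_audit via MCP.
# The id and the "focus" value are made up for the example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "site_audit",
        "arguments": {
            "url": "https://example.com",
            "focus": "page speed",
        },
    },
}
payload = json.dumps(request)
```

Because the tool crawls multiple pages, a client sending requests like this should budget for a long-running call — one of the behavioral details the description omits.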
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must carry behavioral context. It mentions crawling multiple pages and delivering a fix plan, but omits potential impact (e.g., cost, rate limits, destructive actions). Insufficient for a pro-level tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with effective use of emoji and hyphenated list. All information is front-loaded and relevant, no filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers inputs well with schema and description. Lacks output schema, but key expected outcome (prioritized fix plan) is stated. Adequate for a tool with clear purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds value by explaining the audit scope (crawl, checks, fix plan), which helps interpret the 'focus' parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool performs a full technical SEO audit, listing specific checks (SSL, speed, schema, headings, linking). This distinguishes it from siblings like analyze_serp or check_backlinks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or alternatives are given, but the description implies usage for technical SEO audits. Given sibling tools cover different tasks, the purpose is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
