EzBiz SEO & Marketing Analysis
Server Details
AI-powered SEO and marketing: keyword research, SERP analysis, and content optimization tools.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call. A sketch of a typical client configuration follows the feature list below.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
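For orientation, here is a minimal sketch of what pointing an MCP client at a gateway-hosted connector can look like. Both the endpoint URL and the exact configuration keys below are illustrative assumptions; they vary by client, so consult your client's documentation and the connector's actual gateway URL.

```json
{
  "mcpServers": {
    "ezbiz-seo": {
      "type": "streamable-http",
      "url": "https://example-gateway.invalid/mcp/ezbiz-seo"
    }
  }
}
```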
Tool Definition Quality
Average 3.8/5 across 6 of 6 tools scored.
Each tool targets a distinct SEO function (SERP, backlinks, keywords, content brief, content optimization, technical audit) with no overlap in purpose.
Most tools follow a verb_noun snake_case pattern (analyze_serp, check_backlinks, optimize_content, site_audit). content_brief deviates slightly as a noun_noun but remains clear and consistent in style.
Six tools cover the major areas of SEO and marketing analysis without being too few or too many, fitting the scope well.
The toolset covers core SEO functionalities including SERP analysis, backlinks, keywords, content optimization, technical audit, and content briefs. Minor gaps like rank tracking or competitor analysis exist but are not critical.
Available Tools
6 tools

analyze_serp
Analyze search engine results for a query: top ranking pages, content patterns, SERP features, and ranking opportunity assessment.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query to analyze | |
| num_results | No | Number of results to analyze (max 10) | |
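As a hedged illustration of how an agent might invoke this tool, the following shows a standard MCP JSON-RPC tools/call request; the id and argument values are placeholders, not values taken from this listing.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_serp",
    "arguments": {
      "query": "best project management software",
      "num_results": 10
    }
  }
}
```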
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses what the tool returns (SERP features, opportunity assessment) but does not mention behavioral traits like data freshness, permissions, or rate limits. The description is accurate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the primary purpose, and no extraneous information. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no nested objects, no output schema), the description covers the essential purpose and outputs. It lacks explicit usage guidance but is otherwise adequate for an agent to understand the tool's role.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds context about the overall analysis but does not significantly enhance understanding of individual parameters beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Analyze search engine results for a query' and lists specific outputs like 'top ranking pages, content patterns, SERP features, and ranking opportunity assessment'. It distinguishes from siblings such as 'keyword_research' and 'content_brief'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is used for analyzing SERPs, but it does not provide explicit guidance on when to use it versus alternatives like 'check_backlinks' or 'keyword_research'. No 'when not to use' guidance or context exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_backlinks
Analyze a website's backlink profile: referring domains, anchor text patterns, link quality indicators, and link building opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Website URL to analyze | |
| competitor_urls | No | Comma-separated competitor URLs for comparison | |
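For brevity, this and later examples show only the params payload of the tools/call request. A hypothetical invocation might pass competitor URLs as a single comma-separated string, as the schema describes, rather than as an array:

```json
{
  "name": "check_backlinks",
  "arguments": {
    "url": "https://example.com",
    "competitor_urls": "https://competitor-one.example,https://competitor-two.example"
  }
}
```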
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the burden. It describes the tool as analytical, implying read-only behavior, and lists outputs, but does not explicitly confirm safety or side effects. Given the tool's nature, this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence (15 words) that front-loads the action and includes essential details without any waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the main purpose and outputs, but with no output schema and no annotations, it lacks details on return format or constraints. For a simple analysis tool, it is fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described. The description does not add meaning beyond what the schema provides, so the baseline score is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Analyze' and the resource 'website's backlink profile', enumerating specific aspects it covers (referring domains, anchor text, link quality, opportunities). It is distinct from siblings like 'analyze_serp' or 'site_audit' which focus on different aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for backlink analysis, but does not explicitly state when to use or when to prefer alternative tools. No exclusions or context are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
content_brief
[Pro] Generate a production-ready content brief: analyzes top-ranking pages, provides title options, full outline with word counts, keyword targets, and differentiation strategy.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | Yes | Content topic (e.g., 'best CRM for small businesses') | |
| content_type | No | Content format (options: 'landing page', 'pillar page', 'comparison') | 'blog post' |
| target_keyword | Yes | Primary keyword to rank for | |
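A hypothetical params payload, choosing one of the documented content_type options; omitting content_type would presumably fall back to the 'blog post' default noted in the schema. All values are illustrative:

```json
{
  "name": "content_brief",
  "arguments": {
    "topic": "best CRM for small businesses",
    "target_keyword": "best crm for small business",
    "content_type": "comparison"
  }
}
```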
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description describes the tool's outputs but does not disclose behavioral traits such as side effects, rate limits, or authentication requirements; the Pro indicator is noted but not expanded.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the tool's purpose and outputs, with no extraneous information; the Pro tag is meaningful and compact.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return value (a brief with title options, outline, etc.) fairly well, though it omits format (e.g., text or JSON) and prerequisites; overall sufficient for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds context about what the parameters are used for (analyzing top pages, generating the brief) but does not provide additional format or constraint details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates a production-ready content brief and lists specific outputs (title options, outline, etc.), differentiating it from sibling tools like keyword_research or optimize_content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for content creation but lacks explicit guidance on when to use this over alternatives (e.g., keyword_research) or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
keyword_research
Research keyword opportunities for a business: search volume indicators, difficulty estimates, related terms, and content suggestions.
| Name | Required | Description | Default |
|---|---|---|---|
| industry | No | Business industry or niche | |
| location | No | Target geographic location | |
| seed_keyword | Yes | Primary keyword or topic to research | |
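Since only seed_keyword is required, the two optional fields simply refine the research; a hypothetical params payload with all three (values illustrative) might be:

```json
{
  "name": "keyword_research",
  "arguments": {
    "seed_keyword": "invoicing software",
    "industry": "accounting",
    "location": "United Kingdom"
  }
}
```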
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must disclose behavioral details. It only lists high-level outputs without revealing data sources, accuracy, or any constraints. This is minimal transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that immediately conveys the tool's purpose and outputs. No extra words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with no output schema, the description outlines the general output types but omits details on result structure or interpretation. It is adequate but leaves some gaps for effective agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage for all three parameters, providing descriptions for each. The description adds no parameter-specific detail beyond schema, but the baseline of 3 is appropriate since the schema already documents them adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it researches keyword opportunities, listing specific outputs like search volume, difficulty, related terms, and content suggestions. It easily distinguishes from sibling tools such as analyze_serp or site_audit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for keyword research but does not explicitly specify when to use it versus alternatives. No exclusions or conditions are provided, leaving the agent to infer context from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
optimize_content
Analyze and optimize content for SEO: keyword density, readability, structure, meta tags, and actionable improvement suggestions.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL of the page to optimize | |
| target_keyword | Yes | Primary keyword to optimize for | |
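Both parameters are required, so a hypothetical params payload is simply the following (URL and keyword are placeholders). As the assessment below notes, the description suggests this returns suggestions rather than modifying the page, though that is not stated explicitly:

```json
{
  "name": "optimize_content",
  "arguments": {
    "url": "https://example.com/blog/email-marketing-guide",
    "target_keyword": "email marketing guide"
  }
}
```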
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description alone must disclose behavioral traits. It mentions 'actionable improvement suggestions', which hints at read-only analysis, but the verb 'optimize' could imply modification. The tool's side effects and safety profile are not clarified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose. Every part is relevant and there is no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with two straightforward parameters and no output schema, the description is fairly complete; it explains what the inputs are used for and what analysis is performed. However, it could mention the output format (e.g., a report) to be fully self-contained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both params. The tool description adds contextual value by listing what the tool does with the URL and keyword (analyze density, readability, etc.), but does not enhance understanding of the parameters beyond their existing schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool analyzes and optimizes content for SEO, listing specific elements like keyword density, readability, structure, and meta tags. It distinctly differs from sibling tools such as analyze_serp (SERP analysis) or keyword_research (keyword discovery).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for optimizing a specific page for a target keyword, but does not explicitly state when to use it versus siblings like content_brief or site_audit. No alternatives or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
site_audit
[Pro] Full technical SEO audit of a website: crawls multiple pages, checks SSL, speed, schema, headings, and linking structure, and provides a prioritized fix plan.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Website URL to audit (e.g., 'https://example.com') | |
| focus | No | Specific audit focus (e.g., 'page speed', 'schema markup', 'mobile') | |
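A hypothetical params payload narrowing the audit with the optional focus parameter; omitting focus presumably runs the full audit, though the description does not state this explicitly:

```json
{
  "name": "site_audit",
  "arguments": {
    "url": "https://example.com",
    "focus": "page speed"
  }
}
```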
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry behavioral context. It mentions crawling multiple pages and delivering a fix plan, but omits potential impact (e.g., cost, rate limits, destructive actions). Insufficient for a pro-level tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with a compact run-in list of checks. All information is front-loaded and relevant, no filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers inputs well with schema and description. Lacks output schema, but key expected outcome (prioritized fix plan) is stated. Adequate for a tool with clear purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds value by explaining the audit scope (crawl, checks, fix plan), which helps interpret the 'focus' parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool performs a full technical SEO audit, listing specific checks (SSL, speed, schema, headings, linking). This distinguishes it from siblings like analyze_serp or check_backlinks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives are given, but the description implies usage for technical SEO audits. Given sibling tools cover different tasks, the purpose is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail: every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control: enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management: store and rotate API keys and OAuth tokens in one place
Change alerts: get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption: public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics: see which tools are being used most, helping you prioritize development and documentation
Direct user feedback: users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.