Robot Speed
Server Details
SEO automation MCP with 39 tools. 12 free tools (no auth): audit pages, check Core Web Vitals, validate schema, score AI visibility, generate keywords. 27 pro tools with account: manage content calendar, track GSC traffic, monitor AI bot visits (ChatGPT/Perplexity), analyze backlinks, and publish directly to WordPress, Webflow, or Wix.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
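Since the transport is Streamable HTTP, any MCP-compatible client can connect directly. Below is a minimal sketch using the official TypeScript SDK; the endpoint URL is a placeholder, since the listing does not display the actual server URL.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the real server URL from this listing.
const SERVER_URL = new URL("https://example.com/mcp");

const client = new Client({ name: "seo-demo-client", version: "1.0.0" });
const transport = new StreamableHTTPClientTransport(SERVER_URL);

await client.connect(transport);

// The 12 free tools should be listed without any authentication.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

await client.close();
```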
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 12 of 12 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes (e.g., keyword_generator vs. readability_check). However, seo_audit, seo_plan, seo_score, and ai_visibility_score partially overlap in overall SEO assessment, though with different outputs.
All tool names use clear snake_case with descriptive nouns (e.g., broken_link_check, schema_validator). The pattern is consistent and predictable across all 12 tools.
12 tools is ideal for a comprehensive SEO toolkit. Each tool covers a specific SEO dimension without redundancy, making the set well-scoped.
The toolkit covers the key SEO aspects: auditing, keyword research, page speed, meta tags, link checking, readability, schema validation, and indexability. Minor gaps exist (e.g., backlink analysis and competitor SEO comparison are missing), but core workflows are complete.
Available Tools
12 tools

ai_visibility_score (AI Visibility Score) [Grade B]
[FREE] How well can AI bots crawl your site? Checks robots.txt AI rules, llms.txt, structured data. FR: Score de visibilité IA — votre site est-il accessible aux bots IA ?
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to check | |
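With a single required url parameter, invocation is one call. Below is a sketch continuing the hypothetical client session from the connection example above; the same pattern applies to the other url-only tools in this set.

```typescript
// Single required argument: the URL to check.
const result = await client.callTool({
  name: "ai_visibility_score",
  arguments: { url: "https://example.com" },
});

// The output format is not documented in the listing, so just
// print the raw content blocks returned by the server.
console.log(JSON.stringify(result.content, null, 2));
```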
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates the tool performs checks on robots.txt, llms.txt, and structured data, implying read-only behavior. However, since no annotations are provided, it stops short of explicitly stating that it does not modify any resources, and it does not mention rate limits, authentication needs, or potential side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences plus a French translation, which is concise. The key information (what it checks) is front-loaded. The French translation adds length but may aid non-English users; however, it could be moved to a separate field or omitted for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter and no output schema, the description adequately covers what the tool does. However, it lacks details on the output format and score interpretation, and given the complexity of checking AI visibility, it would benefit from more context about what 'AI rules' means.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a single parameter 'url' described as 'The URL to check', which is clear from the schema alone. The description adds no additional parameter semantics, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks how well AI bots can crawl a site, specifying it checks robots.txt AI rules, llms.txt, and structured data. While it distinguishes itself from SEO-focused siblings by focusing on AI bot visibility, it could more explicitly contrast with other tools like 'seo_audit' or 'noindex_checker'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'seo_audit' or 'noindex_checker'. It does not mention prerequisites or limitations, such as that the tool only checks a single URL and may not account for dynamic content or JavaScript-rendered sites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
broken_link_check (Broken Link Check) [Grade B]
[FREE] Find broken links (404, 500, timeout) on any page. FR: Trouvez les liens cassés sur n'importe quelle page.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to check for broken links | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It indicates the tool checks for HTTP error statuses (404, 500, timeout), implying a read-only operation. It does not mention rate limits, scope of checking (only one page, not recursive), or what happens with the output. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise (two sentences, bilingual) with no wasted words. Front-loaded with the core purpose. Slight redundancy with bilingual text, but acceptable. Efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and a simple input, the description is reasonably complete for a straightforward tool. However, missing details such as scope (one page vs. the whole site), result format, and timeout thresholds reduce its completeness. Adequate for its simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with a single 'url' parameter described as 'The URL to check for broken links'. The description adds no further detail beyond stating 'any page'. Baseline 3 is appropriate since schema already covers the parameter fully.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool finds broken links (404, 500, timeout) on any page, clearly specifying the verb (find), resource (broken links), and scope (any page). It distinguishes itself from broader siblings like 'seo_audit'. The bilingual addition doesn't harm clarity, and while the description doesn't explicitly contrast with other tools, its purpose is clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for checking broken links on a page, but does not provide explicit guidance on when to use this tool versus alternatives like 'seo_audit' or 'page_speed_check'. No exclusion criteria or alternative recommendations are given, though the free label hints at no-cost usage. Context is implied but not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover (Discover) [Grade A]
[FREE] See all available free SEO tools and recommended workflows. Call this first. FR: Découvrez les outils SEO gratuits et les workflows recommandés.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
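Because discover takes no parameters, the call passes an empty arguments object. A sketch, again using the hypothetical client from the connection example:

```typescript
// discover takes no input; pass an empty arguments object.
const overview = await client.callTool({
  name: "discover",
  arguments: {},
});

// Per the description, this should return the free tools and
// recommended workflows.
console.log(JSON.stringify(overview.content, null, 2));
```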
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It explicitly states the tool is free and returns a list of tools and workflows. However, it does not mention side effects or limitations. Since it is a discovery/listing tool, no destructive behavior is expected, but a note about data freshness or caching could improve the score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very concise: two sentences in English, one short French phrase. Efficiently conveys purpose and usage instruction. However, the French phrase could be considered redundant for an English-speaking AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, no output schema, and simple purpose, the description is nearly complete. It tells the agent what it does and when to use it. Missing a brief note on what the output contains (e.g., 'returns a list of tool names and descriptions').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, and description coverage is 100%. Description adds value by explaining that no input is needed and the tool returns a list of available tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists free SEO tools and recommended workflows, with a specific instruction to call it first. The verb 'Discover' and noun 'SEO tools and workflows' are explicit, and it is clearly distinguished from sibling tools that perform specific SEO analyses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Call this first', indicating it should be invoked before other tools. This provides clear guidance on when to use it, implicitly suggesting it helps choose among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
keyword_generator (Keyword Generator) [Grade B]
[FREE] Generate keyword suggestions for any topic. Returns related terms with estimated intent. FR: Générer des suggestions de mots-clés pour un sujet.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | Yes | The topic or seed keyword | |
| language | No | Target language | en |
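Since language defaults to en, it can be omitted for English suggestions and supplied only when targeting another locale. A sketch using the same hypothetical client:

```typescript
// language is optional and defaults to "en" when omitted.
const english = await client.callTool({
  name: "keyword_generator",
  arguments: { topic: "running shoes" },
});

// Override the default to request French-language suggestions.
const french = await client.callTool({
  name: "keyword_generator",
  arguments: { topic: "chaussures de course", language: "fr" },
});

console.log(english.content, french.content);
```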
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It mentions 'estimated intent' and 'free' usage, but does not disclose limitations (e.g., rate limits, accuracy, or that it only returns a preset number of suggestions). The description is minimal but not contradictory.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short (two sentences plus French translation). It is front-loaded with clear action. The French translation adds value for bilingual agents. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given it has 2 simple parameters and no output schema, the description is fairly complete for a straightforward keyword suggestion tool. However, it lacks mention of output format or number of suggestions, which could be useful. Overall adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema; it simply says 'generate keyword suggestions' without detailing how parameters affect results. It does not elaborate on 'topic' or 'language' beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it generates keyword suggestions for any topic and returns related terms with estimated intent. The verb 'generate' and resource 'keyword suggestions' are specific. It doesn't explicitly distinguish itself from sibling SEO tools like 'ai_visibility_score' or 'seo_audit', but since none of them perform keyword generation, its purpose is effectively unique.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool vs alternatives. It implies usage for keyword research but lacks guidance on when not to use it or mention of sibling tools. The '[FREE]' tag suggests cost context, but no alternative tools are named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meta_tag_analyzer (Meta Tag Analyzer) [Grade A]
[FREE] Analyze title, meta description, Open Graph, and Twitter Card tags for any URL. FR: Analyse des balises meta, OG et Twitter Card.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to analyze | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It discloses the tool is free and analyzes specific tags, but does not mention rate limits, authentication requirements, or what happens on failure (e.g., invalid URL). This is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded: key elements ('FREE', analyze, specific tags, URL) appear early. The French translation is redundant but does not harm conciseness significantly. Could be slightly shorter by omitting the French, but still efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, no output schema), the description is reasonably complete: it names the exact tags analyzed, indicates it's free, and targets any URL. However, it lacks details about response format or error handling, which would be helpful for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% and there is only one parameter (url). The description adds context about what is analyzed (title, meta description, OG, Twitter Card) which goes beyond the schema's generic description 'The URL to analyze'. However, it doesn't specify expected format or whether HTTP/HTTPS is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Analyze') and clearly lists the resources (title, meta description, Open Graph, Twitter Card tags) and target (any URL). The presence of '[FREE]' clarifies it's a free tool. This distinguishes it from siblings like 'seo_audit' or 'page_speed_check' which cover broader or different aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states what the tool does but provides no guidance on when to use it versus alternatives (e.g., seo_audit, readability_check). It implies use when needing meta tag analysis, but lacks explicit exclusions or when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
noindex_checker (Noindex Checker) [Grade B]
[FREE] Check if a URL is indexable — detects noindex tags, canonical issues, robots.txt blocks. FR: Vérifiez si une URL est indexable.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to check | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It does not disclose whether checks are real-time or cached, any rate limits, or cost implications (despite marking [FREE]). It lacks behavioral details beyond the basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, stating the purpose and key capabilities. The bilingual repetition ('FR: ...') adds minor overhead but does not significantly detract. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple tool with 1 parameter and no output schema, the description is adequate but incomplete. It explains what checks are performed, but misses details like return format or how results are presented. Would benefit from describing output structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% and there is only one parameter. The description adds no additional meaning beyond the schema's 'The URL to check'. Baseline 3 is appropriate since schema already covers the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks if a URL is indexable, listing specific issues detected (noindex tags, canonical issues, robots.txt blocks). This distinguishes it from sibling tools like 'seo_audit' or 'meta_tag_analyzer' which have broader or different focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for checking indexability, but does not explicitly say when to use this vs alternatives (e.g., 'seo_audit' or 'meta_tag_analyzer'). It mentions detecting specific issues, which provides some context, but lacks explicit guidance on when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
page_speed_check (Page Speed Check) [Grade B]
[FREE] Performance check with real Core Web Vitals (CrUX field data) + Lighthouse score + page weight analysis. FR: Vérification de performance avec vrais Core Web Vitals (données terrain CrUX) + score Lighthouse + analyse du poids de page.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to check | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions that it is 'FREE' and uses 'real Core Web Vitals (CrUX field data)', which hints at behavioral traits like reliance on field data. However, it does not disclose if it makes external API calls, has latency, or affects server load.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, fitting in two sentences with a bilingual version. It front-loads the key value proposition (FREE, real data). It could be slightly more efficient by omitting the French translation if not needed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given it has a single parameter, no output schema, and no annotations, the description provides a reasonable summary but lacks details on return format, error handling, or interpretation of results. It is adequate for a simple tool but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter (url) described as 'The URL to check'. The description adds no additional meaning beyond this, but since coverage is high, a baseline of 3 is appropriate. It does not elaborate on URL format or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it performs 'Performance check' with specific metrics (Core Web Vitals, Lighthouse score, page weight). The verb 'check' combined with the resource 'page speed' is sufficiently specific. It doesn't explicitly differentiate from sibling tools, but its focus on speed metrics is distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives like 'seo_audit' or 'seo_score'. It also lacks information about prerequisites or limitations, such as whether it requires authentication or has rate limits.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
readability_check (Readability Check) [Grade C]
[FREE] Readability score and content quality analysis. FR: Score de lisibilité et analyse de la qualité du contenu.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to analyze | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It does not state what the tool does internally (e.g., how readability is computed), whether it modifies anything, any constraints (e.g., URL format, page size), or what the output looks like. The description only repeats the name and adds a bilingual tag.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short but contains redundant information: the tool name and title already convey the purpose. The bilingual text repeats the English version. Some of this could be streamlined, while adding missing details about behavior or output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single param, no output schema, no annotations), the description should cover what the output includes (e.g., a score, grade level) and any limitations. It lacks these details, leaving the agent unsure of the return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter 'url' described as 'The URL to analyze'. The description does not add extra info beyond the schema, but baseline 3-4 applies because schema coverage is high. A slight deduction because no usage tips (e.g., must be public URL) are added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for readability score and content quality analysis, and it specifies the resource (URL) with the action 'check'. The 'FREE' tag adds clarity. However, it could better differentiate from sibling tools like 'seo_score' or 'seo_audit' which may also analyze content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'meta_tag_analyzer' or 'keyword_generator'. The context signals show many sibling tools related to SEO and content analysis, but the description does not advise when readability analysis is preferred.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
schema_validator (Schema Validator) [Grade B]
[FREE] Validate JSON-LD structured data and detect missing schema types. FR: Validez vos données structurées JSON-LD.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to validate | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must carry behavioral info. It says 'Validate' and 'detect missing schema types' but doesn't disclose what happens during validation (e.g., whether it modifies data, requires authentication, or has rate limits). It adds the 'FREE' label but this is more about cost than behavior. Acceptable but thin.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, with the key information front-loaded in the first. The second sentence is a French translation for accessibility. No wasted words; the translation adds slight repetition, but the description remains concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (1 param, no output schema), the description is adequate. It states purpose and input. However, it lacks explanation of what constitutes a validation result (e.g., returns errors or passes) and how to interpret 'missing schema types'. Could be more complete for a validation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description doesn't need to explain parameters. The description mentions 'URL' implicitly in the action but doesn't add details beyond the schema's field description 'The URL to validate'. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool validates JSON-LD structured data and detects missing schema types. It uses a specific verb ('validate') and resource ('JSON-LD structured data'), distinguishing it from siblings like 'meta_tag_analyzer' or 'seo_audit' which cover different SEO aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The '[FREE]' tag hints at accessibility but doesn't set context. It doesn't mention when not to use it or what conditions make it appropriate. The French phrase 'Validez vos données structurées JSON-LD' is a direct translation and doesn't add usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seo_audit (SEO Audit) [Grade A]
[FREE] Full 8-category SEO audit for any URL. Returns score, issues, strengths. No account needed. FR: Audit SEO complet en 8 catégories pour n'importe quelle URL.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to audit (e.g. https://example.com) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It notes the tool is free and covers 8 categories, but does not disclose rate limits, required permissions, or whether results are cached. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short (two sentences in English plus a French translation), front-loaded with the key info. It could be slightly more concise by omitting the French duplicate or integrating it more elegantly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter, no output schema, and no annotations, the description is sufficient for a simple audit tool. It explains what it does and what it returns (score, issues, strengths), which is enough for an agent to decide to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for the single 'url' parameter. The description adds context that it expects a full URL (e.g. https://example.com), which aligns with schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'audit' and the resource 'any URL', specifies that it covers 8 categories, and notes that it returns score, issues, and strengths. It distinguishes itself from siblings like 'seo_score' or 'meta_tag_analyzer' by being a comprehensive audit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'no account needed' and provides a use case (any URL). It does not explicitly state when not to use it or compare to alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seo_plan (SEO Plan) [Grade B]
[FREE] Run a full audit and get a prioritized SEO action plan with estimated impact. The 'what do I fix first?' tool. FR: Audit complet + plan d'action SEO priorisé avec impact estimé.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to audit and plan for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry behavioral information. It notes that the audit is 'full' and the plan is 'prioritized with estimated impact', which implies a comprehensive analysis. However, it doesn't disclose how long the audit takes, whether it uses internal or external data, or if it modifies the site. The description is partially transparent but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences in English, front-loaded with the main action. It also includes a French translation for accessibility, which is relevant but slightly adds length. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description is fairly complete. It explains the input and the output (action plan). However, it lacks details on how results are returned (e.g., file, structured data) and any rate limits or delays. Adequate but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (one parameter with a description). The description adds context by explaining the parameter is the URL to audit, but does not specify what URL formats are accepted (e.g., full URL vs. domain) or any constraints. Additional value over schema is minimal.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool runs a full audit and produces a prioritized SEO action plan. The verb 'Run a full audit' and the resource 'prioritized SEO action plan' make the purpose clear. It distinguishes itself from siblings like 'seo_audit' (which might not provide a prioritized plan) and 'seo_score' (which provides a score, not an action plan).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs. alternatives like 'seo_audit' or 'keyword_generator'. The description 'The "what do I fix first?" tool' implies it's for prioritization, but doesn't exclude other use cases or mention prerequisites like needing to be a site owner. Lacks clear context for when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seo_score (SEO Score) [Grade A]
[FREE] Quick SEO score (0-100) with letter grade for any URL. FR: Score SEO rapide (0-100) pour n'importe quelle URL.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to score | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that it is 'FREE' and 'quick,' implying no cost or rate limit concerns. However, it does not detail behavioral traits like whether the tool fetches the page, handles redirects, or what parameters affect scoring. With no annotations provided, the description carries the full burden but only partially addresses it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with a bilingual note, front-loading core information. No wasted words, efficient for the simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description is mostly sufficient. However, it lacks detail about the return format or example output, which could help the agent confirm correct usage. For a scoring tool, this is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only one parameter and 100% schema coverage, the description does not need to add much. The schema already describes 'url' as 'The URL to score.' The description effectively restates this in context, adding the 'any' scope. This is adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it provides a 'quick SEO score (0-100) with letter grade for any URL.' This clearly distinguishes it from sibling tools like 'ai_visibility_score' or 'meta_tag_analyzer' by focusing on a single numeric/grade output for any URL.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a straightforward use case: any URL can be scored quickly. However, it does not explicitly state when to use this tool over alternatives (e.g., 'seo_audit' for detailed audit), nor does it mention when not to use it (e.g., for non-public URLs).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
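Before relying on automatic verification, you can confirm the file is publicly reachable; a quick sketch (the domain is a placeholder):

```typescript
// Fetch the published manifest to confirm it is publicly reachable.
// Replace example.com with your server's domain.
const res = await fetch("https://example.com/.well-known/glama.json");
if (!res.ok) throw new Error(`HTTP ${res.status}`);

const manifest = await res.json();
// Should print the email that matches your Glama account.
console.log(manifest.maintainers?.[0]?.email);
```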
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.