Server Quality Checklist
- Disambiguation: 4/5
Tools are well-differentiated by function, though the three 'query_' prefixed tools (query_search_analytics, query_by_search_type, query_by_search_appearance) serve related purposes and require careful reading to select the appropriate filter. All other tools have distinct, non-overlapping domains such as URL inspection, sitemap listing, and brand analysis.
- Naming Consistency: 4/5
Consistent snake_case format throughout with descriptive verb_noun patterns (e.g., analyze_brand_queries, export_analytics, inspect_url). Minor deviation with the 'query_by_' prepositional structure for two tools, but this remains predictable and readable.
- Tool Count: 5/5
Thirteen tools is ideal for Google Search Console analytics coverage: sufficient to handle diverse querying needs (time comparisons, keyword trends, search appearances) without becoming overwhelming. Each tool addresses a specific analytical use case.
- Completeness: 4/5
Excellent coverage of GSC analytics workflows including performance comparison, keyword opportunity detection, and multi-dimensional querying. Minor gaps in site management operations (no submit_sitemap or request_indexing tools), but core data retrieval and URL inspection capabilities are fully represented.
Average score: 3.6/5 across all 13 tools.
See the Tool Scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.2
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
- This server provides 13 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly, idempotent, and non-destructive properties. The description adds context about the sorting behavior (by clicks, impressions, CTR, or position) but fails to disclose the data source (likely Google Search Console), aggregation methodology, or that lower position values indicate better rankings.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
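For context, the structured annotations this review repeatedly credits look roughly like this in a tool definition. The field names (readOnlyHint, idempotentHint, destructiveHint) come from the MCP tool-annotations spec; the tool name appears elsewhere in this review, and the description string is a paraphrase, so treat the sketch as illustrative only:

```python
# Illustrative MCP tool definition: annotations declare safety traits,
# while the description must still carry the behavioral context the
# annotations cannot express (data source, return shape, consequences).
tool = {
    "name": "get_top_pages",
    "description": (
        "Get the top performing pages sorted by clicks, impressions, "
        "CTR, or position."
    ),
    "annotations": {
        "readOnlyHint": True,      # no side effects
        "idempotentHint": True,    # repeat calls return the same result
        "destructiveHint": False,  # never deletes or mutates data
    },
}

# The annotations say what the tool will NOT do; the description still
# needs to say what it returns and where the data comes from.
assert tool["annotations"]["readOnlyHint"]
```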
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient 9-word sentence with the verb front-loaded. While there is no wasted text, the extreme brevity comes at the cost of contextual guidance that would help an agent select this tool correctly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of 11 sibling analytics tools and the lack of an output schema, the description is insufficiently complete. It fails to explain what constitutes 'top' performance, how pagination works, or the relationship between this tool and the broader analytics suite.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters including date formats and the sortBy enum. The description reinforces the sorting capability but does not add semantic depth regarding the 'account' alias system or default behaviors beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
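As a hedged illustration of what "100% schema description coverage" means in these scores, a minimal input-schema excerpt might look like the following. The property names, date format, and enum values are assumptions inferred from this review, not the server's actual schema:

```python
# Hypothetical excerpt of an input schema in which every property
# carries a description -- the condition the review scores as
# "100% schema description coverage".
input_schema = {
    "type": "object",
    "properties": {
        "startDate": {
            "type": "string",
            "description": "Start of the date range, YYYY-MM-DD.",
        },
        "endDate": {
            "type": "string",
            "description": "End of the date range, YYYY-MM-DD.",
        },
        "sortBy": {
            "type": "string",
            "enum": ["clicks", "impressions", "ctr", "position"],
            "description": "Metric to sort results by.",
        },
    },
    "required": ["startDate", "endDate"],
}

# Coverage = fraction of properties that have a description.
coverage = sum(
    1 for p in input_schema["properties"].values() if p.get("description")
) / len(input_schema["properties"])
assert coverage == 1.0
```

Note that full coverage documents structure only; the tool description still has to explain intent, such as parameter interactions and defaults.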
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Get), resource (top performing pages), and sorting dimensions (clicks, impressions, CTR, position). However, it does not explicitly differentiate from sibling tools like 'query_search_analytics' that may also return page-level data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as 'query_search_analytics' or 'compare_performance'. It omits prerequisites (e.g., requiring Search Console access) and does not indicate typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
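To make the recurring usage-guidelines gap concrete, here is a hypothetical before/after revision of this tool's description. The sibling tool names (query_search_analytics, compare_performance) appear elsewhere in this review; the revised wording itself is invented:

```python
# Current description (paraphrased from this review's observations).
current = (
    "Get the top performing pages sorted by clicks, impressions, "
    "CTR, or position."
)

# Hypothetical revision: same front-loaded verb, plus the
# "use X instead of Y when Z" guidance and a prerequisite.
revised = (
    "Get the top performing pages sorted by clicks, impressions, CTR, or "
    "position. Use this for quick page-level rankings; use "
    "query_search_analytics when you need custom dimensions or filters, "
    "and compare_performance to contrast two time periods. Requires "
    "Google Search Console access to the site."
)

# The revision preserves the original opening sentence verbatim.
assert revised.startswith(current.split(".")[0])
```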
- Behavior: 3/5
Annotations already establish the operation is read-only and safe. The description adds context that the output is a percentage breakdown of traffic, but does not disclose rate limits, pagination behavior, or the specific structure of the analysis results.
Conciseness: 4/5
The description consists of two efficient sentences with no redundancy, placing the core action first and the output second. It could potentially be consolidated into one sentence without loss of meaning, but remains appropriately sized.
Completeness: 3/5
Given the presence of comprehensive input annotations and a complete input schema, the description adequately covers the tool's intent. However, without an output schema, it could better describe the return value structure (e.g., whether it returns a ratio, percentage object, or time-series data).
Parameters: 3/5
With 100% schema description coverage, the parameters are fully documented in the structured schema. The description mentions 'brand searches' which loosely references the brandTerms parameter, but adds no semantic clarification beyond what the schema already provides, meeting the baseline for complete schema coverage.
Purpose: 4/5
The description clearly states the tool analyzes 'branded vs non-branded search queries' and calculates traffic percentages, providing a specific verb and resource. However, it does not explicitly differentiate from the sibling tool query_search_analytics, which handles general search analytics.
Usage Guidelines: 2/5
The description provides no guidance on when to use this tool versus alternatives like query_search_analytics, nor does it mention prerequisites (e.g., that brandTerms must be provided) or scenarios where this analysis is inappropriate.
- Behavior: 2/5
While annotations comprehensively cover safety (readOnly, idempotent, non-destructive), the description adds no behavioral context beyond this. It fails to clarify whether the export returns raw data, a download URL, or a file ID, and omits details about pagination limits, async processing, or data retention.
Conciseness: 4/5
The single-sentence description is efficiently structured with the verb front-loaded. The trailing phrase 'for external analysis or reporting' provides modest value, suggesting the description could be tighter, but there is no significant waste or redundancy.
Completeness: 3/5
Given the rich annotations and complete input schema, the description adequately covers the basics. However, the absence of an output schema creates a gap that the description fails to fill—it does not explain what the export returns (file content, URL, or reference), which is critical for an export operation.
Parameters: 3/5
With 100% schema description coverage, the structured documentation carries the semantic load. The description mentions 'CSV or JSON format' which aligns with the format parameter, but does not add clarifying details about dimension combinations, date range constraints, or account alias resolution logic beyond what the schema already states.
Purpose: 4/5
The description clearly states the action ('Export'), resource ('search analytics data'), and supported formats ('CSV or JSON'). However, it does not explicitly differentiate from the sibling 'query_search_analytics' tool, which likely retrieves the same data but in a different manner.
Usage Guidelines: 3/5
The phrase 'for external analysis or reporting' provides implied usage context, suggesting when to choose file export formats. However, it lacks explicit guidance on when to use this versus 'query_search_analytics' or other data retrieval siblings, and does not mention prerequisites like account configuration.
- Behavior: 3/5
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds the specific valid search type values (web, image, video, news, discover), reinforcing the enum constraint. However, it fails to disclose what data structure or metrics are returned (clicks, impressions, etc.) given the lack of output schema.
Conciseness: 5/5
Single sentence with no redundant words. The critical information (search type filtering and specific values) is front-loaded and immediately scannable.
Completeness: 2/5
Despite having 7 parameters and no output schema, the description fails to characterize the returned analytics data or explain the tool's relationship to similar query tools. For a tool with this parameter complexity, the description is insufficiently informative.
Parameters: 3/5
With 100% schema description coverage, the baseline is met. The description lists the search type enum values, which restates the schema, but adds no additional semantic context about date formats, dimension behavior, or account aliases beyond what the schema provides.
Purpose: 4/5
The description clearly states the action (Query) and resource (analytics) with specific scope (filtered by search type). It implies differentiation from the general `query_search_analytics` sibling by emphasizing the search type filter, though it doesn't explicitly name the alternative.
Usage Guidelines: 2/5
The description provides no guidance on when to select this tool versus the general `query_search_analytics` or other sibling tools. It omits prerequisites such as valid date ranges or site URL requirements.
- Behavior: 3/5
Annotations cover safety profile (readOnly, idempotent, non-destructive). The description adds valuable behavioral context that the output specifically identifies queries/pages that gained or lost traffic, hinting at the comparative analysis nature beyond what annotations provide.
Conciseness: 5/5
Two tightly constructed sentences with zero redundancy. First sentence establishes the core operation; second sentence describes the specific output value (gainers/losers). Perfectly front-loaded and appropriately sized.
Completeness: 3/5
Adequate for a read-only analytics tool with comprehensive schema coverage, though no output schema exists. The description hints at return content (gained/lost traffic) but doesn't describe return structure, format, or error conditions for invalid date ranges.
Parameters: 3/5
With 100% schema description coverage, the schema carries the parameter documentation burden. The description mentions 'queries/pages' which maps to the dimension parameter, and 'two time periods' which maps to the date ranges, but doesn't add syntax details beyond the schema.
Purpose: 4/5
The description clearly states the tool compares search performance between two time periods (specific verb + scope). It implicitly distinguishes from single-period siblings like get_top_pages by emphasizing the comparative nature, though it doesn't explicitly contrast with similar analytics tools.
Usage Guidelines: 2/5
No guidance on when to use this versus alternatives like get_keyword_trend or analyze_brand_queries. No mention of prerequisites (e.g., valid date ranges, account requirements) or when comparisons are most valuable.
- Behavior: 3/5
Annotations already establish read-only, idempotent, non-destructive traits. The description adds valuable return-value context ('daily clicks, impressions, and position changes') indicating granularity and data structure, but doesn't address rate limits, pagination, or auth requirements beyond the account parameter.
Conciseness: 5/5
Two well-structured sentences with zero waste. Front-loaded with the core action ('Get the performance trend...'), followed by specific return value details. Every word earns its place.
Completeness: 4/5
Given the 100% schema coverage and comprehensive annotations, the description is adequate. It partially compensates for the missing output schema by specifying the returned metrics (clicks, impressions, position changes), though it could note if the data is aggregated or raw.
Parameters: 3/5
With 100% schema description coverage, the baseline is met. The description implies the temporal scope ('over time') aligning with date parameters, but doesn't elaborate on parameter interactions or constraints (e.g., date range limits) beyond the schema definitions.
Purpose: 4/5
The description clearly states it retrieves 'performance trend of a specific keyword over time' with specific metrics (clicks, impressions, position changes). It distinguishes from siblings like 'get_top_pages' by emphasizing single-keyword time-series analysis, though it could explicitly contrast with 'query_search_analytics'.
Usage Guidelines: 2/5
No guidance provided on when to use this tool versus alternatives like 'query_search_analytics' or 'compare_performance'. The agent must infer from the name and parameters that this is specifically for longitudinal single-keyword tracking.
- Behavior: 3/5
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds value by specifying the exact metrics returned (queries, clicks, impressions, CTR, position), which compensates for the missing output schema. It does not mention rate limits, pagination behavior, or data freshness.
Conciseness: 5/5
The description consists of two efficient sentences: the first declares the action and target system, the second lists return values. There is no redundant text, and the most important information (the querying capability) appears first.
Completeness: 4/5
Given the tool has 7 well-documented parameters and comprehensive safety annotations, the description is sufficiently complete because it discloses the return structure (metrics list) that would normally appear in an output schema. It adequately supports agent decision-making for a read-only data retrieval operation.
Parameters: 3/5
With 100% schema description coverage, the schema fully documents all 7 parameters including the filter object structure. The description adds no parameter-specific guidance (e.g., date format details, dimension options), so it meets the baseline for high-coverage schemas.
Purpose: 4/5
The description clearly states the action ('Query') and resource ('Google Search Console search analytics data'), and specifies the returned metrics (queries, clicks, impressions, CTR, average position). However, it does not differentiate this tool from similar siblings like 'query_by_search_appearance' or 'query_by_search_type'.
Usage Guidelines: 2/5
The description provides no guidance on when to use this tool versus alternatives like 'get_top_pages' or 'analyze_brand_queries', nor does it mention prerequisites (e.g., needing to verify site ownership first). It simply states what the tool does in isolation.
- Behavior: 3/5
Annotations cover the safety profile (readOnly, idempotent, non-destructive). The description adds context by mapping technical enum values to user-friendly categories (e.g., 'Rich Results' covers multiple schema enums). However, it fails to describe what the analytics contain (clicks, impressions, CTR?), pagination behavior, or date range constraints given the lack of output schema.
Conciseness: 5/5
Single sentence, front-loaded with the action ('Query analytics'), immediately qualified by the specific filter dimension. Every word serves a purpose; no redundancy or filler content despite being minimal.
Completeness: 3/5
With 100% schema coverage and annotations providing safety context, the description adequately covers the input side. However, given no output schema exists, the description should explain what analytics/metrics are returned (e.g., clicks, impressions, position) and their format, which it omits.
Parameters: 4/5
While schema coverage is 100%, the description adds valuable semantic mapping between user-friendly terms ('AMP', 'Rich Results', 'Video') and the technical enum values in the schema. This helps the agent understand the domain meaning of the searchAppearance parameter beyond the schema's generic 'The search appearance type to filter by' description.
Purpose: 4/5
Clear verb (Query) + resource (analytics) + specific filter mechanism (search appearance type). Examples (AMP, Rich Results, Video, FAQ) clarify the domain. However, it lacks explicit differentiation from sibling tool 'query_by_search_type' which has a very similar name and could confuse the agent.
Usage Guidelines: 3/5
The specific examples (AMP, Rich Results, etc.) provide implied usage context—use this when analyzing SERP feature performance. However, there is no explicit 'when-not-to-use' or comparison to alternatives like 'query_search_analytics' or 'query_by_search_type' despite the high similarity in naming.
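The semantic mapping credited in this block can be sketched as a simple alias table. The friendly labels (AMP, Rich Results, Video, FAQ) come from this review; the enum strings are placeholder assumptions, not verified Search Console API values:

```python
# Sketch of the friendly-name-to-enum mapping the description performs.
# Enum strings here are illustrative placeholders only.
APPEARANCE_ALIASES = {
    "AMP": "AMP_BLUE_LINK",
    "Rich Results": "RICHCARD",
    "Video": "VIDEO",
    "FAQ": "FAQ_RICH_SNIPPET",
}

def resolve_appearance(friendly_name: str) -> str:
    """Translate a user-facing label into a searchAppearance filter value."""
    try:
        return APPEARANCE_ALIASES[friendly_name]
    except KeyError:
        raise ValueError(f"Unknown search appearance: {friendly_name!r}")

assert resolve_appearance("Video") == "VIDEO"
```

Surfacing such a mapping in the description is what lets an agent pick this tool over the similarly named query_by_search_type without guessing.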
- Behavior: 3/5
Annotations already establish read-only, idempotent, non-destructive traits. The description adds valuable behavioral context about what specifically gets inspected (indexing status, mobile usability, rich results) but omits rate limits, authentication requirements, or error conditions (e.g., unverified site behavior).
Conciseness: 5/5
Two efficient sentences with zero waste. The first sentence front-loads the action and scope, while the second clarifies the specific capability regarding crawl/index status. Every word earns its place.
Completeness: 4/5
Given good annotations and complete schema coverage, the description adequately covers the tool's purpose. While it lacks an output schema, it describes what information is revealed (indexing, mobile usability, rich results), which is sufficient for agent selection, though explicit return structure details would improve it further.
Parameters: 3/5
With 100% schema description coverage, the schema fully documents all three parameters (account, siteUrl, inspectionUrl). The description does not add parameter-specific semantics, but the baseline score of 3 is appropriate given the schema's completeness.
Purpose: 5/5
The description uses specific verbs ('Inspect', 'check', 'Shows') and clearly identifies the resource (URL) and scope (indexing status, mobile usability, rich results, crawl/index capability). It effectively distinguishes from analytics-focused siblings like query_search_analytics or compare_performance by focusing on single-URL technical inspection.
Usage Guidelines: 3/5
The description implies usage context (single URL inspection) but lacks explicit guidance on when to use this versus bulk analytics tools like query_search_analytics or get_top_pages. No prerequisites (e.g., site verification requirements) or exclusions are mentioned.
- Behavior: 3/5
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable domain context ('Google Search Console') and relationship context ('submitted'), but does not disclose additional behavioral traits like return format, pagination, or error conditions for invalid site URLs.
Conciseness: 5/5
Single sentence with zero waste. Front-loaded with action verb ('List'), followed by scope ('all sitemaps'), qualifier ('submitted for a site'), and domain ('Google Search Console'). Every word earns its place.
Completeness: 4/5
Given the low complexity (2 simple string parameters, no nesting) and 100% schema coverage with good annotations, the description is nearly complete. Minor gap: no output schema exists, so mentioning the return structure (e.g., 'returns array of sitemap metadata') would elevate this to 5.
Parameters: 3/5
With 100% schema description coverage, the schema fully documents both 'account' (optional alias) and 'siteUrl' (required). The description does not mention parameters explicitly, but at high schema coverage, baseline 3 is appropriate as the description focuses on the operation rather than parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'List' with clear resource 'sitemaps' and scope 'submitted for a site in Google Search Console'. It effectively distinguishes from siblings like list_accounts (accounts) and list_sites (sites) by specifying the target resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (retrieving sitemaps for a specific site), but provides no explicit when-to-use guidance, prerequisites, or differentiation from related tools like inspect_url which also interacts with individual URLs/paths.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnly, idempotent, non-destructive), allowing the description to focus on business logic. It successfully discloses the behavioral definition of 'opportunity' (high impressions + low CTR threshold), which isn't evident from the annotations alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero redundancy. The definition is front-loaded and every word earns its place by conveying both the action and the specific filtering logic.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and read-only nature, the description adequately explains the core business logic. However, without an output schema, it could briefly mention what data is returned (e.g., list of queries with metrics) to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents all 7 parameters. The description conceptually maps to minImpressions and maxCtr by mentioning 'high impressions' and 'low CTR,' but does not add syntax, format details, or examples beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Find') and clearly defines both the resource (keyword optimization opportunities) and the specific criteria used to identify them (queries with high impressions but low CTR). This distinguishes it from generic analytics siblings like query_search_analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool (when seeking to optimize CTR for underperforming high-impression queries) but lacks explicit guidance on when NOT to use it or how it differs from siblings like analyze_brand_queries or get_keyword_trend.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
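The 'opportunity' definition disclosed by this tool's description (high impressions but low CTR) amounts to a simple threshold filter. A minimal sketch of that logic, assuming the schema's minImpressions and maxCtr parameters and a hypothetical row shape with 'query', 'impressions', and 'ctr' keys — not the server's actual implementation:

```python
def find_opportunities(rows, min_impressions=1000, max_ctr=0.02):
    """Keep queries with high impressions but a low click-through rate.

    `rows` is assumed to be a list of dicts with 'query', 'impressions',
    and 'ctr' keys; the parameter names mirror the tool's minImpressions
    and maxCtr schema fields. Default thresholds are illustrative only.
    """
    return [
        r for r in rows
        if r["impressions"] >= min_impressions and r["ctr"] <= max_ctr
    ]


rows = [
    {"query": "a", "impressions": 5000, "ctr": 0.01},  # high volume, low CTR
    {"query": "b", "impressions": 200, "ctr": 0.01},   # too few impressions
    {"query": "c", "impressions": 5000, "ctr": 0.10},  # CTR already healthy
]
opportunities = find_opportunities(rows)  # keeps only query "a"
```

A description that spelled out these two thresholds (as this one partially does) is what lets an agent pick this tool over a generic analytics query.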
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations comprehensively cover safety profile (readOnly, non-destructive, idempotent). Description adds valuable context about scope ('all' accounts) and return content ('associated GSC sites'), but omits pagination behavior, rate limits, or specific OAuth scope requirements beyond implied authentication.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with zero waste. Information density is high—conveys operation type, scope ('all'), authentication context, and return payload structure without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Sufficient for a discovery tool with rich annotations. Description compensates for missing output schema by specifying return content includes 'associated GSC sites'. Could be improved by noting if results are cached or real-time, but adequate given tool simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present per schema (100% coverage of empty schema). Per rubric, zero-parameter tools receive baseline score of 4. Description appropriately requires no parameter clarification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'List' with clear resource 'authenticated Google accounts and their associated GSC sites'. Effectively distinguishes from sibling 'list_sites' by emphasizing account-level discovery with site associations, clarifying this returns account hierarchies rather than just site lists.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage context through 'authenticated' qualifier (indicating this discovers available auth contexts), but lacks explicit when-to-use guidance versus 'list_sites' or prerequisites for authentication. No alternatives named despite functional overlap with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare read-only/idempotent properties, the description adds valuable behavioral context about the grouping logic (grouped by account) when multiple accounts exist. It also confirms the external scope (Google Search Console), aligning with the openWorldHint annotation without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first sentence front-loads the core purpose, while the second provides essential conditional behavior regarding account handling. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single optional parameter, read-only operation) and strong annotations, the description is appropriately complete. It explains what is returned (sites grouped by account) despite the absence of an output schema, though it could briefly mention the return format structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the input schema fully documents the account parameter's purpose and optional nature. The description reinforces this behavior but does not add significant semantic meaning—such as example account aliases or validation rules—beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List') and resource ('sites') with context ('Google Search Console'). It effectively distinguishes from sibling tools like 'list_accounts' (which lists accounts) and 'list_sitemaps' (which lists sitemaps) by specifying it returns sites the user has access to.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear guidance on parameter behavior, explaining exactly what happens when the account parameter is omitted in multi-account scenarios ("shows all accounts' sites grouped by account"). However, it lacks explicit guidance on when to use this versus 'list_accounts' first, or how it relates to site-specific tools like 'inspect_url'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
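The weighting described above can be sketched as a short calculation. The weights and tier cutoffs below come from the text; the function and variable names are ours, and the sample inputs are illustrative, not this server's actual per-tool scores:

```python
# Six TDQS dimension weights from the rubric (they sum to 1.0).
TDQS_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}


def tool_tdqs(dims):
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(TDQS_WEIGHTS[name] * score for name, score in dims.items())


def definition_quality(tool_scores):
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS,
    so one poorly described tool drags the whole score down."""
    return 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)


def overall_score(tool_scores, coherence_dims):
    """70% Tool Definition Quality + 30% Server Coherence, where
    coherence is the plain mean of its four equally weighted dimensions."""
    coherence = sum(coherence_dims) / len(coherence_dims)
    return 0.7 * definition_quality(tool_scores) + 0.3 * coherence


def tier(score):
    """Map an overall score to its letter tier."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

For example, with hypothetical per-tool TDQS values [4.0, 3.2, 3.6] and coherence dimensions [4, 4, 5, 4], definition quality is 0.6 × 3.6 + 0.4 × 3.2 = 3.44, coherence is 4.25, and the overall score is 0.7 × 3.44 + 0.3 × 4.25 = 3.683, which lands in tier A.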