Google Search Console Audit MCP

by acamolese

gsc_audit

Generate a complete HTML SEO audit report for a Google Search Console property. Runs multiple queries to analyze performance, compare periods, identify top queries and pages, check devices, countries, daily trends, sitemaps, and indexing issues. Detects common problems and builds an actionable strategy with a self-contained report including Chart.js graphs.

Instructions

Generate a complete HTML SEO audit report for a Search Console property.

Runs multiple queries (overview, previous-period comparison, top queries, top pages, devices, countries, daily trend, sitemaps, indexing check), detects common issues, builds an actionable strategy and renders everything in a self-contained HTML report with Chart.js graphs. The report layout and colors can be customized via branding.json.

IMPORTANT: If the user has not specified a date range, ask them before calling this tool. Do not assume defaults.

Args:
- site_url: Site URL (e.g. "https://example.com/" or "sc-domain:example.com").
- date_from: Start date (YYYY-MM-DD).
- date_to: End date (YYYY-MM-DD).
- output_dir: Directory where the HTML report is saved. Defaults to ~/gsc-reports/.
- branding_path: Optional path to a custom branding.json overriding the default one.

Input Schema

Name           Required  Description  Default
site_url       Yes
date_from      Yes
date_to        Yes
output_dir     No
branding_path  No
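Putting the Args section and the schema together, a call might look like the sketch below. The values are purely illustrative (not taken from the server's documentation); only the three required keys and the YYYY-MM-DD date format come from the description above.

```python
# Hypothetical arguments payload for a gsc_audit call. Values are
# illustrative; the required-key set comes from the input schema.
audit_args = {
    "site_url": "sc-domain:example.com",  # or "https://example.com/"
    "date_from": "2024-01-01",            # YYYY-MM-DD
    "date_to": "2024-03-31",              # YYYY-MM-DD
    # output_dir and branding_path are optional; output_dir defaults
    # to ~/gsc-reports/ per the tool description.
}

# The schema marks these three parameters as required.
required = {"site_url", "date_from", "date_to"}
missing = required - audit_args.keys()
assert not missing, f"missing required arguments: {missing}"
```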

Output Schema

Name    Required  Description  Default
result  Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It explains the tool runs multiple queries and detects issues, but does not disclose potential side effects (e.g., API quota consumption, execution time, or whether it modifies any state). It also does not specify that the tool is read-only. A 3 is appropriate as basic transparency is present but gaps remain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
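The MCP specification defines structured tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) for exactly this purpose. A sketch of what the server could declare for gsc_audit follows; the field names are from the spec, but the chosen values are assumptions about this tool's behavior, not something its documentation states.

```python
# Sketch of MCP ToolAnnotations the server could attach to gsc_audit
# to disclose side effects up front. Values are assumptions.
annotations = {
    # It never mutates the GSC property, but it does write a report
    # file to output_dir, so a strict reading argues against True here.
    "readOnlyHint": False,
    "destructiveHint": False,  # no destructive updates to existing data
    "idempotentHint": True,    # same args should yield the same report
    "openWorldHint": True,     # calls the external Search Console API
}
```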

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a concise first paragraph defining overall purpose, followed by a critical usage note, and then a clear Args section. Each sentence serves a purpose without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters and an output schema (implying structured return), the description adequately covers purpose, constraints, and parameter semantics. However, it could mention the output schema (e.g., 'returns a path to the report') for completeness. Slight gap but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema provides 0% description coverage (no descriptions on parameters), so the description must compensate. It does so by explaining each parameter: site_url format ('https://example.com/ or sc-domain:example.com'), date format (YYYY-MM-DD), and default output_dir (~/gsc-reports/). This adds significant meaning beyond the schema's raw properties.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
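The same information could also live in the schema itself. Below is a hypothetical JSON Schema for the gsc_audit inputs with the prose Args folded into per-parameter description fields; the actual server schema ships without any descriptions, which is the 0% coverage noted above.

```python
# Hypothetical input schema with descriptions added from the Args prose.
# The real server's schema has no description fields on parameters.
input_schema = {
    "type": "object",
    "properties": {
        "site_url": {
            "type": "string",
            "description": 'Site URL, e.g. "https://example.com/" or "sc-domain:example.com".',
        },
        "date_from": {"type": "string", "description": "Start date (YYYY-MM-DD)."},
        "date_to": {"type": "string", "description": "End date (YYYY-MM-DD)."},
        "output_dir": {"type": "string", "description": "Report directory; defaults to ~/gsc-reports/."},
        "branding_path": {"type": "string", "description": "Optional custom branding.json path."},
    },
    "required": ["site_url", "date_from", "date_to"],
}
```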

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'generates a complete HTML SEO audit report for a Search Console property.' It lists the types of queries and outputs (self-contained HTML with Chart.js graphs), which is specific and distinguishes it from siblings like gsc_performance_overview or gsc_query which are more limited in scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly warns that if the user has not specified a date range, the agent must ask before calling. It also lists required arguments (site_url, date_from, date_to) and provides context-sensitive guidance, making it clear when and how to use the tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
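The "ask before calling" rule above can be made concrete as a small client-side guard. This is a minimal sketch under the stated constraint only (both dates present, in YYYY-MM-DD form); the function name and shape are hypothetical, not part of the server.

```python
# Minimal sketch of the pre-call guard the description asks for:
# do not call gsc_audit until an explicit date range is supplied.
import re

DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def ready_to_call(args: dict) -> bool:
    """True only when both dates are present and in YYYY-MM-DD form."""
    return all(
        isinstance(args.get(k), str) and bool(DATE_RE.match(args[k]))
        for k in ("date_from", "date_to")
    )

ready_to_call({"date_from": "2024-01-01", "date_to": "2024-03-31"})  # True
ready_to_call({})  # False: ask the user for a range instead
```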


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/acamolese/google-search-console-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.