search_console_analytics_query

Retrieve organic Google Search performance data via Search Console API. Output raw rows with clicks, impressions, CTR, and position. Group by query, page, device, date, or country. Filter by date range and dimensions for SEO analysis.

Instructions

Query the Search Console Search Analytics API for organic Google Search performance data. Returns the raw 'rows' array from the searchAnalytics.query response: [{keys: [], clicks (int), impressions (int), ctr (float 0.0-1.0), position (float, 1-indexed average ranking)}]. Empty array when no data. Read-only. Use dimensions=['query'] for keywords, ['page'] for URLs, ['device'] for device split, ['date'] for a daily trend. For convenience shortcuts use search_console_analytics_top_queries / top_pages / device_breakdown; for before/after comparisons use search_console_analytics_compare_periods.
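The request/response shape described above can be sketched in Python. This is a minimal offline sketch, assuming the documented behavior: the helper names (`build_query_body`, `summarize_rows`) and the sample row data are illustrative, not part of the tool; only the payload keys mirror the searchAnalytics.query REST body.

```python
# Sketch of assembling a searchAnalytics.query body and reading the raw
# 'rows' array it returns. Helper names are hypothetical; payload keys
# (startDate, endDate, dimensions, rowLimit) follow the REST API.

def build_query_body(start_date, end_date, dimensions=None, row_limit=100):
    """Assemble the JSON body for a searchAnalytics.query request."""
    body = {
        "startDate": start_date,  # inclusive, 'YYYY-MM-DD'
        "endDate": end_date,      # inclusive, 'YYYY-MM-DD'
        "rowLimit": row_limit,    # API caps at 25000 per request
    }
    if dimensions:
        # e.g. ["query"] for keywords, ["page", "device"] for a URL/device split
        body["dimensions"] = dimensions
    return body


def summarize_rows(rows):
    """Total clicks/impressions over the raw rows array ([] when no data)."""
    return {
        "clicks": sum(r["clicks"] for r in rows),
        "impressions": sum(r["impressions"] for r in rows),
    }


body = build_query_body("2026-03-01", "2026-03-31", dimensions=["query"])

# Shape of one element of the returned 'rows' array (sample values):
rows = [
    {"keys": ["running shoes"], "clicks": 120, "impressions": 4000,
     "ctr": 0.03, "position": 4.2},
]
print(summarize_rows(rows))  # {'clicks': 120, 'impressions': 4000}
```

Note that `summarize_rows` handles the documented empty-array case naturally: summing over `[]` yields zero totals rather than an error.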

Input Schema

site_url (required): Property identifier as registered in Search Console. For URL-prefix properties use the full URL including trailing slash (e.g. 'https://example.com/'). For Domain properties use the 'sc-domain:' prefix (e.g. 'sc-domain:example.com'). The property must be verified and accessible to the authenticated Google account.

start_date (required): Inclusive start date in 'YYYY-MM-DD' format (e.g. '2026-03-01'). Search Console data typically lags 2-3 days, so 'today' returns no rows. Maximum lookback is 16 months.

end_date (required): Inclusive end date in 'YYYY-MM-DD' format (e.g. '2026-03-31'). Must be >= start_date. Search Console data lags 2-3 days; requesting the last two days typically returns no rows.

dimensions (optional): Dimensions to group rows by. Allowed: query, page, country, device, date, searchAppearance. Omit for an ungrouped total (clicks/impressions/ctr/position across the window). Each additional dimension multiplies row cardinality — agents should usually pick 1-2.

row_limit (optional, default 100): Maximum rows to return. Search Console API caps at 25000 per request; agents that need more should split the call by date range.

dimension_filter_groups (optional): Search Console dimensionFilterGroups payload (list of {groupType: 'and', filters: [{dimension, operator ('equals'|'contains'|'notContains'|'notEquals'|'includingRegex'|'excludingRegex'), expression}]}). Passed through verbatim to the REST API.
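The dimension_filter_groups structure above can be illustrated with a concrete payload. The group/filter keys mirror the schema; the specific filter (mobile traffic for queries containing "shoes") is an invented example.

```python
# Illustrative dimensionFilterGroups payload: restrict results to mobile
# traffic for queries containing "shoes". The groupType/filters structure
# follows the schema; the dimension values here are made-up examples.

filter_groups = [
    {
        "groupType": "and",  # all filters in this group must match a row
        "filters": [
            {"dimension": "query", "operator": "contains",
             "expression": "shoes"},
            {"dimension": "device", "operator": "equals",
             "expression": "MOBILE"},
        ],
    }
]

# This list would be passed verbatim as the dimension_filter_groups argument.
```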
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavioral traits: read-only, data lag of 2-3 days, empty array when no data, maximum lookback of 16 months, default row_limit, API cap, and the return structure including field types.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-organized, starting with the core purpose, then return format, then examples, then alternatives. Every sentence adds value without redundancy. It is long but appropriately so for the complexity of the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 6 parameters, no output schema, and moderate complexity, the description covers all essential aspects: what it does, what it returns, parameter usage, data limitations, and alternatives. It is complete enough for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline score is 3. The description adds context beyond the schema, such as typical dimension choices (e.g., 'agents should usually pick 1-2') and the meaning of the return format. However, some schema descriptions are already thorough, so the added value is moderate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it queries the Search Console Search Analytics API for organic Google Search performance data, returns the raw 'rows' array, and provides examples of dimensions. It also explicitly distinguishes itself from sibling convenience shortcuts like top_queries, top_pages, device_breakdown, and compare_periods.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus its alternatives, naming specific sibling tools for shortcuts and period comparisons. It also offers recommendations on dimensions and row_limit, aiding agent decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
