Glama / SerpstatGlobal

Serpstat MCP Server

Official

get_rt_project_keyword_serp_history

Retrieve historical Google SERP data for tracked keywords to analyze competitor positions, search volumes, and ranking trends over time.

Instructions

Get complete Google top-100 SERP history for tracked keywords in a rank tracker project. Returns full competitor analysis with historical positions, URLs, domains, and search volumes for each date. WARNING: This method returns large datasets (full top-100 for each keyword/date combination). Recommended pageSize: 20-50 for most use cases. Use date filters and keyword filters to reduce response size. Supports keyword tagging for grouping and filtering. This method does not consume API credits.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| projectId | Yes | Project identifier. | |
| projectRegionId | Yes | Region ID for the project. Get from the get_rt_project_regions_list method. Each project can track multiple regions. See the region reference: https://docs.google.com/spreadsheets/d/1LUDtm-L1qWMVpmWuN-nvDyYFfQtfiXUh5LIHE8sjs0k/edit?gid=75443986#gid=75443986 | |
| page | Yes | Page number for pagination. Starts at 1. | |
| pageSize | No | Number of keywords per page. Allowed values: 20, 50, 100, 500. Recommended: use 20 or 50 to avoid response truncation, since each keyword returns the full top-100 SERP for all dates. | |
| dateFrom | No | Start date of the period in YYYY-MM-DD format (e.g., '2025-09-01'). Use to filter historical data and reduce response size. | |
| dateTo | No | End date of the period in YYYY-MM-DD format (e.g., '2025-09-30'). Use to filter historical data and reduce response size. | |
| sort | No | Sort results by 'keyword' (alphabetically) or 'date' (chronologically). | date |
| order | No | Sorting order: 'asc' (oldest first) or 'desc' (newest first). | desc |
| keywords | No | Filter by specific keywords (max 1,000 keywords). | |
| withTags | No | Include keyword tags in the response. Tags group and categorize keywords in the project. Set to true to receive tag IDs and values for each keyword. | |
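To make the schema concrete, here is a minimal sketch of a parameter payload an agent might build before invoking the tool. The parameter names and allowed values come from the schema above; the wrapper function, its argument values, and the surrounding call mechanics are illustrative assumptions, not part of the Serpstat API.

```python
import json
from datetime import date

# Hypothetical helper assembling parameters for
# get_rt_project_keyword_serp_history, following the schema above:
# a small pageSize plus date filters keeps the top-100-per-keyword
# response manageable.
def build_serp_history_params(project_id: int, region_id: int, page: int = 1) -> dict:
    params = {
        "projectId": project_id,
        "projectRegionId": region_id,
        "page": page,               # pagination starts at 1
        "pageSize": 20,             # recommended: 20 or 50
        "dateFrom": "2025-09-01",   # YYYY-MM-DD, narrows the history window
        "dateTo": "2025-09-30",
        "sort": "date",             # default sort field
        "order": "desc",            # newest first (the default)
        "withTags": True,           # include tag IDs/values per keyword
    }
    # Sanity-check the YYYY-MM-DD date format the API expects;
    # raises ValueError on a malformed date.
    date.fromisoformat(params["dateFrom"])
    date.fromisoformat(params["dateTo"])
    return params

print(json.dumps(build_serp_history_params(12345, 42), indent=2))
```

Validating the date strings client-side is optional; it simply fails fast before a round trip to the server.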
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: returns large datasets (full top-100 for each keyword/date), includes competitor analysis with historical data, does not consume API credits, and warns about response size management. It lacks details on error handling or rate limits, but covers essential operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by important warnings and usage tips. Every sentence earns its place by adding critical information about dataset size, recommendations, filtering, tagging, and API credit impact, with no redundant or verbose content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (10 parameters, no output schema, no annotations), the description does a strong job by explaining the tool's purpose, behavioral traits, and usage guidance. It could improve by detailing the output structure (e.g., format of returned data) since there's no output schema, but it adequately covers input handling and operational context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 10 parameters thoroughly. The description adds marginal value by mentioning keyword tagging (related to 'withTags') and filtering recommendations (implied for 'dateFrom', 'dateTo', 'keywords'), but does not provide significant additional semantics beyond what the schema descriptions already cover.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get complete Google top-100 SERP history for tracked keywords in a rank tracker project.' It specifies the verb ('Get'), resource ('SERP history'), and scope ('tracked keywords in a rank tracker project'), distinguishing it from siblings like 'get_rt_project_url_serp_history' which focuses on URLs rather than keywords.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context with warnings about large datasets and recommendations for pageSize (20-50), plus advice to use date and keyword filters to reduce response size. It mentions keyword tagging for grouping/filtering but does not explicitly state when NOT to use this tool or name specific alternatives among siblings, though it implies it's for keyword-based SERP history.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
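Because each page returns the full top-100 SERP for every keyword/date combination, the recommended pattern is to page through results with a small pageSize rather than requesting everything at once. A hedged sketch of that loop, where `fetch_page` stands in for the actual MCP tool call (a placeholder, not a real Serpstat function):

```python
from typing import Callable, Iterator

# Placeholder for the real MCP tool invocation: given a page number,
# it would return that page's list of keyword SERP-history entries
# (shorter than page_size, or empty, when results are exhausted).
FetchPage = Callable[[int], list]

def iter_serp_history(fetch_page: FetchPage, page_size: int = 20) -> Iterator[dict]:
    # Walk pages starting at 1; a short page signals the end.
    page = 1
    while True:
        rows = fetch_page(page)
        yield from rows
        if len(rows) < page_size:
            break
        page += 1

# Demo with a stubbed fetcher: 45 keywords split across pages of 20.
data = [{"keyword": f"kw{i}"} for i in range(45)]
stub = lambda page: data[(page - 1) * 20 : page * 20]
rows = list(iter_serp_history(stub))
print(len(rows))  # 45
```

Streaming pages this way lets an agent apply date or keyword filters per request and stop early once it has enough history, instead of pulling the entire dataset.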
