Haloscan MCP Server

by chouayb123

get_domains_history_pages

Retrieve historical domain position data by specifying input domains, date ranges, and filtering criteria for in-depth SEO performance analysis.

Instructions

Obtenir l’historique des positions des domaines. (Get the history of domain positions.)

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| date_from | Yes | | |
| date_to | Yes | | |
| input | Yes | | |
| known_versions_max | No | | |
| known_versions_min | No | | |
| lineCount | No | Max number of returned results. | |
| mode | No | | |
| order | No | Whether the results are sorted in ascending or descending order. | |
| order_by | No | Field used for sorting results. Default sorts by descending volume. | |
| total_top_100_max | No | | |
| total_top_100_min | No | | |
| total_top_10_max | No | | |
| total_top_10_min | No | | |
| total_top_3_max | No | | |
| total_top_3_min | No | | |
| total_top_50_max | No | | |
| total_top_50_min | No | | |
| total_traffic_max | No | | |
| total_traffic_min | No | | |
| unique_keywords_max | No | | |
| unique_keywords_min | No | | |
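To make the shape of a call concrete, here is a sketch of an arguments payload for this tool. The value formats (ISO dates, a bare domain name, the `order_by` field name) are assumptions for illustration; the schema itself does not document them.

```python
# Illustrative arguments for get_domains_history_pages.
# Field formats are assumptions; the schema does not specify them.
arguments = {
    "input": "example.com",       # assumed: a bare domain name
    "date_from": "2024-01-01",    # assumed: ISO 8601 date
    "date_to": "2024-06-30",      # assumed: ISO 8601 date
    "lineCount": 100,             # max number of returned results
    "order": "desc",              # ascending or descending sort
    "order_by": "total_traffic",  # hypothetical sort field name
}

# The three required parameters must all be present.
required = {"input", "date_from", "date_to"}
assert required.issubset(arguments)
```

All other parameters are optional min/max filters that can simply be omitted.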
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states that the tool retrieves historical position data, without disclosing behavioral traits such as whether it's a read-only operation, potential rate limits, authentication requirements, pagination behavior, or what happens with large date ranges. For a tool with 21 parameters and no annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence in French that efficiently states the tool's general purpose. It's front-loaded with the core action ('Obtenir l’historique') and resource ('des positions des domaines'). However, given the tool's complexity (21 parameters), this brevity borders on under-specification rather than optimal conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high complexity (21 parameters, 3 required), low schema coverage (14%), no annotations, and no output schema, the description is incomplete. It doesn't address what the tool returns (e.g., list of historical positions, aggregated trends), how results are structured, or usage constraints. For a data retrieval tool with extensive filtering options, this minimal description leaves critical gaps for an AI agent to understand and invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 14% (3 out of 21 parameters have descriptions), so the description must compensate but adds no parameter information. It doesn't explain what 'input' represents (e.g., domain names, URLs), how date ranges work, what 'mode' controls, or the purpose of numerous min/max filters (e.g., total_top_100_min). With 21 parameters mostly undocumented, the description fails to provide necessary semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
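As an illustration of the per-parameter documentation the review calls for, here is a hypothetical JSON Schema fragment for three of the tool's parameters. The property names come from the table above; the descriptions themselves are invented examples of what adequate coverage could look like, not the tool's actual schema.

```json
{
  "properties": {
    "input": {
      "type": "string",
      "description": "Domain to analyze, e.g. \"example.com\" (hypothetical wording)."
    },
    "date_from": {
      "type": "string",
      "description": "Start of the history window, e.g. an ISO 8601 date (hypothetical wording)."
    },
    "total_top_10_min": {
      "type": "integer",
      "description": "Only return snapshots with at least this many top-10 rankings (hypothetical wording)."
    }
  }
}
```

With descriptions like these on all 21 parameters, the tool description itself could stay short without leaving agents to guess value formats.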

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Obtenir l’historique des positions des domaines' (Get the history of domain positions) states a general purpose but lacks specificity. It mentions 'domains' and 'history of positions' but doesn't clarify what kind of positions (e.g., search rankings, visibility scores) or what time-based data is returned. It distinguishes from siblings like 'get_domains_positions' (current positions) by specifying 'history', but remains vague about the exact resource being retrieved.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools like 'get_domains_history_positions' (similar name), 'get_domains_positions' (current positions), and 'get_domains_visibility_trends' (trend data), there's no indication of which tool to choose for specific historical position queries. No prerequisites, exclusions, or context for usage are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/chouayb123/mcp'
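The same request can be sketched in Python with the standard library, using the endpoint from the curl example above. Actually sending the request requires network access, so the send is shown commented out.

```python
import json
import urllib.request

# Endpoint from the Glama MCP directory API example above.
url = "https://glama.ai/api/mcp/v1/servers/chouayb123/mcp"
req = urllib.request.Request(url, method="GET")

# Sending the request needs network access:
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)

assert req.full_url == url
assert req.get_method() == "GET"
```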

If you have feedback or need assistance with the MCP directory API, please join our Discord server.