
competlab-mcp-server

get_positioning_history

Track competitor homepage messaging changes over time by retrieving paginated positioning monitoring history. Access run timestamps and IDs to analyze rival brand evolution.

Instructions

Get paginated history of Positioning monitoring runs with completion timestamps. Use this to track how competitors change their homepage messaging over time. Retrieve specific run data with get_positioning_run_detail using the runId from this response. Read-only. Returns paginated JSON array with pagination.hasMore flag.
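As a sketch of the list-then-detail workflow the description implies, an agent might chain the two tools as below. The `call_tool` helper and both response shapes are assumptions for illustration, not the server's documented schema:

```python
# Hypothetical chaining of get_positioning_history -> get_positioning_run_detail.
# `call_tool` stands in for a real MCP client session; responses are fabricated.

def call_tool(name, arguments):
    if name == "get_positioning_history":
        return {"items": [{"runId": "run-42", "completedAt": "2024-03-01T12:00:00Z"}],
                "pagination": {"hasMore": False}}
    if name == "get_positioning_run_detail":
        return {"runId": arguments["runId"], "headline": "New homepage tagline"}
    raise ValueError(f"unknown tool: {name}")

history = call_tool("get_positioning_history", {"projectId": "proj-123", "page": 1})
latest = history["items"][0]  # first item taken as latest; sort order is not documented
detail = call_tool("get_positioning_run_detail",
                   {"projectId": "proj-123", "runId": latest["runId"]})
print(detail["runId"])  # run-42
```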

Input Schema

Name      | Required | Description                     | Default
projectId | Yes      | Project ID (from list_projects) | (none)
page      | No       | Page number (1-indexed)         | 1
limit     | No       | Items per page (max: 100)       | 20
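The pagination parameters above can be driven with a simple loop over the `pagination.hasMore` flag. This is a minimal sketch in which `call_tool` is a stand-in for a real MCP client and fakes a 45-run history served 20 items per page:

```python
# Sketch of paging through get_positioning_history via pagination.hasMore.
# `call_tool` and the response shape are assumptions for illustration.

FAKE_RUNS = [{"runId": f"run-{i}", "completedAt": f"2024-01-{i % 28 + 1:02d}"}
             for i in range(45)]

def call_tool(name, arguments):
    page = arguments.get("page", 1)          # 1-indexed, default 1
    limit = min(arguments.get("limit", 20), 100)  # default 20, max 100
    start = (page - 1) * limit
    return {"items": FAKE_RUNS[start:start + limit],
            "pagination": {"hasMore": start + limit < len(FAKE_RUNS)}}

def fetch_all_runs(project_id):
    runs, page = [], 1
    while True:
        resp = call_tool("get_positioning_history",
                         {"projectId": project_id, "page": page, "limit": 20})
        runs.extend(resp["items"])
        if not resp["pagination"]["hasMore"]:
            return runs
        page += 1

all_runs = fetch_all_runs("proj-123")
print(len(all_runs))  # 45
```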
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the 'Read-only' safety status, pagination behavior, and output structure ('paginated JSON array with pagination.hasMore flag'), effectively compensating for the missing output schema. It could be improved by mentioning sort order or history retention limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four well-structured sentences with zero waste: (1) core action and resource, (2) use case, (3) sibling relationship/workflow, (4) safety and return format. Every sentence earns its place and is appropriately front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage but no annotations and no output schema, the description adequately compensates by describing the return format and sibling relationships. It omits minor details like error handling and rate limits, but is complete enough for a paginated list operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (all three parameters are fully documented), so the baseline score of 3 applies. The description's reference to the 'runId from this response' indirectly hints at the output structure, but it adds no significant semantic context to the input parameters beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description pairs the specific verb 'Get' with the resource 'paginated history of Positioning monitoring runs' and distinguishes itself from the sibling get_positioning_run_detail by stating that this tool returns the history while the sibling retrieves 'specific run data' using a runId from this response. It also clarifies that the scope includes 'completion timestamps'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('track how competitors change their homepage messaging over time') and explicitly names the alternative tool ('get_positioning_run_detail') for retrieving specific run data, establishing clear workflow guidance between list and detail operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/competlab/competlab-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.