
SurfRank MCP Server

by SurfRankAI

get_latest_analyses

Retrieve per-keyword analyses from the most recent completed report to inspect which prompts mention your brand and which engines surfaced it.

Instructions

Get the per-keyword analyses from the project's most recent completed report. Useful for inspecting which prompts mention the brand and which engines surfaced it.

Input Schema

Name       Required  Description  Default
projectId  Yes

Implementation Reference

  • Handler function for the 'get_latest_analyses' tool. Calls the SurfRank API endpoint GET /projects/{projectId}/reports/latest/analyses to fetch per-keyword analyses from the most recent completed report.
      handler: async ({ projectId }) =>
        api.get(`/projects/${projectId}/reports/latest/analyses`),
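Taken together, the handler is a thin wrapper over the HTTP client. A minimal self-contained sketch of that shape (the `api` object below is a stand-in mock, not the real SurfRank client):

```javascript
// Stand-in mock for the real SurfRank HTTP client (hypothetical).
const api = {
  get: async (path) => ({ requestedPath: path }),
};

// Handler as shown in the implementation reference above.
const handler = async ({ projectId }) =>
  api.get(`/projects/${projectId}/reports/latest/analyses`);

// Example call:
handler({ projectId: 'proj_123' }).then((res) =>
  console.log(res.requestedPath),
);
// logs "/projects/proj_123/reports/latest/analyses"
```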
  • Input schema for the tool. Accepts a single required string parameter 'projectId'.
    inputSchema: {
      type: 'object',
      properties: { projectId: { type: 'string' } },
      required: ['projectId'],
    },
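Under this schema, a well-formed call carries a single string argument. A quick illustration of what passes and what fails, using a hand-rolled check rather than a real JSON Schema validator:

```javascript
const inputSchema = {
  type: 'object',
  properties: { projectId: { type: 'string' } },
  required: ['projectId'],
};

// Minimal manual check mirroring the schema's constraints.
function isValidInput(args) {
  return (
    typeof args === 'object' &&
    args !== null &&
    inputSchema.required.every((k) => typeof args[k] === 'string')
  );
}

console.log(isValidInput({ projectId: 'proj_123' })); // true
console.log(isValidInput({}));                        // false: projectId missing
```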
  • src/index.js:34 (registration)
    The 'get_latest_analyses' tool is registered in the MCP server by spreading the 'reportTools' array into 'ALL_TOOLS'. The server's ListTools handler exposes it, and the CallTool handler dispatches to it via toolByName lookup.
    ...reportTools,
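The registration and dispatch pattern described above can be sketched as follows. The tool objects and helper names here are simplified assumptions based on the reference text, not the actual source:

```javascript
// Hypothetical sketch of the registration/dispatch pattern.
const reportTools = [
  {
    name: 'get_latest_analyses',
    handler: async ({ projectId }) => ({ projectId }),
  },
];

// Tools from several groups are collected into one array...
const ALL_TOOLS = [...reportTools];

// ...and the CallTool handler dispatches by name.
const toolByName = Object.fromEntries(ALL_TOOLS.map((t) => [t.name, t]));

async function callTool(name, args) {
  const tool = toolByName[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler(args);
}
```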
  • The HTTP client helper used by the handler. 'api.get' delegates to the 'request' function which sends authenticated GET requests to the SurfRank API.
    export const api = {
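The snippet above is cut off at the opening brace. One plausible shape for such a helper, restructured here with an injectable `fetch` and API key so the sketch runs standalone; the base URL is an illustrative placeholder, not the real endpoint:

```javascript
const BASE_URL = 'https://api.surfrank.example'; // illustrative placeholder

// fetchImpl and apiKey are injected so this sketch needs no real credentials.
function makeApi(fetchImpl, apiKey) {
  async function request(method, path) {
    const res = await fetchImpl(`${BASE_URL}${path}`, {
      method,
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!res.ok) throw new Error(`SurfRank API error: ${res.status}`);
    return res.json();
  }
  return { get: (path) => request('GET', path) };
}
```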
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of disclosure. It notes that the tool always reads from the most recent completed report, but it does not say what happens when no completed report exists, how errors surface, or confirm that the call is read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two focused sentences with no extraneous information. The first sentence states the action and resource, and the second adds context. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description sufficiently explains what it returns (per-keyword analyses) and the source (most recent completed report). However, it could benefit from a brief note on the structure of the analyses or potential error cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the tool description does not mention the projectId parameter at all. With only one required parameter, the description should explain its purpose (e.g., which project is being queried), but it adds no meaning beyond the schema's bare type.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
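One way to close this gap is a schema-level description for the parameter. The wording below is a suggestion for illustration, not taken from the source:

```json
{
  "type": "object",
  "properties": {
    "projectId": {
      "type": "string",
      "description": "ID of the SurfRank project whose latest completed report should be queried."
    }
  },
  "required": ["projectId"]
}
```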

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Get), resource (per-keyword analyses from the most recent completed report), and adds context (inspecting prompts mentioning brand and engines). It distinguishes itself from siblings like get_report and list_reports.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a use case ('useful for inspecting which prompts mention the brand and which engines surfaced it') but does not explicitly state when to use this tool versus alternatives like get_report or list_reports. The guidance is implied but not definitive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
