SurfRank MCP Server

by SurfRankAI

run_quick_test

Start a low-cost AI-visibility snapshot for a domain. Runs asynchronously with immediate return; poll results with get_quick_test.

Instructions

Start a "quick test" — a low-cost (1 credit) AI-visibility snapshot for a domain. Returns immediately; the test runs asynchronously. Use get_quick_test to poll. If your API key is scoped to a project, websiteId must match that project.

Input Schema

Name       Required  Description                                            Default
domain     Yes       Root domain, e.g. "example.com"                        -
brandName  No        Brand name (optional — auto-detected if omitted)       -
country    No        ISO-2 country code (e.g. "US")                         -
engines    No        AI engines to query (e.g. ["chatgpt", "perplexity"])   all available
websiteId  No        Optional — link the test to an existing project        -
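
Two illustrative argument objects, one minimal and one fully specified. The values are placeholders; in particular, the real websiteId format is not documented here.

    // Minimal call: every other field is defaulted or auto-detected.
    const minimalArgs = { domain: 'example.com' };

    // Fully specified call. All values are illustrative; 'ws_123' is a
    // made-up placeholder for a project id.
    const fullArgs = {
      domain: 'example.com',
      brandName: 'Example Inc',
      country: 'US',
      engines: ['chatgpt', 'perplexity'],
      websiteId: 'ws_123',
    };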

Implementation Reference

  • The handler that executes the 'run_quick_test' tool logic. It sends a POST request to '/quick-test' with the input parameters (domain, brandName, country, engines, websiteId); a plausible sketch of the 'api' helper it relies on appears after this list.
    handler: async (input) => api.post('/quick-test', input),
  • The input schema for 'run_quick_test'. Requires 'domain' string, with optional properties: brandName, country, engines (array of strings), and websiteId.
    inputSchema: {
      type: 'object',
      properties: {
        domain: { type: 'string', description: 'Root domain, e.g. "example.com"' },
        brandName: { type: 'string', description: 'Brand name (optional — auto-detected if omitted)' },
        country: { type: 'string', description: 'ISO-2 country code (e.g. "US")' },
        engines: {
          type: 'array',
          items: { type: 'string' },
          description: 'AI engines to query (e.g. ["chatgpt", "perplexity"]). Defaults to all available.',
        },
        websiteId: { type: 'string', description: 'Optional — link the test to an existing project.' },
      },
      required: ['domain'],
    },
  • src/index.js:31-39 (registration)
    The tool is registered as part of the ALL_TOOLS array by spreading quickTestTools, and looked up by name via the toolByName Map at line 41 (a plausible reconstruction of that lookup appears in the sketch after this list).
    const ALL_TOOLS = [
      ...projectTools,
      ...keywordTools,
      ...reportTools,
      ...quickTestTools,
      ...keywordResearchTools,
      ...competitorTools,
      ...opportunityTools,
    ];
  • src/index.js:56-64 (registration)
    The ListToolsRequestSchema handler exposes all tools (including run_quick_test) to the MCP client by stripping the handler from each tool definition.
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      return {
        tools: ALL_TOOLS.map(({ name, description, inputSchema }) => ({
          name,
          description,
          inputSchema,
        })),
      };
    });
  • src/index.js:67-96 (registration)
    The CallToolRequestSchema handler dispatches tool calls by name, invoking tool.handler(args) — this is the runtime dispatch that executes run_quick_test.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args = {} } = request.params;
      const tool = toolByName.get(name);
      if (!tool) {
        return {
          isError: true,
          content: [{ type: 'text', text: `Unknown tool: ${name}` }],
        };
      }
    
      try {
        const result = await tool.handler(args);
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify(result, null, 2),
            },
          ],
        };
      } catch (err) {
        // Surface the SurfRank error message verbatim so the model can react
        // (e.g. "insufficient credits" → tell the user to top up).
        const message = err?.message || 'Unknown error';
        return {
          isError: true,
          content: [{ type: 'text', text: message }],
        };
      }
    });
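
Two pieces referenced above are not shown in the excerpt: the toolByName lookup (src/index.js line 41) and the 'api' helper the handlers call. The sketches below are plausible reconstructions under stated assumptions, not the actual source; the base URL and auth header name in particular are guesses.

    // Likely shape of the name-to-tool lookup used by the dispatch handler above.
    const toolByName = new Map(ALL_TOOLS.map((tool) => [tool.name, tool]));

    // One plausible shape for the 'api' helper. The base URL and the
    // 'x-api-key' header name are assumptions, not taken from the source.
    const api = {
      post: async (path, body) => {
        const res = await fetch(`https://api.surfrank.ai${path}`, {
          method: 'POST',
          headers: {
            'content-type': 'application/json',
            'x-api-key': process.env.SURFRANK_API_KEY,
          },
          body: JSON.stringify(body),
        });
        // A non-2xx response becomes a thrown Error, which the CallTool
        // handler's catch block above surfaces to the model verbatim.
        if (!res.ok) throw new Error(await res.text());
        return res.json();
      },
    };
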
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the low cost (1 credit), asynchronous execution, and immediate return. However, it doesn't specify error behavior or what the response contains, so it falls short of full transparency for a write operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, each carrying crucial information. No fluff; the description is front-loaded with purpose, then usage and constraints. Ideal structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description omits what the immediate return contains (e.g., a test ID), so the agent must infer it from sibling tools. The async nature is explained, but the missing return-value information reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%. The description adds value beyond schema by clarifying that websiteId must match the project scope if applicable, and that engines default to all available. This provides useful context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Start a quick test' with a specific verb and resource, and distinguishes it from sibling tools like get_quick_test and list_quick_tests. It also mentions it's a low-cost AI-visibility snapshot.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that the test runs asynchronously and that get_quick_test should be used for polling. It also provides context about the websiteId requirement when the API key is scoped to a project. However, it does not explicitly call out scenarios where an alternative tool would be the better choice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
