Glama

maasy_get_seo_status

Retrieve SEO/GEO scores, keyword rankings, visibility trends, and top queries for a brand by providing its UUID.

Instructions

SEO/GEO scores, keyword rankings, visibility trends, top queries.

Input Schema

Name         Required   Description   Default
project_id   No         Brand UUID    (none)

Implementation Reference

  • src/index.ts:241-246 (registration)
    The tool 'maasy_get_seo_status' is registered with the MCP server, with a Zod schema accepting an optional project_id string.
    server.tool(
      "maasy_get_seo_status",
      "SEO/GEO scores, keyword rankings, visibility trends, top queries.",
      { project_id: z.string().optional().describe("Brand UUID") },
      toolHandler("get_seo_status")
    );
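For context, an MCP client reaches this registration through a `tools/call` request. The shape below is a sketch of that payload; the `project_id` value is a placeholder UUID, not a real brand identifier:

```typescript
// Hypothetical tools/call payload an MCP client would send to invoke this tool.
// The project_id value is a placeholder UUID, not a real brand.
const callRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "maasy_get_seo_status",
    arguments: { project_id: "123e4567-e89b-12d3-a456-426614174000" },
  },
};
```

Because `project_id` is optional in the schema, `arguments` may also be an empty object, in which case the server falls back to its configured default (see the handler below).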
  • The generic toolHandler function creates an async handler for each tool. For 'maasy_get_seo_status', it calls callGateway('get_seo_status', args) which sends the request to the remote mcp-gateway edge function.
    function toolHandler(toolName: string, argsFn?: (args: Record<string, unknown>) => Record<string, unknown>) {
      return async (args: Record<string, unknown>) => {
        try {
          const gatewayArgs = argsFn ? argsFn(args) : args;
          // Auto-inject default project_id if not provided
          if (DEFAULT_PROJECT_ID && !gatewayArgs.project_id) {
            gatewayArgs.project_id = DEFAULT_PROJECT_ID;
          }
          const result = await callGateway(toolName, gatewayArgs);
          return { content: [{ type: "text" as const, text: JSON.stringify(result, null, 2) }] };
        } catch (e: unknown) {
          return {
            content: [{ type: "text" as const, text: `Error: ${e instanceof Error ? e.message : String(e)}` }],
            isError: true,
          };
        }
      };
    }
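The default-injection step is the only argument transformation the handler performs. Extracted into a standalone sketch (the `DEFAULT_PROJECT_ID` value here is a stand-in; in the server it comes from configuration), the behavior looks like this:

```typescript
// Sketch of the default-injection step inside toolHandler, extracted for clarity.
// DEFAULT_PROJECT_ID is a placeholder value; the server reads it from configuration.
const DEFAULT_PROJECT_ID = "123e4567-e89b-12d3-a456-426614174000";

function withDefaultProjectId(
  args: Record<string, unknown>
): Record<string, unknown> {
  const out = { ...args };
  // Auto-inject the default only when the caller omitted project_id
  if (DEFAULT_PROJECT_ID && !out.project_id) {
    out.project_id = DEFAULT_PROJECT_ID;
  }
  return out;
}
```

An explicitly supplied `project_id` always wins over the default, so agents managing multiple brands can pass the UUID per call.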
  • The callGateway function sends a POST request with the tool name and args to a remote Supabase edge function (mcp-gateway), which contains the actual business logic for 'get_seo_status'.
    export async function callGateway(tool: string, args: Record<string, unknown> = {}): Promise<unknown> {
      const res = await fetch(gatewayUrl, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          [authHeader.name]: authHeader.value,
        },
        body: JSON.stringify({ tool, args }),
      });
    
      const data = await res.json();
    
      if (!res.ok) {
        throw new Error(data.error || `Gateway error (${res.status})`);
      }
    
      return data.result;
    }
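The response contract this implies can be sketched in isolation: on a non-OK status, the JSON body's `error` field (if present) becomes the thrown message; otherwise the `result` field is returned. The interface below is an assumption modeling only the fields `callGateway` touches:

```typescript
// Minimal sketch of the response contract callGateway relies on. The interface
// is a stand-in covering only the fields the function actually reads.
interface GatewayLikeResponse {
  ok: boolean;
  status: number;
  json: () => Promise<{ error?: string; result?: unknown }>;
}

async function parseGatewayResponse(res: GatewayLikeResponse): Promise<unknown> {
  const data = await res.json();
  if (!res.ok) {
    // Prefer the gateway's own error message; fall back to the HTTP status
    throw new Error(data.error || `Gateway error (${res.status})`);
  }
  return data.result;
}
```

Note that the body is parsed before the status check, so a non-JSON error response from the edge function would surface as a parse failure rather than the status-based message.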
  • Input schema for 'maasy_get_seo_status': optional project_id (string, described as 'Brand UUID').
    { project_id: z.string().optional().describe("Brand UUID") },
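Note that the schema only requires an optional string; the "Brand UUID" constraint lives in the description, not the validator. A hypothetical client-side helper (not part of the server) that mirrors the documented format could look like:

```typescript
// Hypothetical helper mirroring the "Brand UUID" description with a format
// check; the server schema itself accepts any string.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function looksLikeBrandUuid(value: unknown): boolean {
  return typeof value === "string" && UUID_RE.test(value);
}
```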
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits (e.g., read-only, no side effects, rate limits). The agent must infer from the tool name that this is a read operation; the description never confirms it.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise phrase, but it reads as a bare list rather than a structured sentence. It would benefit from a clear verb and structure, though it is not overly verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description is the sole indicator of return values. It lists multiple metric types but lacks details on format, pagination, or how results are structured. This is incomplete for a tool that likely returns complex data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter (project_id), described as 'Brand UUID'. Schema coverage is 100%, but the tool description adds no meaning beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description lists data types (SEO/GEO scores, keyword rankings, visibility trends, top queries), which gives some idea of the output, but it does not explicitly state the action (retrieving/reporting). The tool name 'get_seo_status' implies the action, yet the description remains a noun phrase rather than a clear statement of purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus siblings like maasy_discover_keywords or maasy_get_alerts. No prerequisites or context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
