run_aeo_audit

Start an AEO audit to analyze your website's visibility across AI platforms like ChatGPT, Perplexity, Claude, and Google AI. Get citation rates, health scores, and content gap analysis.

Instructions

Start an AEO audit for a URL (async). Returns auditId immediately. Then call check_aeo_audit_status every 10–15s until is_complete or free_preview_ready (free tier stops at step 2).
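The polling contract above can be sketched as a pure stop-condition over the status payload (field names taken from the handler code below; this is an illustrative reduction for client authors, not the server's actual logic):

```typescript
// Status payload fields as named in the tool description and handler code.
type AuditStatus = {
  is_complete?: boolean;
  free_preview_ready?: boolean;
  is_terminal?: boolean;
  paid_pipeline_pending?: boolean;
};

// Stop polling only once a terminal-ish state is reached; a pending paid
// pipeline always means "keep polling", even if the free preview is ready.
function shouldStopPolling(status: AuditStatus): boolean {
  if (status.paid_pipeline_pending === true) return false;
  return (
    status.is_complete === true ||
    status.free_preview_ready === true ||
    status.is_terminal === true
  );
}
```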

Input Schema

Name     | Required | Description                                               | Default
-------- | -------- | --------------------------------------------------------- | -------
url      | Yes      | The website URL to audit (e.g. https://example.com)       |
keyword  | No       | Primary industry keyword; defaults from domain if omitted |
tier     | No       | Audit tier: free (8 queries) or paid (40 queries)         | free
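A minimal, hypothetical argument object for this schema; the tier fallback mirrors the `tier || "free"` expression in the handler below:

```typescript
// Example arguments for run_aeo_audit. Only `url` is required.
const exampleArgs = {
  url: "https://example.com",
  keyword: "project management software", // optional; derived from domain if omitted
  tier: "paid" as const, // "free" (8 queries, default) or "paid" (40 queries)
};

// Same fallback the handler applies with `tier || "free"`.
function resolveTier(tier?: "free" | "paid"): "free" | "paid" {
  return tier || "free";
}
```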

Implementation Reference

  • Handler implementation for the run_aeo_audit tool, which initiates an audit via an API call and optionally polls for results.
    server.tool(
      "run_aeo_audit",
      "Start an AEO audit for a URL (async). Returns auditId immediately. Then call check_aeo_audit_status every 10–15s until is_complete or free_preview_ready (free tier stops at step 2).",
      {
        url: z.string().url().describe("The website URL to audit (e.g. https://example.com)"),
        keyword: z.string().optional().describe("Primary industry keyword; defaults from domain if omitted"),
        tier: z.enum(["free", "paid"]).optional().default("free").describe("Audit tier: free (8 queries) or paid (40 queries)"),
      },
      async ({ url, keyword, tier }) => {
        try {
          const kw = defaultKeyword(url, keyword);
    
          const tierVal = tier || "free";
          const res = await fetch(`${API_BASE}/api/aeo-audit`, {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              "X-API-Key": apiKey,
              // Ensures Render applies admin paid bypass if body tier is lost by a proxy/client
              ...(String(tierVal).toLowerCase() === "paid"
                ? { "X-AgentAEO-Admin-Paid-Tier": "1" }
                : {}),
            },
            body: JSON.stringify({ url, keyword: kw, tier: tierVal, async: true }),
          });
          const data = (await res.json()) as Record<string, unknown>;
          if (!res.ok) {
            const err = (data?.error as string) || (data?.message as string) || `HTTP ${res.status}`;
            return {
              content: [{ type: "text" as const, text: `Error: ${err}` }],
              isError: true,
            };
          }
          const auditId = (data?.auditId ?? data?.audit_id ?? data?.id) as string | undefined;
          if (!auditId) {
            return {
              content: [{ type: "text" as const, text: `Audit started but no auditId returned:\n${JSON.stringify(data, null, 2)}` }],
            };
          }
    
          const reportUrl = `https://agentaeo.com/audit/${auditId}/summary`;
    
          if (!inlinePoll) {
            const text =
              `✅ Audit job accepted (async).\n\n` +
              `auditId: ${auditId}\n` +
              `keyword used: ${kw}\n\n` +
              `Next: call tool **check_aeo_audit_status** every 10–15s: **free** tier → stop when **free_preview_ready**; **paid** tier → keep polling until **is_complete** (full report, step 5). If **paid_pipeline_pending** is true, the paid pipeline is still running — keep polling.\n\n` +
              `View report when ready: ${reportUrl}\n\n` +
              `Server response:\n${JSON.stringify(data, null, 2)}`;
            return { content: [{ type: "text" as const, text }] };
          }
    
          // Optional long poll (may exceed Claude Desktop ~60s tool limit)
          const POLL_INTERVAL_MS = 12000;
          const MAX_POLLS = 30;
          let lastStatus: Record<string, unknown> = {};
    
          for (let i = 0; i < MAX_POLLS; i++) {
            await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
    
            const pollRes = await fetch(`${API_BASE}/api/aeo-status/${auditId}`, {
              method: "GET",
              headers: { "X-API-Key": apiKey },
            });
            const pollData = (await pollRes.json()) as Record<string, unknown>;
            lastStatus = pollData;
    
            const paidPipelinePending = (pollData?.paid_pipeline_pending as boolean) === true;
            const isComplete = (pollData?.is_complete as boolean) === true;
            const freePreviewReady = (pollData?.free_preview_ready as boolean) === true;
            const isTerminal = (pollData?.is_terminal as boolean) === true;
    
            if (paidPipelinePending) {
              continue;
            }
            if (isComplete || freePreviewReady || isTerminal) {
              const text =
                `✅ Audit complete!\n` +
                `auditId: ${auditId}\n` +
                `Status: ${pollData?.status ?? "free_preview"}\n` +
                `free_preview_ready: ${freePreviewReady}\n` +
                `View report: ${reportUrl}\n\n` +
                `Raw response:\n${JSON.stringify(pollData, null, 2)}`;
              return { content: [{ type: "text" as const, text }] };
            }
          }
    
          return {
            content: [{
              type: "text" as const,
              text: `Audit started (auditId: ${auditId}) but did not complete within 6 minutes.\nLast status:\n${JSON.stringify(lastStatus, null, 2)}\nUse check_aeo_audit_status with auditId "${auditId}" to continue polling.`,
            }],
          };
        } catch (err) {
          const msg = err instanceof Error ? err.message : String(err);
          return {
            content: [{ type: "text" as const, text: `Error: ${msg}` }],
            isError: true,
          };
        }
      }
    );
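The `defaultKeyword` helper called at the top of the handler is not shown on this page. A plausible sketch, assuming it prefers the caller's keyword and otherwise derives one from the URL's hostname (the real implementation may differ):

```typescript
// Hypothetical reconstruction of defaultKeyword: use the caller's keyword if
// given, otherwise turn the hostname's first label into a keyword, e.g.
// "https://acme-widgets.com" -> "acme widgets".
function defaultKeyword(url: string, keyword?: string): string {
  if (keyword && keyword.trim()) return keyword.trim();
  const host = new URL(url).hostname.replace(/^www\./, "");
  const base = host.split(".")[0]; // drop the TLD and any subpath
  return base.replace(/[-_]+/g, " ");
}
```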
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the operation is asynchronous ('async'), returns an auditId immediately, and has tier-based limitations (free tier stops at step 2). It also specifies a polling cadence ('every 10–15s'), which implies rate expectations. However, it doesn't detail error handling, timeouts, or authentication needs, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: two sentences that front-load the core action and immediately follow with essential usage instructions. Every sentence earns its place by providing critical operational details without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (async operation with polling) and lack of annotations or output schema, the description is largely complete. It covers the async nature, return value (auditId), polling requirements, and tier limitations. However, it doesn't explain what 'step 2' entails or potential errors, leaving minor gaps for an agent to handle edge cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (url, keyword, tier) with descriptions and defaults. The description adds no parameter semantics beyond the schema; for example, it doesn't explain how keyword selection affects results or elaborate on the implications of each tier. Thus, it meets the baseline but doesn't enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Start an AEO audit for a URL (async).' It specifies the action ('Start'), resource ('AEO audit'), and scope ('for a URL'), distinguishing it from sibling tools like check_aeo_audit_status or generate_aeo_content_suite by focusing on initiating an audit rather than checking status or generating content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: it instructs to 'call check_aeo_audit_status every 10–15s until is_complete or free_preview_ready' and notes that 'free tier stops at step 2.' This gives clear when-to-use context (start audit, then poll) and distinguishes from alternatives by implying this is the entry point for audits, with check_aeo_audit_status as the follow-up.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
