Glama

detect_ai

Analyze media content to identify AI-generated material across images, video, audio, and text formats using specialized detection models.

Instructions

Detect whether media content was generated by AI. Supports images, video, audio, and text/PDF. Runs multiple specialized detection models in parallel for the given media type. Returns a job_id — use check_job to poll for results.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| media_url | No | Public URL of the media to analyze | |
| media | No | Base64-encoded media content to analyze | |
| text | No | Plain text content to analyze for AI generation | |
| mime | No | MIME type of the media (e.g. image/png, audio/wav, text/plain) | |
| tags | No | Tags for organizing and filtering | |
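As a hedged illustration of the schema above, a typical invocation might pass arguments like the following. The field names come from the schema; the URL and tag values are placeholders, and the convention of supplying exactly one of media_url, media, or text is inferred from the parameters being alternatives, not stated on this page.

```typescript
// Illustrative detect_ai arguments (values are placeholders).
// Typically one of media_url, media, or text is supplied, plus a
// mime hint for binary media.
const args = {
  media_url: "https://example.com/sample.png", // placeholder URL
  mime: "image/png",
  tags: ["campaign-review"], // optional organizational tags
};
```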

Implementation Reference

  • Main implementation of the detect_ai tool: defines the tool with a Zod input schema (media_url, media, text, mime, tags) and a handler that POSTs to the /api/v1/detect/ai endpoint and returns a job_id for polling via check_job
    // Imports assumed from the surrounding file; the ApiClient import path is illustrative.
    import { z } from "zod";
    import type { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import type { ApiClient } from "../api-client.js";

    export function register(server: McpServer, api: ApiClient): void {
      server.tool(
        "detect_ai",
        "Detect whether media content was generated by AI. Supports images, video, audio, " +
          "and text/PDF. Runs multiple specialized detection models in parallel for the given " +
          "media type. Returns a job_id — use check_job to poll for results.",
        {
          media_url: z
            .string()
            .url()
            .optional()
            .describe("Public URL of the media to analyze"),
          media: z
            .string()
            .optional()
            .describe("Base64-encoded media content to analyze"),
          text: z
            .string()
            .optional()
            .describe("Plain text content to analyze for AI generation"),
          mime: z
            .string()
            .optional()
            .describe("MIME type of the media (e.g. image/png, audio/wav, text/plain)"),
          tags: z
            .array(z.string())
            .optional()
            .describe("Tags for organizing and filtering"),
        },
        async (params) => {
          try {
            const body: Record<string, unknown> = {};
            if (params.media_url) body.media_url = params.media_url;
            if (params.media) body.media = params.media;
            if (params.text) body.text = params.text;
            if (params.mime) body.mime = params.mime;
            if (params.tags) body.tags = params.tags;
    
            const result = await api.post("/api/v1/detect/ai", body);
            const res = result as { job_id: string };
    
            return {
              content: [
                {
                  type: "text" as const,
                  text:
                    `AI detection job created.\n\n` +
                    `Job ID: ${res.job_id}\n\n` +
                    `Use check_job with this job_id to poll for results.`,
                },
              ],
            };
          } catch (err) {
            return {
              content: [
                {
                  type: "text" as const,
                  text: `Error: ${err instanceof Error ? err.message : String(err)}`,
                },
              ],
              isError: true as const,
            };
          }
        },
      );
    }
  • src/index.ts:12-12 (registration)
    Import statement for detect_ai register function
    import { register as detectAi } from "./tools/detect-ai.js";
  • src/index.ts:54-54 (registration)
    Registration of detect_ai tool with the MCP server instance and API client
    detectAi(server, api);
  • The ApiClient.post method, used by the detect_ai handler to send POST requests to the SDRM API
    async post<T = unknown>(path: string, body: unknown): Promise<T> {
      return this.request<T>(new URL(`${this.baseUrl}${path}`), {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });
    }
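The job_id workflow implied above can be sketched as a simple polling loop. The exact response shape of check_job is not shown on this page, so the JobStatus interface and its terminal states below are assumptions, and pollJob is an illustrative helper rather than part of the server's code:

```typescript
// Assumed job status shape; the real check_job response may differ.
interface JobStatus {
  status: "pending" | "running" | "done" | "error";
  result?: unknown;
}

// Poll a job until it reaches a terminal state or attempts run out.
// `check` stands in for whatever mechanism calls check_job.
async function pollJob(
  check: (jobId: string) => Promise<JobStatus>,
  jobId: string,
  { intervalMs = 1000, maxAttempts = 30 } = {},
): Promise<JobStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await check(jobId);
    if (job.status === "done" || job.status === "error") return job;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} did not finish within ${maxAttempts} attempts`);
}
```

A fixed interval is used here for brevity; a real client might prefer exponential backoff to reduce load on the API.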
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses key behavioral traits: the asynchronous nature ('Returns a job_id — use check_job to poll for results') and parallel processing approach. However, it lacks details about rate limits, authentication needs, error conditions, or what happens to submitted media.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose first, followed by supported formats, processing approach, and output handling. Every sentence earns its place with zero wasted words, making it highly efficient for agent comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (asynchronous detection with multiple input methods) and no annotations/output schema, the description does well by explaining the asynchronous workflow and supported media types. However, it could better address error handling, performance expectations, or how to choose between the three input parameters (media_url, media, text).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter semantics beyond the schema: it implies that media_url, media, and text are alternative input methods and names the supported media types. However, it doesn't explain parameter relationships or usage patterns beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Detect whether media content was generated by AI' with specific verb ('Detect') and resource ('media content'), and distinguishes from siblings by mentioning 'Runs multiple specialized detection models in parallel for the given media type'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: 'Supports images, video, audio, and text/PDF' and explicitly mentions the alternative tool 'check_job' for polling results. However, it doesn't specify when NOT to use this tool or compare with other detection siblings like detect_fingerprint or detect_membership.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
