identifAI MCP Server
Server Details
Detect AI-generated images, videos, and audio with identifAI's deepfake detection tools.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 19 of 19 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes based on media type (audio/image/video) and action (classify/get/override), but there is some overlap between classify_* and classify_*_url pairs that could cause confusion if users don't carefully read descriptions about file size limitations. The tampering tools (submit_tampering_tickets, get_tampering_batch_results) are clearly separate from the classification tools.
Tool names follow a highly consistent verb_noun pattern throughout, with clear prefixes for media types (classify_, get_, override_) and consistent use of snake_case. The naming convention is predictable and helps users understand tool relationships at a glance.
19 tools is slightly high but reasonable for a comprehensive media classification service covering three media types with multiple operations each. The count feels justified by the domain scope, though some tools like the classify_* and classify_*_url pairs could potentially be consolidated into single tools with optional parameters.
The tool set provides complete coverage for the media classification domain with CRUD-like operations (classify, get results, override), batch operations, credit checking, and specialized features like heatmaps and tampering detection. There are no obvious gaps in the workflow from submission to result retrieval and management.
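The consolidation idea noted above can be sketched as a single dispatcher that picks a tool name from the media type and input kind. This is a hypothetical helper, not part of the server; the tool names and the approximate size limits are taken from the tool descriptions below.

```python
from typing import Optional

# Approximate inline-payload limits stated in the tool descriptions (bytes).
SIZE_LIMITS = {
    "audio": 10 * 1024 * 1024,
    "image": 4 * 1024 * 1024,
    "video": 10 * 1024 * 1024,
}

def pick_tool(media: str, *, url: Optional[str] = None,
              file_size: Optional[int] = None) -> str:
    """Return the classify_* tool name for a given media type and input kind."""
    if media not in SIZE_LIMITS:
        raise ValueError(f"unknown media type: {media}")
    if url is not None:
        return f"classify_{media}_url"
    if file_size is None:
        raise ValueError("provide either url or file_size")
    # Files over the inline limit must go through the URL variant instead,
    # since base64 encoding adds ~33% on top of the raw size.
    if file_size > SIZE_LIMITS[media]:
        raise ValueError(
            f"{media} file exceeds ~{SIZE_LIMITS[media]} bytes; "
            "host it at a public URL and use the *_url tool")
    return f"classify_{media}"
```

With a dispatcher like this, the classify_* and classify_*_url pairs behave as one logical operation, which is the consolidation the assessment suggests.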
Available Tools
19 tools

classify_audio (Grade: A)
Upload an audio or speech file to detect whether it is human-recorded or AI-synthesized. Provide the audio content as a base64-encoded string. Returns a classification identifier for async result retrieval. WARNING: base64 encoding adds ~33% overhead to the original file size. For audio files larger than 10 MB, use classify_audio_url instead and provide a publicly accessible URL to avoid payload size issues. Authentication: provide your Identifai API key via the apiKey parameter or configure the X-Api-Key HTTP header in your MCP client (recommended).
| Name | Required | Description | Default |
|---|---|---|---|
| refId | No | Optional caller-defined reference ID | |
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| noCache | No | Bypass cache and force fresh classification | |
| fileData | Yes | Base64-encoded raw binary content of the audio file to classify. Do not pass a file-system path; encode the file bytes directly as base64. Only suitable for files up to ~10 MB; for larger audio files use classify_audio_url with a publicly accessible URL. | |
| filename | No | Optional file name including extension (e.g. voice.mp3). Used to hint the media type. | |
| preventC2paForces | No | If true, a C2PA signature will not force the classification to artificial | |
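The ~33% overhead warning in the description follows directly from how base64 works: every 3 input bytes become 4 output bytes. A small sketch of the size check an agent might run before choosing between classify_audio and classify_audio_url (helper names are mine, not part of the server):

```python
import base64
import math

def base64_encoded_size(raw_bytes: int) -> int:
    """Bytes after standard base64 encoding: 4 output bytes per 3 input bytes."""
    return 4 * math.ceil(raw_bytes / 3)

def fits_inline_audio(raw_bytes: int, limit: int = 10 * 1024 * 1024) -> bool:
    """True if the raw file is within the ~10 MB threshold the description gives
    for classify_audio's inline fileData; larger files should use classify_audio_url."""
    return raw_bytes <= limit

# Verify the 4/3 ratio against the stdlib encoder on 3,000,000 raw bytes.
sample = b"\x00" * 3_000_000
assert len(base64.b64encode(sample)) == base64_encoded_size(len(sample))
```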
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the async nature ('Returns a classification identifier for async result retrieval'), file size limitations with overhead warning, authentication requirements, and cache bypass options. It doesn't mention rate limits or error handling, but covers most critical operational aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It front-loads the core purpose, then provides operational details, warnings, and alternatives. Every sentence serves a clear purpose, though it could be slightly more concise in the authentication explanation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description does a good job covering the essential context. It explains the purpose, usage guidelines, behavioral characteristics, and key constraints. The main gap is the lack of information about return values or result format, which would be helpful given the async nature mentioned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description adds some context about the fileData parameter (base64 encoding overhead, size limitations) and apiKey parameter (authentication approach), but doesn't provide significant additional semantic value beyond what's in the schema descriptions. This meets the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Upload an audio or speech file to detect whether it is human-recorded or AI-synthesized.' It specifies both the verb (upload/detect) and resource (audio/speech file), and distinguishes it from its sibling classify_audio_url by mentioning the file size limitation and alternative approach.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'For audio files larger than 10 MB, use classify_audio_url instead.' It also specifies authentication requirements and configuration recommendations, making it clear when this tool is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
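Since classify_audio returns a classification identifier for async retrieval, the typical workflow is submit-then-poll. The sketch below assumes a `fetch` callable wrapping one of this server's get_* result tools (for example get_audio_classification); the exact tool name and response shape are assumptions, as the description does not specify them.

```python
import time
from typing import Callable, Optional

def poll_classification(fetch: Callable[[str], Optional[dict]],
                        classification_id: str,
                        attempts: int = 5,
                        delay: float = 1.0) -> dict:
    """Poll for an async classification result until it is ready.

    `fetch` should return None while the result is pending and a result
    dict once classification has finished.
    """
    for _ in range(attempts):
        result = fetch(classification_id)
        if result is not None:
            return result
        time.sleep(delay)
    raise TimeoutError(
        f"classification {classification_id} not ready after {attempts} attempts")
```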
classify_audio_url (Grade: A)
Submit a publicly accessible audio URL for AI-generated speech detection. Returns a classification identifier for async result retrieval. Authentication: provide your Identifai API key via the apiKey parameter or configure the X-Api-Key HTTP header in your MCP client (recommended).
| Name | Required | Description | Default |
|---|---|---|---|
| refId | No | Optional caller-defined reference ID | |
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| noCache | No | Bypass cache | |
| audioUrl | Yes | Public HTTP/HTTPS URL of the audio to classify | |
| preventC2paForces | No | If true, a C2PA signature will not force the classification to artificial | |
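When the X-Api-Key header is configured in the MCP client (the recommended setup), the apiKey parameter is simply omitted from the call arguments: the header travels with the HTTP transport, not the JSON-RPC body. A minimal sketch of the tools/call payload, using the JSON-RPC framing from the MCP specification:

```python
import json

def tools_call_payload(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request (JSON-RPC 2.0 framing per the MCP spec)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# apiKey left out of the arguments because the client sends X-Api-Key itself.
payload = tools_call_payload(
    "classify_audio_url",
    {"audioUrl": "https://example.com/voice.mp3"},
)
```

In practice an MCP client library builds this envelope for you; the point is only that authentication-by-header keeps the key out of every tool call.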
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's an async operation ('Returns a classification identifier for async result retrieval'), includes authentication details (API key via parameter or header), and mentions caching behavior ('Bypass cache' implied via parameter). It doesn't cover rate limits, error handling, or response formats, but adds substantial context beyond basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by async behavior and authentication details. It avoids redundancy and wastes no words, though the authentication explanation is slightly verbose. Overall, it's efficient and structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (async operation with authentication and multiple parameters), no annotations, and no output schema, the description is moderately complete. It covers purpose, async nature, and authentication well, but lacks details on output format, error cases, or sibling tool differentiation. It's adequate for basic use but has gaps for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds minimal parameter semantics beyond the schema—it mentions the 'audioUrl' parameter implicitly and provides authentication context for 'apiKey,' but doesn't explain other parameters like 'refId' or 'preventC2paForces.' Given high schema coverage, the baseline score of 3 is appropriate as the description adds some value but relies heavily on the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Submit a publicly accessible audio URL for AI-generated speech detection.' It specifies the verb ('submit'), resource ('audio URL'), and objective ('AI-generated speech detection'), which is specific and actionable. However, it doesn't explicitly differentiate from sibling tools like 'classify_audio' (which might handle file uploads vs. URLs), leaving some ambiguity in sibling context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning 'publicly accessible audio URL,' suggesting this tool is for remote URLs rather than local files. However, it doesn't provide explicit guidance on when to use this vs. alternatives like 'classify_audio' or other media-type tools, nor does it mention prerequisites or exclusions beyond the URL requirement. Usage is contextually implied but not clearly articulated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
classify_image (Grade: A)
Upload an image file to detect whether it is human-made or AI-generated. Provide the image content as a base64-encoded string. Returns a classification identifier for async result retrieval. WARNING: base64 encoding adds ~33% overhead to the original file size. For images larger than 4 MB, use classify_image_url instead and provide a publicly accessible URL to avoid payload size issues. Authentication: provide your Identifai API key via the apiKey parameter or configure the X-Api-Key HTTP header in your MCP client (recommended).
| Name | Required | Description | Default |
|---|---|---|---|
| refId | No | Optional caller-defined reference ID attached to the result | |
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| noCache | No | Bypass cache and force a fresh classification | |
| fileData | Yes | Base64-encoded raw binary content of the image file to classify. Do not pass a file-system path; encode the file bytes directly as base64. Only suitable for files up to ~4 MB; for larger images use classify_image_url with a publicly accessible URL. | |
| filename | No | Optional file name including extension (e.g. photo.jpg). Used to hint the media type. | |
| withNsfw | No | Enable NSFW (Not Safe For Work) content detection on the image | |
| withHeatmap | No | Generate an AI content heatmap alongside the classification | |
| withMorphing | No | Enable face morphing analysis on the image | |
| withTampering | No | Enable tampering/splicing detection on the image | |
| preventC2paForces | No | If true, a C2PA signature will not force the classification to artificial | |
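classify_image takes several independent boolean analysis options, and an agent only needs to send the ones it enables. A sketch of assembling the arguments dict (the helper is hypothetical; the flag names are the ones from the parameter table above):

```python
from typing import Optional

# Optional boolean flags documented in the classify_image parameter table.
ALLOWED_FLAGS = {"withNsfw", "withHeatmap", "withMorphing",
                 "withTampering", "preventC2paForces", "noCache"}

def image_arguments(file_data_b64: str,
                    filename: Optional[str] = None,
                    **flags: bool) -> dict:
    """Build classify_image arguments, including optional flags only when enabled."""
    unknown = set(flags) - ALLOWED_FLAGS
    if unknown:
        raise ValueError(f"unknown flags: {sorted(unknown)}")
    args: dict = {"fileData": file_data_b64}
    if filename:
        args["filename"] = filename  # hints the media type, e.g. photo.jpg
    args.update({name: True for name, enabled in flags.items() if enabled})
    return args
```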
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing authentication requirements, performance characteristics (base64 overhead, 4 MB limit), and the async nature of the operation ('Returns a classification identifier for async result retrieval'). It doesn't mention rate limits or error handling, but covers most essential behavioral aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. Each sentence adds value: purpose, technical requirements, performance warning, alternative tool guidance, and authentication. It could be slightly more concise but maintains good information density without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 10 parameters, no annotations, and no output schema, the description does well by covering authentication, performance constraints, alternative usage, and the async nature. It doesn't explain the classification result format or error scenarios, but provides sufficient context for effective tool selection and basic usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds some value by emphasizing the base64 encoding requirement and file size limitation for the fileData parameter, but doesn't provide significant additional semantic context beyond what's already documented in the comprehensive schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Upload an image file to detect whether it is human-made or AI-generated'), identifies the resource (image), and distinguishes from sibling tools by explicitly mentioning the alternative classify_image_url for larger files. It provides a complete purpose statement with verb, resource, and differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs. alternatives ('For images larger than 4 MB, use classify_image_url instead'), includes prerequisites (authentication requirements), and mentions performance considerations (base64 overhead). It gives clear context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
classify_image_url (Grade: A)
Submit a publicly accessible image URL to detect whether it is human-made or AI-generated. Returns a classification identifier for result retrieval. Supports the same analysis options as file-based classification. Authentication: provide your Identifai API key via the apiKey parameter or configure the X-Api-Key HTTP header in your MCP client (recommended).
| Name | Required | Description | Default |
|---|---|---|---|
| refId | No | Optional caller-defined reference ID | |
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| noCache | No | Bypass cache and force fresh classification | |
| imageUrl | Yes | Public HTTP/HTTPS URL of the image to classify | |
| withNsfw | No | Enable NSFW (Not Safe For Work) content detection on the image | |
| withHeatmap | No | Generate an AI content heatmap | |
| withMorphing | No | Enable face morphing analysis | |
| withTampering | No | Enable tampering/splicing detection | |
| preventC2paForces | No | If true, a C2PA signature will not force the classification to artificial | |
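The imageUrl parameter requires a public HTTP/HTTPS URL. A client can at least check the syntactic part of that constraint before calling the tool; true public reachability can only be verified by the backend when it fetches the image. A minimal pre-flight check (helper name is mine):

```python
from urllib.parse import urlparse

def is_acceptable_media_url(url: str) -> bool:
    """Syntactic check for the imageUrl constraint: an absolute HTTP/HTTPS URL.

    This does not prove the URL is publicly reachable; it only rejects
    obviously unusable inputs like local paths or scheme-less strings.
    """
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```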
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about authentication methods (API key via parameter or header) and mentions the return format ('classification identifier for result retrieval'), but lacks details on rate limits, error handling, or what specific classification results entail beyond the identifier.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences: purpose, return value, and authentication guidance. It's front-loaded with the core functionality and avoids unnecessary repetition, though the authentication details could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (9 parameters, no annotations, no output schema), the description is moderately complete. It covers purpose, return format, and authentication, but lacks details on output structure, error cases, or how to use the classification identifier with sibling tools like get_image_classification for result retrieval.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description adds no parameter semantics beyond the schema: it does not explain how the analysis options interact or provide example values. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('detect') and resource ('publicly accessible image URL'), distinguishing it from siblings by focusing on URL-based image classification rather than file-based (classify_image) or other media types (classify_audio, classify_video).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('Submit a publicly accessible image URL') and mentions it 'Supports the same analysis options as file-based classification,' implicitly suggesting classify_image as an alternative for file-based input. However, it doesn't explicitly state when not to use this tool or compare it to all siblings like classify_audio_url.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
classify_video (Grade: A)
Upload a video file to detect whether it is human-made or AI-generated. Provide the video content as a base64-encoded string. The video is split into frames which are individually classified. Returns a classification identifier for async result retrieval. WARNING: base64 encoding adds ~33% overhead to the original file size. For videos larger than 10 MB, use classify_video_url instead and provide a publicly accessible URL to avoid payload size issues. Authentication: provide your Identifai API key via the apiKey parameter or configure the X-Api-Key HTTP header in your MCP client (recommended).
| Name | Required | Description | Default |
|---|---|---|---|
| refId | No | Optional caller-defined reference ID | |
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| frames | No | Maximum number of frames to extract and classify | |
| noCache | No | Bypass cache and force fresh classification | |
| fileData | Yes | Base64-encoded raw binary content of the video file to classify. Do not pass a file-system path; encode the file bytes directly as base64. Only suitable for files up to ~10 MB; for larger videos use classify_video_url with a publicly accessible URL. | |
| filename | No | Optional file name including extension (e.g. clip.mp4). Used to hint the media type. | |
| withNsfw | No | Enable NSFW (Not Safe For Work) content detection on video frames | |
| keyFrames | No | Extract key frames rather than uniform frames | |
| withAudio | No | [BETA] Analyze the audio track of the video in addition to the frames. Requires enablement in the pricing plan. | |
| withMorphing | No | Enable face morphing analysis on video frames | |
| withTampering | No | Enable tampering detection on video frames | |
| keyFramesMethod | No | Algorithm used for key frame extraction | |
| preventC2paForces | No | If true, a C2PA signature will not force the classification to artificial | |
| ensureFacePerFrame | No | Ensure at least one face is detected in each frame used for classification. Useful combined with withMorphing to detect face swaps. May increase processing time. | |
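The frame-extraction parameters interact: keyFramesMethod selects the extraction algorithm and only makes sense when keyFrames is enabled (an inference from the parameter descriptions above, not something the table states outright). A sketch of assembling these options for classify_video:

```python
from typing import Optional

def video_frame_options(frames: Optional[int] = None,
                        key_frames: bool = False,
                        key_frames_method: Optional[str] = None) -> dict:
    """Build the frame-extraction arguments for classify_video.

    Assumes keyFramesMethod is only meaningful with keyFrames=True,
    per the parameter table; the valid method names are not documented here.
    """
    opts: dict = {}
    if frames is not None:
        if frames <= 0:
            raise ValueError("frames must be a positive count")
        opts["frames"] = frames  # maximum frames to extract and classify
    if key_frames:
        opts["keyFrames"] = True
        if key_frames_method:
            opts["keyFramesMethod"] = key_frames_method
    elif key_frames_method:
        raise ValueError("keyFramesMethod requires keyFrames=True")
    return opts
```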
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: the async nature ('Returns a classification identifier for async result retrieval'), technical constraints ('base64 encoding adds ~33% overhead'), size limitations ('only suitable for files up to ~10 MB'), and authentication requirements. It doesn't fully describe error conditions or rate limits, but covers most essential operational aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. Each sentence adds valuable information (technical constraints, alternatives, authentication). While slightly dense, there's minimal waste, and the structure flows logically from purpose to implementation details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 14 parameters and no output schema, the description provides substantial context: purpose, technical constraints, size limits, authentication, and sibling tool alternatives. It doesn't describe the return format or error responses, but given the comprehensive schema coverage and behavioral disclosures, it's mostly complete for agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 14 parameters thoroughly. The description adds minimal parameter-specific information beyond what's in the schema, mainly reinforcing the fileData parameter's base64 requirement and size limitation. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Upload a video file to detect whether it is human-made or AI-generated') and distinguishes it from sibling tools by explicitly mentioning the alternative classify_video_url for larger files. It specifies the resource (video file) and method (base64 encoding).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs. alternatives: 'For videos larger than 10 MB, use classify_video_url instead.' It also specifies authentication requirements and recommends configuration approaches, giving clear context for proper usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
classify_video_url (Grade: B)
Submit a publicly accessible video URL for AI-generated content detection. Supports the same frame extraction and analysis options as file-based classification. Authentication: provide your Identifai API key via the apiKey parameter or configure the X-Api-Key HTTP header in your MCP client (recommended).
| Name | Required | Description | Default |
|---|---|---|---|
| refId | No | Optional caller-defined reference ID | |
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| frames | No | Maximum frames to extract | |
| noCache | No | Bypass cache | |
| videoUrl | Yes | Public HTTP/HTTPS URL of the video to classify | |
| withNsfw | No | Enable NSFW (Not Safe For Work) content detection on video frames | |
| keyFrames | No | Extract key frames | |
| withAudio | No | [BETA] Analyze the audio track of the video. Requires enablement in the pricing plan. | |
| withMorphing | No | Enable face morphing analysis | |
| withTampering | No | Enable tampering detection | |
| keyFramesMethod | No | Key frame extraction algorithm | |
| preventC2paForces | No | If true, a C2PA signature will not force the classification to artificial | |
| ensureFacePerFrame | No | Ensure at least one face is detected per frame. Useful with withMorphing. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds value by explaining authentication methods (API key parameter or HTTP header) and noting that the MCP client configuration is recommended for remote deployments. However, it doesn't describe critical behaviors like rate limits, response format, error handling, or whether the operation is idempotent—significant gaps for a tool with 13 parameters and no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core purpose, and the second covers authentication details. It's front-loaded with the main function and avoids unnecessary repetition. However, the authentication sentence is slightly verbose, and it could be more concise by integrating the purpose and sibling differentiation more tightly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (13 parameters, no annotations, no output schema), the description is incomplete. It covers authentication and hints at sibling tool relationships but lacks details on output format, error cases, performance characteristics, or usage limits. For a tool with rich input options and no structured output, more contextual guidance is needed to help an agent use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 13 parameters thoroughly. The description doesn't add any parameter-specific semantics beyond what's in the schema—it only mentions authentication for 'apiKey' generally. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description provides no additional parameter insights.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Submit a publicly accessible video URL for AI-generated content detection.' It specifies the verb ('submit'), resource ('video URL'), and goal ('AI-generated content detection'), distinguishing it from file-based classification tools. However, it doesn't explicitly differentiate from sibling tools like 'classify_video' (which likely handles files instead of URLs).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by mentioning it 'Supports the same frame extraction and analysis options as file-based classification,' which implies similarity to 'classify_video' but doesn't explicitly state when to use this URL-based tool versus file-based alternatives. It includes authentication guidance but lacks clear when-to-use versus when-not-to-use directives or named alternatives for different scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_all_audio_classifications (grade A)
Retrieve classification results for multiple audio files in a single request. Accepts up to 100 identifiers.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| identifiers | Yes | Array of classification identifiers (max 100) | |
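The 100-identifier cap means larger result sets must be split across multiple calls. Below is a minimal chunking sketch; `call_tool` is a hypothetical MCP-client wrapper (not part of this server), and the per-result shape is an assumption:

```python
def chunk_identifiers(identifiers, max_batch=100):
    """Split classification identifiers into batches that respect
    the server's 100-identifier limit per request."""
    return [identifiers[i:i + max_batch]
            for i in range(0, len(identifiers), max_batch)]


def fetch_all(call_tool, identifiers):
    """Fetch results for any number of identifiers by issuing one
    get_all_audio_classifications call per batch of at most 100.
    `call_tool(name, args)` is a hypothetical MCP-client wrapper."""
    results = []
    for batch in chunk_identifiers(identifiers):
        results.extend(call_tool("get_all_audio_classifications",
                                 {"identifiers": batch}))
    return results
```

The same pattern applies unchanged to the image and video batch-retrieval siblings.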
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about batch processing ('up to 100 identifiers'), which is not inferable from the schema alone. However, it lacks details on response format, error handling, authentication requirements (though partially covered in schema), or rate limits, leaving gaps for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose, and every word earns its place. It efficiently conveys key information (batch retrieval, limit of 100) without redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and a read-only operation (implied by 'Retrieve'), the description is adequate but incomplete. It covers the batch nature and limit, but lacks details on return values, error cases, or authentication context, which are important for a tool with multiple parameters and no structured behavioral hints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters. The description adds no additional parameter semantics beyond what the schema provides (e.g., it mentions 'identifiers' but doesn't clarify format or source). Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve classification results'), resource ('for multiple audio files'), and scope ('in a single request'), distinguishing it from sibling tools like get_audio_classification (singular) and classify_audio (submission vs. retrieval). It precisely defines what the tool does without being vague or tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('for multiple audio files in a single request') and implies an alternative (use get_audio_classification for single files). However, it does not explicitly state when NOT to use it or compare it to all relevant siblings like get_all_image_classifications, leaving some room for improvement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_all_image_classifications (grade A)
Retrieve classification results for multiple images in a single request. Accepts up to 100 identifiers. Useful for batch result polling.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| identifiers | Yes | Array of classification identifiers (max 100) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses key behavioral traits: batch capability ('multiple images'), capacity limit ('up to 100 identifiers'), and a use case ('batch result polling'). However, it doesn't mention authentication requirements (though the schema covers apiKey), rate limits, error handling, or response format. For a tool with no annotations, this is adequate but leaves gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: two sentences that efficiently convey purpose, capacity, and usage. Every word earns its place with zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is minimally complete. It covers the core functionality and batch context, but lacks details on authentication (implied by schema), response format, error cases, or performance characteristics. For a batch retrieval tool, this is adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters (apiKey and identifiers). The description adds marginal value by reinforcing the batch nature and limit ('up to 100 identifiers'), but doesn't provide additional semantic context beyond what's in the schema. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieve classification results for multiple images in a single request.' It specifies the resource (classification results) and verb (retrieve), and distinguishes it from single-image retrieval tools like 'get_image_classification'. However, it doesn't explicitly differentiate from sibling batch tools like 'get_all_audio_classifications' beyond the 'images' context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'Useful for batch result polling.' This implies it should be used when needing results for multiple images at once, rather than individual requests. It doesn't explicitly state when NOT to use it or name alternatives, but the context strongly guides toward batch scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_all_video_classifications (grade B)
Retrieve classification results for multiple videos in a single request. Accepts up to 100 identifiers.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| identifiers | Yes | Array of classification identifiers (max 100) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the batch capability and limit of 100 identifiers, but lacks details on authentication needs (though the schema covers apiKey), rate limits, response format, error handling, or whether it's a read-only operation. For a tool with no annotations, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes a key constraint. There is no wasted verbiage, and it's appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It covers the batch nature and limit but omits critical details like authentication requirements (though hinted in schema), response format, error conditions, and how it differs from sibling batch tools. For a tool with 2 parameters and no structured behavioral hints, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear documentation for both parameters (apiKey and identifiers). The description adds marginal value by reinforcing the 'up to 100 identifiers' limit, but doesn't provide additional semantic context beyond what the schema already specifies. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'retrieve' and resource 'classification results for multiple videos', making the purpose evident. It distinguishes from single-video tools like 'get_video_classification' by specifying 'multiple videos in a single request', though it doesn't explicitly contrast with sibling 'get_all_audio_classifications' or 'get_all_image_classifications' beyond the video focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when batch processing is needed ('multiple videos in a single request') and sets a constraint ('up to 100 identifiers'), but it doesn't explicitly state when to use this over alternatives like 'get_video_classification' for single videos or other batch tools. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_audio_classification (grade A)
Retrieve the classification result for a previously submitted audio file. Poll until the result is ready.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| identifier | Yes | Classification identifier returned by the submission endpoint | |
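The "poll until the result is ready" instruction can be wrapped in a small helper. This sketch assumes the wrapped `fetch` callable returns `None` while the classification is still pending; the server's actual pending-state signalling is not documented here:

```python
import time


def poll_for_result(fetch, interval=2.0, timeout=120.0, sleep=time.sleep):
    """Call `fetch()` repeatedly until it returns a non-None result
    or the timeout elapses. `fetch` would wrap a single-identifier
    get_audio_classification call made through an MCP client."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch()
        if result is not None:
            return result
        sleep(interval)
    raise TimeoutError("classification result not ready in time")
```

Injecting `sleep` keeps the helper testable without real delays.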
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: this is a retrieval operation (implied read-only), it requires a previous submission (dependency), and it involves polling behavior ('Poll until the result is ready'), which is crucial for understanding its asynchronous nature. However, it doesn't mention potential rate limits, error conditions, or authentication details beyond what's in the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded with essential information in just two sentences. The first sentence states the core purpose, and the second adds critical behavioral context about polling. Every word earns its place with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (retrieval with polling), no annotations, no output schema, and 100% schema coverage, the description is mostly complete. It covers the purpose, usage context, and key behavior (polling), but lacks details on output format, error handling, or authentication nuances. For a retrieval tool, this is adequate but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already fully documents both parameters ('apiKey' and 'identifier'). The description adds minimal value beyond the schema by mentioning that the identifier comes 'from the submission endpoint,' but doesn't provide additional syntax, format, or usage context for the parameters. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve the classification result'), the resource ('for a previously submitted audio file'), and the operational behavior ('Poll until the result is ready'). It distinguishes this tool from sibling tools like 'classify_audio' (which submits for classification) and 'get_all_audio_classifications' (which retrieves multiple results).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool: after an audio file has been submitted for classification and you need to retrieve the result. It implies an alternative (the submission endpoint) by mentioning 'previously submitted' and 'identifier returned by the submission endpoint,' but doesn't explicitly name when NOT to use it or compare with all sibling tools like 'get_all_audio_classifications.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_classification_heatmap (grade A)
Retrieve a visual heatmap highlighting which regions of the image were detected as AI-generated. Requires the classification to have been submitted with withHeatmap enabled.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| identifier | Yes | Classification identifier for which to fetch the heatmap | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the prerequisite (withHeatmap enabled) which is valuable behavioral context. However, it doesn't mention authentication requirements (though the schema covers apiKey), rate limits, response format, or whether this is a read-only operation. The description adds some context but leaves gaps for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste. The first sentence states the purpose, and the second provides critical usage guidance. Every word earns its place, and the information is front-loaded effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does well by specifying the prerequisite (withHeatmap enabled) and the visual nature of the output. However, it doesn't describe the return format (e.g., image data, URL, or metadata) or error conditions. For a tool with 2 parameters and 100% schema coverage but no output schema, it's mostly complete but could benefit from output details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain the identifier format or heatmap output details). With complete schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve a visual heatmap') and resource ('regions of the image were detected as AI-generated'), distinguishing it from sibling tools like get_image_classification which likely returns general classification results rather than heatmap visualizations. It precisely defines what the tool delivers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Requires the classification to have been submitted with withHeatmap enabled.' This provides clear prerequisites and distinguishes it from alternatives like get_image_classification that don't require this specific condition.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_image_classification (grade A)
Retrieve the classification result for a previously submitted image. Use the identifier returned by classify_image or classify_image_url. Poll this endpoint until the result is available.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| identifier | Yes | Classification identifier returned by the submission endpoint | |
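The submit-then-poll handoff described above (classify_image_url returns an identifier, which is then passed to get_image_classification) can be sketched end to end. `call_tool` is a hypothetical MCP-client wrapper, and the `identifier` response field and None-while-pending behaviour are assumptions:

```python
import time


def classify_and_wait(call_tool, image_url, interval=2.0, max_attempts=30):
    """Submit a URL via classify_image_url, then poll
    get_image_classification with the returned identifier until a
    result arrives or attempts are exhausted."""
    submission = call_tool("classify_image_url", {"url": image_url})
    identifier = submission["identifier"]   # assumed response field
    for _ in range(max_attempts):
        result = call_tool("get_image_classification",
                           {"identifier": identifier})
        if result is not None:              # assumed: None while pending
            return result
        time.sleep(interval)
    raise TimeoutError(f"no result for {identifier}")
```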
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: this is a polling/retrieval operation (not a submission), it may require multiple attempts until results are ready, and it depends on prior submission tools. However, it doesn't mention authentication requirements, rate limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first states purpose and prerequisites, the second provides crucial behavioral guidance about polling. Every word earns its place and the information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a retrieval tool with no annotations and no output schema, the description does well by explaining the polling behavior and dependencies. However, it doesn't describe what the classification result contains or mention authentication requirements that are only covered in the parameter schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds some context by mentioning the identifier comes from classify_image or classify_image_url, but doesn't provide additional semantic meaning beyond what's already documented in the schema descriptions for apiKey and identifier parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('retrieve classification result'), resource ('previously submitted image'), and distinguishes it from siblings by specifying it uses identifiers from classify_image or classify_image_url. It's not a tautology and provides meaningful differentiation from other classification tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool ('for a previously submitted image'), specifies the prerequisite ('use the identifier returned by classify_image or classify_image_url'), and provides clear behavioral guidance ('poll this endpoint until the result is available'). This gives comprehensive usage context including timing and dependencies.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tampering_batch_results (grade A)
Retrieve the tampering detection results for a previously submitted batch of tickets. Poll until the "done" field is true. Each result contains a verdict ("authentic" or "tampered") and per-heuristic verdicts.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| batchId | Yes | Batch identifier returned by submit_tampering_tickets | |
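The description documents enough of the payload (a "done" flag and a per-result verdict of "authentic" or "tampered") to sketch a result summarizer. The top-level `results` key is an assumption, since no output schema is published:

```python
def summarize_tampering_batch(batch):
    """Count authentic vs tampered verdicts in a completed
    get_tampering_batch_results payload. Raises if the batch is
    still processing (done is False), signalling the caller to
    keep polling."""
    if not batch.get("done"):
        raise RuntimeError("batch not finished; keep polling")
    counts = {"authentic": 0, "tampered": 0}
    for result in batch.get("results", []):   # 'results' key is assumed
        counts[result["verdict"]] += 1
    return counts
```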
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a polling operation ('poll until the "done" field is true'), describes the response structure ('Each result contains a verdict...'), and implies it's a read operation (retrieving results). It doesn't mention authentication needs or rate limits, but covers core behavior adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states purpose and polling behavior, second describes result structure. Every sentence earns its place by providing essential information not obvious from the tool name alone.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides good context: purpose, polling behavior, and result structure. It could mention error handling or authentication requirements, but covers the core use case well given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters. The description adds no additional parameter semantics beyond what's in the schema (e.g., it doesn't explain batchId format or apiKey usage scenarios). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('retrieve') and resource ('tampering detection results for a previously submitted batch of tickets'), distinguishing it from sibling tools like submit_tampering_tickets (which submits) and classification tools (which classify rather than retrieve results). It specifies the exact type of results being fetched.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: after submitting tickets via submit_tampering_tickets (referenced by batchId) and to poll until completion. It distinguishes from classification tools by focusing on tampering detection results rather than classification tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_user_credits (grade A)
Retrieve the current available and used classification credits for the authenticated Identifai account. Use this to check quota status before submitting large batches.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
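The suggested pre-flight quota check before large batch submissions can be sketched as follows; the `available` field name in the credits payload is an assumption, since the response format is not documented:

```python
def has_credits_for(credits, batch_size):
    """Return True when the account's remaining credits cover a
    planned batch submission. `credits` is the payload returned by
    get_user_credits; the 'available' field name is assumed."""
    return credits.get("available", 0) >= batch_size
```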
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly indicates this is a read operation ('retrieve'), mentions authentication context ('authenticated Identifai account'), and provides practical guidance about checking quota before large batches. However, it doesn't mention rate limits, response format, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first states the purpose, the second provides usage guidance. Every word serves a clear purpose, and the description is appropriately sized for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with no annotations and no output schema, the description provides good context about purpose and usage. However, it doesn't describe the return format or what 'available and used classification credits' specifically means, which would be helpful given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single optional parameter. The description doesn't add any parameter-specific information beyond what's in the schema, which is acceptable given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('retrieve'), resource ('current available and used classification credits'), and target ('authenticated Identifai account'). It distinguishes from sibling tools that focus on classification operations rather than quota checking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'to check quota status before submitting large batches.' This provides clear context and purpose, distinguishing it from classification-related sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_video_classification (Grade: A)
Retrieve the classification result for a previously submitted video. Poll until the result is ready — video classification is always asynchronous.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| identifier | Yes | Classification identifier returned by the submission endpoint | |
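Since video classification is always asynchronous, a client needs a polling loop around this tool. The sketch below is an assumption-laden illustration: `call_tool` stands in for whatever function your MCP client exposes for tool calls, and the `status` field is a guess, since the tool publishes no output schema.

```python
import time

def poll_video_classification(call_tool, identifier, interval=5.0, timeout=300.0):
    """Poll get_video_classification until the result is ready.

    The response shape ('status' == 'pending' while processing) is an
    assumption; adapt the check to the actual payload your server returns.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = call_tool("get_video_classification", {"identifier": identifier})
        if response.get("status") != "pending":  # assumed field name
            return response
        time.sleep(interval)
    raise TimeoutError(f"classification {identifier} not ready after {timeout}s")
```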
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool retrieves results (implying read-only), it's for previously submitted videos (implying it doesn't initiate classification), and it requires polling due to asynchronicity. However, it lacks details on error handling, rate limits, or authentication needs beyond what the schema covers.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and adds critical behavioral context in the second. Every sentence earns its place with no wasted words, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (asynchronous retrieval), no annotations, and no output schema, the description is somewhat complete but has gaps. It covers the purpose and polling behavior but lacks details on response format, error cases, or authentication context, which would be helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional meaning about parameters beyond implying the 'identifier' is from a submission endpoint, which is already covered in the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve the classification result') and resource ('for a previously submitted video'), distinguishing it from sibling tools like 'classify_video' (which submits) and 'get_all_video_classifications' (which lists multiple). It precisely defines the tool's scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('for a previously submitted video') and provides context about its asynchronous nature ('Poll until the result is ready'), but it does not explicitly mention when not to use it or name alternatives among siblings (e.g., 'get_all_video_classifications').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
override_audio_classification (Grade: C)
Manually override the classification verdict for a previously classified audio file. Sets the result to either "human" or "artificial".
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| identifier | Yes | Classification identifier to override | |
| classification | Yes | The correct classification value to apply | |
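Because the verdict is restricted to "human" or "artificial", a client can validate the arguments before invoking the tool. A minimal sketch; the helper and its client-side check are our own illustrative additions, not part of the API:

```python
# Allowed verdicts come from the tool description above.
ALLOWED_VERDICTS = {"human", "artificial"}

def build_override_arguments(identifier: str, classification: str) -> dict:
    """Assemble arguments for override_audio_classification, rejecting
    any verdict outside the documented enum before the call is made."""
    if classification not in ALLOWED_VERDICTS:
        raise ValueError(f"classification must be one of {sorted(ALLOWED_VERDICTS)}")
    return {"identifier": identifier, "classification": classification}
```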
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool performs a manual override, implying a mutation/write operation, but doesn't describe authentication needs (though the schema covers apiKey), rate limits, side effects (e.g., if this action is reversible or logs changes), or what the response looks like. For a mutation tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste. It front-loads the core purpose and efficiently specifies the allowed values ('human' or 'artificial'). Every sentence earns its place by providing essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a mutation operation with no annotations and no output schema), the description is incomplete. It lacks details on behavioral traits (e.g., authentication, side effects), response format, and usage guidelines. While the schema covers parameters well, the description doesn't compensate for the missing context needed for safe and effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters (apiKey, identifier, classification) with descriptions and enum values. The description adds no additional meaning beyond what's in the schema, such as explaining the format of 'identifier' or the implications of setting 'classification'. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Manually override the classification verdict for a previously classified audio file. Sets the result to either "human" or "artificial".' It specifies the verb ('override'), resource ('classification verdict for a previously classified audio file'), and action ('sets the result'). However, it doesn't explicitly differentiate from sibling tools like 'override_image_classification' or 'override_video_classification' beyond mentioning 'audio file'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., that the audio must have been previously classified), exclusions, or comparisons to sibling tools like 'classify_audio' or 'get_audio_classification'. Usage is implied through the phrase 'previously classified audio file', but no explicit when/when-not instructions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
override_image_classification (Grade: A)
Manually override the classification verdict for a previously classified image. Sets the result to either "human" or "artificial". Used for corrections and feedback.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| identifier | Yes | Classification identifier to override | |
| classification | Yes | The correct classification value to apply | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses this is a mutation tool ('override', 'sets') and mentions the purpose ('corrections and feedback'), but doesn't address permissions, rate limits, whether changes are reversible, or what the response looks like. It adds some behavioral context but leaves gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with zero waste. The first sentence states the core functionality, and the second provides usage context. Every word earns its place in this well-structured description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description provides adequate purpose and context but lacks details about behavioral implications (permissions, reversibility, response format). It's complete enough for basic understanding but has gaps for safe operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('manually override'), resource ('classification verdict for a previously classified image'), and outcome ('sets the result to either "human" or "artificial"'). It distinguishes this tool from sibling classification tools by focusing on correction rather than initial classification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('for corrections and feedback'), but doesn't explicitly state when not to use it or mention specific alternatives like the sibling override tools for audio/video. It implies usage for post-classification adjustments.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
override_video_classification (Grade: A)
Manually override the classification verdict for a previously classified video. Sets the result to either "human" or "artificial".
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| identifier | Yes | Classification identifier to override | |
| classification | Yes | The correct classification value to apply | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the tool's purpose (overriding classifications) and the binary outcome ('human' or 'artificial'), but lacks details on permissions needed, whether the override is reversible, rate limits, or what the response looks like. For a mutation tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Manually override the classification verdict') and specifies the outcome. Every word earns its place with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is minimal but adequate for basic understanding. It covers the what and outcome, but lacks details on behavioral traits (e.g., side effects, error handling) and doesn't compensate for the missing output schema, leaving gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all parameters (apiKey, identifier, classification). The description adds no additional meaning beyond what the schema provides, such as explaining the identifier format or classification implications. Baseline 3 is appropriate when schema does all the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Manually override'), resource ('classification verdict for a previously classified video'), and outcome ('Sets the result to either "human" or "artificial"'). It distinguishes from sibling tools like 'classify_video' (which creates classifications) and 'get_video_classification' (which retrieves them).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'previously classified video' and 'override', suggesting it's for correcting existing classifications. However, it doesn't explicitly state when to use this versus alternatives like 'override_audio_classification' or 'override_image_classification', nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_tampering_tickets (Grade: A)
Submit one or more ticket images (as base64-encoded strings) to the Identifai v2 API for batch tampering detection. Each ticket is analysed independently; results are retrieved asynchronously via get_tampering_batch_results using the returned batch_id. Supports PDF files (each page becomes a separate analysis entry). Maximum 10 tickets per batch. Authentication: provide your Identifai API key via the apiKey parameter or configure the X-Api-Key HTTP header in your MCP client (recommended).
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | Identifai API key used to authenticate requests to the Identifai backend. Optional when the MCP client is configured to send the X-Api-Key HTTP header (the recommended approach for remote server deployments — set it once in your MCP client config). Required only when the header is not configured. Never use a placeholder value — only pass the real key supplied by the user. Keys can be obtained from the Identifai dashboard at https://identifai.net. | |
| models | No | Comma-separated list of model names to use for detection. If omitted, all models available in the pricing plan are used. | |
| refIds | No | Comma-separated list of reference IDs, one per ticket (e.g. "TICKET-001,TICKET-002"). Must match the number of tickets if provided. | |
| tickets | Yes | Array of base64-encoded raw binary content of ticket image/PDF files to analyse. Do not pass file-system paths; encode the file bytes directly as base64. Maximum 10 items. For single files, wrap in an array: ["<base64>"]. | |
| filenames | No | Optional array of filenames (e.g. ["ticket1.jpg", "ticket2.pdf"]). Must match the length of the tickets array if provided. | |
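The `tickets` parameter expects raw file bytes encoded as base64, not file-system paths, and `refIds`/`filenames` must match the ticket count. A hedged sketch of assembling the arguments client-side; the helper and its validation are illustrative conveniences, not part of the API:

```python
import base64
from pathlib import Path

def build_tampering_arguments(paths, ref_ids=None):
    """Base64-encode ticket files and assemble submit_tampering_tickets
    arguments. Field names match the parameter table above; the 10-ticket
    limit and the refIds length rule are enforced here as a convenience."""
    if len(paths) > 10:
        raise ValueError("maximum 10 tickets per batch")
    tickets = [base64.b64encode(Path(p).read_bytes()).decode("ascii") for p in paths]
    args = {
        "tickets": tickets,  # always an array, even for a single file
        "filenames": [Path(p).name for p in paths],
    }
    if ref_ids is not None:
        if len(ref_ids) != len(paths):
            raise ValueError("refIds must match the number of tickets")
        args["refIds"] = ",".join(ref_ids)  # comma-separated, per the table
    return args
```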
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: asynchronous processing, batch size limit (10 tickets), PDF handling (each page analyzed separately), authentication requirements, and the need for follow-up retrieval. It doesn't mention rate limits or error handling, keeping it from a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. Every sentence adds value: batch processing explanation, async retrieval method, file format support, limits, and authentication. Minor redundancy exists in mentioning base64 encoding twice, preventing a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (batch processing, async workflow, 5 parameters) and no annotations or output schema, the description does well by covering purpose, usage flow, authentication, and limits. It could improve by briefly mentioning expected output structure or common error scenarios, but it's largely complete for the tool's scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter-specific context beyond the schema, only mentioning the apiKey parameter in the authentication section and implying the tickets parameter's base64 encoding requirement. It doesn't provide additional semantic value for other parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('submit ticket images for batch tampering detection'), the resource ('Identifai v2 API'), and distinguishes from siblings by focusing on tampering detection rather than classification. It explicitly mentions the sibling tool 'get_tampering_batch_results' for result retrieval, establishing clear differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('for batch tampering detection'), when not to use it (implied: not for classification tasks like sibling tools), and names the alternative tool for result retrieval ('get_tampering_batch_results'). It also specifies prerequisites like authentication methods and batch size limits.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
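Before publishing, you can sanity-check the file locally. The `check_glama_json` helper below is an illustrative sketch that only verifies the fields shown in the example above; it is not an official validator.

```python
import json

def check_glama_json(text: str) -> dict:
    """Parse a candidate glama.json and confirm every maintainer entry
    carries an email, as required for verification."""
    data = json.loads(text)
    maintainers = data.get("maintainers", [])
    if not maintainers or not all("email" in m for m in maintainers):
        raise ValueError("maintainers[].email is required")
    return data

example = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
data = check_glama_json(example)
```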
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.