scan_full

Audit content comprehensively with AI detection, plagiarism checking, readability analysis, grammar review, and fact verification in a single scan for pre-publish quality assurance.

Instructions

Run a comprehensive content audit: AI detection + plagiarism + readability + grammar/spelling + fact-checking. Returns all scores in one call. Plagiarism checks can take up to 60 seconds. Use for pre-publish content audits. Costs credits for each enabled check.

Input Schema

Name           Required  Description                                                                    Default
content        Yes       Plain text content to scan.
title          No        Label for the scan.                                                            Full Content Audit
check_facts    No        Enable fact-checking (verifies claims against sources). Adds processing time.
excluded_urls  No        URLs to exclude from plagiarism matching (e.g., your own site).
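A minimal sketch of the arguments an agent might pass for a scan_full call, following the schema above. The content, title, and URL values are placeholders; only "content" is required, and "title" falls back to "Full Content Audit" when omitted.

```python
import json

# Hypothetical scan_full arguments; only "content" is required by the schema.
args = {
    "content": "Draft blog post text to audit before publishing...",
    "title": "Launch Post Draft",          # optional label for the scan
    "check_facts": True,                   # enables fact-checking; adds processing time
    "excluded_urls": ["https://example.com"],  # e.g., your own site, excluded from plagiarism matching
}

# Serialize to the JSON payload a client would send with the tool call.
payload = json.dumps(args, indent=2)
print(payload)
```

Omitting "check_facts" and "excluded_urls" is valid too; each enabled check costs credits, so a cheaper audit would pass only "content".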
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden and does so effectively. It discloses critical behavioral traits: processing time ('up to 60 seconds'), cost ('Costs credits'), and the comprehensive scope of the audit. However, it doesn't mention error handling, rate limits, or authentication requirements, which would elevate it to a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by key behavioral details (timing, usage, cost). Every sentence adds essential information—no wasted words. It's appropriately sized for a tool with multiple checks and parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple checks, 4 parameters, no output schema, no annotations), the description is largely complete. It covers purpose, usage, timing, and cost. However, without an output schema, it doesn't describe return values (e.g., score formats), leaving a minor gap that prevents a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter semantics beyond the schema—it implies that parameters enable specific checks (e.g., 'check_facts' for fact-checking) but doesn't elaborate on format or constraints. This meets the baseline without significant added value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Run a comprehensive content audit') and enumerates the exact checks performed (AI detection, plagiarism, readability, grammar/spelling, fact-checking). It distinguishes from siblings by emphasizing 'all scores in one call' versus specialized tools like scan_ai or scan_plagiarism.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use the tool ('Use for pre-publish content audits'), and the 'all scores in one call' framing points agents toward the specialized sibling tools (scan_ai, scan_plagiarism, etc.) as alternatives for individual checks. It also specifies timing constraints ('Plagiarism checks can take up to 60 seconds') and cost implications ('Costs credits for each enabled check').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/EfrainTorres/armavita-originality-ai-mcp'
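The same lookup can be sketched in Python using only the standard library. This mirrors the curl example above; it only builds the request, since the response schema is not documented on this page.

```python
from urllib.request import Request

# Build a GET request for this server's MCP directory entry.
# The slug matches the curl example above; executing the request with
# urllib.request.urlopen(req) would return the server's JSON metadata.
slug = "EfrainTorres/armavita-originality-ai-mcp"
req = Request(
    f"https://glama.ai/api/mcp/v1/servers/{slug}",
    headers={"Accept": "application/json"},
)
print(req.full_url)
```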

If you have feedback or need assistance with the MCP directory API, please join our Discord server.