Glama

get_scan_results

Fetch stored AI detection, plagiarism, readability, and SEO scan results using a scan ID to access complete analysis reports.

Instructions

Retrieve previously stored scan results by scan ID. Use this to fetch full results for scans that were stored (storeScan=true), or to check on scans that may have been processing when originally submitted.

Input Schema

Name      Required  Description                                                    Default
scan_id   Yes       The scan ID (returned as 'id' in the original scan response).  (none)
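For reference, a minimal arguments object that satisfies this schema; the scan ID value is purely illustrative.

```python
# Hypothetical payload for get_scan_results; the ID value is illustrative.
arguments = {"scan_id": "1234567"}

# scan_id is the only property and the only required field, a string.
assert isinstance(arguments.get("scan_id"), str) and arguments["scan_id"]
print(arguments["scan_id"])
```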

Implementation Reference

  • Handler function that retrieves stored scan results by scan_id, validates the input, calls the client API, and formats the response using _format_full_result()
    from typing import Any  # assumed import for this excerpt
    from mcp.types import TextContent  # assumed import for this excerpt

    async def handle_get_scan_results(
        arguments: dict[str, Any],
        client: OriginalityClient,
    ) -> list[TextContent]:
        """Retrieve stored scan results."""
        scan_id = arguments.get("scan_id", "")
        if not scan_id:
            return [TextContent(type="text", text="No scan_id provided.")]
    
        result = await client.get_scan_results(scan_id)
        return [TextContent(type="text", text=_format_full_result(result))]
  • Tool schema definition with name 'get_scan_results', description, and inputSchema requiring scan_id parameter
    Tool(
        name="get_scan_results",
        description=(
            "Retrieve previously stored scan results by scan ID. Use this to fetch "
            "full results for scans that were stored (storeScan=true), or to check "
            "on scans that may have been processing when originally submitted."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "scan_id": {
                    "type": "string",
                    "description": "The scan ID (returned as 'id' in the original scan response).",
                },
            },
            "required": ["scan_id"],
        },
    ),
  • Registration mapping tool name 'get_scan_results' to its handler function handle_get_scan_results
    TOOL_HANDLERS = {
        "scan_ai": handle_scan_ai,
        "scan_full": handle_scan_full,
        "scan_plagiarism": handle_scan_plagiarism,
        "scan_readability": handle_scan_readability,
        "scan_seo": handle_scan_seo,
        "scan_url": handle_scan_url,
        "get_scan_results": handle_get_scan_results,
        "credit_balance": handle_credit_balance,
    }
  • Client method that makes the actual HTTP GET request to /scan-results endpoint with scan_id parameter
    async def get_scan_results(self, scan_id: str) -> dict[str, Any]:
        """Retrieve stored scan results by ID."""
        resp = await self.client.get("/scan-results", params={"id": scan_id})
        resp.raise_for_status()
        return resp.json()
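Pieced together, the retrieval path can be exercised end to end with stand-ins. Everything below except the handler's control flow is hypothetical: `TextContent` is a minimal stub for the MCP type, `StubClient` replaces the real `OriginalityClient`, and `_format_full_result` is a placeholder for the server's actual formatter.

```python
import asyncio
from dataclasses import dataclass
from typing import Any

@dataclass
class TextContent:
    """Minimal stand-in for mcp.types.TextContent."""
    type: str
    text: str

class StubClient:
    """Stand-in for OriginalityClient returning a canned stored-scan payload."""
    async def get_scan_results(self, scan_id: str) -> dict[str, Any]:
        return {"id": scan_id, "status": "complete"}

def _format_full_result(result: dict[str, Any]) -> str:
    # Placeholder formatter; the real one lives alongside the handler.
    return f"scan {result['id']}: {result['status']}"

async def handle_get_scan_results(arguments, client):
    # Same control flow as the handler excerpt above: validate, fetch, format.
    scan_id = arguments.get("scan_id", "")
    if not scan_id:
        return [TextContent(type="text", text="No scan_id provided.")]
    result = await client.get_scan_results(scan_id)
    return [TextContent(type="text", text=_format_full_result(result))]

out = asyncio.run(handle_get_scan_results({"scan_id": "abc123"}, StubClient()))
print(out[0].text)  # scan abc123: complete
```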
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about fetching stored results and checking processing status, which goes beyond the basic 'retrieve' action. However, it says nothing about error handling, response format, or rate limits; even for a read-only, mutation-free tool, those are real gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
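To make the error-handling gap concrete: because the client method calls `raise_for_status()`, an unknown or expired scan ID surfaces as a raised HTTP error rather than a structured result. A sketch of a defensive wrapper, using a local stand-in exception in place of the real HTTP client's error type:

```python
import asyncio

class HTTPStatusError(Exception):
    """Local stand-in for an HTTP client's status error (e.g. on a 404)."""

class FailingClient:
    """Stand-in client whose lookup fails, as for an unknown scan_id."""
    async def get_scan_results(self, scan_id: str):
        raise HTTPStatusError(f"404 for scan {scan_id!r}")

async def safe_get_scan_results(client, scan_id: str) -> str:
    try:
        result = await client.get_scan_results(scan_id)
        return f"ok: {result}"
    except HTTPStatusError as exc:
        # Convert the transport failure into text an agent can act on.
        return f"Scan lookup failed: {exc}"

message = asyncio.run(safe_get_scan_results(FailingClient(), "missing"))
print(message)  # Scan lookup failed: 404 for scan 'missing'
```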

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core purpose, and the second adds usage context. Every word earns its place without redundancy, and it's front-loaded with the primary function. This is a model of concise tool documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with one parameter and no output schema, the description is reasonably complete. It covers purpose and usage context adequately. However, without annotations or output schema, it could benefit from more behavioral details (e.g., what the results look like), keeping it at an adequate but not exceptional level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the scan_id parameter well-documented. The description adds marginal value by mentioning that scan_id comes from 'the original scan response' and relating it to stored/processing scans. This provides context but doesn't significantly enhance the schema's semantics, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retrieve previously stored scan results by scan ID.' It specifies the verb ('retrieve') and resource ('scan results'), and distinguishes it from siblings by focusing on fetching stored results rather than performing new scans. However, it doesn't explicitly differentiate from all siblings (e.g., scan_url might also retrieve results), so it's not a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool: for scans that were stored (storeScan=true) or to check on processing scans. It implies alternatives by referencing 'originally submitted' scans, suggesting other tools for initial scanning. However, it doesn't explicitly name when-not-to-use cases or list specific sibling alternatives, preventing a score of 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
