Glama
CSOAI-ORG

Explainability Report MCP

quick_scan

Analyze an AI system description to receive an immediate transparency and explainability assessment. No API key needed.

Instructions

Describe an AI system -> instant transparency and explainability assessment. No API key required.

Behavior: This tool is read-only and stateless — it produces analysis output without modifying any external systems, databases, or files. Safe to call repeatedly with identical inputs (idempotent). Free tier: 10/day rate limit. Pro tier: unlimited. No authentication required for basic usage.

When to use: Use this tool when you need structured analysis or classification of inputs against established frameworks or standards.

When NOT to use: Not suitable for real-time production decision-making without human review of results.

Behavioral Transparency:
- Side Effects: This tool is read-only and produces no side effects. It does not modify any external state, databases, or files. All output is computed in-memory and returned directly to the caller.
- Authentication: No authentication required for basic usage. Pro/Enterprise tiers require a valid MEOK API key passed via the MEOK_API_KEY environment variable.
- Rate Limits: Free tier: 10 calls/day. Pro tier: unlimited. Rate limit headers are included in responses (X-RateLimit-Remaining, X-RateLimit-Reset).
- Error Handling: Returns structured error objects with an 'error' key on failure. Never raises unhandled exceptions. Invalid inputs return descriptive validation errors.
- Idempotency: Fully idempotent — calling with the same inputs always produces the same output. Safe to retry on timeout or transient failure.
- Data Privacy: No input data is stored, logged, or transmitted to external services. All processing happens locally within the MCP server process.
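Per the Error Handling note above, failures come back as structured objects rather than exceptions. A sketch of the shape a caller should expect (the message text here is illustrative, not the server's actual wording):

```python
# Structured error object as described under "Error Handling":
# the 'error' key identifies the failure class.
rate_limit_error = {
    "error": "rate_limited",
    "message": "Free tier limit reached (10 calls/day).",
}

def is_error(response):
    # type: (dict) -> bool
    """Callers can branch on the presence of the 'error' key."""
    return "error" in response
```

Because the tool never raises unhandled exceptions, this key check is the only error handling a caller needs.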

Input Schema

Name         Required   Description   Default
description  Yes        —             —

Implementation Reference

  • server.py:279 (registration)
    The tool 'quick_scan' is registered as an MCP tool via the @mcp.tool() decorator on line 279.
    @mcp.tool()
  • The handler function for 'quick_scan'. Takes a description string, checks rate limits, assesses transparency level, detects model type, checks for high-risk keywords, recommends explainability methods, and returns a structured response with transparency assessment.
    def quick_scan(description: str) -> dict:
        """Describe an AI system -> instant transparency and explainability assessment. No API key required.
    
        Behavior:
            This tool is read-only and stateless — it produces analysis output
            without modifying any external systems, databases, or files.
            Safe to call repeatedly with identical inputs (idempotent).
            Free tier: 10/day rate limit. Pro tier: unlimited.
            No authentication required for basic usage.
    
        When to use:
            Use this tool when you need structured analysis or classification
            of inputs against established frameworks or standards.
    
        When NOT to use:
            Not suitable for real-time production decision-making without
            human review of results.
        Behavioral Transparency:
            - Side Effects: This tool is read-only and produces no side effects. It does not modify
              any external state, databases, or files. All output is computed in-memory and returned
              directly to the caller.
            - Authentication: No authentication required for basic usage. Pro/Enterprise tiers
              require a valid MEOK API key passed via the MEOK_API_KEY environment variable.
            - Rate Limits: Free tier: 10 calls/day. Pro tier: unlimited. Rate limit headers are
              included in responses (X-RateLimit-Remaining, X-RateLimit-Reset).
            - Error Handling: Returns structured error objects with 'error' key on failure.
              Never raises unhandled exceptions. Invalid inputs return descriptive validation errors.
            - Idempotency: Fully idempotent — calling with the same inputs always produces the
              same output. Safe to retry on timeout or transient failure.
            - Data Privacy: No input data is stored, logged, or transmitted to external services.
              All processing happens locally within the MCP server process.
        """
        limit_err = _check_rate_limit("quick_scan_anonymous")
        if limit_err:
            return {"error": "rate_limited", "message": limit_err}
    
        transparency_level, transparency_score, positive_signals = _assess_transparency_level(description)
        text_lower = description.lower()
    
        # Determine model type
        detected_type = "unknown"
        for mtype in MODEL_TYPES:
            if mtype in text_lower or (mtype == "nlp" and any(w in text_lower for w in ["language", "text", "chat", "llm"])):
                detected_type = mtype
                break
        if detected_type == "unknown" and any(w in text_lower for w in ["image", "vision", "photo", "video"]):
            detected_type = "computer_vision"
        if detected_type == "unknown" and any(w in text_lower for w in ["generat", "create", "synthes"]):
            detected_type = "generative"
        if detected_type == "unknown" and any(w in text_lower for w in ["recommend", "suggest", "personali"]):
            detected_type = "recommendation"
        if detected_type == "unknown" and any(w in text_lower for w in ["predict", "classif", "detect"]):
            detected_type = "classification"
    
        # Determine if high-risk (needs full transparency)
        high_risk_keywords = [
            "hiring", "recruit", "loan", "credit", "insurance", "medical", "diagnosis",
            "judicial", "law enforcement", "biometric", "education", "grading",
        ]
        is_high_risk = bool(_match_keywords(description, high_risk_keywords))
    
        # Recommend explainability methods
        recommended_methods = []  # type: List[str]
        if detected_type in MODEL_TYPES:
            method_keys = MODEL_TYPES[detected_type]["explainability_methods"]
            for mk in method_keys:
                if mk in EXPLAINABILITY_METHODS:
                    recommended_methods.append(EXPLAINABILITY_METHODS[mk]["name"])
    
        # Build top actions
        if transparency_level == "low":
            top_actions = [
                "URGENT: Create model documentation covering purpose, capabilities, and limitations",
                "Implement at least one explainability method ({})".format(
                    recommended_methods[0] if recommended_methods else "SHAP or LIME"
                ),
                "Document known limitations and failure modes for deployers",
            ]
        elif transparency_level == "moderate":
            top_actions = [
                "Strengthen documentation with quantitative performance metrics per group",
                "Add human-readable decision explanations for end users",
                "Conduct a transparency audit against EU AI Act Article 13",
            ]
        else:
            top_actions = [
                "Good transparency baseline -- formalise into EU AI Act Article 13 compliant documentation",
                "Consider generating a model card for public disclosure",
                "Implement ongoing transparency monitoring for model updates",
            ]
    
        return {
            "transparency_level": transparency_level,
            "transparency_score": transparency_score,
            "positive_signals": positive_signals,
            "detected_model_type": detected_type,
            "is_high_risk": is_high_risk,
            "recommended_explainability_methods": recommended_methods if recommended_methods else ["SHAP", "LIME", "Counterfactual Explanations"],
            "top_3_actions": top_actions,
            "eu_ai_act_relevance": (
                "HIGH-RISK: Article 13 transparency obligations are MANDATORY. "
                "Full technical documentation per Annex IV required."
                if is_high_risk
                else "Transparency obligations under Article 50 may apply (user disclosure, content labelling)."
            ),
            "next_step": "Use generate_model_card for structured documentation or transparency_audit for full assessment",
            "meok_labs": "https://meok.ai",
        }
  • The input schema is defined by type hints: description: str -> dict. No Pydantic models used; validation is via inline type hints and the function docstring.
    def quick_scan(description: str) -> dict:
        """Describe an AI system -> instant transparency and explainability assessment. No API key required.
    
        Behavior:
            This tool is read-only and stateless — it produces analysis output
            without modifying any external systems, databases, or files.
            Safe to call repeatedly with identical inputs (idempotent).
            Free tier: 10/day rate limit. Pro tier: unlimited.
            No authentication required for basic usage.
    
        When to use:
            Use this tool when you need structured analysis or classification
            of inputs against established frameworks or standards.
    
        When NOT to use:
            Not suitable for real-time production decision-making without
            human review of results.
        Behavioral Transparency:
            - Side Effects: This tool is read-only and produces no side effects. It does not modify
              any external state, databases, or files. All output is computed in-memory and returned
              directly to the caller.
            - Authentication: No authentication required for basic usage. Pro/Enterprise tiers
              require a valid MEOK API key passed via the MEOK_API_KEY environment variable.
            - Rate Limits: Free tier: 10 calls/day. Pro tier: unlimited. Rate limit headers are
              included in responses (X-RateLimit-Remaining, X-RateLimit-Reset).
            - Error Handling: Returns structured error objects with 'error' key on failure.
              Never raises unhandled exceptions. Invalid inputs return descriptive validation errors.
            - Idempotency: Fully idempotent — calling with the same inputs always produces the
              same output. Safe to retry on timeout or transient failure.
            - Data Privacy: No input data is stored, logged, or transmitted to external services.
              All processing happens locally within the MCP server process.
        """
  • Helper function _assess_transparency_level() used by quick_scan to evaluate transparency from the description text.
    def _assess_transparency_level(description):
        # type: (str) -> Tuple[str, float, List[str]]
        """Assess how transparent an AI system appears from its description."""
        text_lower = description.lower()
        score = 0.0
        positive_signals = []  # type: List[str]
        max_score = 12.0
    
        transparency_indicators = [
            ("document", "Documentation mentioned"),
            ("explain", "Explainability considered"),
            ("transparen", "Transparency explicitly addressed"),
            ("human oversight", "Human oversight mentioned"),
            ("audit", "Auditability considered"),
            ("log", "Logging capability mentioned"),
            ("monitor", "Monitoring mentioned"),
            ("bias", "Bias awareness mentioned"),
            ("fairness", "Fairness considered"),
            ("accuracy", "Accuracy metrics mentioned"),
            ("limitation", "Limitations acknowledged"),
            ("user inform", "User information provided"),
        ]
    
        for keyword, signal in transparency_indicators:
            if keyword in text_lower:
                score += 1.0
                positive_signals.append(signal)
    
        normalised = score / max_score
        if normalised >= 0.6:
            level = "high"
        elif normalised >= 0.3:
            level = "moderate"
        else:
            level = "low"
    
        return level, round(normalised, 2), positive_signals
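For instance, a description that mentions documentation, explanations, monitoring, and bias hits four of the twelve indicators. Restating the scoring inline as a runnable check (indicator keywords copied from `_assess_transparency_level` above):

```python
# Keyword indicators from _assess_transparency_level, one point each.
indicators = [
    "document", "explain", "transparen", "human oversight", "audit", "log",
    "monitor", "bias", "fairness", "accuracy", "limitation", "user inform",
]

description = "We document the model, explain predictions, and monitor for bias."
text_lower = description.lower()
score = sum(1.0 for kw in indicators if kw in text_lower)  # 4 hits

normalised = score / 12.0  # 4 / 12 ~ 0.33, just over the "moderate" threshold
level = "high" if normalised >= 0.6 else "moderate" if normalised >= 0.3 else "low"
print(level, round(normalised, 2))  # moderate 0.33
```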
  • Helper function _match_keywords() used by quick_scan to detect high-risk keywords in the description.
    def _match_keywords(text, keywords):
        # type: (str, List[str]) -> List[str]
        """Return matched keywords found in text (case-insensitive)."""
        text_lower = text.lower()
        return [kw for kw in keywords if kw.lower() in text_lower]
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels. It details side effects (read-only, stateless), authentication, rate limits, error handling, idempotency, and data privacy comprehensively.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, with clear sections and a front-loaded purpose. While thorough, it is somewhat verbose; most sentences earn their place, but a few could be tightened.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description is remarkably complete. It covers usage, privacy, rate limits, error handling, and idempotency, leaving no significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has one parameter 'description' with 0% coverage. The description adds context by stating the input should be a description of an AI system, but lacks additional details like format or examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Describe an AI system -> instant transparency and explainability assessment.' It uses a specific verb and resource, and distinguishes itself from siblings by emphasizing instant results and no API key required.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit 'When to use' and 'When NOT to use' sections, providing clear context. However, it does not explicitly name alternative tools from the sibling list for differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/CSOAI-ORG/explainability-report-mcp'
