get_statistical_reporting_check_prompt

Check statistical results text for completeness and consistency. Verify effect size, confidence intervals, p-value format, sample sizes, and statistical tests.

Instructions

[PRO] Review results text for consistent, complete statistical reporting. Checks: effect size, 95% CI, p-value format, N per group, statistical test. DATA SAFETY: Only input published or approved statistical results.

Input Schema

Name              Required  Description  Default
results_text      Yes
journal_or_style  No                     AMA
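Assuming the standard MCP `tools/call` request shape (JSON-RPC 2.0, per the Model Context Protocol specification), an invocation matching this input schema might look like the following sketch; the `results_text` value is a hypothetical placeholder:

```python
import json

# Hypothetical JSON-RPC 2.0 request an MCP client would send to invoke
# this tool. Only "results_text" is required; "journal_or_style"
# defaults to "AMA" and is shown here for completeness.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_statistical_reporting_check_prompt",
        "arguments": {
            "results_text": "Mean difference was 1.8 (P=.04).",
            "journal_or_style": "AMA",
        },
    },
}
print(json.dumps(request, indent=2))
```

The server responds with the generated prompt string wrapped in the usual MCP tool result envelope.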

Output Schema

Name    Required  Description  Default
result  Yes

Implementation Reference

  • The @mcp.tool() decorated function that implements the 'get_statistical_reporting_check_prompt' tool logic. It accepts 'results_text' and optional 'journal_or_style' (default 'AMA') and returns a prompt string instructing review of statistical reporting consistency.
    @mcp.tool()
    def get_statistical_reporting_check_prompt(results_text: str, journal_or_style: str = "AMA") -> str:
        """
        [PRO] Review results text for consistent, complete statistical reporting.
        Checks: effect size, 95% CI, p-value format, N per group, statistical test.
        DATA SAFETY: Only input published or approved statistical results.
        """
        return f"""Review the following results text and ensure statistical values are reported consistently
    per {journal_or_style} style.
    
    For each result, confirm:
    - Effect size or difference
    - 95% confidence interval
    - P-value (formatted per {journal_or_style} style)
    - N for each group
    - Statistical test used (if not already in Methods)
    
    {results_text}
    
    Flag missing elements and suggest where they should be inserted.
    
    🔒 DATA SAFETY: Only input statistical results from published papers or data approved for disclosure."""
  • server.py:1000-1000 (registration)
    The tool is listed in a directory of pro-tier tools, with description 'Verify consistent statistical reporting'. This entry is used by the 'get_tool_directory' function (which generates a listing of all available tools).
    ("get_statistical_reporting_check_prompt", "Verify consistent statistical reporting"),
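To see what the tool actually returns, the prompt builder above can be exercised as a standalone sketch (the `@mcp.tool()` decorator is omitted so the function can be called directly; the sample results snippet is hypothetical):

```python
# Standalone sketch of the prompt builder shown above, without the
# @mcp.tool() decorator, so the returned string can be inspected directly.
def get_statistical_reporting_check_prompt(results_text: str, journal_or_style: str = "AMA") -> str:
    return f"""Review the following results text and ensure statistical values are reported consistently
per {journal_or_style} style.

For each result, confirm:
- Effect size or difference
- 95% confidence interval
- P-value (formatted per {journal_or_style} style)
- N for each group
- Statistical test used (if not already in Methods)

{results_text}

Flag missing elements and suggest where they should be inserted."""

# Hypothetical snippet missing its confidence interval and per-group N.
snippet = "Treatment reduced SBP by 5.2 mm Hg (P=.03)."
prompt = get_statistical_reporting_check_prompt(snippet, journal_or_style="AMA")
print(prompt)
```

Because the tool only formats and returns a string, it is read-only with no side effects on the server, which is consistent with the Behavior assessment below.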
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It transparently lists the checks performed and includes a safety warning about input data. However, it does not state whether the tool is read-only or has side effects, though a prompt-generation tool is by nature safe.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, focused paragraph with a clear structure: a line describing the purpose, a list of checks, and a safety warning. Every sentence adds value, and there is no unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (not shown), the description does not need to explain return values. It covers the tool's function and safety adequately. A minor gap is the lack of context on how the returned prompt should be used, but overall it is complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The parameters 'results_text' and 'journal_or_style' are self-explanatory from their names, and the default for 'journal_or_style' is given as 'AMA'. With 0% schema description coverage, the description adds little beyond what is obvious, but the names are clear enough.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool reviews results text for consistent and complete statistical reporting, listing specific checks such as effect size, 95% CI, p-value format, N per group, and statistical test. This clearly differentiates it from sibling tools that generate prompts for other academic tasks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a data safety note ('Only input published or approved statistical results') but does not provide explicit guidance on when to use this tool versus alternatives. Usage is implied through the specific checks mentioned, but there is no when-not-to-use guidance or comparison with similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
