analyze_claim

Evaluate claims using empirical, responsible, harmonic, or pluralistic frameworks to identify validation steps and assess the credibility of information.

Instructions

Analyze a claim using multiple epistemological frameworks and suggest validation steps

Input Schema

Name        Required    Description                                                                      Default
framework   No          Validation framework to use (empirical, responsible, harmonic, or pluralistic)  -
text        Yes         Claim to analyze                                                                 -
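
A minimal call sketch, assuming a connected MCP client from the official TypeScript SDK (@modelcontextprotocol/sdk); the claim text below is a placeholder:

// Minimal sketch: `client` is assumed to be an already-connected MCP Client instance.
const result = await client.callTool({
  name: "analyze_claim",
  arguments: {
    text: "Daily meditation reduces stress",  // placeholder claim
    framework: "empirical"                    // optional; empirical | responsible | harmonic | pluralistic
  }
});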

Implementation Reference

  • The main execution logic for the 'analyze_claim' tool. It determines the validation framework, performs validation using helper functions, generates framework-specific cross-reference prompts, and formats the response content.
    if (name === "analyze_claim") {
      const framework: keyof typeof VALIDATION_FRAMEWORKS = 
        (typeof args.framework === 'string' && args.framework in VALIDATION_FRAMEWORKS) 
          ? args.framework as keyof typeof VALIDATION_FRAMEWORKS 
          : VALIDATION_FRAMEWORK;
      const validation = validateWithFramework(args.text, framework, {
        hasEmpirical: /evidence|study|research|data/i.test(args.text),
        servesWellbeing: /benefit|improve|help|support/i.test(args.text),
        maintainsHarmony: /balance|harmony|integrate/i.test(args.text)
      });
    
      const suggestions = getValidationSuggestions(args.text, framework);
      
      // Generate cross-referencing prompts
      const crossRefPrompts = [
        `- Use Exa MCP server to search for general information: "${args.text}"`,
        `- Use Brave Search for independent web sources: "${args.text}"`,
        `- Search ArXiv for preprints and technical papers: "${args.text}"`,
        `- Use Google Scholar MCP server to find peer-reviewed research: "${args.text}"`,
        `- Cross-reference findings between academic and general sources to identify consensus or conflicts`
      ];
    
      // Framework-specific cross-references
      if (framework === "empirical" || framework === "pluralistic") {
        crossRefPrompts.push(
          `- Compare methodologies between ArXiv papers and peer-reviewed research`,
          `- Analyze replication status across different studies`,
          `- Cross-validate findings between academic databases`
        );
      }
    
      if (framework === "responsible" || framework === "pluralistic") {
        crossRefPrompts.push(
          `- Use Exa MCP server to search for community impact studies: "${args.text}"`,
          `- Cross-reference academic findings with community experiences`,
          `- Compare traditional knowledge with modern research findings`
        );
      }
    
      if (framework === "harmonic" || framework === "pluralistic") {
        crossRefPrompts.push(
          `- Use Exa MCP server to search for alternative perspectives: "${args.text}"`,
          `- Compare Eastern and Western research approaches`,
          `- Synthesize findings across different knowledge systems`
        );
      }
    
      return {
        content: [
          {
            type: "text",
            text: `Analysis using ${framework} framework:\n\n` +
                 `Requirements:\n${suggestions.join("\n")}\n\n` +
                 `Confidence level: ${validation.confidence}\n\n` +
                 `Suggested cross-references:\n${crossRefPrompts.join("\n")}`
          },
          {
            type: "text",
            text: JSON.stringify({
              framework,
              validation,
              suggestions,
              crossRefPrompts
            })
          }
        ],
      };
    }
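    Based on the return statement above, the response carries two text blocks: a human-readable summary and a JSON payload. A sketch of the payload's shape, inferred from the handler (all concrete values are placeholders, not captured output):
    // Shape inferred from the handler above; values are illustrative only.
    const examplePayload = {
      framework: "empirical",
      validation: {
        requirements: ["..."],  // whatever validator.validateClaim(claim) returns
        confidence: "..."       // whatever validator.confidenceLevel(evidence) returns
      },
      suggestions: ['- Verify if claim "..." meets requirement: ...'],
      crossRefPrompts: ['- Use Exa MCP server to search for general information: "..."']
    };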
  • Input schema definition for the 'analyze_claim' tool, specifying the required 'text' parameter and optional 'framework' with allowed values.
    {
      name: "analyze_claim",
      description: "Analyze a claim using multiple epistemological frameworks and suggest validation steps",
      inputSchema: {
        type: "object",
        properties: {
          text: {
            type: "string",
            description: "Claim to analyze",
          },
          framework: {
            type: "string",
            description: "Validation framework to use (empirical, responsible, harmonic, or pluralistic)",
            enum: ["empirical", "responsible", "harmonic", "pluralistic"],
          }
        },
        required: ["text"],
      },
    },
  • src/index.ts:19-23 (registration)
    Server capabilities registration declaring 'analyze_claim' as an available tool.
    tools: {
      analyze_claim: true,
      validate_sources: true,
      check_manipulation: true
    },
  • Helper function that applies the selected framework's validation logic to compute requirements and confidence level, called by the analyze_claim handler.
    export function validateWithFramework(
      claim: string,
      framework: keyof typeof VALIDATION_FRAMEWORKS,
      evidence: any
    ) {
      const validator = VALIDATION_FRAMEWORKS[framework];
      return {
        requirements: validator.validateClaim(claim),
        confidence: validator.confidenceLevel(evidence)
      };
    }
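    An illustrative sketch of the shape each VALIDATION_FRAMEWORKS entry appears to have, inferred from how the two helpers call it; this interface is an assumption, not part of the source shown here:
    // Inferred shape only; the real framework definitions live elsewhere in the server source.
    interface ValidationFramework {
      validateClaim(claim: string): { requirements: string[] };
      confidenceLevel(evidence: {
        hasEmpirical: boolean;
        servesWellbeing: boolean;
        maintainsHarmony: boolean;
      }): string;
    }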
  • Helper function that generates specific validation step suggestions from the framework's requirements, used in the analyze_claim response.
    export function getValidationSuggestions(
      claim: string,
      framework: keyof typeof VALIDATION_FRAMEWORKS
    ): string[] {
      const requirements = VALIDATION_FRAMEWORKS[framework].validateClaim(claim).requirements;
      return requirements.map(req => `- Verify if claim "${claim}" meets requirement: ${req}`);
    }
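    A hypothetical usage illustration; the requirement string is invented for the example, since real requirements come from the framework definitions:
    // Hypothetical example: if the empirical framework returned the requirement
    // "peer-reviewed support", the call below would yield the suggestion shown.
    const steps = getValidationSuggestions("Daily meditation reduces stress", "empirical");
    // ['- Verify if claim "Daily meditation reduces stress" meets requirement: peer-reviewed support']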
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'analyze' and 'suggest validation steps' but doesn't specify what the analysis entails (e.g., output format, depth), whether it's read-only or has side effects, or any constraints like rate limits or authentication needs. This leaves significant gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
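
One way to close this gap would be to declare MCP tool annotations at registration time. The sketch below is a hypothetical addition, not something the server currently ships, and the hint values are assumptions about the handler's behavior:

// Hypothetical addition: behavioral hints on the tool definition (not present in the current server).
{
  name: "analyze_claim",
  description: "Analyze a claim using multiple epistemological frameworks and suggest validation steps",
  annotations: {
    readOnlyHint: true,      // assumed: the handler only inspects the claim text
    destructiveHint: false,  // assumed: nothing is modified or deleted
    idempotentHint: true,    // assumed: the same input yields the same analysis
    openWorldHint: false     // assumed: cross-referencing is suggested, not performed
  },
  inputSchema: { /* unchanged from the definition above */ }
}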

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('analyze a claim') and adds key details ('using multiple epistemological frameworks and suggest validation steps'). Every word contributes meaning without redundancy or waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of analyzing claims with frameworks and no annotations or output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., analysis results, validation steps list), how frameworks differ, or behavioral aspects like safety or performance. For a tool with 2 parameters and no structured output, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('framework' with enum values and 'text' as the claim). The description adds no additional semantic context beyond what's in the schema, such as explaining the frameworks or how they affect analysis. Baseline 3 is appropriate when the schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('analyze') and resource ('a claim'), and specifies that it uses 'multiple epistemological frameworks' and will 'suggest validation steps'. This gives a specific verb-plus-resource combination, though it doesn't explicitly differentiate the tool from siblings like 'check_manipulation' or 'validate_sources', whose domains may overlap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'check_manipulation' or 'validate_sources'. It mentions 'validation steps' but doesn't clarify if this is for preliminary analysis, in-depth validation, or specific contexts. No exclusions or prerequisites are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/bmorphism/anti-bullshit-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.