research_finding_add

Record an atomic research claim with evidence from arxiv, wiki, curated, or web sources. Specify source kind, optional reference, evidence URL, confidence level, and notes to build project memory.

Instructions

Record an atomic claim with its evidence into the project. Each finding has a source kind (arxiv|wiki|curated|web), an optional ref/url, a confidence 0-1, and free-text notes. Findings are the building blocks; export consolidates them into a memo.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| projectId | Yes | | |
| claim | Yes | | |
| sourceKind | Yes | `"arxiv" \| "wiki" \| "curated" \| "web"` | |
| sourceRef | No | | |
| evidenceUrl | No | | |
| confidence | No | 0-1 | 0.7 |
| notes | No | | |
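A call supplying the three required fields might look like the following sketch; the `projectId`, claim, and reference values are placeholders for illustration, not real project data.

```typescript
// Hypothetical arguments for research_finding_add; the projectId, claim,
// and reference values below are illustrative placeholders.
const args = {
  projectId: "proj_123", // required
  claim: "Transformer self-attention is O(n^2) in sequence length.", // required
  sourceKind: "arxiv", // required: "arxiv" | "wiki" | "curated" | "web"
  sourceRef: "1706.03762", // optional free-form reference
  evidenceUrl: "https://arxiv.org/abs/1706.03762", // optional
  confidence: 0.9, // optional, 0-1; defaults to 0.7 when omitted
};
```

Omitting `confidence` and `notes` is fine; the server applies the documented defaults.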

Implementation Reference

  • The handler function `handleFindingAdd` that executes the research_finding_add tool logic. Inserts a row into the `research_findings` table with project_id, source_kind, source_ref, claim, evidence_url, confidence, and notes.
    const handleFindingAdd: McpToolHandler = async (args, ctx) => {
      const pool = (await ensureSchema(ctx)) as any;
      const r = await pool.query(
        `INSERT INTO research_findings (project_id, source_kind, source_ref, claim, evidence_url, confidence, notes)
         VALUES ($1,$2,$3,$4,$5,$6,$7) RETURNING id`,
        [String(args.projectId), String(args.sourceKind), args.sourceRef ?? null, String(args.claim), args.evidenceUrl ?? null, args.confidence ?? 0.7, args.notes ?? null],
      );
      return ok(asText({ success: true, finding_id: r.rows[0].id }));
    };
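Note that the handler applies `args.confidence ?? 0.7` but does not range-check the value, so a caller may want to clamp confidence into the documented 0-1 range before invoking the tool. `clampConfidence` below is a hypothetical client-side helper, not part of the server:

```typescript
// Sketch only: mirror the server-side default of 0.7, then clamp into the
// documented [0, 1] range. This helper is illustrative, not the server's code.
function clampConfidence(value?: number): number {
  const v = value ?? 0.7; // same default the handler uses
  return Math.min(1, Math.max(0, v)); // clamp to 0-1
}
```
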
  • The tool definition/registration schema for research_finding_add: defines input properties (projectId, claim, sourceKind, sourceRef, evidenceUrl, confidence, notes) with required fields projectId, claim, and sourceKind.
    definition: {
      name: 'research_finding_add',
      description: 'Record an atomic claim with its evidence into the project. Each finding has a source kind (arxiv|wiki|curated|web), an optional ref/url, a confidence 0-1, and free-text notes. Findings are the building blocks; export consolidates them into a memo.',
      inputSchema: {
        type: 'object',
        properties: {
          projectId: { type: 'string' },
          claim: { type: 'string' },
          sourceKind: { type: 'string', description: '"arxiv" | "wiki" | "curated" | "web"' },
          sourceRef: { type: 'string' },
          evidenceUrl: { type: 'string' },
          confidence: { type: 'number', description: '0-1, default 0.7.' },
          notes: { type: 'string' },
        },
        required: ['projectId', 'claim', 'sourceKind'],
      },
    },
  • The registration entry in the RESEARCH_TOOLS array that maps the tool definition (name, description, inputSchema) to the handleFindingAdd handler, categorized under group 'ai'.
    {
      group: 'ai',
      definition: {
        name: 'research_finding_add',
        description: 'Record an atomic claim with its evidence into the project. Each finding has a source kind (arxiv|wiki|curated|web), an optional ref/url, a confidence 0-1, and free-text notes. Findings are the building blocks; export consolidates them into a memo.',
        inputSchema: {
          type: 'object',
          properties: {
            projectId: { type: 'string' },
            claim: { type: 'string' },
            sourceKind: { type: 'string', description: '"arxiv" | "wiki" | "curated" | "web"' },
            sourceRef: { type: 'string' },
            evidenceUrl: { type: 'string' },
            confidence: { type: 'number', description: '0-1, default 0.7.' },
            notes: { type: 'string' },
          },
          required: ['projectId', 'claim', 'sourceKind'],
        },
      },
      handler: handleFindingAdd,
    },
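The registration entry above pairs a definition with its handler, which suggests dispatch by tool name over the registry. The types and lookup below are a minimal sketch under that assumption, not the server's actual code:

```typescript
// Hypothetical shape of a RESEARCH_TOOLS-style registry entry and a
// name-based lookup; all identifiers here are illustrative.
type ToolEntry = {
  group: string;
  definition: { name: string };
  handler: (args: Record<string, unknown>) => unknown;
};

function findTool(tools: ToolEntry[], name: string): ToolEntry | undefined {
  // Dispatch by the tool's registered name.
  return tools.find((t) => t.definition.name === name);
}

const tools: ToolEntry[] = [
  {
    group: "ai",
    definition: { name: "research_finding_add" },
    handler: () => ({ success: true }), // stub standing in for handleFindingAdd
  },
];
```

Lookup for an unknown name simply returns `undefined`, leaving error handling to the caller.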
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must cover behavioral traits on its own. It lists fields (sourceKind, optional ref/url, confidence, notes) and explains the role of findings in the workflow, but does not mention side effects, idempotency, or permissions. Adequate for a simple create operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose; each sentence adds value. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters (3 required) and no output schema, the description provides sufficient context: what fields to specify, the workflow (findings as building blocks, export for memo). Complete for a create tool of moderate complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 29% (only sourceKind and confidence have schema descriptions). The description adds context for sourceKind (enum values), marks sourceRef/evidenceUrl as optional, and identifies notes as free-text, but does not describe projectId or claim in detail. It partially compensates for the low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool records an atomic claim with evidence into a project. The verb 'record' and resource 'atomic claim with evidence' are specific. It distinguishes from sibling tools like research_export or research_gap_add by focusing on adding individual findings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage by describing findings as building blocks and noting that export consolidates them into a memo, but offers no explicit guidance on when to use this tool versus alternatives (e.g., research_synthesize). No exclusions or when-not-to-use cases are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
