export_research_files

Export research artifacts to disk, including verified reports and raw evidence batches, with configurable file splitting and output options.

Instructions

[EXPORT] Automatically writes research artifacts to disk. It can expand and write every verified report chunk without asking the LLM to loop finalize manually, and it can also write every gathered raw-evidence batch even when verify has not passed yet.

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| sessionId | No | | default |
| outputDir | Yes | Directory where research export files will be written. Prefer an absolute path so the caller knows exactly where the artifacts landed. | |
| baseFileName | No | Base filename prefix for all written research artifacts. The helper sanitizes it into a filesystem-safe ASCII stem. | research_export |
| exportVerifiedReport | No | When true, automatically expands and writes the full verified report by looping all finalize chunks internally. This path remains blocked until verify has passed. | |
| exportRawEvidence | No | When true, writes every gathered research batch as raw evidence files even if verify has not passed, separating evidence capture from narrative approval. | |
| maxChunkChars | No | Maximum size for each written markdown file. Large batches are automatically split across multiple files when needed. | |
| overwrite | No | Whether existing export files may be overwritten. Defaults to false so exports do not silently clobber prior artifacts. | |
| finalSummary | No | Optional final summary override for the verified report export. If omitted, the helper uses the stored pipeline summary or existing analysis and verification outputs. | |
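As a concrete sketch, a call to this tool from an MCP client might carry arguments like the following. The parameter names come from the schema above; the specific values, paths, and the JSON-RPC wrapping are illustrative assumptions, not taken from the tool's own documentation:

```python
import json

# Hypothetical arguments for an export_research_files call.
# Parameter names match the input schema above; every value here
# is an illustrative assumption.
arguments = {
    "outputDir": "/tmp/research/exports",  # absolute path preferred
    "baseFileName": "research_export",     # the documented default stem
    "exportVerifiedReport": True,          # blocked until verify has passed
    "exportRawEvidence": True,             # allowed even before verify
    "maxChunkChars": 40000,                # split larger markdown files
    "overwrite": False,                    # default: never clobber prior exports
}

# A JSON-RPC "tools/call" request wrapping those arguments.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "export_research_files", "arguments": arguments},
}
print(json.dumps(request, indent=2))
```

Since only outputDir is required, a minimal call could omit everything else and rely on the defaults shown in the table.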
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does an excellent job describing the key behaviors: automatic file writing, internal chunk processing, blocking until verification passes, separation of evidence capture from narrative approval, and file splitting for large batches. The only gap is the lack of information about error handling or permission requirements.
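The file-splitting behavior mentioned above is not spelled out by the tool, but a maxChunkChars-style split can be sketched as follows. This is a minimal illustration under assumptions, not the helper's actual algorithm:

```python
def split_into_chunks(text: str, max_chunk_chars: int) -> list[str]:
    """Split text into pieces of at most max_chunk_chars characters.

    Illustrative sketch only: the real helper's splitting rules
    (e.g. whether it breaks on markdown boundaries) are undocumented.
    """
    if max_chunk_chars <= 0:
        raise ValueError("max_chunk_chars must be positive")
    chunks = [text[i:i + max_chunk_chars]
              for i in range(0, len(text), max_chunk_chars)]
    return chunks or [""]  # an empty input still yields one (empty) file
```

Under this sketch, a 100,000-character report with maxChunkChars set to 40,000 would land in three files.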

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences that each earn their place. The first establishes the core export functionality, while the second elaborates on the two distinct export modes. It's appropriately sized for an 8-parameter tool with complex behavior, though it could be slightly more front-loaded with the most critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex export tool with 8 parameters, no annotations, and no output schema, the description provides substantial context about what the tool does and how it behaves. It covers the two main export modes, the automation they provide, and file handling. The main gap is the lack of information about return values or error conditions, which would be helpful given the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 88% schema description coverage, the baseline is 3. The description adds meaningful context about parameter behavior: it explains that the tool 'can expand and write every verified report chunk' (relates to exportVerifiedReport), 'can also write every gathered raw-evidence batch even when verify has not passed yet' (relates to exportRawEvidence), and implies automation that affects multiple parameters. This provides valuable semantic understanding beyond the schema's technical descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('writes research artifacts to disk', 'expand and write every verified report chunk', 'write every gathered raw-evidence batch') and distinguishes it from sibling tools by focusing on export functionality. It explicitly mentions automation capabilities that differentiate it from manual processes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use the tool ('automatically writes research artifacts', 'without asking the LLM to loop finalize manually') and distinguishes between two export modes (verified reports vs raw evidence). However, it doesn't explicitly mention when NOT to use this tool or name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/XJTLUmedia/Context-First-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.