
run_snapshot

Capture passing test results as a golden baseline to define expected behavior. Future runs compare against this snapshot for regression detection.

Instructions

Run tests and save passing results as the new golden baseline. Use this to establish or update the expected behavior after an intentional change. Future run_check calls will compare against this snapshot. Call this: (1) after creating a new test with create_test, (2) after confirming a behavioral change is intentional, (3) before making large refactors so you have a clean rollback point. Only passing tests are saved — failing tests are skipped with a warning. IMPORTANT: Automatically detect test_path by looking for a 'tests/evalview/' directory in the current project. If it exists, pass it as test_path.
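
To make the calling convention concrete, here is a minimal sketch of invoking run_snapshot through the official MCP Python SDK. The server launch command and the notes value are placeholders (this page does not document how the eval-view server is started); the test_path logic mirrors the auto-detection rule stated above.

```python
import asyncio
from pathlib import Path

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Placeholder launch command; substitute however the eval-view
    # server is actually started in your project.
    params = StdioServerParameters(command="python", args=["-m", "evalview_server"])

    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Auto-detect test_path exactly as the description instructs:
            # prefer 'tests/evalview/' when it exists, otherwise 'tests'.
            test_path = "tests/evalview/" if Path("tests/evalview/").is_dir() else "tests"

            result = await session.call_tool(
                "run_snapshot",
                arguments={
                    "test_path": test_path,
                    "notes": "Baseline after intentional prompt change",  # hypothetical note
                },
            )
            print(result.content)


asyncio.run(main())
```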

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| test | No | Snapshot only this specific test by name (snapshots all by default) | |
| notes | No | Human-readable note about why this snapshot was taken | |
| test_path | No | Path to the test directory. Auto-detect: use 'tests/evalview/' if it exists, otherwise 'tests'. | |
| variant | No | Save as a named variant for non-deterministic agents (max 5 per test), e.g. 'v2', 'async-path'. | |
| preview | No | Show what would change without saving (dry-run mode). | false |
| reset | No | Delete all existing baselines before capturing new ones. | false |
| judge | No | Judge model for scoring (e.g. 'gpt-5', 'sonnet'). | |
| timeout | No | Timeout per test in seconds. | 30 |
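
The preview and reset flags are designed to be combined cautiously: dry-run first, then re-capture. A hedged sketch of that pattern follows; the helper name is ours, not the server's, and the argument values are illustrative.

```python
from mcp import ClientSession


async def safe_rebaseline(session: ClientSession, test_path: str) -> None:
    """Hypothetical helper: dry-run a snapshot, then destructively re-capture.

    'reset' deletes every existing baseline, so preview the change before
    committing to it.
    """
    # 1. Dry run: show what would change without writing anything.
    preview = await session.call_tool(
        "run_snapshot",
        arguments={"test_path": test_path, "preview": True},
    )
    print(preview.content)

    # 2. If the preview looks right, wipe old baselines and capture fresh
    #    ones. Only passing tests are saved; failing tests are skipped
    #    with a warning.
    await session.call_tool(
        "run_snapshot",
        arguments={
            "test_path": test_path,
            "reset": True,
            "notes": "Full re-baseline after refactor",
            "timeout": 60,  # raise the per-test timeout from the default 30s
        },
    )
```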
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite having no annotations, the description discloses key behaviors: only passing tests are saved (failing tests are skipped with a warning), and test_path is detected automatically. However, it does not explicitly mention that saving a new snapshot overwrites the previous baseline, nor does it warn about the destructive nature of the 'reset' parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately concise and well structured: a clear purpose statement, followed by usage guidance and an important note about test_path detection. There are no redundant sentences, though the text could be slightly more compact.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters, no output schema, and no annotations, the description covers the tool's role, usage workflow, failure behavior, and parameter hints. It provides sufficient context for an agent to use the tool correctly, though return value details are omitted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so each parameter already has a description. The tool description adds only minor additional context (e.g., the auto-detect logic for test_path). The baseline score of 3 is appropriate, as the description does not significantly enhance parameter understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Run tests and save passing results as the new golden baseline', using a specific verb and resource. It distinguishes itself from sibling tools such as run_check (which compares against this snapshot) and create_test (which creates tests).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use scenarios: after creating a test, after confirming a behavioral change, and before large refactors. It also points to run_check as the comparison counterpart, giving clear guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
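
To make that tool-selection guidance concrete, a sketch of the full lifecycle follows. The argument shapes for create_test and run_check are assumptions, since their schemas are not shown on this page.

```python
from mcp import ClientSession


async def test_lifecycle(session: ClientSession) -> None:
    # 1. Create a test (argument shape assumed; create_test's schema is
    #    not shown on this page).
    await session.call_tool("create_test", arguments={"name": "greeting-flow"})

    # 2. Capture its passing result as the golden baseline.
    await session.call_tool(
        "run_snapshot",
        arguments={"test": "greeting-flow", "notes": "Initial baseline"},
    )

    # 3. Later runs compare against that snapshot for regression detection
    #    (argument shape assumed for run_check as well).
    result = await session.call_tool("run_check", arguments={"test": "greeting-flow"})
    print(result.content)
```

Only run_snapshot is documented on this page; consult the sibling tools' own schemas before relying on the shapes sketched above.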
