varrd_ai

Turn a trading idea into a tested edge. The system loads data, charts patterns, runs statistical tests, backtests with stops, and generates exact trade setups.

Instructions

Talk to VARRD AI. Describe any trading idea in plain language and the system handles everything — loading decades of market data, charting your pattern, running statistical tests, backtesting with stops, and generating exact trade setups. Requires credits.

MULTI-TURN: First call creates a session. Keep calling with the same session_id, following context.next_actions each time.

  1. Your idea -> VARRD charts pattern

  2. 'test it' -> statistical test (event study or backtest)

  3. 'show me the trade setup' -> exact entry/stop/target prices
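The three-step flow can be sketched end to end. Note this is an illustrative sketch: call_tool below is a hypothetical stand-in for a real MCP client, and its canned replies only mimic the response shape described here (session_id, context.has_edge, context.next_actions):

```python
# Sketch of the multi-turn flow. call_tool is a hypothetical stand-in for a
# real MCP client invoking varrd_ai; here it returns canned responses so the
# control flow can be followed end to end.
def call_tool(message, session_id=None):
    # A real client would send {"message": message, "session_id": session_id}
    # and return the tool's JSON response. These replies mimic the documented
    # shape: a session_id plus a context block.
    if session_id is None:
        return {"session_id": "sess-1",
                "context": {"next_actions": ["test it"]}}
    if message == "test it":
        return {"session_id": session_id,
                "context": {"has_edge": True,
                            "edge_verdict": "STRONG EDGE",
                            "next_actions": ["show me the trade setup"]}}
    return {"session_id": session_id,
            "context": {"trade_setup": {"entry": "...", "stop": "...",
                                        "target": "..."}}}

def research(idea):
    result = call_tool(idea)                  # 1. first call creates the session
    session = result["session_id"]
    result = call_tool("test it", session_id=session)   # 2. statistical test
    if result["context"].get("has_edge"):               # edge found?
        result = call_tool("show me the trade setup",   # 3. exact setup
                           session_id=session)
    return result
```

The key point is that every call after the first carries the same session_id, and the next message is taken from context.next_actions rather than improvised.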

HYPOTHESIS INTEGRITY (critical): VARRD tests ONE hypothesis at a time — one formula, one setup. Never combine multiple setups into one formula or ask to 'test all' — each idea must be tested as a separate hypothesis for the statistics to be valid. Say 'start a new hypothesis' between ideas to reset cleanly.

  • ALLOWED: Test the SAME setup across multiple markets ('test this on ES, NQ, and CL') — same formula, different data.

  • NOT ALLOWED: Test multiple DIFFERENT formulas/setups at once — each is a separate hypothesis requiring its own chart-test-result cycle. If ELROND council returns 4 setups, test each one separately: chart setup 1 -> test -> results -> 'start new hypothesis' -> chart setup 2 -> etc.

KEY CAPABILITIES you can ask for:

  • 'Use the ELROND council on [market]' -> 8 expert investigators

  • 'Optimize the stop loss and take profit' -> SL/TP grid search

  • 'Test this on ES, NQ, and CL' -> multi-market testing

  • 'Simulate trading this with 1.5 ATR stop' -> backtest with stops

EDGE VERDICTS in context.edge_verdict after testing:

  • STRONG EDGE: Significant vs zero AND vs market baseline

  • MARGINAL: Significant vs zero only (beats zero but not the market baseline; still a real signal)

  • PINNED: Significant vs market only (flat returns but different from market)

  • NO EDGE: Neither significant test passed
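The four verdicts form a 2x2 over the two significance tests. A minimal sketch of that mapping, with the logic inferred from the definitions above:

```python
def edge_verdict(sig_vs_zero: bool, sig_vs_market: bool) -> str:
    # 2x2 mapping implied by the verdict definitions.
    if sig_vs_zero and sig_vs_market:
        return "STRONG EDGE"   # significant vs zero AND vs market baseline
    if sig_vs_zero:
        return "MARGINAL"      # significant vs zero only
    if sig_vs_market:
        return "PINNED"        # significant vs market only
    return "NO EDGE"           # neither test passed
```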

TERMINAL STATES: Stop when context.has_edge is true (edge found) or false (no edge — valid result). Always read context.next_actions.
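A generic driver for these terminal states might loop until context.has_edge appears, always taking the next message from context.next_actions. As before, this is a sketch under stated assumptions: call_tool is a hypothetical client callable, not part of the tool's API.

```python
def run_until_terminal(first_message, call_tool, max_turns=10):
    # Drive the session until has_edge is set; True and False are BOTH valid
    # terminal results (edge found / no edge).
    reply = call_tool(first_message)
    session = reply["session_id"]
    for _ in range(max_turns):
        ctx = reply["context"]
        if ctx.get("has_edge") is not None:
            return ctx["has_edge"]
        next_msg = ctx["next_actions"][0]   # always read next_actions
        reply = call_tool(next_msg, session_id=session)
    raise RuntimeError("no terminal state reached within max_turns")
```

Treating has_edge == False as a success path matters: a cleanly rejected hypothesis is a valid research result, not an error.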

Input Schema

  • message (required): Your trading idea, research question, or instruction (e.g. 'test it', 'show trade setup').

  • session_id (optional): Session ID from a previous call. Omit to start a new research session.
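Assuming standard MCP tools/call semantics, the arguments object for a first call omits session_id, while a follow-up includes the id returned earlier. The values below are made up for illustration:

```python
import json

# First call: no session_id, so the server creates a new research session.
first_call = {"message": "Does ES bounce after three consecutive down days?"}

# Follow-up: reuse the session_id returned by the previous call.
follow_up = {"message": "test it", "session_id": "abc123"}  # illustrative id

print(json.dumps(first_call))
print(json.dumps(follow_up))
```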
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses important behavioral traits beyond annotations: requires credits, multi-turn session management, hypothesis testing constraints, and edge verdict definitions. No contradiction with annotations (openWorldHint, non-readOnly).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is lengthy but well-structured: front-loaded with purpose, then multi-turn steps, critical rules, capabilities, and outcomes. Each section earns its place given the complexity, though minor trimming could improve conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and complex multi-turn behavior, the description fully covers the process, edge cases (e.g., hypothesis integrity), and expected outcomes (edge verdicts, terminal states). An agent can effectively use this tool with the provided guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds meaning by explaining parameter usage in context: 'message' as trading idea/instruction with examples, 'session_id' as session continuation. This adds value beyond the schema, justifying a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: 'Talk to VARRD AI. Describe any trading idea...' It uses specific verbs and resources, and the multi-turn process is explicitly outlined, distinguishing it from the sibling tool 'autonomous_varrd_ai', which presumably differs in its degree of autonomy.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Extensive usage guidelines are provided, including the multi-turn workflow, hypothesis integrity rules (what is allowed and not allowed), key capabilities, and terminal states. It explicitly advises when to start a new hypothesis and how to handle multiple setups, making it clear when and how to use the tool vs alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/augiemazza/varrd'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.