autonomous_research

Automatically generates and tests trading hypotheses from a topic, handling data analysis, statistical validation, and trade setup identification to discover market edges.

Instructions

Launch VARRD's autonomous research engine to discover and test a trading edge. Give it a topic and it handles everything: generates a creative hypothesis using its concept knowledge base, loads data, charts the pattern, runs the statistical test, and returns the trade setup if an edge is found.

BEST FOR: Exploring a space broadly. The autonomous engine excels at tangential idea generation — give it 'momentum on grains' and it might test wheat seasonal patterns, corn spread reversals, or soybean crush ratio momentum. It propagates from your seed idea into related concepts you might not think of. Great for running many hypotheses at scale.

Returns a complete result — edge/no edge, stats, trade setup. Each call tests ONE hypothesis through the full pipeline. Call again for another idea.
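
The tool publishes no output schema, so as a hedged sketch only, a caller might unpack one result like this. The field names `edge_found`, `stats`, and `trade_setup` are assumptions inferred from the description ("edge/no edge, stats, trade setup"), not documented API:

```python
# Hypothetical sketch of consuming one autonomous_research result.
# All field names below are assumptions, not a documented output schema.

def summarize(result: dict) -> str:
    """Render a one-line summary of a single hypothesis test."""
    if not result.get("edge_found"):
        return "no edge"
    stats = result.get("stats", {})
    setup = result.get("trade_setup", {})
    return f"edge: p={stats.get('p_value')} setup={setup.get('direction')}"

# Illustrative payload shaped after the description's promised fields.
example = {
    "edge_found": True,
    "stats": {"p_value": 0.03},
    "trade_setup": {"direction": "short"},
}
```

Because each call tests exactly one hypothesis, a caller exploring a space would loop over topics and collect these summaries rather than expect a batch result.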

Use 'research' instead when YOU have a specific idea to test and want full control over each step.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| topic | Yes | Research topic or trading idea (e.g. 'BTC 240min short setups', 'momentum on grains', 'mean reversion after VIX spikes'). | |
| markets | No | Focus on specific markets (e.g. ['ES', 'NQ']). Omit for VARRD to choose. | |
| test_type | No | Type of statistical test. | event_study |
| search_mode | No | focused = stay close to topic. explore = creative freedom. | focused |
| asset_classes | No | Limit to specific asset classes. | all |
| context | No | Prior conversation context: recent user queries to use as research inspiration. Optional. | |
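
As a minimal sketch of how an agent might assemble arguments against this schema, the helper below merges caller options over the documented defaults and enforces the one required field. The `build_arguments` helper itself is illustrative, not part of VARRD:

```python
# Hypothetical helper for building an autonomous_research arguments payload.
# Field names and defaults follow the input schema above; the helper is
# illustrative, not part of the VARRD API.

REQUIRED = {"topic"}
DEFAULTS = {"test_type": "event_study", "search_mode": "focused"}

def build_arguments(topic: str, **optional) -> dict:
    """Merge caller options over schema defaults and check required keys."""
    args = {**DEFAULTS, "topic": topic, **optional}
    missing = REQUIRED - args.keys()
    if missing:
        raise ValueError(f"missing required parameters: {sorted(missing)}")
    return args

args = build_arguments(
    "momentum on grains",
    markets=["ES", "NQ"],    # schema's own example tickers
    search_mode="explore",   # creative freedom, per the schema
)
```

Omitting `markets` and `asset_classes` lets VARRD choose, which matches the tool's stated strength at tangential idea generation.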
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate non-read-only and external access (readOnlyHint:false, openWorldHint:true). The description adds substantial pipeline context beyond these hints: it discloses the internal workflow (generates hypothesis, loads data, charts, runs statistical test), clarifies that each call tests exactly ONE hypothesis, and explains the return structure ('edge/no edge, stats, trade setup').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear functional paragraphs: execution model, best-use cases, return values, and sibling comparison. Information is front-loaded with the core action. Slightly verbose, but every sentence serves the decision-making process; headers like 'BEST FOR' aid scannability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema provided, the description compensates adequately by detailing the return payload ('complete result — edge/no edge, stats, trade setup'). It explains the creative propagation behavior ('might test wheat seasonal patterns...') and discrete unit of work, providing sufficient context for an autonomous research tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The narrative reinforces the 'topic' parameter conceptually ('Give it a topic') and provides usage examples ('momentum on grains'), but does not add syntax or semantic details beyond what the comprehensive schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Launch[es] VARRD's autonomous research engine to discover and test a trading edge,' providing a specific verb and resource. It clearly distinguishes from sibling 'research' by contrasting autonomous execution vs. manual control ('Use 'research' instead when YOU have a specific idea...').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly defines when to use with 'BEST FOR: Exploring a space broadly' and details its strength at 'tangential idea generation.' Critically, it explicitly names the sibling alternative ('Use 'research' instead...') and defines the control trade-off, giving the agent clear decision criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/augiemazza/varrd'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.