Glama
108,894 tools. Last updated 2026-04-17 01:37
  • Get full detail for a specific hypothesis/strategy. Returns the formula, entry/exit rules, direction, performance metrics (win rate, Sharpe ratio, profit factor, max drawdown), version history, and trade levels: everything an agent needs to understand and act on a strategy.
    Connector
  • Talk to VARRD AI, a quant research system with 15 internal tools. Describe any trading idea in plain language, or ask for specific capabilities like the ELROND expert council, backtesting, or stop-loss optimization.
    MULTI-TURN: The first call creates a session. Keep calling with the same session_id, following context.next_actions each time:
    1. Your idea -> VARRD charts the pattern
    2. 'test it' -> statistical test (event study or backtest)
    3. 'show me the trade setup' -> exact entry/stop/target prices
    HYPOTHESIS INTEGRITY (critical): VARRD tests ONE hypothesis at a time: one formula, one setup. Never combine multiple setups into one formula or ask to 'test all'; each idea must be tested as a separate hypothesis for the statistics to be valid. Say 'start a new hypothesis' between ideas to reset cleanly.
    - ALLOWED: Test the SAME setup across multiple markets ('test this on ES, NQ, and CL'): same formula, different data.
    - NOT ALLOWED: Test multiple DIFFERENT formulas/setups at once; each is a separate hypothesis requiring its own chart-test-result cycle. If the ELROND council returns 4 setups, test each one separately: chart setup 1 -> test -> results -> 'start new hypothesis' -> chart setup 2 -> etc.
    KEY CAPABILITIES you can ask for:
    - 'Use the ELROND council on [market]' -> 8 expert investigators
    - 'Optimize the stop loss and take profit' -> SL/TP grid search
    - 'Test this on ES, NQ, and CL' -> multi-market testing
    - 'Simulate trading this with 1.5 ATR stop' -> backtest with stops
    EDGE VERDICTS in context.edge_verdict after testing:
    - STRONG EDGE: Significant vs zero AND vs the market baseline
    - MARGINAL: Significant vs zero only (beats nothing, but a real signal)
    - PINNED: Significant vs market only (flat returns but different from the market)
    - NO EDGE: Neither significance test passed
    TERMINAL STATES: Stop when context.has_edge is true (edge found) or false (no edge, a valid result). Always read context.next_actions.
    Connector
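The multi-turn protocol above can be sketched as a small client loop. This is an illustrative stand-in only: `call_research` and the exact response shape are assumptions based on the description (session_id, context.next_actions, context.has_edge, context.edge_verdict), not the tool's real client API.

```python
# Hypothetical driver for one hypothesis through chart -> test -> setup.
# `call_research` is a local stub simulating VARRD's session responses.

def call_research(message, session_id=None):
    """Stub: first call opens a session; later calls advance the pipeline."""
    if session_id is None:
        return {"session_id": "sess-1",
                "context": {"next_actions": ["test it"], "has_edge": None}}
    return {"session_id": session_id,
            "context": {"next_actions": ["show me the trade setup"],
                        "has_edge": True, "edge_verdict": "STRONG EDGE"}}

resp = call_research("ES tends to bounce after a 3-day losing streak")
session = resp["session_id"]
while resp["context"]["has_edge"] is None:            # terminal once True/False
    next_action = resp["context"]["next_actions"][0]  # always follow next_actions
    resp = call_research(next_action, session_id=session)
```

Note the single-hypothesis discipline: the loop drives exactly one setup to a terminal state before any new idea would start a fresh session.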
  • Launch VARRD's autonomous research engine to discover and test a trading edge. Give it a topic and it handles everything: generates a creative hypothesis from its concept knowledge base, loads data, charts the pattern, runs the statistical test, and gets the trade setup if an edge is found.
    BEST FOR: Exploring a space broadly. The autonomous engine excels at tangential idea generation: give it 'momentum on grains' and it might test wheat seasonal patterns, corn spread reversals, or soybean crush ratio momentum. It propagates from your seed idea into related concepts you might not think of. Great for running many hypotheses at scale.
    Returns a complete result: edge/no edge, stats, trade setup. Each call tests ONE hypothesis through the full pipeline; call again for another idea. Use 'research' instead when YOU have a specific idea to test and want full control over each step.
    Connector
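Scaling the engine looks like one call per seed idea, as the description requires. A minimal sketch, where `auto_research` is a hypothetical stand-in name and the result fields are assumptions:

```python
# One autonomous call per seed idea; each tests exactly one hypothesis
# end to end. `auto_research` is a stub simulating the tool's result.

def auto_research(topic):
    """Stub returning a complete pipeline result for illustration."""
    found = "grains" in topic
    return {"topic": topic, "has_edge": found,
            "verdict": "STRONG EDGE" if found else "NO EDGE"}

seeds = ["momentum on grains", "energy term structure", "FX carry reversals"]
results = [auto_research(t) for t in seeds]        # one hypothesis per call
edges = [r["topic"] for r in results if r["has_edge"]]
```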
  • Connect memories to build knowledge graphs. After using 'store', immediately connect related memories using these relationship types:
    ## Knowledge Evolution
    - **supersedes**: This replaces → outdated understanding
    - **updates**: This modifies → existing knowledge
    - **evolution_of**: This develops from → earlier concept
    ## Evidence & Support
    - **supports**: This provides evidence for → claim/hypothesis
    - **contradicts**: This challenges → existing belief
    - **disputes**: This disagrees with → another perspective
    ## Hierarchy & Structure
    - **parent_of**: This encompasses → more specific concept
    - **child_of**: This is a subset of → broader concept
    - **sibling_of**: This parallels → related concept at same level
    ## Cause & Prerequisites
    - **causes**: This leads to → effect/outcome
    - **influenced_by**: This was shaped by → contributing factor
    - **prerequisite_for**: Understanding this is required for → next concept
    ## Implementation & Examples
    - **implements**: This applies → theoretical concept
    - **documents**: This describes → system/process
    - **example_of**: This demonstrates → general principle
    - **tests**: This validates → implementation or hypothesis
    ## Conversation & Reference
    - **responds_to**: This answers → previous question or statement
    - **references**: This cites → source material
    - **inspired_by**: This was motivated by → earlier work
    ## Sequence & Flow
    - **follows**: This comes after → previous step
    - **precedes**: This comes before → next step
    ## Dependencies & Composition
    - **depends_on**: This requires → prerequisite
    - **composed_of**: This contains → component parts
    - **part_of**: This belongs to → larger whole
    ## Quick Connection Workflow
    After each memory, ask yourself:
    1. What previous memory does this update or contradict? → `supersedes` or `contradicts`
    2. What evidence does this provide? → `supports` or `disputes`
    3. What caused this or what will it cause? → `influenced_by` or `causes`
    4. What concrete example is this? → `example_of` or `implements`
    5. What sequence is this part of? → `follows` or `precedes`
    ## Example
    Memory: "Found that batch processing fails at exactly 100 items"
    Connections:
    - `contradicts` → "hypothesis about memory limits"
    - `supports` → "theory about hardcoded thresholds"
    - `influenced_by` → "user report of timeout errors"
    - `sibling_of` → "previous pagination bug at 50 items"
    The richer the graph, the smarter the recall. No orphan memories!
    Args:
    - from_memory: Source memory UUID
    - to_memory: Target memory UUID
    - relationship_type: Type from the categories above
    - strength: Connection strength (0.0-1.0, default 0.5)
    - ctx: MCP context (automatically provided)
    Returns: Dict with success status, relationship_id, and connected memory IDs
    Connector
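A call to the connect tool might look like the sketch below. The `connect` wrapper is a local stand-in mirroring the documented arguments and return shape; the UUIDs are placeholders for IDs returned by prior `store` calls, and the relationship_id format is invented for illustration.

```python
# Local stand-in for the `connect` tool, validating arguments the way
# the description specifies (strength in 0.0-1.0, default 0.5).

def connect(from_memory, to_memory, relationship_type, strength=0.5):
    """Stub returning the documented response fields."""
    assert 0.0 <= strength <= 1.0, "strength must be in [0.0, 1.0]"
    return {"success": True,
            "relationship_id": f"rel-{from_memory}-{to_memory}",
            "from_memory": from_memory, "to_memory": to_memory,
            "relationship_type": relationship_type}

# "Batch processing fails at exactly 100 items" contradicts an earlier
# hypothesis memory; connect them immediately after storing.
result = connect(from_memory="abc-123", to_memory="def-456",
                 relationship_type="contradicts", strength=0.8)
```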
  • Store important information from your work. Write detailed, complete thoughts with context, reasoning, and evidence. **Always use the connect tool** to link related items; this builds knowledge graphs for better recall.
    ## Memory Types (auto-detected, but be aware)
    - **FACT**: Something observed or verified
    - **INSIGHT**: A pattern or realization
    - **CONVERSATION**: Dialogue or exchange content
    - **CORRECTION**: Fixing prior understanding
    - **REFERENCE**: Source material or citation
    - **TASK**: Action item or work to be done
    - **CHECKPOINT**: Conversation state snapshot
    - **IDENTITY_CORE**: Immutable AI identity
    - **PERSONALITY_TRAIT**: Evolvable AI traits
    - **RELATIONSHIP**: User-AI relationship info
    - **STRATEGY**: Learned behavior patterns
    ## Session Context
    If in an ongoing work session, include:
    - Session identifier: [Project/Session Name]
    - Your perspective: "As [role]:" or "From [viewpoint]:"
    - Current thread: What specific angle you're exploring
    ## What to Include
    - **WHAT**: The discovery or thought
    - **WHY**: Its significance
    - **HOW**: Your reasoning process
    - **EVIDENCE**: Supporting data/observations
    - **CONNECTIONS**: Related memories to link
    ## Examples
    ### Technical Investigation
    "[Performance Analysis] FACT: Database queries account for 73% of request latency (measured across 10K requests). Specifically, the user_permissions JOIN takes 340ms on average. This contradicts the hypothesis about caching issues (memory: 'cache analysis'). Evidence: APM traces show a full table scan on the permissions table. Next: investigate missing index on foreign key."
    ### Learning & Research
    "[ML Study Session] INSIGHT: Attention mechanisms work like dynamic routing: the model learns WHERE to look, not just WHAT to see. This explains transformer advantages over RNNs on long sequences (builds on memory: 'sequence modeling comparison'). The key-query-value structure creates a learnable addressing system. Connects to: 'human attention research', 'information retrieval basics'."
    ### Creative Work
    "[Story Development] HYPOTHESIS: The protagonist's reluctance stems from betrayal, not fear. Evidence: three trust-questioning scenes, locked-door symbolism throughout, deflection patterns in collaborative dialogue. This reframes the arc from 'overcoming fear' to 'rebuilding trust' (corrects memory: 'initial character motivation'). Would explain the guardian's patience and emphasis on small victories."
    ### Problem Solving
    "[Bug Hunt - Payment Flow] CORRECTION to 'timezone hypothesis': The 3am failures aren't timezone-related but due to batch job lock contention. Evidence: perfect correlation with backup_jobs.log timestamps. The timezone pattern was spurious; the batch runs at midnight PST (3am EST). Solution: implement job queuing."
    ## Connection Phrases
    - "Building on [earlier observation]..."
    - "Contradicts [hypothesis in memory X]"
    - "Answers [question from session Y]"
    - "Confirms pattern from [memory Z]"
    - "Extends thinking in [previous work]"
    Note: Every stored item is a node; every connection is an edge. Rich graphs enable powerful recall.
    ⚠️ EXPERIMENTAL FIELDS:
    - **importance**: Stored for future ranking optimization. Currently not integrated into search results.
    - **confidence**: Returned in response for analysis. Behavior and calculation method subject to change.
    Args:
    - content: Detailed memory content with context and evidence
    - tags: Optional tags to categorize the memory
    - importance: Optional importance score (0.0-1.0) - EXPERIMENTAL
    - ctx: MCP context (automatically provided)
    Returns: Dict with success status, memory_id, type, importance, and confidence
    Connector
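A store call following the WHAT/WHY/EVIDENCE template might look like this sketch. The `store` wrapper is a local stand-in taking the documented arguments (content, tags, importance); the memory_id value is invented for illustration.

```python
# Local stand-in for the `store` tool, returning the documented
# response fields (success, memory_id, type, importance).

def store(content, tags=None, importance=None):
    """Stub simulating the tool's response shape."""
    return {"success": True, "memory_id": "mem-001",
            "type": "FACT", "importance": importance}

content = (
    "[Performance Analysis] FACT: Database queries account for 73% of "
    "request latency. WHY: This dominates the optimization budget. "
    "EVIDENCE: APM traces show a full table scan on the permissions table. "
    "CONNECTIONS: contradicts the 'cache analysis' hypothesis."
)
result = store(content, tags=["performance", "database"], importance=0.8)
```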
  • Save your cognitive state for handoff to another agent. Include your investigation context:
    - What session/investigation is this part of?
    - What role/perspective were you taking?
    - Who might pick this up next? (another Claude, a human, Claude Code?)
    Reference specific memories that matter:
    - Key discoveries (with memory IDs or quotes)
    - Critical evidence memories
    - Important questions that were raised
    - Hypotheses that were tested
    Before saving, organize your thoughts:
    1. PROBLEM: What were you investigating?
    2. DISCOVERED: What did you learn for certain? (reference the memories)
    3. HYPOTHESIS: What do you think is happening? (cite supporting memories)
    4. EVIDENCE: What memories support or contradict this?
    5. BLOCKED ON: What prevented further progress?
    6. NEXT STEPS: What should be investigated next?
    7. KEY MEMORIES: Which specific memories are essential for understanding?
    Example descriptions:
    "[API Timeout Investigation - 3 hour session] Investigating production API timeouts as code analyst. Found correlation with batch_size=100 due to a hardcoded limit in batch_handler.py (see memory: 'MAX_BATCH_SIZE discovery'). Confirmed not a Redis connection issue; monitoring showed only 43/200 connections used (memory: 'Redis connection analysis'). The earlier hypothesis about connection pool exhaustion (memory_id: abc-123) was disproven. The key insight came from comparing 99 vs 100 batch behavior (memory: 'batch threshold testing'). Blocked on: need production access to verify the fix. Next: deploy with MAX_BATCH_SIZE=200 to staging first. Essential memories for handoff: 'MAX_BATCH_SIZE discovery', 'Redis monitoring results', 'Production vs staging comparison'. Ready for handoff to SRE team for deployment."
    "[Memory System Debugging - From Claude Code perspective] Worked on scoring issues where recall wasn't finding recent memories. Discovered RRF scores (0.005-0.016) were below the MCP threshold of 0.05 (memory: 'RRF scoring analysis'). Implemented weighted linear fusion to replace RRF (memory: 'fusion algorithm implementation'). Testing showed immediate improvement (memory: 'fusion testing results'). This builds on an earlier investigation of recall failures (memory: 'user report of recall issues'). Critical memories for continuation: 'RRF scoring analysis', 'ADR-023 decision', 'fusion testing results'. The next agent should verify scoring with real queries."
    "[Context Save/Restore Bug Investigation - 4 hour debugging session with user] Started with the user noticing list_contexts returned empty despite saved contexts existing. Investigation revealed two critical bugs: (1) list_contexts was using hybrid search for the word 'checkpoint' instead of filtering by memory_type (memory: 'hybrid search misuse discovery'); (2) restore_context hardcoded a limit of 10 memories despite contexts having 20+ (memory: 'hardcoded limit bug'). Root cause analysis showed save_context grabs the 20 most recent memories regardless of relevance, a fundamental design flaw (memory: 'save_context design flaw analysis'). EVIDENCE CHAIN: user reported empty list -> checked DB, contexts exist -> examined list_contexts code -> found hybrid search looking for the word 'checkpoint' -> tested /memories endpoint with memory_type filter -> confirmed working -> implemented fix using the direct endpoint. INSIGHTS: the narrative description does 90% of the cognitive handoff work; memories are supporting evidence, not primary carriers of understanding (memory: 'narrative vs memories insight'). This suggests doubling down on narrative richness rather than perfecting memory selection. CORRECTED UNDERSTANDING: initially thought memories weren't being returned; actually they were, just the wrong ones (recent instead of relevant) (memory: 'memory selection correction'). CRITICAL MEMORIES: 'hybrid search misuse discovery', 'save_context design flaw analysis', 'narrative vs memories insight', '/memories endpoint test results'. NEXT AGENT: should implement Phase 2, semantic search for relevant memories within the investigation timeframe. Ready for handoff to any Claude agent for implementation."
    When referencing memories:
    - **RELIABLE**: Use memory IDs: "memory_id: abc-123" (direct lookup, always works)
    - **BEST-EFFORT**: Use descriptive phrases: "see memory: 'Redis connection analysis'" (uses search plus substring matching; may not resolve if the memory isn't in the top results)
    - Group related memories: "Essential memories: 'X', 'Y', 'Z'"
    **Prefer memory_id references** whenever you have the UUID. Semantic phrase references are a convenience that works most of the time but may silently fail to resolve. The response reports how many references resolved, so you can retry with UUIDs if needed.
    Args:
    - name: Name for this context checkpoint
    - description: Detailed cognitive handoff description with memory references
    - ctx: MCP context (automatically provided)
    Returns: Dict with success status, context_id, and memories included
    Connector
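The seven-part checklist above can be assembled mechanically before saving. The helper below is not part of the tool's API, just a hypothetical way to keep a handoff description complete; the section names come straight from the checklist.

```python
# Illustrative helper: assemble the 7-part handoff description in the
# order the checklist prescribes, skipping sections you have nothing for.

def handoff_description(**sections):
    order = ["PROBLEM", "DISCOVERED", "HYPOTHESIS", "EVIDENCE",
             "BLOCKED ON", "NEXT STEPS", "KEY MEMORIES"]
    return "\n".join(f"{k}: {sections[k]}" for k in order if k in sections)

desc = handoff_description(
    PROBLEM="Production API timeouts",
    DISCOVERED="Hardcoded limit in batch_handler.py (memory_id: abc-123)",
    **{"NEXT STEPS": "Deploy MAX_BATCH_SIZE=200 to staging",
       "KEY MEMORIES": "'MAX_BATCH_SIZE discovery'"},
)
```

The description string would then be passed as the `description` argument of the save tool, alongside a `name` for the checkpoint.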

Matching MCP Servers

  • Security: - · License: F · Quality: -
    Enables AI-powered academic research workflow from keyword search to hypothesis generation. Integrates multiple AI models to automatically search ArXiv papers, extract key information, and generate innovative research hypotheses for researchers.
    Last updated: 2