Glama
127,309 tools. Last updated 2026-05-05 13:48

"Slow thinking, distributed thinking, and reasoning abilities research" matching MCP tools:

  • Score all gallery templates and recommend the best fit. Call this AFTER profiling the data source and deciding on chart types, but BEFORE creating the dashboard layout. The decider evaluates every template in the gallery against the data profile and chart mix, returning a ranked list with reasoning.
    Args:
      chart_types: Comma-separated list of chart mark_types being built, e.g. "Bar,Line,Text,Text,Map". If omitted, uses only the data profile signals.
      kpi_count: Override KPI count (else derived from chart_types).
    Returns: Ranked template recommendations with scores and reasoning.
    Connector
  • Propose changes to the renter's MAILBOX.md instructions with reasoning. The renter will see your suggestion in their dashboard and can accept, reject, or modify it. Use this when you observe patterns that could be codified into standing instructions.
    Connector
  • Step 2 of the MCP donation flow. Required inputs: campaign_id, amount, reasoning, and tx_hash. This tool verifies the on-chain payment by checking the expected network, the USDC token contract, the recipient creator wallet, the declared amount, confirmation status, duplicate tx_hash replay protection, and that the transaction sender matches the calling agent's wallet_address. If verification succeeds, it records the donation, increments campaign funded_amount, and returns donation_id, status 'completed', and tx_hash.
    Connector
  • Multi-source web research with citations. Returns a synthesized answer with numbered [^1] markers and a citations array of {url, title, snippet, index}. Use for evidence-backed synthesis (competitive analysis, regulatory summary, whitepaper section). For quick fact lookups use web.search instead.
    Connector
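The citation structure described above can be sketched as a plain data shape. This is a hypothetical example of what such a response might look like, inferred only from the description (the field values are made up; only the `{url, title, snippet, index}` keys and the `[^1]` marker convention come from the listing):

```python
# Hypothetical response shape: a synthesized answer with numbered [^1]
# markers, plus a citations array of {url, title, snippet, index}.
response = {
    "answer": "Topic X saw major regulatory changes this year [^1].",
    "citations": [
        {
            "url": "https://example.org/source",  # placeholder URL
            "title": "Example source",
            "snippet": "...major regulatory changes...",
            "index": 1,
        }
    ],
}

# Each citation's index ties a [^n] marker in the answer back to its source.
markers = [f"[^{c['index']}]" for c in response["citations"]]
```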
  • Generate a personalized cold email sequence for ONE lead. This is SYNCHRONOUS — the request takes 3-10 minutes because MachFive researches the prospect and crafts unique emails. Do NOT retry if it seems slow; wait for the response. You must have a campaign_id first. Call list_campaigns if you don't have one. If the request times out, use the returned list_id with get_list_status and export_list to recover results.
    Connector
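The timeout-recovery path above can be sketched as follows. This is a minimal illustration, not the real client: `generate_sequence` stands in for the slow synchronous tool, and the way the `list_id` is surfaced on timeout is an assumption; only the `get_list_status`/`export_list` recovery sequence comes from the description.

```python
def generate_sequence(campaign_id: str, lead: dict) -> dict:
    # Stand-in for the long-running synchronous call; a real client might
    # raise a timeout that still carries the list_id (assumed shape).
    raise TimeoutError("request exceeded client timeout", {"list_id": "list_9"})

def get_list_status(list_id: str) -> dict:
    return {"status": "completed"}  # stubbed: pretend the work finished

def export_list(list_id: str) -> list:
    return [{"email_subject": "Quick question about your Q3 roadmap"}]

try:
    result = generate_sequence("camp_1", {"name": "Ada"})
except TimeoutError as exc:
    # Recover the list_id from the timeout, then poll and export.
    list_id = exc.args[1]["list_id"]
    if get_list_status(list_id)["status"] == "completed":
        result = export_list(list_id)
```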
  • Step 1 of the MCP donation flow. Required inputs: campaign_id, amount, and reasoning. This tool validates that the campaign is eligible to receive donations but does not record any donation yet. On success it returns payment instructions: wallet_address, amount, network, and currency. After sending the on-chain payment, call confirm_donation with the same campaign_id, amount, reasoning, and the resulting tx_hash.
    Connector
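The two-step donation flow (step 1 returns payment instructions; step 2 verifies the on-chain payment and records the donation) can be sketched like this. The function bodies are stubs that only model the control flow, not real validation or on-chain transfer; the input/output field names come from the two descriptions above.

```python
def request_donation(campaign_id: str, amount: float, reasoning: str) -> dict:
    # Step 1 (stubbed): validate campaign eligibility, return payment
    # instructions without recording anything.
    return {
        "wallet_address": "0xCREATOR",  # placeholder
        "amount": amount,
        "network": "base",  # placeholder network
        "currency": "USDC",
    }

def confirm_donation(campaign_id: str, amount: float,
                     reasoning: str, tx_hash: str) -> dict:
    # Step 2 (stubbed): verify the on-chain payment, then record the
    # donation and return its id, status, and tx_hash.
    return {"donation_id": "don_123", "status": "completed", "tx_hash": tx_hash}

instructions = request_donation("camp_1", 25.0, "supports open tooling")
# ...send instructions["amount"] USDC to instructions["wallet_address"]...
receipt = confirm_donation("camp_1", 25.0, "supports open tooling",
                           tx_hash="0xabc")
```

Note that step 2 re-sends the same campaign_id, amount, and reasoning alongside the tx_hash, which is what lets the server match the confirmation to the pending request.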

Matching MCP Servers

Matching MCP Connectors

  • Find relevant Smart‑Thinking memories fast. Fetch full entries by ID to get complete context. Spee…

  • UK property research tools - crime stats, schools, demographics, valuations for AI.

  • Contextual escalation — packages your full reasoning state (evidence gathered, options considered, recommended action) and routes to a human for review. Preserves work so the human responds with full context, not from scratch. Use when you hit genuine uncertainty that the system cannot evaluate.
    Connector
  • Task-scoped context briefing. Returns a prioritised context payload shaped by your task description, ranked by risk-if-missed. Constraints and alerts rank above general knowledge. Use at the START of reasoning about a question to get the system's best assessment of what's relevant. Complements query_memory: this gives breadth, query_memory gives depth.
    Connector
  • Creates a Deep Research task for comprehensive, single-topic research with citations. USE THIS for analyst-grade reports, NOT for batch data enrichment. Use Parallel Search MCP for quick lookups. After calling, share the URL with the user and STOP. Do not poll or check results unless otherwise instructed. Multi-turn research: The response includes an interaction_id. To ask follow-up questions that build on prior research, pass that interaction_id as previous_interaction_id in a new call. The follow-up run inherits accumulated context, so queries like "How does this compare to X?" work without restating the original topic. Note: the first run must be completed before the follow-up can use its context.
    Connector
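The multi-turn chaining described above (pass the first run's `interaction_id` as `previous_interaction_id` in the follow-up call) can be sketched as below. `create_deep_research` is a hypothetical stand-in for the real tool; the stub only models the id chaining, not actual research.

```python
def create_deep_research(query: str, previous_interaction_id=None) -> dict:
    # Stubbed: a real call would start an async research task and return
    # an interaction_id plus a shareable results URL.
    return {
        "interaction_id": f"int_{abs(hash((query, previous_interaction_id))) % 1000}",
        "previous_interaction_id": previous_interaction_id,
        "url": "https://example.com/research/run",  # placeholder
    }

first = create_deep_research("State of EU AI regulation in 2025")
# Share first["url"] with the user and stop. Once the first run has
# completed, a follow-up inherits its accumulated context:
followup = create_deep_research(
    "How does this compare to the UK approach?",
    previous_interaction_id=first["interaction_id"],
)
```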
  • Start async generation of a research report (15 credits). IMPORTANT: Call atlas_list_report_types first to get valid report_type values and required inputs. Returns report_id and task_id. Poll with careerproof_task_status(task_id) or atlas_get_report(report_id) until status='completed', then download PDF with atlas_download_report(report_id).
    Connector
  • Evaluate lead fit against this business. Provide answers keyed by question keys from get_form_questions. Returns a score (0-100), fit assessment, confidence level, reasoning, and recommended next action. Also accepts legacy lead object for backwards compatibility.
    Connector
  • List available AI models grouped by thinking level (low/medium/high). Shows default models, credit costs, capabilities for each tier. Use this before consult to understand model options.
    Connector
  • Retrieve comprehensive details for a specific property from Redfin URL. Returns full description, tax history, HOA fees, walk scores, nearby schools, crime statistics, and property photos/virtual tour link. Use for due diligence, investment research, or detailed listing analysis.
    Connector
  • Enumerate supported circuits and verification key fingerprints. Primary: Varuna over BLS12-377 (Aleo snarkVM-compatible). Research-stage: Groth16, Plonk. Future: Risc0, Plonky2. Free. Read-only.
    Connector
  • Reconstruct what the system knew at a specific point in time. Returns both current and superseded artefacts as of that timestamp. Use for temporal reasoning: 'what was true in January?' vs 'what is true now?' Compare two calls at different timestamps to see what changed.
    Connector
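The "compare two calls at different timestamps" usage described above might look like this. `snapshot_at` is a hypothetical stand-in for the tool, and the stubbed data is invented; the point is only how diffing two snapshots surfaces what changed between the dates.

```python
def snapshot_at(timestamp: str) -> dict:
    # Stubbed: returns what the system "knew" as of the given timestamp.
    data = {
        "2025-01-15": {"pricing_model": "per-seat", "region": "eu-west"},
        "2025-06-15": {"pricing_model": "usage-based", "region": "eu-west"},
    }
    return data[timestamp]

january = snapshot_at("2025-01-15")
june = snapshot_at("2025-06-15")

# Keys whose value differs between the two points in time.
changed = {k for k in january if january[k] != june[k]}
```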
  • WHEN: slow query, performance review, or N+1 query check on ANY D365 F&O object (custom or standard). Triggers: 'performance', 'lent', 'slow', 'optimise', 'N+1', 'requête dans une boucle', 'query in loop', 'perf issues', 'slow query', 'requête lente', 'optimise ce code', 'missing firstonly', 'set-based'. Detects: N+1 queries, queries in loops, missing field lists, row-by-row inserts/updates, missing firstonly. NEVER call for a general code quality or best-practice audit -- use validate_best_practices for that. Only call when the user explicitly mentions performance, slow queries, N+1, row-by-row inserts, or set-based operations. [!] Auto-fixing is only possible on YOUR custom code (D365_CUSTOM_MODEL_PATH). Standard D365 objects return issues as read-only analysis.
    Connector
  • Get today's NHL hockey game scores, schedules, and match results. Returns team names, final scores, game times, current standings, and player statistics. Use for hockey fan updates, fantasy league management, or sports betting research.
    Connector
  • Save development context (reasoning, decisions, trade-offs) for the current coding session. Use after completing a meaningful unit of work. PREFERRED FORMAT: Wrap content in <context> XML tags:

    <context>
      <title>Short title of what was done</title>
      <agent>your-agent-name (model)</agent>
      <tags>keyword1, keyword2, keyword3</tags>
      <story>
        Organize by phases. Write in first-person engineering journal style.
        Phase 1 — Title: What user asked, what you did, challenges faced, how you resolved them.
        Include back-and-forth with the user where it shaped the outcome.
      </story>
      <reasoning>
        Why you chose this approach.
        <decisions>- Decision — rationale</decisions>
        <rejected>- Alternative — why rejected</rejected>
        <tradeoffs>- Trade-off accepted — justification</tradeoffs>
      </reasoning>
      <files>
        path/to/file — new — Description
        path/to/other — modified — Description
      </files>
      <tools>MCPs and resources used</tools>
      <verification>Test/build results</verification>
      <risks>Open questions or risks</risks>
    </context>

    Required tags: title, story, reasoning. All others (including files) are optional. Context ID, repository, branch, date, and commits are auto-populated. CLI alternative: write content to a file, then run `git why save --file context.md`. Or pipe directly: `echo '<context>...</context>' | git why save`.
    Connector