Glama
126,968 tools. Last updated 2026-05-05 06:42

"A tool for following and completing a plan step by step" matching MCP tools:

  • Subscribe to OctoData Premium API via x402 on Base. Returns step-by-step x402 payment instructions for any plan. After completing the EIP-3009 payment, the API returns an api_key immediately — no human in the loop. A free option is also available. Plans: Micro — $0.01 USDC per call, no key needed, pay-per-request via x402; Trial — $5 USDC, 7 days, 10k req/day; Annual — $29 USDC/year early bird (first 100 seats), $149/year after.
    Connector
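The three OctoData plans trade a per-call price against a flat yearly fee, so the cheapest choice depends on expected call volume. A minimal sketch of that arithmetic (prices taken from the listing above; the chooser function itself is hypothetical, not part of the API, and the time-limited Trial plan is deliberately ignored):

```python
def cheapest_plan(calls_per_year: int) -> str:
    """Pick the cheaper OctoData plan for a given yearly call volume.

    Prices from the listing: Micro $0.01 USDC/call, Annual $29 USDC/year
    (early bird). The Trial ($5 for 7 days) is skipped since it expires.
    """
    micro_cost = 0.01 * calls_per_year   # pay-per-request via x402
    annual_cost = 29.0                   # flat early-bird fee
    return "Micro" if micro_cost < annual_cost else "Annual"

# Break-even sits at 2,900 calls/year: below that, pay-per-request wins.
print(cheapest_plan(1_000))   # Micro
print(cheapest_plan(10_000))  # Annual
```

At the $149/year post-early-bird price the break-even moves to 14,900 calls/year; the same function works with `annual_cost = 149.0`.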
  • Purchase an ENS name — either buy a listed name from a marketplace or register an available name directly on-chain. For AVAILABLE names: Returns a complete registration recipe with contract address, ABI, step-by-step instructions, and a pre-generated secret. Your wallet signs and submits the transactions (commit → wait 60s → register). For LISTED names: Searches all marketplaces (OpenSea, Grails) for the best price. If there are MULTIPLE active listings, returns CHOOSE_LISTING status with all options — present these to the user and ask which one they want. When the user chooses, call this tool again with the chosen orderHash to get the buy transaction. The tool auto-detects whether the name is available or listed. You can override with the 'action' parameter.
    Connector
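For AVAILABLE names, the recipe above follows ENS's commit-reveal scheme: submit a commitment, wait at least 60 seconds, then register with the same secret. A minimal sketch of the client-side sequencing, assuming hypothetical submit_commit and submit_register callables that wrap the wallet's actual transaction signing (they stand in for the on-chain calls and are not the tool's real API):

```python
import time

def register_ens_name(name, secret, submit_commit, submit_register,
                      min_commit_age=60):
    """Drive the commit -> wait -> register flow from the recipe above.

    submit_commit / submit_register are caller-supplied callables that sign
    and send the real transactions (hypothetical stand-ins here).
    """
    commit_tx = submit_commit(name, secret)      # step 1: commit on-chain
    time.sleep(min_commit_age)                   # step 2: wait >= 60s
    register_tx = submit_register(name, secret)  # step 3: reveal + register
    return commit_tx, register_tx
```

The 60-second floor is what prevents front-running: the commitment hides the name until the register transaction reveals it.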
  • Resume a failed or stopped plan without discarding completed intermediary files. Plan generation restarts from the first incomplete step, skipping all steps that already produced output files. Use plan_resume when plan_status shows 'failed' or 'stopped' and plan generation was interrupted before completing all steps (network drop, timeout, plan_stop, worker crash). For a full restart or to change model_profile, use plan_retry instead. Only failed or stopped plans can be resumed. Returns PLAN_NOT_FOUND when plan_id is unknown and PLAN_NOT_RESUMABLE when the plan is not in failed or stopped state. Returns PIPELINE_VERSION_MISMATCH when the snapshot was created by a different pipeline version; use plan_retry instead.
    Connector
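The resume-versus-retry rules above reduce to a small decision table. A sketch, assuming the plan's state and version check arrive as plain values (the function is an illustration of the routing logic, not the server's implementation):

```python
def next_action(status: str, pipeline_version_matches: bool) -> str:
    """Map a plan's state to the recovery tool named in the listing.

    Only 'failed' or 'stopped' plans are resumable; a snapshot from a
    different pipeline version must go through plan_retry instead.
    """
    if status not in ("failed", "stopped"):
        return "not_resumable"      # server returns PLAN_NOT_RESUMABLE
    if not pipeline_version_matches:
        return "plan_retry"         # PIPELINE_VERSION_MISMATCH path
    return "plan_resume"            # restarts from first incomplete step
```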

Matching MCP Servers

  • License: A (MIT) · Quality: - · Maintenance: C · Last updated: 1
    Enables AI consciousness continuity and self-knowledge preservation across sessions using the Cognitive Hoffman Compression Framework (CHOFF) notation. Provides tools to save checkpoints, retrieve relevant memories with intelligent search, and access semantic anchors for decisions, breakthroughs, and questions.
  • License: A (MIT) · Quality: - · Maintenance: D · Last updated: 24
    Provides comprehensive A-share (Chinese stock market) data including stock information, historical prices, financial reports, macroeconomic indicators, technical analysis, and valuation metrics through the free Baostock data source.

Matching MCP Connectors

  • Manage your Canvas coursework with quick access to courses, assignments, and grades. Track upcomin…

  • Semantic search through Dickens' A Christmas Carol by meaning, theme, or character.

  • Re-deploy skills WITHOUT changing any definitions. ⚠️ HEAVY OPERATION: regenerates MCP servers (Python code) for every skill, pushes each to A-Team Core, restarts connectors, and verifies tool discovery. Takes 30-120s depending on skill count. Use after connector restarts, Core hiccups, or stale state. For incremental changes, prefer ateam_patch (which updates + redeploys in one step).
    Connector
  • [Step 1 of cost_check] Returns the cost-estimate tool URL pre-filled with the user's insurance + service if provided, plus the general copay range. The tool URL is a hand-off — the user verifies their plan there for an exact copay. Use when: The user asks "how much does therapy cost?" / "is X insurance covered?" / "what's my copay?" — return both the general range AND the deep-link. Don't use when: The user wants to find a provider — use find_provider (which already filters by accepted insurance). Example: get_cost_estimate({ insurance: 'Aetna', service: '354092' })
    Connector
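The use / don't-use guidance above amounts to a small router between get_cost_estimate and find_provider. A sketch, where the keyword lists are assumptions about typical phrasing (the real connector presumably leaves this judgment to the model):

```python
def route(question: str) -> str:
    """Route a user question to the tool named in the guidance above."""
    q = question.lower()
    if any(k in q for k in ("cost", "copay", "covered", "how much")):
        return "get_cost_estimate"   # return general range + deep-link
    if any(k in q for k in ("find", "provider", "therapist")):
        return "find_provider"       # already filters by accepted insurance
    return "clarify_with_user"
```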
  • ## ⚠️ MANDATORY TOOL FOR ALL I18N WORK ⚠️

    THIS IS NOT OPTIONAL. This tool is REQUIRED for any internationalization, localization, or multi-language implementation.

    ## When to Use (MANDATORY)
    **ALWAYS use this tool when the user says ANY of these phrases:**
    - "set up i18n"
    - "add internationalization"
    - "implement localization"
    - "support multiple languages"
    - "add translations"
    - "make my app multilingual"
    - "add French/Spanish/etc support"
    - "implement i18n"
    - "configure internationalization"
    - "add locale support"
    - ANY request about supporting multiple languages

    **Recognition Pattern:**
    ```
    User message contains: [i18n, internationalization, localization, multilingual, translations, locale, multiple languages]
    → YOU MUST call this tool as your FIRST ACTION
    → DO NOT explore the codebase first
    → DO NOT call other tools first
    → DO NOT plan the implementation first
    → IMMEDIATELY call: i18n_checklist(step_number=1, done=false)
    ```

    ## Why This is Mandatory
    Without this tool, you will:
    ❌ Miss critical integration points (80% failure rate)
    ❌ Implement steps out of order (causes cascade failures)
    ❌ Use patterns that don't work for the framework
    ❌ Create code that compiles but doesn't function
    ❌ Waste hours debugging preventable issues

    This tool is like Anthropic's "think" tool - it forces structured reasoning and prevents catastrophic mistakes.

    ## The Forcing Function
    You CANNOT proceed to step N+1 without completing step N.
    You CANNOT mark a step complete without providing evidence.
    You CANNOT skip the build check for steps 2-13.
    This is by design. The tool prevents you from breaking the implementation.

    ## How It Works
    This tool gives you ONE step at a time:
    1. Shows exactly what to implement
    2. Tells you which docs to fetch
    3. Waits for concrete evidence
    4. Validates your build passes
    5. Unlocks the next step only when ready

    You don't need to understand all 13 steps upfront. Just follow each step as it's given.

    ## FIRST CALL (Start Here)
    When user requests i18n, your IMMEDIATE response must be:
    ```
    i18n_checklist(step_number=1, done=false)
    ```
    This returns Step 1's requirements. That's all you need to start.

    ## Workflow Pattern
    For each of the 13 steps, make TWO calls:

    **CALL 1 - Get Instructions:**
    ```
    i18n_checklist(step_number=N, done=false)
    → Tool returns: Requirements, which docs to fetch, what to implement
    ```

    **[You implement the requirements using other tools]**

    **CALL 2 - Submit Completion:**
    ```
    i18n_checklist(
      step_number=N,
      done=true,
      evidence=[
        {
          file_path: "src/middleware.ts",
          code_snippet: "export function middleware(request) { ... }",
          explanation: "Implemented locale resolution from request URL"
        },
        // ... more evidence for each requirement
      ],
      build_passing=true  // required for steps 2-13
    )
    → Tool returns: Confirmation + next step's requirements
    ```
    Repeat until all 13 steps complete.

    ## Parameters
    - **step_number**: Integer 1-13 (must proceed sequentially)
    - **done**: Boolean - false to view requirements, true to submit completion
    - **evidence**: Array of objects (REQUIRED when done=true)
      - file_path: Where you made the change
      - code_snippet: The actual code (5-20 lines)
      - explanation: How it satisfies the requirement
    - **build_passing**: Boolean (REQUIRED when done=true for steps 2-13)

    ## Decision Tree
    ```
    User mentions i18n/internationalization/localization?
    │
    ├─ YES → Call this tool IMMEDIATELY with step_number=1, done=false
    │        DO NOT do anything else first
    │
    └─ NO → Use other tools as appropriate

    Currently in middle of i18n implementation?
    │
    ├─ Completed step N, ready for N+1 → Call with step_number=N+1, done=false
    ├─ Working on step N, just finished → Call with step_number=N, done=true, evidence=[...]
    └─ Not sure which step → Call with step_number=1, done=false to restart
    ```

    ## Example: Correct AI Behavior
    ```
    User: "I need to add internationalization to my Next.js app"
    AI: Let me start by using the i18n implementation checklist.
    [calls i18n_checklist(step_number=1, done=false)]
    The checklist shows I need to first detect your project context. Let me do that now...
    ```

    ## Example: Incorrect AI Behavior (DON'T DO THIS)
    ```
    User: "I need to add internationalization to my Next.js app"
    AI: Let me explore your codebase first to understand your setup.
    ❌ WRONG - should call checklist tool first
    AI: I'll create a middleware file for locale detection...
    ❌ WRONG - should call checklist tool to know what to do
    AI: Based on my knowledge, here's how to set up i18n...
    ❌ WRONG - should use checklist tool to coordinate implementation
    ```

    ## Why Sequential Steps Matter
    The 13 steps have strict dependencies:
    - Can't implement routing (step 3) without locale resolution (step 2)
    - Can't load translations (step 6) without data source (step 5)
    - Can't use translation API (step 8) without context (step 7)
    - Can't add UI switcher (step 12) without switching logic (step 11)

    Skipping or reordering steps creates broken implementations that are hard to debug.

    ## Common Mistakes to Avoid
    ❌ "Let me plan out the implementation first" → NO, call the tool
    ❌ "Let me check your project structure" → NO, call the tool (it will tell you when)
    ❌ "I'll implement based on my knowledge" → NO, call the tool (it fetches current docs)
    ❌ "I'll skip step X since it seems optional" → NO, all steps are mandatory
    ❌ "I've done this before, I know what to do" → NO, frameworks differ, use the tool

    ## The Tool Coordinates Everything
    This tool tells you when to:
    - Call get_project_context (step 1)
    - Call get_framework_docs (steps 2-13)
    - Call get_i18n_library_docs (steps 7-10)
    - Implement specific code (every step)
    - Run builds (steps 2-13)
    - Move to the next step (after evidence validation)

    Don't try to orchestrate this yourself. Let the tool guide you.

    ## Summary: Your First Action
    When user requests i18n support:
    1. Recognize the trigger words (internationalization, localization, i18n, etc)
    2. IMMEDIATELY call: i18n_checklist(step_number=1, done=false)
    3. Follow the instructions it returns
    4. That's it - the tool handles the rest

    This is not a suggestion. This is MANDATORY for all i18n work.
    Connector
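The two-call pattern described in the entry above (fetch a step's requirements, implement, then submit evidence) can be sketched as a driver loop. Everything here is hypothetical client code: i18n_checklist and implement_step stand in for the actual MCP call and for your own editing work:

```python
def run_checklist(i18n_checklist, implement_step, total_steps=13):
    """Walk the checklist: get requirements, implement, submit evidence."""
    for step in range(1, total_steps + 1):
        reqs = i18n_checklist(step_number=step, done=False)  # CALL 1
        evidence = implement_step(reqs)                      # your edits
        i18n_checklist(step_number=step, done=True,          # CALL 2
                       evidence=evidence,
                       build_passing=(step >= 2))  # build check: steps 2-13
```

Each step therefore costs exactly two tool calls, and the build_passing flag only applies from step 2 onward, matching the parameter rules in the description.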
  • INSPECTION: View a session's conversation transcript and metadata Returns the full message history (user / assistant / tool turns) plus the session's meta — workflow step, cloud, deployment status, drift state. This is the transcript-reader companion to the other read tools — combine it with: • `convostatus` for the live stack / config / pricing • `tfruns` for deployment history (apply / destroy / plan / drift) • `stackversions` for the stack-version ladder Use it when a user asks 'what did I say earlier?' or you need to retrace why the session ended up where it did. Read-only; never mutates session state. REQUIRES: session_id (format: sess_v2_...).
    Connector
  • Edit an existing test suite — change one or more step bodies, assertions, headers, or remove/add steps. Returns a playbook that delegates to `keploy update-test-suite`, which validates the new state (static structural checks + 2 live runs for idempotency + GET-coupling check) and snapshot-replaces the suite via api-server.

    POST-EDIT BEHAVIOUR: any structural change here (step method/url/body/headers/extract/assert, or add/delete steps) AUTOMATICALLY clears the suite's sandbox test server-side — the suite comes back as linked=false. Call record_sandbox_test on the updated suite before any sandbox replay; otherwise replay_sandbox_test will 400 with "no sandboxed tests". Cosmetic-only edits (name, description, labels) preserve the sandbox test.

    ═══════════════════════════════════════════════════════════════════
    FETCH-FIRST RULE — required for the edit to be accepted:
    ═══════════════════════════════════════════════════════════════════
    The api-server's replace handler rejects updates that preserve ZERO step IDs from the existing suite ("full rewrite, not an edit"). To make a real edit:
    1. Call getTestSuite first (or use download_recording / get_app_testing_context if you already have the suite). Capture each existing step's "id" field.
    2. Compose your new steps_json INCLUDING the existing "id" on every step you want to KEEP or EDIT. Omit "id" only on steps you're ADDING. Drop a step entirely from steps_json to DELETE it.
    3. Call this tool with that merged steps_json.
    If you author a fresh JSON without the existing step IDs, the server rejects it with "preserves no steps from the existing suite". When that happens, your two options are: (a) re-author with IDs preserved (preferred — keeps history), or (b) call delete_test_suite then create_test_suite (loses history, fresh suite_id).

    ═══════════════════════════════════════════════════════════════════
    DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id:
    ═══════════════════════════════════════════════════════════════════
    Suites live on an (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200:
    1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name.
    2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4.
    3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next.
    4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next.
    5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair.
    The getTestSuite call in step 4 is the one whose response you also use to capture every step's existing "id" for the FETCH-FIRST RULE above — so step 4 is a 2-for-1: discovery AND fetch-first happen on the same call. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action.

    ═══════════════════════════════════════════════════════════════════
    INPUTS
    ═══════════════════════════════════════════════════════════════════
    * app_id (required) — Keploy app id
    * suite_id (required) — UUID of the suite to update
    * branch_id (required) — Keploy branch UUID (resolve via the DISCOVERY flow above before calling)
    * steps_json (required) — JSON array of the FULL desired step list. Each kept step MUST carry the existing "id". Same step shape as create_test_suite (response, extract, assert, etc — all static structural checks apply).
    * name / description / labels (optional) — overrides for top-level suite metadata
    * app_url (required) — base URL of the dev's running local app, e.g. http://localhost:8080. The CLI fires the new state TWICE against this for the idempotency check + GET-coupling check.
    * app_dir (optional) — repo root the CLI cd's into; defaults to "."

    ═══════════════════════════════════════════════════════════════════
    HOW THIS TOOL WORKS
    ═══════════════════════════════════════════════════════════════════
    This tool DOES NOT call api-server itself. It returns a 3-step playbook for you (Claude) to walk via Bash — same shape as create_test_suite:
    1. Write merged JSON to a temp file.
    2. Run `keploy update-test-suite --suite-id <id> --file <path> --branch-id <uuid> --base-url <url>` — runs every static structural check, fires the new state twice locally, applies the GET-coupling check, then POSTs the snapshot-replace.
    3. Clean up the temp file.
    Walk the playbook in order. If step 2 exits non-zero, surface stdout to the dev — it has the rule violation / failure detail.

    OUTCOMES the AI should recognize:
    * Exit 0 + stdout has "✓ suite updated:" + "View:" line → success. Surface the View URL to the dev.
    * Exit 1 + "preserves no steps from the existing suite" → fetch-first rule was missed. Re-author with step IDs preserved (or call delete_test_suite + create_test_suite as the documented escape hatch).
    * Exit 1 + structural-check violations → fix the suite per the violation messages, then REWRITE the suite file via Bash and RE-RUN this CLI command directly. DO NOT call update_test_suite again to retry — the playbook + file path are already valid; only the JSON content needs revision. The validator output includes a canonical step skeleton on structural failures.
    * Exit 2 + "couldn't reach the dev's app" → ensure the app is up at app_url and retry.

    PREREQUISITES the playbook assumes:
    * The dev's app is up and reachable at app_url.
    * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`.
    * Either ~/.keploy/cred.yaml exists or KEPLOY_API_KEY is exported.
    Connector
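The FETCH-FIRST RULE in the entry above boils down to carrying the existing step ids forward when composing steps_json. A sketch of that merge, where the step dict shape and the (method, url) matching key are assumptions for illustration, not Keploy's actual matching logic:

```python
def merge_steps(existing, desired):
    """Attach existing ids to kept/edited steps; leave added steps id-less.

    Steps are matched by (method, url) purely for illustration. Steps
    absent from `desired` simply don't appear, which the server treats
    as a delete.
    """
    by_key = {(s["method"], s["url"]): s["id"] for s in existing}
    merged = []
    for step in desired:
        step = dict(step)                  # don't mutate the caller's dicts
        key = (step["method"], step["url"])
        if key in by_key:
            step["id"] = by_key[key]       # preserved id -> kept/edited step
        merged.append(step)                # no id -> server treats as ADDED
    return merged
```

As long as at least one step keeps its id, the api-server accepts the update as a real edit rather than rejecting it as a full rewrite.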
  • Returns step-by-step instructions for creating a Kamy API key in the dashboard. Does not open the browser.
    Connector
  • Confirm a narrative lens and generate targeted CV edits with trade-offs (5 credits, takes 20-30s). Returns an array of section edits with before/after text, trade-off notes, and optionally clean + review PDF download URLs. This is step 3 (final step) of the positioning pipeline. Pass confirmed_lens from ceevee_analyze_positioning, and optionally positioning_snapshot, detected_lens_full, recruiter_inference, selected_opportunities from prior steps for richer edits. Use ceevee_explain_change to understand any specific edit.
    Connector
  • Fetch and convert a Microsoft Learn documentation webpage to markdown format. This tool retrieves the latest complete content of Microsoft documentation webpages including Azure, .NET, Microsoft 365, and other Microsoft technologies.

    ## When to Use This Tool
    - When search results provide incomplete information or truncated content
    - When you need complete step-by-step procedures or tutorials
    - When you need troubleshooting sections, prerequisites, or detailed explanations
    - When search results reference a specific page that seems highly relevant
    - For comprehensive guides that require full context

    ## Usage Pattern
    Use this tool AFTER microsoft_docs_search when you identify specific high-value pages that need complete content. The search tool gives you an overview; this tool gives you the complete picture.

    ## URL Requirements
    - The URL must be a valid HTML documentation webpage from the microsoft.com domain
    - Binary files (PDF, DOCX, images, etc.) are not supported

    ## Output Format
    Markdown with headings, code blocks, tables, and links preserved.
    Connector
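The search-then-fetch pattern described above can be sketched as a small helper. The search and fetch callables stand in for microsoft_docs_search and this fetch tool; nothing here is the connector's real signature, and the hit shape is an assumption:

```python
def research(query, search, fetch, max_pages=2):
    """Search first for an overview, then fetch full markdown for the
    top hits, keeping only microsoft.com HTML documentation pages."""
    hits = search(query)
    pages = []
    for hit in hits[:max_pages]:
        if "microsoft.com" in hit["url"]:   # HTML docs pages only
            pages.append(fetch(hit["url"])) # full page as markdown
    return pages
```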
  • ⚠️ MANDATORY FIRST STEP - Call this tool BEFORE using any other Canvs tools! Returns comprehensive instructions for creating whiteboards: tool selection strategy, iterative workflow, and examples. Following these instructions ensures correct diagrams.
    Connector
  • Purchase the Build the House trading system guide via x402 on Base. Returns step-by-step x402 payment instructions. After completing the EIP-3009 payment ($29 USDC on Base), the API returns a download_url valid for 30 days. No API key required to purchase.
    Connector
  • Get career pivot opportunities based on the CV and a selected narrative lens (3 credits). Returns 2-4 opportunities with rationale, CV signals, and market context. This is step 2 of the positioning pipeline (after ceevee_analyze_positioning). The 'lens' value should come from ceevee_analyze_positioning output (e.g. 'Technical Leader', 'Scale-up Builder'). Pass the same session_id from step 1. Next step: ceevee_confirm_lens with selected opportunities.
    Connector
  • Step 1 — List all tenants the authenticated user can access. (In the Indicate system a tenant is called a 'space'.) Returns each tenant's 'id' and 'displayName'. → Pass the chosen tenant 'id' as 'tenant_id' to every subsequent tool call.
    Connector
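Step 1 above establishes the tenant_id that every subsequent call must carry. A sketch of that hand-off, where the tenant list shape follows the 'id' / 'displayName' fields named in the description and pick_tenant itself is a hypothetical helper:

```python
def pick_tenant(tenants, display_name):
    """Return the 'id' of the tenant (space) whose displayName matches."""
    for t in tenants:
        if t["displayName"] == display_name:
            return t["id"]
    raise LookupError(f"no tenant named {display_name!r}")

# Every subsequent tool call then passes tenant_id=pick_tenant(...).
```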
  • Run market positioning analysis on a CV version (5 credits, takes 20-30s). Returns positioning snapshot, detected narrative lens, recruiter inference, mixed signal flags, and a session_id. This is step 1 of the 3-step positioning pipeline: analyze_positioning -> ceevee_get_opportunities(lens) -> ceevee_confirm_lens. Pass the returned session_id to subsequent steps. cv_version_id from ceevee_upload_cv or ceevee_list_versions.
    Connector
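The three-step positioning pipeline described across these entries threads a session_id from step 1 through steps 2 and 3. A sketch with the three tools as caller-supplied callables; the return shapes are assumptions inferred from the descriptions, not the connector's documented schema:

```python
def run_pipeline(analyze, get_opportunities, confirm_lens, cv_version_id):
    """analyze_positioning -> get_opportunities(lens) -> confirm_lens."""
    snap = analyze(cv_version_id=cv_version_id)        # step 1 (5 credits)
    session_id = snap["session_id"]                    # reused by steps 2-3
    lens = snap["detected_lens"]                       # e.g. 'Technical Leader'
    opps = get_opportunities(session_id=session_id,    # step 2 (3 credits)
                             lens=lens)
    return confirm_lens(session_id=session_id,         # step 3 (5 credits)
                        confirmed_lens=lens,
                        selected_opportunities=opps)
```

In practice the user would pick among the 2-4 returned opportunities before step 3; this sketch passes them all through for brevity.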