Skip to main content
Glama
127,227 tools. Last updated 2026-05-05 10:33

"Step-by-step guide to breaking down a request into parts" matching MCP tools:

  • Purchase an ENS name — either buy a listed name from a marketplace or register an available name directly on-chain. For AVAILABLE names: Returns a complete registration recipe with contract address, ABI, step-by-step instructions, and a pre-generated secret. Your wallet signs and submits the transactions (commit → wait 60s → register). For LISTED names: Searches all marketplaces (OpenSea, Grails) for the best price. If there are MULTIPLE active listings, returns CHOOSE_LISTING status with all options — present these to the user and ask which one they want. When the user chooses, call this tool again with the chosen orderHash to get the buy transaction. The tool auto-detects whether the name is available or listed. You can override with the 'action' parameter.
    Connector
  • Subscribe to OctoData Premium API via x402 on Base. Returns step-by-step x402 payment instructions for any plan. After completing the EIP-3009 payment, the API returns an api_key immediately — no human in the loop. Free option also available. Plans: Micro — $0.01 USDC per call, no key needed, pay-per-request via x402 Trial — $5 USDC, 7 days, 10k req/day Annual — $29 USDC/year early bird (first 100 seats), $149/year after
    Connector
  • Start the purchase flow for a domain using USDC crypto payment (x402 protocol). This is a 2-step process for autonomous agent payments: Step 1: Call this tool to get an order_id and pay_url. Step 2: Make an HTTP GET request to the pay_url. Your x402-enabled HTTP client will receive an HTTP 402 response with payment requirements, then automatically pay with USDC on Base. The payment and settlement happen via the x402 protocol (no browser or human needed). After payment, call get_domain_status(order_id) to poll until complete. Requires: An x402-compatible HTTP client with a funded USDC wallet on Base. The registrant contact details are required because the domain will be registered in the buyer's name (they become the legal owner). WHOIS privacy is enabled by default, so these details are not publicly visible. IMPORTANT: Before calling this tool, you MUST first call check_domain to get the price and confirm it with the user. Args: domain: The domain to purchase (e.g. "coolstartup.com"). first_name: Registrant's first name. last_name: Registrant's last name. email: Registrant's email address. address1: Registrant's street address. city: Registrant's city. state: Registrant's state or province. postal_code: Registrant's postal/zip code. country: 2-letter ISO country code (e.g. "US", "GB", "DE"). phone: Phone number in format +1.5551234567. org_name: Organization name (optional, leave empty for individuals). Returns: Dict with order_id, pay_url (full URL to GET with x402 client), price_usdc, price_cents, network, and asset contract address.
    Connector
  • Search WhatDoTheyKnow's feed-based event index and return structured results. Call this to find FOI requests matching a query expression. Returns up to `limit` AtomEntry objects. Use the `link` field of each result as the next navigation step — extract the request slug and call the wdtk://requests/{slug} resource or get_request_feed_items for full detail. Example expressions: status:successful body:"Liverpool City Council" (variety:sent OR variety:response) status:successful
    Connector
  • Edit an existing test suite — change one or more step bodies, assertions, headers, or remove/add steps. Returns a playbook that delegates to `keploy update-test-suite`, which validates the new state (static structural checks + 2 live runs for idempotency + GET-coupling check) and snapshot-replaces the suite via api-server. POST-EDIT BEHAVIOUR: any structural change here (step method/url/body/headers/extract/assert, or add/delete steps) AUTOMATICALLY clears the suite's sandbox test server-side — the suite comes back as linked=false. Call record_sandbox_test on the updated suite before any sandbox replay; otherwise replay_sandbox_test will 400 with "no sandboxed tests". Cosmetic-only edits (name, description, labels) preserve the sandbox test. ═══════════════════════════════════════════════════════════════════ FETCH-FIRST RULE — required for the edit to be accepted: ═══════════════════════════════════════════════════════════════════ The api-server's replace handler rejects updates that preserve ZERO step IDs from the existing suite ("full rewrite, not an edit"). To make a real edit: 1. Call getTestSuite first (or use download_recording / get_app_testing_context if you already have the suite). Capture each existing step's "id" field. 2. Compose your new steps_json INCLUDING the existing "id" on every step you want to KEEP or EDIT. Omit "id" only on steps you're ADDING. Drop a step entirely from steps_json to DELETE it. 3. Call this tool with that merged steps_json. If you author a fresh JSON without the existing step IDs, the server rejects it with "preserves no steps from the existing suite". When that happens, your two options are: (a) re-author with IDs preserved (preferred — keeps history), or (b) call delete_test_suite then create_test_suite (loses history, fresh suite_id). ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name. 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. The getTestSuite call in step 4 is the one whose response you also use to capture every step's existing "id" for the FETCH-FIRST RULE above — so step 4 is actually a 2-for-1: discovery AND fetch-first happen on the same call. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action. ═══════════════════════════════════════════════════════════════════ INPUTS ═══════════════════════════════════════════════════════════════════ * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to update * branch_id (required) — Keploy branch UUID (resolve via the two-step flow before calling) * steps_json (required) — JSON array of the FULL desired step list. Each kept step MUST carry the existing "id". Same step shape as create_test_suite (response, extract, assert, etc — all static structural checks apply). * name / description / labels (optional) — overrides for top-level suite metadata * app_url (required) — base URL of the dev's running local app, e.g. http://localhost:8080. The CLI fires the new state TWICE against this for the idempotency check + GET-coupling check. * app_dir (optional) — repo root the CLI cd's into; defaults to "." ═══════════════════════════════════════════════════════════════════ HOW THIS TOOL WORKS ═══════════════════════════════════════════════════════════════════ This tool DOES NOT call api-server itself. It returns a 3-step playbook for you (Claude) to walk via Bash — same shape as create_test_suite: 1. Write merged JSON to a temp file. 2. Run `keploy update-test-suite --suite-id <id> --file <path> --branch-id <uuid> --base-url <url>` — runs every static structural check, fires the new state twice locally, applies the GET-coupling check, then POSTs the snapshot-replace. 3. Cleanup the temp file. Walk the playbook in order. If step 2 exits non-zero, surface stdout to the dev — it has the rule violation / failure detail. OUTCOMES the AI should recognize: * Exit 0 + stdout has "✓ suite updated:" + "View:" line → success. Surface the View URL to the dev. * Exit 1 + "preserves no steps from the existing suite" → fetch-first rule was missed. Re-author with step IDs preserved (or call delete_test_suite + create_test_suite as the documented escape hatch). * Exit 1 + structural-check violations → fix the suite per the violation messages, then REWRITE the suite file via Bash and RE-RUN this CLI command directly. DO NOT call update_test_suite again to retry — the playbook + file path are already valid; only the JSON content needs revision. The validator output includes a canonical step skeleton on structural failures. * Exit 2 + "couldn't reach the dev's app" → ensure the app is up at app_url and retry. PREREQUISITES the playbook assumes: * The dev's app is up and reachable at app_url. * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`. * Either ~/.keploy/cred.yaml exists or KEPLOY_API_KEY is exported.
    Connector
  • ## ⚠️ MANDATORY TOOL FOR ALL I18N WORK ⚠️ THIS IS NOT OPTIONAL. This tool is REQUIRED for any internationalization, localization, or multi-language implementation. ## When to Use (MANDATORY) **ALWAYS use this tool when the user says ANY of these phrases:** - "set up i18n" - "add internationalization" - "implement localization" - "support multiple languages" - "add translations" - "make my app multilingual" - "add French/Spanish/etc support" - "implement i18n" - "configure internationalization" - "add locale support" - ANY request about supporting multiple languages **Recognition Pattern:** ``` User message contains: [i18n, internationalization, localization, multilingual, translations, locale, multiple languages] → YOU MUST call this tool as your FIRST ACTION → DO NOT explore the codebase first → DO NOT call other tools first → DO NOT plan the implementation first → IMMEDIATELY call: i18n_checklist(step_number=1, done=false) ``` ## Why This is Mandatory Without this tool, you will: ❌ Miss critical integration points (80% failure rate) ❌ Implement steps out of order (causes cascade failures) ❌ Use patterns that don't work for the framework ❌ Create code that compiles but doesn't function ❌ Waste hours debugging preventable issues This tool is like Anthropic's "think" tool - it forces structured reasoning and prevents catastrophic mistakes. ## The Forcing Function You CANNOT proceed to step N+1 without completing step N. You CANNOT mark a step complete without providing evidence. You CANNOT skip the build check for steps 2-13. This is by design. The tool prevents you from breaking the implementation. ## How It Works This tool gives you ONE step at a time: 1. Shows exactly what to implement 2. Tells you which docs to fetch 3. Waits for concrete evidence 4. Validates your build passes 5. Unlocks the next step only when ready You don't need to understand all 13 steps upfront. Just follow each step as it's given. ## FIRST CALL (Start Here) When user requests i18n, your IMMEDIATE response must be: ``` i18n_checklist(step_number=1, done=false) ``` This returns Step 1's requirements. That's all you need to start. ## Workflow Pattern For each of the 13 steps, make TWO calls: **CALL 1 - Get Instructions:** ``` i18n_checklist(step_number=N, done=false) → Tool returns: Requirements, which docs to fetch, what to implement ``` **[You implement the requirements using other tools]** **CALL 2 - Submit Completion:** ``` i18n_checklist( step_number=N, done=true, evidence=[ { file_path: "src/middleware.ts", code_snippet: "export function middleware(request) { ... }", explanation: "Implemented locale resolution from request URL" }, // ... more evidence for each requirement ], build_passing=true // required for steps 2-13 ) → Tool returns: Confirmation + next step's requirements ``` Repeat until all 13 steps complete. ## Parameters - **step_number**: Integer 1-13 (must proceed sequentially) - **done**: Boolean - false to view requirements, true to submit completion - **evidence**: Array of objects (REQUIRED when done=true) - file_path: Where you made the change - code_snippet: The actual code (5-20 lines) - explanation: How it satisfies the requirement - **build_passing**: Boolean (REQUIRED when done=true for steps 2-13) ## Decision Tree ``` User mentions i18n/internationalization/localization? │ ├─ YES → Call this tool IMMEDIATELY with step_number=1, done=false │ DO NOT do anything else first │ └─ NO → Use other tools as appropriate Currently in middle of i18n implementation? │ ├─ Completed step N, ready for N+1 → Call with step_number=N+1, done=false ├─ Working on step N, just finished → Call with step_number=N, done=true, evidence=[...] └─ Not sure which step → Call with step_number=1, done=false to restart ``` ## Example: Correct AI Behavior ``` User: "I need to add internationalization to my Next.js app" AI: Let me start by using the i18n implementation checklist. [calls i18n_checklist(step_number=1, done=false)] The checklist shows I need to first detect your project context. Let me do that now... ``` ## Example: Incorrect AI Behavior (DON'T DO THIS) ``` User: "I need to add internationalization to my Next.js app" AI: Let me explore your codebase first to understand your setup. ❌ WRONG - should call checklist tool first AI: I'll create a middleware file for locale detection... ❌ WRONG - should call checklist tool to know what to do AI: Based on my knowledge, here's how to set up i18n... ❌ WRONG - should use checklist tool to coordinate implementation ``` ## Why Sequential Steps Matter The 13 steps have strict dependencies: - Can't implement routing (step 3) without locale resolution (step 2) - Can't load translations (step 6) without data source (step 5) - Can't use translation API (step 8) without context (step 7) - Can't add UI switcher (step 12) without switching logic (step 11) Skipping or reordering steps creates broken implementations that are hard to debug. ## Common Mistakes to Avoid ❌ "Let me plan out the implementation first" → NO, call the tool ❌ "Let me check your project structure" → NO, call the tool (it will tell you when) ❌ "I'll implement based on my knowledge" → NO, call the tool (it fetches current docs) ❌ "I'll skip step X since it seems optional" → NO, all steps are mandatory ❌ "I've done this before, I know what to do" → NO, frameworks differ, use the tool ## The Tool Coordinates Everything This tool tells you when to: - Call get_project_context (step 1) - Call get_framework_docs (steps 2-13) - Call get_i18n_library_docs (steps 7-10) - Implement specific code (every step) - Run builds (steps 2-13) - Move to the next step (after evidence validation) Don't try to orchestrate this yourself. Let the tool guide you. ## Summary: Your First Action When user requests i18n support: 1. Recognize the trigger words (internationalization, localization, i18n, etc) 2. IMMEDIATELY call: i18n_checklist(step_number=1, done=false) 3. Follow the instructions it returns 4. That's it - the tool handles the rest This is not a suggestion. This is MANDATORY for all i18n work.
    Connector

Matching MCP Servers

  • A
    license
    B
    quality
    C
    maintenance
    A beginner-friendly Model Context Protocol (MCP) server that helps users understand MCP concepts, provides interactive examples, and lists available MCP servers. This server is designed to be a helpful companion for developers working with MCP. Also comes with a huge list of servers you can install.
    Last updated
    3
    24
    64
    Apache 2.0

Matching MCP Connectors

  • Electronic component sourcing, BOM management, and PCB design workflows.

  • Transform any blog post or article URL into ready-to-post social media content for Twitter/X threads, LinkedIn posts, Instagram captions, Facebook posts, and email newsletters. Pay-per-event: $0.07 for all 5 platforms, $0.03 for single platform.

  • Explicitly request a synthesis contract for a named 3D object. Use this tool when generate_r3f_code returns status SYNTHESIS_REQUIRED, or to pre-generate geometry constraints before calling generate_r3f_code. Complexity tiers: low — 4 to 7 parts. Only Box, Sphere, Cylinder geometries. Best for: mobile banners, thumbnails, low-end devices. medium — 10 to 20 parts. Adds Capsule and Torus geometries. Best for: website sections, embedded widgets, tablets. high — 28+ parts. All geometries. Full emissive detail. Best for: hero sections, desktop showcase, ad campaigns. If target is set to "mobile" and complexity is not explicitly provided, complexity defaults to "low" automatically. This tool does NOT generate geometry. It returns the synthesis_contract with constraints calibrated to the requested complexity tier. The LLM generates the actual JSX and passes it to generate_r3f_code via synthesized_components.
    Connector
  • Run market positioning analysis on a CV version (5 credits, takes 20-30s). Returns positioning snapshot, detected narrative lens, recruiter inference, mixed signal flags, and a session_id. This is step 1 of the 3-step positioning pipeline: analyze_positioning -> ceevee_get_opportunities(lens) -> ceevee_confirm_lens. Pass the returned session_id to subsequent steps. cv_version_id from ceevee_upload_cv or ceevee_list_versions.
    Connector
  • Bulk-create subnames under a parent ENS name in a single transaction. Designed for agent fleet deployment — create identities like agent001.company.eth, agent002.company.eth, etc. Each subname can have its own owner and records (addresses, text records). All N subnames bundle into ONE NameWrapper.multicall transaction (all-or-nothing). All record updates across all subnames bundle into ONE Resolver.multicall transaction. If the parent is unwrapped, the recipe prepends a one-time wrap setup (approve + wrapETH2LD) — after that, every subsequent batch on the same parent is a single signature. Returns a flat steps[] array — each step is one wallet signature, in order. Subnames are free to create; only gas costs apply.
    Connector
  • Confirm a narrative lens and generate targeted CV edits with trade-offs (5 credits, takes 20-30s). Returns an array of section edits with before/after text, trade-off notes, and optionally clean + review PDF download URLs. This is step 3 (final step) of the positioning pipeline. Pass confirmed_lens from ceevee_analyze_positioning, and optionally positioning_snapshot, detected_lens_full, recruiter_inference, selected_opportunities from prior steps for richer edits. Use ceevee_explain_change to understand any specific edit.
    Connector
  • Query the Trillboards API changelog for recent changes, breaking changes, deprecations, and fixes. WHEN TO USE: - Check what has changed in the API before upgrading an integration. - Find breaking changes since a specific date. - Discover new features added to a specific API surface. PARAMETERS: - since (YYYY-MM-DD, optional): Only entries dated on or after this date. Unreleased entries are always included. - type (string, optional): Filter by change category. Accepts: "breaking" → changed + removed entries "additive" → added entries "deprecation" → deprecated entries "fix" → fixed entries Can be comma-separated: "breaking,deprecation" RETURNS: - object: "list" - data: Array of { version, date, type, surface, description } - total: Number of matching entries. EXAMPLE: Agent: "What broke since April 1st?" query_changelog({ since: "2026-04-01", type: "breaking" })
    Connector
  • Purchase the Build the House trading system guide via x402 on Base. Returns step-by-step x402 payment instructions. After completing the EIP-3009 payment ($29 USDC on Base), the API returns a download_url valid for 30 days. No API key required to purchase.
    Connector
  • Semantic search across all extracted datasheets. Finds components matching natural language queries about specifications, features, or capabilities. Best for broad spec-based discovery across all parts (e.g. 'low-noise LDO with PSRR above 70dB'). Only searches datasheets that have been previously extracted — not all parts that exist. For finding specific parts by number, use search_parts instead.
    Connector
  • List top sending sources (ESPs, ISPs, mail services) for a domain, grouped by source type. Filters: "known" (legitimate ESPs like Google, Mailgun), "unknown" (unrecognized senders), "forward" (forwarding services). Empty = all types. Returns top 20 per type with message volume, SPF/DKIM/DMARC pass/fail counts. Use this to investigate WHERE email is being sent from — especially when unknown sources appear or compliance is low. To drill down into a specific source (by IP, ISP, hostname, or reporter), use get_domain_source_details.
    Connector
  • Returns step-by-step instructions for creating a Kamy API key in the dashboard. Does not open the browser.
    Connector
  • Get career pivot opportunities based on the CV and a selected narrative lens (3 credits). Returns 2-4 opportunities with rationale, CV signals, and market context. This is step 2 of the positioning pipeline (after ceevee_analyze_positioning). The 'lens' value should come from ceevee_analyze_positioning output (e.g. 'Technical Leader', 'Scale-up Builder'). Pass the same session_id from step 1. Next step: ceevee_confirm_lens with selected opportunities.
    Connector
  • Step 1 — List all tenants the authenticated user can access. (In the Indicate system a tenant is called a 'space'.) Returns each tenant's 'id' and 'displayName'. → Pass the chosen tenant 'id' as 'tenant_id' to every subsequent tool call.
    Connector
  • Call transferOwnership(newOwner: string). Restricted: requires onlyOwner — only the owner address can call this. Transfers ownership to a new address. The new owner must call acceptOwnership() to complete the transfer (two-step pattern). No return value.
    Connector
  • ⚠️ MANDATORY FIRST STEP - Call this tool BEFORE using any other Canvs tools! Returns comprehensive instructions for creating whiteboards: tool selection strategy, iterative workflow, and examples. Following these instructions ensures correct diagrams.
    Connector