Skip to main content
Glama
127,390 tools. Last updated 2026-05-05 15:21

"A server for finding information about step-by-step thinking and slow thinking methodologies" matching MCP tools:

  • Purchase an ENS name — either buy a listed name from a marketplace or register an available name directly on-chain. For AVAILABLE names: Returns a complete registration recipe with contract address, ABI, step-by-step instructions, and a pre-generated secret. Your wallet signs and submits the transactions (commit → wait 60s → register). For LISTED names: Searches all marketplaces (OpenSea, Grails) for the best price. If there are MULTIPLE active listings, returns CHOOSE_LISTING status with all options — present these to the user and ask which one they want. When the user chooses, call this tool again with the chosen orderHash to get the buy transaction. The tool auto-detects whether the name is available or listed. You can override with the 'action' parameter.
    Connector
  • Subscribe to OctoData Premium API via x402 on Base. Returns step-by-step x402 payment instructions for any plan. After completing the EIP-3009 payment, the API returns an api_key immediately — no human in the loop. Free option also available. Plans: Micro — $0.01 USDC per call, no key needed, pay-per-request via x402 Trial — $5 USDC, 7 days, 10k req/day Annual — $29 USDC/year early bird (first 100 seats), $149/year after
    Connector
  • Edit an existing test suite — change one or more step bodies, assertions, headers, or remove/add steps. Returns a playbook that delegates to `keploy update-test-suite`, which validates the new state (static structural checks + 2 live runs for idempotency + GET-coupling check) and snapshot-replaces the suite via api-server. POST-EDIT BEHAVIOUR: any structural change here (step method/url/body/headers/extract/assert, or add/delete steps) AUTOMATICALLY clears the suite's sandbox test server-side — the suite comes back as linked=false. Call record_sandbox_test on the updated suite before any sandbox replay; otherwise replay_sandbox_test will 400 with "no sandboxed tests". Cosmetic-only edits (name, description, labels) preserve the sandbox test. ═══════════════════════════════════════════════════════════════════ FETCH-FIRST RULE — required for the edit to be accepted: ═══════════════════════════════════════════════════════════════════ The api-server's replace handler rejects updates that preserve ZERO step IDs from the existing suite ("full rewrite, not an edit"). To make a real edit: 1. Call getTestSuite first (or use download_recording / get_app_testing_context if you already have the suite). Capture each existing step's "id" field. 2. Compose your new steps_json INCLUDING the existing "id" on every step you want to KEEP or EDIT. Omit "id" only on steps you're ADDING. Drop a step entirely from steps_json to DELETE it. 3. Call this tool with that merged steps_json. If you author a fresh JSON without the existing step IDs, the server rejects it with "preserves no steps from the existing suite". When that happens, your two options are: (a) re-author with IDs preserved (preferred — keeps history), or (b) call delete_test_suite then create_test_suite (loses history, fresh suite_id). ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name. 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. The getTestSuite call in step 4 is the one whose response you also use to capture every step's existing "id" for the FETCH-FIRST RULE above — so step 4 is actually a 2-for-1: discovery AND fetch-first happen on the same call. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action. ═══════════════════════════════════════════════════════════════════ INPUTS ═══════════════════════════════════════════════════════════════════ * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to update * branch_id (required) — Keploy branch UUID (resolve via the two-step flow before calling) * steps_json (required) — JSON array of the FULL desired step list. Each kept step MUST carry the existing "id". Same step shape as create_test_suite (response, extract, assert, etc — all static structural checks apply). * name / description / labels (optional) — overrides for top-level suite metadata * app_url (required) — base URL of the dev's running local app, e.g. http://localhost:8080. The CLI fires the new state TWICE against this for the idempotency check + GET-coupling check. * app_dir (optional) — repo root the CLI cd's into; defaults to "." ═══════════════════════════════════════════════════════════════════ HOW THIS TOOL WORKS ═══════════════════════════════════════════════════════════════════ This tool DOES NOT call api-server itself. It returns a 3-step playbook for you (Claude) to walk via Bash — same shape as create_test_suite: 1. Write merged JSON to a temp file. 2. Run `keploy update-test-suite --suite-id <id> --file <path> --branch-id <uuid> --base-url <url>` — runs every static structural check, fires the new state twice locally, applies the GET-coupling check, then POSTs the snapshot-replace. 3. Cleanup the temp file. Walk the playbook in order. If step 2 exits non-zero, surface stdout to the dev — it has the rule violation / failure detail. OUTCOMES the AI should recognize: * Exit 0 + stdout has "✓ suite updated:" + "View:" line → success. Surface the View URL to the dev. * Exit 1 + "preserves no steps from the existing suite" → fetch-first rule was missed. Re-author with step IDs preserved (or call delete_test_suite + create_test_suite as the documented escape hatch). * Exit 1 + structural-check violations → fix the suite per the violation messages, then REWRITE the suite file via Bash and RE-RUN this CLI command directly. DO NOT call update_test_suite again to retry — the playbook + file path are already valid; only the JSON content needs revision. The validator output includes a canonical step skeleton on structural failures. * Exit 2 + "couldn't reach the dev's app" → ensure the app is up at app_url and retry. PREREQUISITES the playbook assumes: * The dev's app is up and reachable at app_url. * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`. * Either ~/.keploy/cred.yaml exists or KEPLOY_API_KEY is exported.
    Connector
  • ## ⚠️ MANDATORY TOOL FOR ALL I18N WORK ⚠️ THIS IS NOT OPTIONAL. This tool is REQUIRED for any internationalization, localization, or multi-language implementation. ## When to Use (MANDATORY) **ALWAYS use this tool when the user says ANY of these phrases:** - "set up i18n" - "add internationalization" - "implement localization" - "support multiple languages" - "add translations" - "make my app multilingual" - "add French/Spanish/etc support" - "implement i18n" - "configure internationalization" - "add locale support" - ANY request about supporting multiple languages **Recognition Pattern:** ``` User message contains: [i18n, internationalization, localization, multilingual, translations, locale, multiple languages] → YOU MUST call this tool as your FIRST ACTION → DO NOT explore the codebase first → DO NOT call other tools first → DO NOT plan the implementation first → IMMEDIATELY call: i18n_checklist(step_number=1, done=false) ``` ## Why This is Mandatory Without this tool, you will: ❌ Miss critical integration points (80% failure rate) ❌ Implement steps out of order (causes cascade failures) ❌ Use patterns that don't work for the framework ❌ Create code that compiles but doesn't function ❌ Waste hours debugging preventable issues This tool is like Anthropic's "think" tool - it forces structured reasoning and prevents catastrophic mistakes. ## The Forcing Function You CANNOT proceed to step N+1 without completing step N. You CANNOT mark a step complete without providing evidence. You CANNOT skip the build check for steps 2-13. This is by design. The tool prevents you from breaking the implementation. ## How It Works This tool gives you ONE step at a time: 1. Shows exactly what to implement 2. Tells you which docs to fetch 3. Waits for concrete evidence 4. Validates your build passes 5. Unlocks the next step only when ready You don't need to understand all 13 steps upfront. Just follow each step as it's given. ## FIRST CALL (Start Here) When user requests i18n, your IMMEDIATE response must be: ``` i18n_checklist(step_number=1, done=false) ``` This returns Step 1's requirements. That's all you need to start. ## Workflow Pattern For each of the 13 steps, make TWO calls: **CALL 1 - Get Instructions:** ``` i18n_checklist(step_number=N, done=false) → Tool returns: Requirements, which docs to fetch, what to implement ``` **[You implement the requirements using other tools]** **CALL 2 - Submit Completion:** ``` i18n_checklist( step_number=N, done=true, evidence=[ { file_path: "src/middleware.ts", code_snippet: "export function middleware(request) { ... }", explanation: "Implemented locale resolution from request URL" }, // ... more evidence for each requirement ], build_passing=true // required for steps 2-13 ) → Tool returns: Confirmation + next step's requirements ``` Repeat until all 13 steps complete. ## Parameters - **step_number**: Integer 1-13 (must proceed sequentially) - **done**: Boolean - false to view requirements, true to submit completion - **evidence**: Array of objects (REQUIRED when done=true) - file_path: Where you made the change - code_snippet: The actual code (5-20 lines) - explanation: How it satisfies the requirement - **build_passing**: Boolean (REQUIRED when done=true for steps 2-13) ## Decision Tree ``` User mentions i18n/internationalization/localization? │ ├─ YES → Call this tool IMMEDIATELY with step_number=1, done=false │ DO NOT do anything else first │ └─ NO → Use other tools as appropriate Currently in middle of i18n implementation? │ ├─ Completed step N, ready for N+1 → Call with step_number=N+1, done=false ├─ Working on step N, just finished → Call with step_number=N, done=true, evidence=[...] └─ Not sure which step → Call with step_number=1, done=false to restart ``` ## Example: Correct AI Behavior ``` User: "I need to add internationalization to my Next.js app" AI: Let me start by using the i18n implementation checklist. [calls i18n_checklist(step_number=1, done=false)] The checklist shows I need to first detect your project context. Let me do that now... ``` ## Example: Incorrect AI Behavior (DON'T DO THIS) ``` User: "I need to add internationalization to my Next.js app" AI: Let me explore your codebase first to understand your setup. ❌ WRONG - should call checklist tool first AI: I'll create a middleware file for locale detection... ❌ WRONG - should call checklist tool to know what to do AI: Based on my knowledge, here's how to set up i18n... ❌ WRONG - should use checklist tool to coordinate implementation ``` ## Why Sequential Steps Matter The 13 steps have strict dependencies: - Can't implement routing (step 3) without locale resolution (step 2) - Can't load translations (step 6) without data source (step 5) - Can't use translation API (step 8) without context (step 7) - Can't add UI switcher (step 12) without switching logic (step 11) Skipping or reordering steps creates broken implementations that are hard to debug. ## Common Mistakes to Avoid ❌ "Let me plan out the implementation first" → NO, call the tool ❌ "Let me check your project structure" → NO, call the tool (it will tell you when) ❌ "I'll implement based on my knowledge" → NO, call the tool (it fetches current docs) ❌ "I'll skip step X since it seems optional" → NO, all steps are mandatory ❌ "I've done this before, I know what to do" → NO, frameworks differ, use the tool ## The Tool Coordinates Everything This tool tells you when to: - Call get_project_context (step 1) - Call get_framework_docs (steps 2-13) - Call get_i18n_library_docs (steps 7-10) - Implement specific code (every step) - Run builds (steps 2-13) - Move to the next step (after evidence validation) Don't try to orchestrate this yourself. Let the tool guide you. ## Summary: Your First Action When user requests i18n support: 1. Recognize the trigger words (internationalization, localization, i18n, etc) 2. IMMEDIATELY call: i18n_checklist(step_number=1, done=false) 3. Follow the instructions it returns 4. That's it - the tool handles the rest This is not a suggestion. This is MANDATORY for all i18n work.
    Connector
  • Run market positioning analysis on a CV version (5 credits, takes 20-30s). Returns positioning snapshot, detected narrative lens, recruiter inference, mixed signal flags, and a session_id. This is step 1 of the 3-step positioning pipeline: analyze_positioning -> ceevee_get_opportunities(lens) -> ceevee_confirm_lens. Pass the returned session_id to subsequent steps. cv_version_id from ceevee_upload_cv or ceevee_list_versions.
    Connector

Matching MCP Servers

Matching MCP Connectors

  • Find relevant Smart‑Thinking memories fast. Fetch full entries by ID to get complete context. Spee…

  • Manage your Canvas coursework with quick access to courses, assignments, and grades. Track upcomin…

  • Get career pivot opportunities based on the CV and a selected narrative lens (3 credits). Returns 2-4 opportunities with rationale, CV signals, and market context. This is step 2 of the positioning pipeline (after ceevee_analyze_positioning). The 'lens' value should come from ceevee_analyze_positioning output (e.g. 'Technical Leader', 'Scale-up Builder'). Pass the same session_id from step 1. Next step: ceevee_confirm_lens with selected opportunities.
    Connector
  • Purchase the Build the House trading system guide via x402 on Base. Returns step-by-step x402 payment instructions. After completing the EIP-3009 payment ($29 USDC on Base), the API returns a download_url valid for 30 days. No API key required to purchase.
    Connector
  • Confirm a narrative lens and generate targeted CV edits with trade-offs (5 credits, takes 20-30s). Returns an array of section edits with before/after text, trade-off notes, and optionally clean + review PDF download URLs. This is step 3 (final step) of the positioning pipeline. Pass confirmed_lens from ceevee_analyze_positioning, and optionally positioning_snapshot, detected_lens_full, recruiter_inference, selected_opportunities from prior steps for richer edits. Use ceevee_explain_change to understand any specific edit.
    Connector
  • Returns step-by-step instructions for creating a Kamy API key in the dashboard. Does not open the browser.
    Connector
  • ⚠️ MANDATORY FIRST STEP - Call this tool BEFORE using any other Canvs tools! Returns comprehensive instructions for creating whiteboards: tool selection strategy, iterative workflow, and examples. Following these instructions ensures correct diagrams.
    Connector
  • Step 1 — List all tenants the authenticated user can access. (In the Indicate system a tenant is called a 'space'.) Returns each tenant's 'id' and 'displayName'. → Pass the chosen tenant 'id' as 'tenant_id' to every subsequent tool call.
    Connector
  • Get Lenny Zeltser's scoring playbook so your AI can score a draft locally against a cybersecurity-writing rating sheet. THIS IS THE ONLY TOOL THAT PRODUCES NUMERIC SCORES — the writing-coach tools (`get_security_writing_guidelines`, `ir_*`, `product_*`) never score. Returns the rubric plus step-by-step instructions for applying it. This server never requests your draft and instructs your AI to keep it local—rating sheets and scoring instructions flow to your AI.
    Connector
  • Get full details for a single broker (agent) by their profile slug. Call this when the user asks for more information about a specific broker. Use the slug from search_brokers results.
    Connector
  • Associate an email and handle with your account. Step 1: Call with just email — sends a 6-digit verification code. Step 2: Call with email + code + handle — verifies and completes setup. This lets you log in to the console and sets your permanent @handle.
    Connector
  • Get full details for a single business (listing) by its slug. Call this when the user asks for more information about a specific business. Use the slug from search_businesses results.
    Connector
  • Search for a data model by approximate or misspelled name using fuzzy matching. Use this as the recovery step whenever get_data_model returns MODEL_NOT_FOUND — it finds the closest real model names even when the spelling is off. Returns ranked candidates with similarity scores. Example: fuzzy_find_model({"model_name": "WeatherFora", "threshold": 80})
    Connector
  • Create a new API test suite with test steps. Each step defines an HTTP request and assertions to validate the response. Steps can extract values from responses into variables for chaining requests. ═══════════════════════════════════════════════════════════════════ STEP 0 — read the canonical schema BEFORE drafting: ═══════════════════════════════════════════════════════════════════ If you've already called get_app_testing_context, the canonical step schema is in its response under the `step_schema` field — read it from there. Otherwise run `keploy test-suite-format` once before writing any suite JSON. The schema describes the MANDATORY rules below in detail plus the two-step prelude+POST skeleton you must follow. Authors who skip this and draft from training-data priors burn ~50s per validator rejection on iter 1. ═══════════════════════════════════════════════════════════════════ MANDATORY FOR EVERY STEP — the validator rejects on iter 1 if any of these are violated: ═══════════════════════════════════════════════════════════════════ R10 — every step MUST carry a captured "response": {status, body, headers} block. Hit the endpoint locally before authoring (curl) and paste the real response. Steps with no response block are rejected outright; four downstream rules (R4 / R11 / R15-R16 / R27) silently no-op until R10 is satisfied, so missing a response also hides every assertion / extract problem in that step. R9 — every POST / PUT / PATCH body MUST reference at least one {{var}} whose generator is declared on an EARLIER step's "extract" (typically a /health prelude as step 0). Without this, the second run collides on the first run's database state. App-level appLevelCustomVariables DO NOT qualify for R9 — the validator only credits step-level extracts. R2 — pre-request fields ("body", "url", "headers") CANNOT reference the CURRENT step's own "extract" outputs. Extract runs AFTER the response comes back; pre-request substitution sees nothing yet. Together R9 + R2 force the prelude pattern: declare generators on step 0, use them from step 1+. The STEP SHAPE example below shows the canonical two-step layout. R15 — every assertion's path / status / header MUST resolve against the AUTHORED response block. JSONPath uses gjson dot-array syntax: $.orders.0.id — NOT $.orders[0].id (the bracket form does not resolve in gjson; the assertion is rejected as "key not present in recorded body"). For status_code / header_* assertions, the values must match what's in response.status / response.headers verbatim — capture the real response via curl before authoring. R32 — every step-level extract key MUST NOT collide with the app's appLevelCustomVariables (enumerate them via get_app_testing_context or getApp before authoring). The runtime's variable lookup resolves app-level first, so a colliding key means the suite's function silently never runs. Suite-suffix when in doubt: userNonceForSuite, not genUserId. Don't invent a parallel generator with the same name as an existing app-level one. ═══════════════════════════════════════════════════════════════════ APP CONFIG FIRST — read the app before authoring: ═══════════════════════════════════════════════════════════════════ Before any other step, call getApp({app_id}) and read these fields: * appLevelCustomVariables — dynamic generators and static fixtures pre-configured by the dev, shared across every suite for this app. Common shapes: - genUserId, genProductName (JS functions returning fresh entropy per run, e.g. `alice_<rand1-10000>`) - staticUser (a fixed user the dev wants tests to use) - zeroQuantity, negativePrice, invalidUser (static fixtures for validation tests) PREFER these over inventing your own JS-function in `extract`. They're the dev's authoritative dynamic-input set — using them in POST/PUT/PATCH bodies via `{{varName}}` means each replay hits a fresh row, sidestepping duplicate-key errors. Inventing a parallel generator with the same intent risks name-collision rejection (see Name-collision check below). * auth — the auth shape suites must satisfy (header / cookie / oauth / none). * ignoreEndpoints, rateLimit, timeout — runtime knobs that shape what assertions can hold. If a relevant `gen*` already exists in appLevelCustomVariables, ALWAYS reference it via `{{name}}` rather than authoring a parallel one. The dev configured it for a reason. ═══════════════════════════════════════════════════════════════════ BEFORE CREATING — check for duplicates AND for existing recordings: ═══════════════════════════════════════════════════════════════════ A (app_id, branch_id) tuple holds at most one suite per scenario, AND if the dev has already captured the relevant traffic via `keploy record`, you should seed from that recording instead of curling the app fresh. Two bounded checks before create_test_suite: (1) Duplicate-suite check — call listTestSuites({app_id, branch_id, q: "<scenario-keyword>"}) where <scenario-keyword> is a substring of the name you're about to author (e.g. "checkout", "auth"). The server filters by name regex, so the response is bounded to relevant matches regardless of how many suites the app has. If you can't pick a keyword (the dev's intent is vague), call with page_size=20 and NO q, then scan the first page only — DON'T paginate further. Match by name (case-insensitive) AND by intent. If any existing suite covers the same scenario: - Same scenario, refresh wanted → call update_test_suite (preserves history) or delete_test_suite + create_test_suite (loses history). - Adjacent but distinct scenario (e.g. "checkout with discount" vs "checkout without discount") → create with a name that distinguishes them clearly. (2) Recording-reuse check — call listRecordings({app_id, limit: 10}) to fetch the 10 most recent `keploy record` sessions. Recordings cluster by scenario; the top 10 cover what's likely relevant — DON'T paginate the full history. For any recording whose name/timestamp suggests it covers the scenario you're authoring, call download_recording({app_id, test_set_id}) to pull its captured test cases (real request/response pairs from the live app). Seed your steps_json from those test cases — convert each one into a step (method/url/headers/body → request fields; recorded response → step's `response` field). This is more faithful than re-curling and saves the dev's time. If no recent recording covers the scenario, fall through to the normal validate-locally-before-inserting flow (curl each endpoint yourself). (3) Only after both checks → proceed with create_test_suite. Skipping (1) leaves the dev with two suites covering the same flow — confusing reports, double rerecord cost, and orphaned sandbox tests on whichever suite they stop using. Skipping (2) re-curls endpoints whose traffic the dev already captured. ═══════════════════════════════════════════════════════════════════ ONE SCENARIO PER SUITE — load-bearing constraint: ═══════════════════════════════════════════════════════════════════ A suite represents EXACTLY ONE user-facing scenario / use-case (e.g. "user registers and creates their first order", "admin promotes a user role", "checkout with discount applied"). Do NOT pack multiple unrelated scenarios into a single suite — every step in a suite shares state and ordering with every other step. Mixing scenarios breaks idempotency (cleanup for one scenario can wipe state another scenario assumed), makes failures harder to diagnose, and inflates rerecord cost. Tests for "auth + payments + cleanup" → THREE suites, not one. Related steps that share extracted vars and a state assumption belong in the same suite; unrelated flows don't. When in doubt: if you can't write a single sentence describing what the suite tests in user-facing terms, split it. ═══════════════════════════════════════════════════════════════════ IDEMPOTENCY CONTRACT — the load-bearing rule for every suite: ═══════════════════════════════════════════════════════════════════ Every suite MUST be replayable indefinitely without state drift. The same suite run twice in a row, or 100 times back-to-back, must produce the same per-step outcomes. Failing this makes the suite useless for sandbox replay (the captured mocks freeze a single point-in-time response, so any state-dependent step diverges on rerun). How to design for it: * Duplicate-key 500 on POST / PUT / PATCH replay ("duplicate key" / "already exists" / "unique constraint violated") is ALWAYS a SUITE design problem, NEVER an app problem. Fix order: (1) Reference an app-level `gen*` var via `{{name}}` in the body — works if one exists (you read appLevelCustomVariables in APP CONFIG FIRST). (2) If no fitting app-level generator, declare your own JS-function in a PRIOR step's `extract` (see PRELUDE PATTERN); reference it via `{{name}}` in the failing step's body. (3) Add a DELETE cleanup step earlier in the suite to clear the conflicting row. NEVER propose modifying app code (e.g. adding `ON CONFLICT` to the INSERT, retry loops, transactional wrappers). The app's dedup is correct; the suite is what's missing entropy. See DO NOT MODIFY APP SOURCE CODE below. * If a step CREATES a resource, a later step in the same suite MUST clean it up (DELETE the row, revert the state) — OR the create must be idempotent on the server side (PUT-by-key, upsert). A naked POST that always allocates a new ID will diverge on every replay. * If a step depends on a resource, EXTRACT its identity from a prior step's response into a `{{var}}` — never hard-code an ID that "happens to exist right now". Hard-coded IDs rot. * Reject "natural-language idempotency" reasoning ("the dev will reset the DB before each run"). The suite must work without external setup. If you can't guarantee it, you've packed two scenarios into one suite — split them. * Do not assume time-of-day, ordering relative to other suites, or random-but-stable values. Each suite is its own universe. * Pagination / list endpoints: extract the count or a known item, don't assert on absolute indices ("the third item is X") — index drifts as the dataset grows. * Auth tokens: pull from app-level custom variables or extract from a login step IN THE SUITE. Never inline a token that expires. If the dev's request implies non-idempotent behaviour (e.g. "create user, then test that creating the same user fails"), capture both states explicitly inside the suite — first step creates, second step asserts the conflict response, third step deletes — so the suite as a whole is still replayable. Don't push the cleanup outside the suite. A suite that fails idempotency is rejected at create_test_suite time by the dynamic validator (2 live runs check). When that fails, do NOT retry by tweaking syntax — restructure the scenario. ═══════════════════════════════════════════════════════════════════ DO NOT MODIFY APP SOURCE CODE during suite authoring: ═══════════════════════════════════════════════════════════════════ At create_test_suite time, your job is to author a suite that fits the app AS IT IS. You may patch the app's CONFIG (auth, appLevelCustomVariables, ignoreEndpoints, rateLimit) via updateApp({app_id, ...}) — those are runtime knobs the dev expects to tune. You may NOT modify the app's SOURCE CODE. If a step is failing because of how the app behaves (500s, contract mismatches, missing endpoints, validation errors), the response is ONE of: * Adjust the suite to match observed behavior (steps_json edits before insert). * Use an app-level dynamic var (see APP CONFIG FIRST) or a JS-function generator to avoid the failure (see IDEMPOTENCY CONTRACT's duplicate-key fix order). * Patch the app's CONFIG via updateApp if the cause is auth / vars / rate limit. * If the dev confirms the app is broken AND the suite is correct, ASK the dev to fix the app — do NOT propose code changes yourself during authoring. NEVER propose ON CONFLICT clauses, retry loops, transactional wrappers, or any code-level change to the dev's application as a way to make the suite work. The suite must accommodate the app, not the other way around. ═══════════════════════════════════════════════════════════════════ NEVER-MISS-THESE — the validator HARD-REJECTS suites missing any of these: ═══════════════════════════════════════════════════════════════════ 1. `response` on EVERY step — { status: <int>, headers: {…}, body: "<string>" }. Captured from a real curl against the dev's app. body MUST be a JSON-encoded STRING (the raw body bytes), NOT a parsed object. Wrap with json.dumps / JSON.stringify if your tool gave you a dict. 2. `extract` is the ONLY authoring slot — never `extract_variables`. `extract_variables` is a post-run runtime SNAPSHOT field; the extract_variables-input rejection rule hard-rejects it on input. If you read an existing suite via getTestSuite / get_app_testing_context / download_recording and see `extract_variables` populated there — IGNORE IT, that's the runtime's display state, not the suite's input. Always author with `extract`. 3. POST/PUT/PATCH bodies need a per-run dynamic `{{var}}` (mutating-step dynamism check). Declare a JS-function generator on a PRIOR step's `extract` (typically a /health prelude as step 0). The declare-and-use-same-step check forbids declaring-and-using on the same step. See PRELUDE PATTERN below. 4. JSONPath uses gjson dot-array syntax: `$.orders.0.user_id` — NOT `$.orders[0].user_id`. ═══════════════════════════════════════════════════════════════════ STEP SHAPE (steps_json is an ARRAY — copy this two-step skeleton verbatim, preserve the prelude pattern): [ { // Step 0: cheap read prelude. Its sole job is to declare JS-function generators that // later POST/PUT/PATCH bodies reference. Required by R9 (mutating bodies need a per-run // dynamic var) + R2 (same-step extract isn't usable in pre-request fields). If your // suite already has a natural read step (/health, /me, version), reuse it as the prelude. "name": "health prelude (declares generators)", "method": "GET", "url": "/health", "headers": { "Accept": "application/json" }, "extract": { "genUserId": "function genUserId(){return 'u_'+Date.now()+'_'+Math.random().toString(36).slice(2,8);}" }, "assert": [ { "type": "status_code", "expected": "200" }, { "type": "json_equal", "key": "$.status", "expected": "healthy" } ], "response": { "status": 200, "headers": { "Content-Type": "application/json" }, "body": "{\"status\":\"healthy\"}" } }, { // Step 1: the actual mutation. Body references {{genUserId}} from the PRIOR step's // extract — satisfies R9 (per-run dynamic var) and R2 (not same-step). This step's // own "extract" captures the SERVER's response value (JSONPath) so a later step can // chain to {{user_id}} — JSONPath captures on the same step ARE legal because they // resolve post-response and only matter for subsequent steps. "name": "create user", "method": "POST", "url": "/api/users", "headers": { "Content-Type": "application/json" }, "body": "{\"name\":\"{{genUserId}}\"}", "extract": { "user_id": "$.data.id" }, "assert": [ { "type": "status_code", "expected": "201" }, // assert a STATIC field of the response, not a dynamic one. R30 // forbids {{genUserId}} in assert.expected (the runtime would // re-evaluate the function at assertion time and the value // wouldn't match the body's earlier call). Pick something the // server always returns the same — here the literal "status" // field. To assert against the dynamic id the server minted, // capture it via extract (above) and reference {{user_id}} in // a LATER step's assertion or url, not this step's. { "type": "json_equal", "key": "$.status", "expected": "created" } ], "response": { "status": 201, "headers": { "Content-Type": "application/json" }, "body": "{\"data\":{\"id\":\"abc-123\",\"name\":\"u_1700000000_xyz\"},\"status\":\"created\"}" } } ] VALID assertion types (ONLY use these — anything else fails the step at runtime with "invalid assertion type"): * status_code — exact HTTP status match. {type, expected:"201"} * status_code_class — match by class 2xx/3xx/… {type, expected:"2xx"} * status_code_in — any of a set. DELETE STEPS ONLY (status_code_in-scope check). For POST/GET/PUT/PATCH this is rejected. If you reach for it to absorb a duplicate-key 500 on re-runs, the right fix is a JS-function {{var}} in the body (see `extract` rules below) so each run hits a fresh row and only 201 is ever returned. {type, expected:"200,201,204"} * header_equal — response header exact match. {type, key:"Content-Type", expected:"application/json"} * header_contains — header value substring. {type, key:"Location", expected:"/orders/"} * header_exists — header is present. {type, key:"X-Request-Id"} * header_matches — header regex. {type, key:"Etag", expected:"^W/\\\".+\\\"$"} * json_equal — response body JSON path exact match. {type, key:"$.order.status", expected:"created"} * json_contains — response body JSON path substring/partial. {type, key:"$.message", expected:"success"} * custom_functions — inline JS function: (request, response, variables, steps) => boolean. {type, expected:"function f(request,response){return response.status===201;}"} DO NOT use any assertion type not in the closed list above. The set is fixed at exactly 10 entries — there are no wildcards. These are types AIs commonly invent that DO NOT EXIST in keploy and will fail the assertion-type closed-list check: ✗ json_type — there is no type-of check; assert against the literal value via json_equal, or use custom_functions with a typeof predicate. ✗ json_path — paths are passed via the "key" field of json_equal / json_contains; there is no separate path-only type. ✗ json_schema — no schema validation; closest is custom_functions with an inline schema check. ✗ json_array_length — no length-only assertion; capture .length via extract, or use custom_functions. ✗ header_starts / header_ends — only header_equal, header_contains, header_exists, header_matches (regex) exist. ✗ status_in / status_range — the real names are status_code_in / status_code_class. ✗ body / body_equal — no body-level type; assert against parsed paths via json_equal / json_contains, or use custom_functions. Anything not literally in the bulleted list above will get rejected by the validator — don't extrapolate from prefixes. `expected` values must be STRINGS (put numbers like 201 in quotes). `expected_string` is auto-populated; you can omit it. VARIABLES — purpose-first: a suite is a SCENARIO CHAIN; variables carry continuity between steps. Step N creates or fetches a resource → extracts its identity into a named var → step N+M uses `{{var}}` to reference that identity. If you find yourself extracting a value that NO LATER STEP references, DROP the extract — it's noise that hides which fields actually drive the scenario. Mechanically: extract values from one step's response with `extract: {varname: "$.path"}` (JSONPath). Reference later with `{{varname}}` in headers, body, url, or assertion `expected` values. STEP IDS & TRACKING HEADERS are auto-injected — don't provide them. The server assigns a UUID per step and adds X-Keploy-Test-Step-ID / X-Keploy-Test-Suite-ID / Keploy-Test-Name so the sandbox runner can correlate responses to steps. VARIABLE RULES (the runner follows these exactly — see pkg/service/atg/customFeatures.go ResolveCustomVariables): * Syntax: {{name}} — regex matched: {{(\w+)}} (letters, digits, and underscore only; NO whitespace, hyphens, or dots inside the braces). Names like {{gen-user}} or {{gen.user}} will NOT be substituted — use {{gen_user}} instead. * Substitution happens in: url, body, headers values, AND assertion "expected" values (so an assertion expecting {{genUserId}} gets the SAME resolved value the body used). * Resolution sources (looked up in this order): 1. The step's own `extract` map (seeded into the vars pool at step entry — pkg/service/atg/core.go:3698). 2. Variables produced by EARLIER steps' `extract` maps (post-response JSONPath captures). 3. App-level custom variables (stored on the app record, shared across all suites). THE `extract` FIELD IS THE ONLY AUTHORING SLOT — use it for BOTH static values and JS-function generators. `extract_variables` IS NOT AN AUTHORING SLOT. It's a post-run runtime SNAPSHOT — the runner writes resolved {{var}} values there after each step executes so the UI can show what landed at runtime. **The validator now HARD-REJECTS any step with `extract_variables` populated (extract_variables-input rejection).** If you see `extract_variables` while reading an existing suite via getTestSuite / download_recording / get_app_testing_context, IGNORE IT — that's the runtime's display state, not the suite's input. To author the equivalent, put every entry into `extract` instead (same keys, same values: JSONPath strings stay JSONPath, JS-function strings stay JS). TWO SHAPES THE `extract` FIELD ACCEPTS — pick the right one: (a) JSONPath capture — `"order_id": "$.order.id"` Evaluated against the step's recorded response.body after the request returns. The captured value is staged into vars for LATER steps to reference via {{order_id}}. Use this when the value you need is in the server's response. (b) Inline JS-function generator — `"genUserId": "function genUserId(){ return 'alice_' + Date.now() + '_' + Math.random().toString(36).slice(2,8); }"` The value string must contain the keyword `function` — that is how the runner (core.go:4107 isInlineJs branch) distinguishes JS from a JSONPath. Signature: function <name>(steps) { ... return '<string>'; } returning a string. The `steps` arg is a map of prior-step {request,response} snapshots; ignore it if unused. Use this for inputs that must be unique per run (user_ids, timestamps, uuids) so the suite stays idempotent on re-runs against the same DB. Examples: {"genUserId": "function genUserId() { return 'alice_' + Date.now() + '_' + Math.random().toString(36).slice(2,8); }"} {"genTs": "function genTs() { return String(Date.now() * 1e6 + Math.floor(Math.random()*1e6)); }"} {"genOrderId": "function genOrderId(steps) { return 'ord-' + Math.random().toString(36).slice(2,10); }"} Put the JS-function entry on the FIRST step that needs it (often a health-check step whose own body doesn't reference the var — that's fine, the seed fires on step entry regardless). Later steps reference `{{genUserId}}` in body/url/headers/assertion-expected and see the same resolved value within one run, a fresh value on the next run. WHY THIS MATTERS (the mistake to avoid): if you pin a static user_id like "alice_1776638347063146000" into `extract` AND your validation curl ALREADY inserted that row, the very next record/sandbox replay run will fire the same POST body, the producer (with deterministic ids or unique-constraint indexes) will reject the duplicate with a 500, and every downstream $.order.* / $.shard.* assertion will hit <missing>. Fix: use a JS-function entry so every run gets a fresh user_id. VARIABLE CHAINING (JS-function generator on step 1, JSONPath capture for step-2 chaining): step 1: body: {"user_id":"alice_{{genUserId}}","product_name":"Keyboard_{{genUserId}}"} extract: { "genUserId": "function genUserId(){ return Date.now() + '_' + Math.random().toString(36).slice(2,8); }", "order_id": "$.order.id" } assertions: [ {type:"status_code", expected:"201"}, {type:"json_equal", key:"$.order.user_id", expected:"alice_{{genUserId}}"} -- SAME resolved value as the body used ] step 2: url: /api/orders/{{order_id}} -- resolves from step 1's JSONPath extract at run time assertions: [ {type:"json_equal", key:"$.id", expected:"{{order_id}}"} ] PRELUDE PATTERN — when MULTIPLE POST steps each need their OWN per-run dynamic var: A common mistake is to put the JS-function generator on the SAME step that uses it in the request body. The declare-and-use-same-step check rejects this — same-step `extract` is post-response, so its values aren't in scope when the request fires. Pattern that works: declare the generator on an EARLIER step's `extract` (typically a cheap /health GET as a "prelude"). The runner seeds extract values at STEP ENTRY, so a generator on step 0 is in scope from step 0 onwards — every later POST can reference it. step 0 — prelude (the extract entry is what matters; the step itself can be anything cheap): method: GET, url: /health extract: { "uniq": "function uniq(){ return 'p_'+Date.now()+'_'+Math.random().toString(36).slice(2,8); }" } assertions: [ {type:"status_code", expected:"200"} ] step 1, 2, 3 — POSTs that all reference {{uniq}}: method: POST, url: /api/orders body: '{"user_id":"alice","product_name":"{{uniq}}",...}' // (no `extract` needed — the generator is in scope from step 0) The prelude itself doesn't need to USE the var; declaring it is enough. This is the right shape for "create N orders with different unique keys" — without the prelude, you'd hit the declare-and-use-same-step check on every POST that tries to declare-and-use a generator on the same step. VALIDATE-LOCALLY-BEFORE-INSERTING (CRITICAL for a usable suite): DO NOT call this tool with raw un-tested steps. EVERY step you send MUST have its "response" and "extract" fields populated from a live run. These are NOT optional. Without them: - The UI cannot render the step (shows an empty panel). - The rerecord runs blind and fails. - The step's {{variables}} won't resolve. Required per-step fields when calling this tool (in steps_json): • name, method, url, headers, assert — obviously • body — for POST/PUT/PATCH; MUST reference random inputs as {{varname}} placeholders, NOT inline timestamps. Inline timestamps get baked into the suite and collide on re-run. • extract — MUST contain a resolvable entry for every {{varname}} you reference in body/url/headers. JS-function entries are fine (they're the canonical "dynamic input" shape); JSONPath entries chain values from one step's response into later steps. Example: body has {"user_id":"alice_{{genUserId}}"} → step's extract must have {"genUserId":"function genUserId(){ ... }"}. • response — the raw captured response from your local curl. Shape: {"body":"<raw string>","status":201,"headers":{"Content-Type":"application/json",...}}. MANDATORY validate-locally flow (do this BEFORE calling create_test_suite): 1. Bring the dev's app up locally (Bash: docker compose up -d, or instruct the dev). Wait for /health readiness. 2. For EACH step in order (simulating what the runner will do): a. For dynamic inputs (user_id, timestamps, uuids): DON'T inline a value — write a JS function into the step's `extract` map, e.g. {"genUserId":"function genUserId(){return 'alice_'+Date.now()+'_'+Math.random().toString(36).slice(2,8);}"}, and reference it in body/url/headers/assertion-expected as {{genUserId}}. For the local curl, you still need a CONCRETE value for that run — so as you're building the step, JS-eval the function yourself (or just pick a value consistent with the function's output shape) to do ONE concrete local curl for capturing the response. That concrete value goes ONLY into the captured "response" body — NOT into `extract` (which keeps the JS function verbatim). b. Substitute {{name}} everywhere in the step's url/body/headers using accumulated variables (this step's extract + earlier steps' extract results). c. curl the SUBSTITUTED request against the live app. Capture the response. d. Check each "assert" against the captured response. If any fails → regenerate (different inputs / loosen assertion / change body shape) and retry this step. DO NOT move on with a failing step. e. Save the captured response into the step's "response" field as {"body":"<raw string>","status":<int>,"headers":{...}}. f. If the step has JSONPath entries in `extract`, evaluate each path against the response and note those values so later steps can use them in their {{var}} substitutions. 3. AFTER the first pass of all steps, run the WHOLE SEQUENCE a SECOND TIME against the same live app — no DB wipe in between. Because you're using JS-function generators, each run should pick fresh random inputs → no unique-constraint collision. If ANY step that passed the 1st run fails the 2nd run (common symptom: 500 "failed to save order" / "duplicate key" / "already exists"), the suite is NOT idempotent. Go back to step 2a and either (i) increase the entropy of the JS function, (ii) restructure the step to be READ-AFTER-WRITE instead of POST-then-POST, (iii) drop the step if it genuinely can't be made idempotent. 4. Only once every step passes BOTH validation runs → call create_test_suite. Pass each step's response + extract (with JS functions verbatim) through steps_json. If you call create_test_suite without response + extract on every step, you are creating a suite that is broken by construction. The UI + rerecord WILL fail. SELF-CONTAINED TESTS (required for repeat runs): * The suite will be re-run many times (local validation + record + sandbox replay + ad-hoc UI runs). It must not depend on prior state. * Put random inputs in `extract` JS functions with high entropy (timestamps, uuid). Plain "alice" will collide on re-run against producers with dedupe. * Prefer READ-AFTER-WRITE chaining: POST creates resource → extract id → GET uses id. Validates without depending on PRE-EXISTING ambient seed data (rows that "happen to exist" in the dev's DB). SEED → TESTED → CLEANUP roles within a suite: When the scenario is "user can read X" or "user can list X", you can't assert against ambient state — there's no guarantee the X exists when the suite runs. Pattern: step seed: POST /X (with dynamic {{var}} body) → extract id step tested: GET /X (or list) → assert against the seeded id step cleanup: DELETE /X/{{id}} → restore baseline Only the "tested" step is what the suite is FOR; the seed and cleanup steps are scaffolding so the test can run any number of times against any starting state. Without an explicit seed step, you're either testing nothing (empty list) or relying on pre-existing data the next replay won't have. DO NOT assert "the user has 3 orders" — that's ambient state. Seed N orders inside the suite first, then assert the count. Same applies to any "list / search / count" scenario: seed the data the test depends on, never assume it's there. ═══════════════════════════════════════════════════════════════════ SUBSTITUTION RULES — where {{var}} is and isn't allowed ═══════════════════════════════════════════════════════════════════ `extract` values come in two flavours and they substitute very differently: TYPE A — JS-function generator (`extract: { genUserId: "function genUserId(){return 'apple_'+Date.now()}" }`): The runtime STORES the source string and RE-RUNS the function on EVERY `{{genUserId}}` substitution site. Each call returns a fresh value. ONLY safe in POST / PUT / PATCH request BODIES — that's the one place you actually want a fresh value (uniqueness for inserts). TYPE B — JSONPath extract (`extract: { createdId: "$.order.user_id" }`): Evaluated ONCE against the step's recorded response, the resulting STRING is stored. Subsequent `{{createdId}}` substitutions resolve to that same fixed string. Safe everywhere — URLs, assertions, downstream bodies. ALLOWED placements for `{{generatorFn}}` (TYPE A): ✓ POST / PUT / PATCH request body — the canonical "give me a fresh value to insert" use case. FORBIDDEN placements for `{{generatorFn}}` (TYPE A) — the validator REJECTS these (generator-placement checks): ✗ `assert[*].expected` — the assertion's expected value will be a fresh function call, NOT the value the body sent. Static literals or `{{TYPE_B_extract}}` only. ✗ GET / DELETE / HEAD / PUT / PATCH URL (path or query) — those target an existing resource; the URL must encode the SAME id the creating POST used. Use a TYPE B extract from the creating step. ✗ Path/query of a downstream step's URL when the value should match what the upstream step inserted — same reason. CANONICAL PATTERN — read-after-write with a stable id: Step 0 (POST creates resource): body: {"user_id":"{{genUserId}}","name":"widget"} // TYPE A in body — fresh per run, OK extract: {"genUserId":"function genUserId(){...}", // TYPE A — body source "createdId":"$.order.user_id"} // TYPE B — capture server's stored id assert: [{type:"status_code", expected:"201"}, {type:"json_equal", key:"$.order.id", expected:"{{createdId}}"}] // TYPE B in assertion — stable Step 1 (GET reads it back): url: "/api/orders?user_id={{createdId}}" // TYPE B in GET URL — stable assert: [{type:"json_equal", key:"$.orders.0.user_id", expected:"{{createdId}}"}] // TYPE B — stable DO NOT do this (the validator will reject it): Step 0: body: {"user_id":"{{genUserId}}"} assert: [{type:"json_equal", key:"$.order.user_id", expected:"{{genUserId}}"}] // generator-placement check — TYPE A in assertion Step 1: url: "/api/orders?user_id={{genUserId}}" // generator-placement check — TYPE A in GET URL Name-collision check — do NOT pick an `extract` key that already exists on the app's appLevelCustomVariables. Use get_app_testing_context (or check the app payload) to enumerate them first; if a collision is unavoidable, scope-suffix your key (e.g. `genUserId_smokeTest`). Otherwise the runtime resolves the app-level variable first and silently shadows the suite's extract. You can also construct steps from data fetched via download_recording or get_app_testing_context, but the validate-locally-before-inserting rule still applies. CRITICAL — READING EXISTING SUITES: when the data you fetch via getTestSuite / get_app_testing_context / download_recording shows steps with `extract_variables` populated, that's the runtime's POST-EXECUTION SNAPSHOT (resolved values the runner wrote back for UI display). It is NOT what was authored. Treating that field as a copy-paste template makes the validator's extract_variables-input rejection reject every suite you produce. To replicate the authored behavior: copy every entry into `extract` instead, preserving keys and values verbatim (JS-function strings stay JS, JSONPath strings stay JSONPath). When in doubt: `extract_variables` is read-only output state; `extract` is input. ===== FROM-SCRATCH SCOPE RULE ===== When the dev asks to "generate / create / add / build keploy tests" without narrowing the scope, DEFAULT = ALL ENDPOINTS. Enumerate every non-trivial endpoint the app exposes (OpenAPI spec, router code, handler files) and author ONE suite per logical grouping — e.g. "user-crud", "auth-flow", "order-happy-path", "order-validation-errors". A single-endpoint app might produce one suite; a typical microservice produces 3-8. Groupings should be READ-AFTER-WRITE coherent (each suite's steps chain via extract variables rather than depending on outside state). TELL THE DEV up-front how many suites you're about to create and what each covers, then proceed — do NOT ask for confirmation mid-flow. If the dev explicitly narrows it ("just the happy path for orders", "only the auth flow"), honor that. ===== HARD RULE — NO DB-STATE-DEPENDENT STEPS ===== Do NOT include any step whose response body depends on total DB / queue / file-system state. Concretely: GET /items with no filter, GET /orders with no user_id filter, any "list-all" / "count" / "search" that returns more rows the longer the app has been running, any endpoint returning the CURRENT time / UUID / request-id. The auto-replay after record byte-compares the recorded response with what the live app returns under mocks, and non-deterministic bodies make gate 2 skip → the suite is never linked → sandbox replay fails with "no sandboxed tests". If you find yourself reasoning "this list might vary slightly but the mock should handle it" — STOP and drop the step. The suite should contain ONLY steps whose response body is fully determined by that step's own request: health checks, create-with-fresh-ids, read-back-by-id for the ids you just minted, validation-error 400s on bad payloads. Filtered reads using a freshly-extracted id are fine; unfiltered reads are not. ===== SERVER-SIDE IDEMPOTENCY ENFORCEMENT ===== This tool REJECTS (with a typed error, before creating anything) any suite whose mutating steps aren't idempotent on re-run. The rule: For each step with method ∈ {POST, PUT, PATCH} and a non-empty body, the body MUST contain at least one `{{name}}` placeholder that resolves to a per-run dynamic value. "Dynamic" means one of: (a) a JS-function entry in any step's `extract` map (e.g. {"genUserId": "function genUserId(){ return 'u_'+Date.now()+'_'+Math.random().toString(36).slice(2,8); }"}), OR (b) a JSONPath extract output from an EARLIER step (transitively dynamic if that step's own body was idempotent). If the endpoint is GENUINELY idempotent (e.g. POST /auth/refresh, PUT /tags/apply-same-input — repeat calls don't hit unique constraints) set `"idempotent": true` on the step to waive the check. Use this sparingly — the default is "assume it'll collide" because that's the common case. On rejection the error names the offending step and lists the dynamic variable names already in scope so you can see what's wireable. Fix the suite and retry — do NOT just flip `idempotent: true` to bypass. ===== MANDATORY OUTPUT — Phase 1 section ===== After all create_test_suite calls in a FROM-SCRATCH flow succeed, your final message to the dev MUST contain a section with this exact heading (do NOT collapse into prose; emit even for a single suite): ### Phase 1 — Inserted suites | Suite name | suite_id | Step count | | --- | --- | --- | | <name> | <suite_id> | <N> | One row per suite created in this flow. Next step after Phase 1 is record_sandbox_test (see its description for Phase 2). ===== HOW THIS TOOL ACTUALLY INSERTS THE SUITE ===== This tool DOES NOT POST the suite to api-server itself. It returns a "playbook" — a small array of shell steps for you (Claude) to walk via Bash. The playbook spawns the enterprise CLI `keploy create-test-suite` which: 1. Reads the suite JSON the playbook wrote to disk. 2. Runs every static structural check — exits 1 with violations on stdout if anything fails. 3. Fires the suite against the dev's local app TWICE (idempotency check) — exits 1 if the second run diverges from the first. 4. Runs dynamic checks (generator-dynamism + GET-coupling) — exits 1 with violation messages on failure. 5. POSTs the validated suite to api-server (HTTP 201 → success; HTTP 426 → CLI is older than api-server's rule set, dev needs to upgrade `keploy`). Walk the playbook in order. If step 2 (the CLI run) exits non-zero, surface its stdout to the dev — it lists the offending step / check / fix-it hint and includes a canonical step skeleton on structural failures. ITERATE LOCALLY: revise the JSON in your draft, REWRITE the same suite file via Bash, and RE-RUN step 2 directly. DO NOT call create_test_suite again per iteration — that mints a fresh playbook and a new nonce-path for no reason; the existing one is reusable. The CLI ALSO requires every step to have `response` and `extract` populated (step completeness check plus the validate-locally rules above), so the validate-locally curl flow described earlier is still required BEFORE calling this tool. PREREQUISITES the playbook assumes: * The dev's app is up and reachable at app_url. * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`. * Either ~/.keploy/cred.yaml exists (API key) or KEPLOY_API_KEY is exported. The CLI uses the API key for the api-server POST (different from the OAuth-JWT path the sandbox tools use).
    Connector
  • Create a new funnel on a project. Steps are 2–10 ordered events or pageview paths. conversionWindowMs caps how long a visitor has between consecutive steps (default 7 days); this is the step-to-step limit, without which a funnel is just event co-occurrence. Returns { id } on success.
    Connector
  • Composite server-side investigation tool. Pass a question and the server automatically: (1) detects intent (aggregation/temporal/ordering/knowledge-update/recall), (2) queries the entity index for structured facts, (3) builds a timeline for temporal questions, (4) retrieves memory chunks with the right scoring profile, (5) expands context around sparse hits, (6) derives counts/sums for aggregation, (7) assesses answerability, and (8) returns a recommendation. Use this as your FIRST tool for any non-trivial question — it does the multi-step investigation that would otherwise take 4-6 individual tool calls. The response includes structured facts, timeline, retrieved chunks, derived results, answerability assessment, and a recommendation for how to answer.
    Connector