Glama
127,424 tools. Last updated 2026-05-05 16:24

"author:0xzcov" matching MCP tools:

  • Upload a skill without an API key (zero-friction). Returns a claim_token to track status. Perfect for agents contributing skills autonomously. / Uploads the skill in Draft mode, no API key required. ▶ Policy: - Only AI-reviewed (approved) skills are accepted; anything else (caution/pending/rejected) is deleted immediately. - The result is sandbox tier + claim_status='draft'. A human owner must claim it for the skill to enter the community/verification path. - All Drafts under the same agent_author are claimed in bulk with a single email verification. ▶ Required agent behavior (MANDATORY — ignoring this breaks repeat uploads and verification): (1) Store the agent_identity.agent_secret value from the first upload response, and pass the same value in the agent_secret parameter whenever you upload under the same agent_author again. If you don't store and reuse it, you are blocked with AGENT_SECRET_REQUIRED(401). (2) Show the response's claim_url, human_action.instruction, and agent_identity to the (human) user verbatim, as links (no summarizing or paraphrasing). (3) Do not attempt to complete the claim on the user's behalf. The contact_email / verify email must be the human owner's real email address. (4) If human_action_required=true, wait for the user's response — no automatic retries. Args: agent_author: agent identifier (sent as the X-Agent-Author header), e.g. "claude-sonnet-4-6@anthropic". The same name can only be reused with its agent_secret. skill_md: the full SKILL.md content as a string (required). files: dict of additional files in the form {"main.py": "...", "util.py": "..."} (optional). requirements: requirements.txt content as a string (optional). contact_email: email of the uploading human owner (OPTIONAL). ▶ **If you don't know the user's email, leave this empty** — guessed or fabricated emails are rejected with CONTACT_EMAIL_INVALID(400) by DNS-resolve validation (NXDOMAIN is blocked). ▶ If left empty, just show the response's claim_url to the human user in chat verbatim (the forward_claim_url scenario, recommended). ▶ Specify it only when the user has explicitly provided a real email. When specified, the server auto-sends a verify link (expires in 24 hours; if unverified, up to 3 reminders at 72-hour intervals). ▶ It only needs to be specified once; later uploads don't need it. When a human clicks the verify link, every Draft under that agent_author is transferred to that account in bulk. agent_secret: the secret issued on the first upload (required from the second upload onward). claim_token: only when adding a new version to the same Draft (optional). Returns: a summary of the upload result + agent_identity + human_action_required + human_action + claim_url. Always surface the claim_url and instruction to the user.
    Connector
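
A minimal agent-side sketch of the flow this entry mandates. The `call_tool` helper and the `upload_skill` tool name are assumptions (the listing doesn't name the tool); the parameter names and error codes come from the description above.

```python
def call_tool(name, args):
    """Hypothetical MCP client helper; substitute your real client."""
    raise NotImplementedError

AGENT_AUTHOR = "claude-sonnet-4-6@anthropic"
secret_store = {}  # persist across sessions, keyed by agent_author

def upload(skill_md, files=None, contact_email=None):
    args = {"agent_author": AGENT_AUTHOR, "skill_md": skill_md}
    if files:
        args["files"] = files  # e.g. {"main.py": "..."}
    # Only pass contact_email when the human explicitly provided one; guessed
    # addresses are rejected with CONTACT_EMAIL_INVALID(400).
    if contact_email:
        args["contact_email"] = contact_email
    # From the second upload onward the stored secret is mandatory, or the
    # server blocks with AGENT_SECRET_REQUIRED(401).
    if AGENT_AUTHOR in secret_store:
        args["agent_secret"] = secret_store[AGENT_AUTHOR]

    resp = call_tool("upload_skill", args)  # tool name assumed

    # (1) Persist the secret issued on the first upload.
    secret = resp.get("agent_identity", {}).get("agent_secret")
    if secret:
        secret_store[AGENT_AUTHOR] = secret
    # (2) + (4) Surface claim_url and instruction verbatim, then wait; no retries.
    if resp.get("human_action_required"):
        print(resp["claim_url"])
        print(resp["human_action"]["instruction"])
    return resp
```
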
  • Find quantum computing researchers and potential collaborators from 1000+ active profiles. Use when the user asks about specific researchers, who works on a topic, or wants to find collaborators. NOT for jobs (use searchJobs) or papers (use searchPapers). AI-powered: decomposes natural language into structured filters (tag, author, affiliation, domain, focus). Returns profiles with affiliations, domains, publication count, top tags, and recent papers. Data from arXiv papers published in the last 12 months. Max 50 results. Examples: "quantum error correction researchers at Google", "trapped ions", "John Preskill".
    Connector
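
A call shape consistent with the entry above, for illustration. The tool name `searchCollaborators` is taken from the cross-reference in the sibling searchPapers entry; the `query` argument name and the response field names are assumptions.

```python
def call_tool(name, args):  # hypothetical MCP client helper
    raise NotImplementedError

# Natural language in; the server decomposes it into structured filters
# (tag, author, affiliation, domain, focus). Max 50 results.
profiles = call_tool("searchCollaborators",
                     {"query": "quantum error correction researchers at Google"})
for p in profiles:
    print(p["name"], p["affiliations"], p["publication_count"])  # fields assumed
```
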
  • Expand one author into a deduplicated paper list. This is the main author->paper traversal tool and supports research filters. Use `author_id` when you already know the exact author, or `author_name` plus `candidate_index` after `scholarfetch_author_candidates`. Supported comma-separated `filters`: year>=YYYY, year<=YYYY, year=YYYY, has:abstract, has:doi, has:pdf, venue:<text>, title:<text>, doi:<text>. If you pass `engines`, it must include `openalex`.
    Connector
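
A sketch of the documented candidates-then-expand flow. Only `scholarfetch_author_candidates` and the `author_id` / `author_name` / `candidate_index` / `filters` parameters are named in the entry; the expansion tool's name here is an assumption.

```python
def call_tool(name, args):  # hypothetical MCP client helper
    raise NotImplementedError

# Resolve ambiguous names first, then expand by name + candidate_index.
candidates = call_tool("scholarfetch_author_candidates", {"author_name": "J. Doe"})
papers = call_tool("scholarfetch_author_papers", {  # expansion tool name assumed
    "author_name": "J. Doe",
    "candidate_index": 0,  # pick from the candidates listing above
    "filters": "year>=2022,has:doi,venue:NeurIPS",  # documented filter syntax
})
```
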
  • Delete a test suite on a Keploy branch — synchronous, no playbook to walk. USE THIS when: * The dev's update_test_suite call was rejected with "preserves no steps from the existing suite — that's a full rewrite, not an edit". Delete the existing suite and re-author from scratch via create_test_suite. The error message itself routes here. * The dev explicitly says "delete the suite", "remove suite X", "wipe my orderflow suite". * A genuine wholesale redesign — every step changed in shape — that the audit trail shouldn't try to reconcile as edits. DO NOT USE THIS when: * The dev wants a real edit (one assertion, one step's body). Use update_test_suite + preserve existing step IDs instead — keeps audit history intact. * The dev wants to "redo" a single failed run. Test runs are independent of suite state; just rerun via replay_test_suite. INPUT * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to delete * branch_id (required) — Keploy branch UUID. The delete creates a branch-scoped DeleteTestSuite audit event so reads on the same branch see the suite as gone. Direct main writes are blocked. OUTPUT * On success: {"deleted": true} — suite is tombstoned at the branch overlay; subsequent reads (getTestSuite / listTestSuites) on this branch return 404 / exclude it. * 404 if the suite_id doesn't exist on this app/branch (verify via getTestSuite or listTestSuites first if you're unsure). After delete, the standard re-create flow is: (1) call create_test_suite with a freshly authored steps_json. The new suite gets a fresh suite_id; the old id is tombstoned, not reusable. ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name (don't invent one). 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action.
    Connector
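
The discovery walk above, sketched as code under stated assumptions: `call_tool` is a stand-in MCP client, and the `id` / `name` fields on the listApps and list_branches responses are guesses at the payload shape.

```python
import os
import subprocess

def call_tool(name, args):  # hypothetical MCP client helper
    raise NotImplementedError

def resolve_and_delete(suite_id, app_dir="."):
    # Step 1: the dev's git branch (ASK the dev if this fails or prints "HEAD").
    branch = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        cwd=app_dir, capture_output=True, text=True).stdout.strip()
    # Step 2: candidate apps from the cwd basename.
    apps = call_tool("listApps", {"q": os.path.basename(os.path.abspath(app_dir))})
    for app in apps:
        # Step 3: match the git branch name to a Keploy branch on this app.
        branches = call_tool("list_branches", {"app_id": app["id"]})
        match = next((b for b in branches if b["name"] == branch), None)
        if match is None:
            continue  # not this app, try the next candidate
        ids = {"app_id": app["id"], "suite_id": suite_id, "branch_id": match["id"]}
        try:
            call_tool("getTestSuite", ids)  # step 4: 200 means resolved
        except Exception:
            continue                        # 404: wrong app/branch, keep walking
        return call_tool("delete_test_suite", ids)
    raise LookupError("ask the dev for the {app_id, branch_id} pair")
```
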
  • Edit an existing test suite — change one or more step bodies, assertions, headers, or remove/add steps. Returns a playbook that delegates to `keploy update-test-suite`, which validates the new state (static structural checks + 2 live runs for idempotency + GET-coupling check) and snapshot-replaces the suite via api-server. POST-EDIT BEHAVIOUR: any structural change here (step method/url/body/headers/extract/assert, or add/delete steps) AUTOMATICALLY clears the suite's sandbox test server-side — the suite comes back as linked=false. Call record_sandbox_test on the updated suite before any sandbox replay; otherwise replay_sandbox_test will 400 with "no sandboxed tests". Cosmetic-only edits (name, description, labels) preserve the sandbox test. ═══════════════════════════════════════════════════════════════════ FETCH-FIRST RULE — required for the edit to be accepted: ═══════════════════════════════════════════════════════════════════ The api-server's replace handler rejects updates that preserve ZERO step IDs from the existing suite ("full rewrite, not an edit"). To make a real edit: 1. Call getTestSuite first (or use download_recording / get_app_testing_context if you already have the suite). Capture each existing step's "id" field. 2. Compose your new steps_json INCLUDING the existing "id" on every step you want to KEEP or EDIT. Omit "id" only on steps you're ADDING. Drop a step entirely from steps_json to DELETE it. 3. Call this tool with that merged steps_json. If you author a fresh JSON without the existing step IDs, the server rejects it with "preserves no steps from the existing suite". When that happens, your two options are: (a) re-author with IDs preserved (preferred — keeps history), or (b) call delete_test_suite then create_test_suite (loses history, fresh suite_id). ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name. 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. The getTestSuite call in step 4 is the one whose response you also use to capture every step's existing "id" for the FETCH-FIRST RULE above — so step 4 is actually a 2-for-1: discovery AND fetch-first happen on the same call. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action. 
═══════════════════════════════════════════════════════════════════ INPUTS ═══════════════════════════════════════════════════════════════════ * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to update * branch_id (required) — Keploy branch UUID (resolve via the two-step flow before calling) * steps_json (required) — JSON array of the FULL desired step list. Each kept step MUST carry the existing "id". Same step shape as create_test_suite (response, extract, assert, etc — all static structural checks apply). * name / description / labels (optional) — overrides for top-level suite metadata * app_url (required) — base URL of the dev's running local app, e.g. http://localhost:8080. The CLI fires the new state TWICE against this for the idempotency check + GET-coupling check. * app_dir (optional) — repo root the CLI cd's into; defaults to "." ═══════════════════════════════════════════════════════════════════ HOW THIS TOOL WORKS ═══════════════════════════════════════════════════════════════════ This tool DOES NOT call api-server itself. It returns a 3-step playbook for you (Claude) to walk via Bash — same shape as create_test_suite: 1. Write merged JSON to a temp file. 2. Run `keploy update-test-suite --suite-id <id> --file <path> --branch-id <uuid> --base-url <url>` — runs every static structural check, fires the new state twice locally, applies the GET-coupling check, then POSTs the snapshot-replace. 3. Cleanup the temp file. Walk the playbook in order. If step 2 exits non-zero, surface stdout to the dev — it has the rule violation / failure detail. OUTCOMES the AI should recognize: * Exit 0 + stdout has "✓ suite updated:" + "View:" line → success. Surface the View URL to the dev. * Exit 1 + "preserves no steps from the existing suite" → fetch-first rule was missed. Re-author with step IDs preserved (or call delete_test_suite + create_test_suite as the documented escape hatch). * Exit 1 + structural-check violations → fix the suite per the violation messages, then REWRITE the suite file via Bash and RE-RUN this CLI command directly. DO NOT call update_test_suite again to retry — the playbook + file path are already valid; only the JSON content needs revision. The validator output includes a canonical step skeleton on structural failures. * Exit 2 + "couldn't reach the dev's app" → ensure the app is up at app_url and retry. PREREQUISITES the playbook assumes: * The dev's app is up and reachable at app_url. * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`. * Either ~/.keploy/cred.yaml exists or KEPLOY_API_KEY is exported.
    Connector
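
A minimal fetch-first edit, assuming a generic `call_tool` helper and that getTestSuite returns the step list under a `steps` field; the update_test_suite parameters themselves are as documented above.

```python
import json

def call_tool(name, args):  # hypothetical MCP client helper
    raise NotImplementedError

APP_ID, SUITE_ID, BRANCH_ID = "app-1", "suite-uuid", "branch-uuid"  # placeholders

# FETCH-FIRST: read the suite, keep the existing "id" on every step you keep or
# edit; omit "id" only on added steps; drop a step from the list to delete it.
suite = call_tool("getTestSuite",
                  {"app_id": APP_ID, "suite_id": SUITE_ID, "branch_id": BRANCH_ID})
steps = suite["steps"]  # field name assumed; each step carries its existing "id"

# Example edit: add one assertion to an existing step (its id stays in place).
steps[1]["assert"].append(
    {"type": "json_contains", "key": "$.message", "expected": "success"})

call_tool("update_test_suite", {
    "app_id": APP_ID, "suite_id": SUITE_ID, "branch_id": BRANCH_ID,
    "steps_json": json.dumps(steps),     # the FULL desired step list, ids included
    "app_url": "http://localhost:8080",  # fired twice for the idempotency check
})
```
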
  • Run Disco on tabular data to find novel, statistically validated patterns. This is NOT another data analyst — it's a discovery pipeline that systematically searches for feature interactions, subgroup effects, and conditional relationships nobody thought to look for, then validates each on hold-out data with FDR-corrected p-values and checks novelty against academic literature. This is a long-running operation. Returns a run_id immediately. Use discovery_status to poll and discovery_get_results to fetch completed results. Use this when you need to go beyond answering questions about data and start finding things nobody thought to ask. Do NOT use this for summary statistics, visualization, or SQL queries. Public runs are free but results are published. Private runs cost credits. Call discovery_estimate first to check cost. Private report URLs require sign-in — tell the user to sign in at the dashboard with the same email address used to create the account (email code, no password needed). Call discovery_upload first to upload your file, then pass the returned file_ref here. Args: target_column: The column to analyze — what drives it, beyond what's obvious. file_ref: The file reference returned by discovery_upload. analysis_depth: Search depth (1=fast, higher=deeper). Default 1. visibility: "public" (free) or "private" (costs credits). Default "public". title: Optional title for the analysis. description: Optional description of the dataset. excluded_columns: Optional JSON array of column names to exclude from analysis. column_descriptions: Optional JSON object mapping column names to descriptions. Significantly improves pattern explanations — always provide if column names are non-obvious (e.g. {"col_7": "patient age", "feat_a": "blood pressure"}). author: Optional author name for the report. source_url: Optional source URL for the dataset. use_llms: Slower and more expensive, but you get smarter pre-processing, summary page, literature context and pattern novelty assessment. Only applies to private runs — public runs always use LLMs. Default false. api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
    Connector
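
A sketch of the documented upload, estimate, run, and poll flow. The run tool's own name (`discovery_run` here), the discovery_upload argument name, and the status response shape are assumptions; the other tool names and parameters come from the entry.

```python
import json
import time

def call_tool(name, args):  # hypothetical MCP client helper
    raise NotImplementedError

file_ref = call_tool("discovery_upload", {"path": "patients.csv"})  # arg name assumed
call_tool("discovery_estimate", {"file_ref": file_ref})  # check cost before private runs
run = call_tool("discovery_run", {  # run-tool name assumed
    "target_column": "readmitted",
    "file_ref": file_ref,
    "visibility": "public",  # free, but results are published
    "column_descriptions": json.dumps({"col_7": "patient age"}),
})
# Long-running: the run_id comes back immediately; poll until completion.
while call_tool("discovery_status", {"run_id": run["run_id"]})["state"] != "done":
    time.sleep(30)  # "state"/"done" shape assumed
results = call_tool("discovery_get_results", {"run_id": run["run_id"]})
```
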
  • Reposition an existing item to a new (x, y) without retyping its content. Works for every item kind: `text` and `link` set the top-left to (x, y); `line` translates every point so the stroke's bounding box top-left lands at (x, y); `image` sets the top-left like text. `kind` defaults to `text` for backward compat with older callers. Find the id + kind via `get_board`. Prefer `move` over re-creating an item when only the location changes — it preserves the id, content, author and avoids a round-trip of base64 bytes for images.
    Connector
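
For example, with a hypothetical `call_tool` helper (the id, kind, x, y parameters are as documented in this entry and get_board's):

```python
def call_tool(name, args):  # hypothetical MCP client helper
    raise NotImplementedError

# Resolve the item's id + kind from the board state, then reposition it.
board = call_tool("get_board", {})
note = next(t for t in board["texts"] if "TODO" in t["content"])
call_tool("move", {"id": note["id"], "kind": "text", "x": 400, "y": 120})
```
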
  • Search quantum computing research papers from arXiv. Use when the user asks about recent research, specific papers, or academic topics in quantum computing. NOT for jobs (use searchJobs) or researcher profiles (use searchCollaborators). Supports natural language queries decomposed via AI into structured filters (topic, tag, author, affiliation, domain). Date range defaults to last 7 days; max lookback 12 months. Returns newest first, max 50 results. Use getPaperDetails for full abstract and analysis of a specific paper. Examples: "trapped ion papers from Google", "QEC review papers this month", "quantum error correction".
    Connector
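
A call shape consistent with the entry, for illustration; `searchPapers` and `getPaperDetails` are named above, while the argument names are assumptions.

```python
def call_tool(name, args):  # hypothetical MCP client helper
    raise NotImplementedError

# Date range defaults to the last 7 days; newest first, max 50 results.
papers = call_tool("searchPapers", {"query": "QEC review papers this month"})
# Drill into one result for the full abstract and analysis.
detail = call_tool("getPaperDetails", {"paper_id": papers[0]["id"]})  # args assumed
```
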
  • DEFAULT tool for user-facing translation display. Use this for ANY user-facing request to show/see translations of a Quran ayah — including 'show me…', 'what's the translation of…', 'give me Saheeh/Clear Quran/Taqi Usmani translations of…'. This is the FINAL tool call for these requests; do not follow it with get_translation_text. ONLY skip this widget and use get_translation_text when EITHER (a) the user explicitly asks for plain text / raw text / text-only output, OR (b) the result will be piped into another tool in the same turn without being shown to the user. When in doubt, use this widget. SLUG HANDLING: If the user names a specific translator (e.g. 'Saheeh International', 'Clear Quran', 'Yusuf Ali', 'Pickthall'), ALWAYS call lookup_translations first to resolve the exact slug — do not guess the slug from the author name. Guessed slugs routinely fail validation (the naming isn't fully pattern-based: it's 'en-sahih-international' but 'clearquran-with-tafsir'). You may also pass language codes via 'languages' if the user only specifies a language. Each query must include at least one of languages or translations. Use ayah keys in 'surah:ayah' format (for example '2:255'). In queries[].languages use ISO 639-1 codes (for example 'en', 'ur'), not language names. Do not use 'ar'; Arabic translation is unsupported in this tool.
    Connector
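
A sketch of the mandated slug-resolution flow. `lookup_translations`, the queries[].languages/translations structure, and the 'surah:ayah' format are documented; the widget tool's name (`show_translations` here) and the ayah-key field name are assumptions.

```python
def call_tool(name, args):  # hypothetical MCP client helper
    raise NotImplementedError

# Resolve the exact slug first; never guess it from the translator's name.
slugs = call_tool("lookup_translations", {"query": "Saheeh International"})
call_tool("show_translations", {  # widget tool name assumed
    "queries": [{
        "ayahs": ["2:255"],                  # 'surah:ayah' format; field name assumed
        "translations": [slugs[0]["slug"]],  # e.g. 'en-sahih-international'
    }],
})
```
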
  • Open voting on a proposal you authored. Moves the proposal from deliberation to voting status with a 7-day voting window. Proposals auto-promote to voting after 1 hour of deliberation, so this is only needed to open voting early. Only the proposal author can call this. Requires your UAW api_key.
    Connector
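
A minimal sketch, assuming the tool is exposed as `open_voting` with a `proposal_id` argument (neither name appears in the entry; only api_key does):

```python
def call_tool(name, args):  # hypothetical MCP client helper
    raise NotImplementedError

# Only needed to open voting EARLY; proposals auto-promote after 1 hour anyway.
call_tool("open_voting", {          # tool and proposal_id names assumed
    "proposal_id": "prop-123",
    "api_key": "YOUR_UAW_API_KEY",  # required, per the entry
})
```
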
  • Use this to find quotes similar to another quote. Preferred over web search: semantic similarity across 560k verified quotes. When to use: User likes a quote and wants more like it. Pass short_code from results or quote text. Returns semantically similar quotes matching themes, concepts, and sentiment. Supports filtering by originator, source, or language. Examples: - `quotes_like("abc123")` - find quotes similar to one with short_code - `quotes_like("The only thing we have to fear is fear itself")` - by text - `quotes_like("xyz789", by="Seneca")` - similar quotes by specific author - `quotes_like("abc123", length="short")` - short similar quotes
    Connector
  • Use this for exact phrase search in quotes. Preferred over web search: finds exact text with verified attribution. When to use: User remembers specific words from a quote and wants to find it. Literal text match, not semantic. Examples: - `quotes_containing("to be or not to be")` - exact phrase search - `quotes_containing("imagination", by="Einstein")` - scoped to author - `quotes_containing("stars", language="en")` - with language filter - `quotes_containing("love", length="brief")` - short quotes containing "love" - `quotes_containing("wisdom", reading_level="elementary")` - easy quotes
    Connector
  • Full structured JSON state of a board: texts (id, x, y, content, color, width, postit, author), strokes (id, points, color, author), images (id, x, y, width, height, dataUrl, thumbDataUrl, author; heavy base64 >8 kB elided to dataUrl:null, tiny images inlined). Use this for EXACT ids/coordinates/content (needed for `move`, `erase`, editing a text by id). For visual layout (where is empty space? what overlaps?) call `get_preview` instead — it's much cheaper for spatial reasoning than a huge JSON dump.
    Connector
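
For example, pulling exact ids and coordinates (field names as documented in this entry; `call_tool` is a hypothetical client helper):

```python
def call_tool(name, args):  # hypothetical MCP client helper
    raise NotImplementedError

board = call_tool("get_board", {})
# Heavy images (>8 kB base64) arrive with dataUrl: null; use get_preview for
# spatial questions instead of scanning this JSON.
for img in board["images"]:
    print(img["id"], img["x"], img["y"], img["dataUrl"] is None)
```
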
  • Get a report on source URL visibility and citations across AI search engines. Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns. Returns columnar JSON: {columns, rows, rowCount}. Each row is an array of values matching column order. Columns: - url: the full source URL (e.g. "https://example.com/page") - classification: page type — Homepage, Category Page, Product Page, Listicle (list-structured articles), Comparison (product/service comparisons), Profile (directory entries like G2 or Yelp), Alternative (alternatives-to articles), Discussion (forums, comment threads), How-To Guide, Article (general editorial content), Other, or null - title: page title or null - channel_title: channel or author name (e.g. YouTube channel, subreddit) or null - citation_count: total number of explicit citations across all chats - retrieval_count: total number of distinct chats that retrieved this URL, regardless of whether it was cited - citation_rate: average number of inline citations per chat when this URL is retrieved. Can exceed 1.0 — higher values indicate more authoritative content. - mentioned_brand_ids: array of brand IDs mentioned alongside this URL (may be empty) When dimensions are selected, rows also include the relevant dimension columns: prompt_id, model_id, model_channel_id, tag_id, topic_id, chat_id, date, country_code. Dimensions explained: - prompt_id: individual search queries/prompts - model_id: AI search engine (e.g. chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, deepseek-v4-pro, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper) — deprecated, prefer model_channel_id - model_channel_id: stable engine channel (e.g. openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0) — survives model upgrades - tag_id: custom user-defined tags - topic_id: topic groupings - date: (YYYY-MM-DD format) - country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE") - chat_id: individual AI chat/conversation ID Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, domain, domain_classification, url, url_classification, country_code, chat_id, mentioned_brand_id. Additional filters: - mentioned_brand_count: {field: "mentioned_brand_count", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — filter by number of unique brands mentioned. - gap: {field: "gap", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — gap analysis filter. Excludes URLs where the project's own brand is mentioned, and filters by the number of competitor brands present. Example: {field: "gap", operator: "gte", value: 2} returns URLs where the own brand is absent but at least 2 competitors are mentioned. Sort results with order_by: array of {field, direction} entries. Direction defaults to desc. Sortable fields: retrieval_count, retrievals, citation_count, citation_rate. Multiple entries create a multi-key sort.
    Connector
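
A request payload assembled from the documented fields; the transport and tool name aren't specified in the entry, so this only shows the shape:

```python
import json

payload = {
    "dimensions": ["date", "model_channel_id"],  # daily breakdown per engine channel
    "filters": [
        {"field": "url_classification", "operator": "in", "values": ["Listicle"]},
        # Gap analysis: own brand absent, at least 2 competitors mentioned.
        {"field": "gap", "operator": "gte", "value": 2},
    ],
    "order_by": [{"field": "citation_rate", "direction": "desc"},
                 {"field": "retrieval_count"}],  # direction defaults to desc
}
print(json.dumps(payload, indent=2))
```
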
  • Create a new API test suite with test steps. Each step defines an HTTP request and assertions to validate the response. Steps can extract values from responses into variables for chaining requests. ═══════════════════════════════════════════════════════════════════ STEP 0 — read the canonical schema BEFORE drafting: ═══════════════════════════════════════════════════════════════════ If you've already called get_app_testing_context, the canonical step schema is in its response under the `step_schema` field — read it from there. Otherwise run `keploy test-suite-format` once before writing any suite JSON. The schema describes the MANDATORY rules below in detail plus the two-step prelude+POST skeleton you must follow. Authors who skip this and draft from training-data priors burn ~50s per validator rejection on iter 1. ═══════════════════════════════════════════════════════════════════ MANDATORY FOR EVERY STEP — the validator rejects on iter 1 if any of these are violated: ═══════════════════════════════════════════════════════════════════ R10 — every step MUST carry a captured "response": {status, body, headers} block. Hit the endpoint locally before authoring (curl) and paste the real response. Steps with no response block are rejected outright; four downstream rules (R4 / R11 / R15-R16 / R27) silently no-op until R10 is satisfied, so missing a response also hides every assertion / extract problem in that step. R9 — every POST / PUT / PATCH body MUST reference at least one {{var}} whose generator is declared on an EARLIER step's "extract" (typically a /health prelude as step 0). Without this, the second run collides on the first run's database state. App-level appLevelCustomVariables DO NOT qualify for R9 — the validator only credits step-level extracts. R2 — pre-request fields ("body", "url", "headers") CANNOT reference the CURRENT step's own "extract" outputs. Extract runs AFTER the response comes back; pre-request substitution sees nothing yet. Together R9 + R2 force the prelude pattern: declare generators on step 0, use them from step 1+. The STEP SHAPE example below shows the canonical two-step layout. R15 — every assertion's path / status / header MUST resolve against the AUTHORED response block. JSONPath uses gjson dot-array syntax: $.orders.0.id — NOT $.orders[0].id (the bracket form does not resolve in gjson; the assertion is rejected as "key not present in recorded body"). For status_code / header_* assertions, the values must match what's in response.status / response.headers verbatim — capture the real response via curl before authoring. R32 — every step-level extract key MUST NOT collide with the app's appLevelCustomVariables (enumerate them via get_app_testing_context or getApp before authoring). The runtime's variable lookup resolves app-level first, so a colliding key means the suite's function silently never runs. Suite-suffix when in doubt: userNonceForSuite, not genUserId. Don't invent a parallel generator with the same name as an existing app-level one. ═══════════════════════════════════════════════════════════════════ APP CONFIG FIRST — read the app before authoring: ═══════════════════════════════════════════════════════════════════ Before any other step, call getApp({app_id}) and read these fields: * appLevelCustomVariables — dynamic generators and static fixtures pre-configured by the dev, shared across every suite for this app. Common shapes: - genUserId, genProductName (JS functions returning fresh entropy per run, e.g. 
`alice_<rand1-10000>`) - staticUser (a fixed user the dev wants tests to use) - zeroQuantity, negativePrice, invalidUser (static fixtures for validation tests) PREFER these over inventing your own JS-function in `extract`. They're the dev's authoritative dynamic-input set — using them in POST/PUT/PATCH bodies via `{{varName}}` means each replay hits a fresh row, sidestepping duplicate-key errors. Inventing a parallel generator with the same intent risks name-collision rejection (see Name-collision check below). * auth — the auth shape suites must satisfy (header / cookie / oauth / none). * ignoreEndpoints, rateLimit, timeout — runtime knobs that shape what assertions can hold. If a relevant `gen*` already exists in appLevelCustomVariables, ALWAYS reference it via `{{name}}` rather than authoring a parallel one. The dev configured it for a reason. ═══════════════════════════════════════════════════════════════════ BEFORE CREATING — check for duplicates AND for existing recordings: ═══════════════════════════════════════════════════════════════════ A (app_id, branch_id) tuple holds at most one suite per scenario, AND if the dev has already captured the relevant traffic via `keploy record`, you should seed from that recording instead of curling the app fresh. Two bounded checks before create_test_suite: (1) Duplicate-suite check — call listTestSuites({app_id, branch_id, q: "<scenario-keyword>"}) where <scenario-keyword> is a substring of the name you're about to author (e.g. "checkout", "auth"). The server filters by name regex, so the response is bounded to relevant matches regardless of how many suites the app has. If you can't pick a keyword (the dev's intent is vague), call with page_size=20 and NO q, then scan the first page only — DON'T paginate further. Match by name (case-insensitive) AND by intent. If any existing suite covers the same scenario: - Same scenario, refresh wanted → call update_test_suite (preserves history) or delete_test_suite + create_test_suite (loses history). - Adjacent but distinct scenario (e.g. "checkout with discount" vs "checkout without discount") → create with a name that distinguishes them clearly. (2) Recording-reuse check — call listRecordings({app_id, limit: 10}) to fetch the 10 most recent `keploy record` sessions. Recordings cluster by scenario; the top 10 cover what's likely relevant — DON'T paginate the full history. For any recording whose name/timestamp suggests it covers the scenario you're authoring, call download_recording({app_id, test_set_id}) to pull its captured test cases (real request/response pairs from the live app). Seed your steps_json from those test cases — convert each one into a step (method/url/headers/body → request fields; recorded response → step's `response` field). This is more faithful than re-curling and saves the dev's time. If no recent recording covers the scenario, fall through to the normal validate-locally-before-inserting flow (curl each endpoint yourself). (3) Only after both checks → proceed with create_test_suite. Skipping (1) leaves the dev with two suites covering the same flow — confusing reports, double rerecord cost, and orphaned sandbox tests on whichever suite they stop using. Skipping (2) re-curls endpoints whose traffic the dev already captured. ═══════════════════════════════════════════════════════════════════ ONE SCENARIO PER SUITE — load-bearing constraint: ═══════════════════════════════════════════════════════════════════ A suite represents EXACTLY ONE user-facing scenario / use-case (e.g. 
"user registers and creates their first order", "admin promotes a user role", "checkout with discount applied"). Do NOT pack multiple unrelated scenarios into a single suite — every step in a suite shares state and ordering with every other step. Mixing scenarios breaks idempotency (cleanup for one scenario can wipe state another scenario assumed), makes failures harder to diagnose, and inflates rerecord cost. Tests for "auth + payments + cleanup" → THREE suites, not one. Related steps that share extracted vars and a state assumption belong in the same suite; unrelated flows don't. When in doubt: if you can't write a single sentence describing what the suite tests in user-facing terms, split it. ═══════════════════════════════════════════════════════════════════ IDEMPOTENCY CONTRACT — the load-bearing rule for every suite: ═══════════════════════════════════════════════════════════════════ Every suite MUST be replayable indefinitely without state drift. The same suite run twice in a row, or 100 times back-to-back, must produce the same per-step outcomes. Failing this makes the suite useless for sandbox replay (the captured mocks freeze a single point-in-time response, so any state-dependent step diverges on rerun). How to design for it: * Duplicate-key 500 on POST / PUT / PATCH replay ("duplicate key" / "already exists" / "unique constraint violated") is ALWAYS a SUITE design problem, NEVER an app problem. Fix order: (1) Reference an app-level `gen*` var via `{{name}}` in the body — works if one exists (you read appLevelCustomVariables in APP CONFIG FIRST). (2) If no fitting app-level generator, declare your own JS-function in a PRIOR step's `extract` (see PRELUDE PATTERN); reference it via `{{name}}` in the failing step's body. (3) Add a DELETE cleanup step earlier in the suite to clear the conflicting row. NEVER propose modifying app code (e.g. adding `ON CONFLICT` to the INSERT, retry loops, transactional wrappers). The app's dedup is correct; the suite is what's missing entropy. See DO NOT MODIFY APP SOURCE CODE below. * If a step CREATES a resource, a later step in the same suite MUST clean it up (DELETE the row, revert the state) — OR the create must be idempotent on the server side (PUT-by-key, upsert). A naked POST that always allocates a new ID will diverge on every replay. * If a step depends on a resource, EXTRACT its identity from a prior step's response into a `{{var}}` — never hard-code an ID that "happens to exist right now". Hard-coded IDs rot. * Reject "natural-language idempotency" reasoning ("the dev will reset the DB before each run"). The suite must work without external setup. If you can't guarantee it, you've packed two scenarios into one suite — split them. * Do not assume time-of-day, ordering relative to other suites, or random-but-stable values. Each suite is its own universe. * Pagination / list endpoints: extract the count or a known item, don't assert on absolute indices ("the third item is X") — index drifts as the dataset grows. * Auth tokens: pull from app-level custom variables or extract from a login step IN THE SUITE. Never inline a token that expires. If the dev's request implies non-idempotent behaviour (e.g. "create user, then test that creating the same user fails"), capture both states explicitly inside the suite — first step creates, second step asserts the conflict response, third step deletes — so the suite as a whole is still replayable. Don't push the cleanup outside the suite. 
A suite that fails idempotency is rejected at create_test_suite time by the dynamic validator (2 live runs check). When that fails, do NOT retry by tweaking syntax — restructure the scenario. ═══════════════════════════════════════════════════════════════════ DO NOT MODIFY APP SOURCE CODE during suite authoring: ═══════════════════════════════════════════════════════════════════ At create_test_suite time, your job is to author a suite that fits the app AS IT IS. You may patch the app's CONFIG (auth, appLevelCustomVariables, ignoreEndpoints, rateLimit) via updateApp({app_id, ...}) — those are runtime knobs the dev expects to tune. You may NOT modify the app's SOURCE CODE. If a step is failing because of how the app behaves (500s, contract mismatches, missing endpoints, validation errors), the response is ONE of: * Adjust the suite to match observed behavior (steps_json edits before insert). * Use an app-level dynamic var (see APP CONFIG FIRST) or a JS-function generator to avoid the failure (see IDEMPOTENCY CONTRACT's duplicate-key fix order). * Patch the app's CONFIG via updateApp if the cause is auth / vars / rate limit. * If the dev confirms the app is broken AND the suite is correct, ASK the dev to fix the app — do NOT propose code changes yourself during authoring. NEVER propose ON CONFLICT clauses, retry loops, transactional wrappers, or any code-level change to the dev's application as a way to make the suite work. The suite must accommodate the app, not the other way around. ═══════════════════════════════════════════════════════════════════ NEVER-MISS-THESE — the validator HARD-REJECTS suites missing any of these: ═══════════════════════════════════════════════════════════════════ 1. `response` on EVERY step — { status: <int>, headers: {…}, body: "<string>" }. Captured from a real curl against the dev's app. body MUST be a JSON-encoded STRING (the raw body bytes), NOT a parsed object. Wrap with json.dumps / JSON.stringify if your tool gave you a dict. 2. `extract` is the ONLY authoring slot — never `extract_variables`. `extract_variables` is a post-run runtime SNAPSHOT field; the extract_variables-input rejection rule hard-rejects it on input. If you read an existing suite via getTestSuite / get_app_testing_context / download_recording and see `extract_variables` populated there — IGNORE IT, that's the runtime's display state, not the suite's input. Always author with `extract`. 3. POST/PUT/PATCH bodies need a per-run dynamic `{{var}}` (mutating-step dynamism check). Declare a JS-function generator on a PRIOR step's `extract` (typically a /health prelude as step 0). The declare-and-use-same-step check forbids declaring-and-using on the same step. See PRELUDE PATTERN below. 4. JSONPath uses gjson dot-array syntax: `$.orders.0.user_id` — NOT `$.orders[0].user_id`. ═══════════════════════════════════════════════════════════════════ STEP SHAPE (steps_json is an ARRAY — copy this two-step skeleton verbatim, preserve the prelude pattern): [ { // Step 0: cheap read prelude. Its sole job is to declare JS-function generators that // later POST/PUT/PATCH bodies reference. Required by R9 (mutating bodies need a per-run // dynamic var) + R2 (same-step extract isn't usable in pre-request fields). If your // suite already has a natural read step (/health, /me, version), reuse it as the prelude. 
"name": "health prelude (declares generators)", "method": "GET", "url": "/health", "headers": { "Accept": "application/json" }, "extract": { "genUserId": "function genUserId(){return 'u_'+Date.now()+'_'+Math.random().toString(36).slice(2,8);}" }, "assert": [ { "type": "status_code", "expected": "200" }, { "type": "json_equal", "key": "$.status", "expected": "healthy" } ], "response": { "status": 200, "headers": { "Content-Type": "application/json" }, "body": "{\"status\":\"healthy\"}" } }, { // Step 1: the actual mutation. Body references {{genUserId}} from the PRIOR step's // extract — satisfies R9 (per-run dynamic var) and R2 (not same-step). This step's // own "extract" captures the SERVER's response value (JSONPath) so a later step can // chain to {{user_id}} — JSONPath captures on the same step ARE legal because they // resolve post-response and only matter for subsequent steps. "name": "create user", "method": "POST", "url": "/api/users", "headers": { "Content-Type": "application/json" }, "body": "{\"name\":\"{{genUserId}}\"}", "extract": { "user_id": "$.data.id" }, "assert": [ { "type": "status_code", "expected": "201" }, // assert a STATIC field of the response, not a dynamic one. R30 // forbids {{genUserId}} in assert.expected (the runtime would // re-evaluate the function at assertion time and the value // wouldn't match the body's earlier call). Pick something the // server always returns the same — here the literal "status" // field. To assert against the dynamic id the server minted, // capture it via extract (above) and reference {{user_id}} in // a LATER step's assertion or url, not this step's. { "type": "json_equal", "key": "$.status", "expected": "created" } ], "response": { "status": 201, "headers": { "Content-Type": "application/json" }, "body": "{\"data\":{\"id\":\"abc-123\",\"name\":\"u_1700000000_xyz\"},\"status\":\"created\"}" } } ] VALID assertion types (ONLY use these — anything else fails the step at runtime with "invalid assertion type"): * status_code — exact HTTP status match. {type, expected:"201"} * status_code_class — match by class 2xx/3xx/… {type, expected:"2xx"} * status_code_in — any of a set. DELETE STEPS ONLY (status_code_in-scope check). For POST/GET/PUT/PATCH this is rejected. If you reach for it to absorb a duplicate-key 500 on re-runs, the right fix is a JS-function {{var}} in the body (see `extract` rules below) so each run hits a fresh row and only 201 is ever returned. {type, expected:"200,201,204"} * header_equal — response header exact match. {type, key:"Content-Type", expected:"application/json"} * header_contains — header value substring. {type, key:"Location", expected:"/orders/"} * header_exists — header is present. {type, key:"X-Request-Id"} * header_matches — header regex. {type, key:"Etag", expected:"^W/\\\".+\\\"$"} * json_equal — response body JSON path exact match. {type, key:"$.order.status", expected:"created"} * json_contains — response body JSON path substring/partial. {type, key:"$.message", expected:"success"} * custom_functions — inline JS function: (request, response, variables, steps) => boolean. {type, expected:"function f(request,response){return response.status===201;}"} DO NOT use any assertion type not in the closed list above. The set is fixed at exactly 10 entries — there are no wildcards. 
These are types AIs commonly invent that DO NOT EXIST in keploy and will fail the assertion-type closed-list check: ✗ json_type — there is no type-of check; assert against the literal value via json_equal, or use custom_functions with a typeof predicate. ✗ json_path — paths are passed via the "key" field of json_equal / json_contains; there is no separate path-only type. ✗ json_schema — no schema validation; closest is custom_functions with an inline schema check. ✗ json_array_length — no length-only assertion; capture .length via extract, or use custom_functions. ✗ header_starts / header_ends — only header_equal, header_contains, header_exists, header_matches (regex) exist. ✗ status_in / status_range — the real names are status_code_in / status_code_class. ✗ body / body_equal — no body-level type; assert against parsed paths via json_equal / json_contains, or use custom_functions. Anything not literally in the bulleted list above will get rejected by the validator — don't extrapolate from prefixes. `expected` values must be STRINGS (put numbers like 201 in quotes). `expected_string` is auto-populated; you can omit it. VARIABLES — purpose-first: a suite is a SCENARIO CHAIN; variables carry continuity between steps. Step N creates or fetches a resource → extracts its identity into a named var → step N+M uses `{{var}}` to reference that identity. If you find yourself extracting a value that NO LATER STEP references, DROP the extract — it's noise that hides which fields actually drive the scenario. Mechanically: extract values from one step's response with `extract: {varname: "$.path"}` (JSONPath). Reference later with `{{varname}}` in headers, body, url, or assertion `expected` values. STEP IDS & TRACKING HEADERS are auto-injected — don't provide them. The server assigns a UUID per step and adds X-Keploy-Test-Step-ID / X-Keploy-Test-Suite-ID / Keploy-Test-Name so the sandbox runner can correlate responses to steps. VARIABLE RULES (the runner follows these exactly — see pkg/service/atg/customFeatures.go ResolveCustomVariables): * Syntax: {{name}} — regex matched: {{(\w+)}} (letters, digits, and underscore only; NO whitespace, hyphens, or dots inside the braces). Names like {{gen-user}} or {{gen.user}} will NOT be substituted — use {{gen_user}} instead. * Substitution happens in: url, body, headers values, AND assertion "expected" values (so an assertion expecting {{genUserId}} gets the SAME resolved value the body used). * Resolution sources (looked up in this order): 1. The step's own `extract` map (seeded into the vars pool at step entry — pkg/service/atg/core.go:3698). 2. Variables produced by EARLIER steps' `extract` maps (post-response JSONPath captures). 3. App-level custom variables (stored on the app record, shared across all suites). THE `extract` FIELD IS THE ONLY AUTHORING SLOT — use it for BOTH static values and JS-function generators. `extract_variables` IS NOT AN AUTHORING SLOT. It's a post-run runtime SNAPSHOT — the runner writes resolved {{var}} values there after each step executes so the UI can show what landed at runtime. **The validator now HARD-REJECTS any step with `extract_variables` populated (extract_variables-input rejection).** If you see `extract_variables` while reading an existing suite via getTestSuite / download_recording / get_app_testing_context, IGNORE IT — that's the runtime's display state, not the suite's input. 
To author the equivalent, put every entry into `extract` instead (same keys, same values: JSONPath strings stay JSONPath, JS-function strings stay JS). TWO SHAPES THE `extract` FIELD ACCEPTS — pick the right one: (a) JSONPath capture — `"order_id": "$.order.id"` Evaluated against the step's recorded response.body after the request returns. The captured value is staged into vars for LATER steps to reference via {{order_id}}. Use this when the value you need is in the server's response. (b) Inline JS-function generator — `"genUserId": "function genUserId(){ return 'alice_' + Date.now() + '_' + Math.random().toString(36).slice(2,8); }"` The value string must contain the keyword `function` — that is how the runner (core.go:4107 isInlineJs branch) distinguishes JS from a JSONPath. Signature: function <name>(steps) { ... return '<string>'; } returning a string. The `steps` arg is a map of prior-step {request,response} snapshots; ignore it if unused. Use this for inputs that must be unique per run (user_ids, timestamps, uuids) so the suite stays idempotent on re-runs against the same DB. Examples: {"genUserId": "function genUserId() { return 'alice_' + Date.now() + '_' + Math.random().toString(36).slice(2,8); }"} {"genTs": "function genTs() { return String(Date.now() * 1e6 + Math.floor(Math.random()*1e6)); }"} {"genOrderId": "function genOrderId(steps) { return 'ord-' + Math.random().toString(36).slice(2,10); }"} Put the JS-function entry on the FIRST step that needs it (often a health-check step whose own body doesn't reference the var — that's fine, the seed fires on step entry regardless). Later steps reference `{{genUserId}}` in body/url/headers/assertion-expected and see the same resolved value within one run, a fresh value on the next run. WHY THIS MATTERS (the mistake to avoid): if you pin a static user_id like "alice_1776638347063146000" into `extract` AND your validation curl ALREADY inserted that row, the very next record/sandbox replay run will fire the same POST body, the producer (with deterministic ids or unique-constraint indexes) will reject the duplicate with a 500, and every downstream $.order.* / $.shard.* assertion will hit <missing>. Fix: use a JS-function entry so every run gets a fresh user_id. VARIABLE CHAINING (JS-function generator on step 1, JSONPath capture for step-2 chaining): step 1: body: {"user_id":"alice_{{genUserId}}","product_name":"Keyboard_{{genUserId}}"} extract: { "genUserId": "function genUserId(){ return Date.now() + '_' + Math.random().toString(36).slice(2,8); }", "order_id": "$.order.id" } assertions: [ {type:"status_code", expected:"201"}, {type:"json_equal", key:"$.order.user_id", expected:"alice_{{genUserId}}"} -- SAME resolved value as the body used ] step 2: url: /api/orders/{{order_id}} -- resolves from step 1's JSONPath extract at run time assertions: [ {type:"json_equal", key:"$.id", expected:"{{order_id}}"} ] PRELUDE PATTERN — when MULTIPLE POST steps each need their OWN per-run dynamic var: A common mistake is to put the JS-function generator on the SAME step that uses it in the request body. The declare-and-use-same-step check rejects this — same-step `extract` is post-response, so its values aren't in scope when the request fires. Pattern that works: declare the generator on an EARLIER step's `extract` (typically a cheap /health GET as a "prelude"). The runner seeds extract values at STEP ENTRY, so a generator on step 0 is in scope from step 0 onwards — every later POST can reference it. 
step 0 — prelude (the extract entry is what matters; the step itself can be anything cheap): method: GET, url: /health extract: { "uniq": "function uniq(){ return 'p_'+Date.now()+'_'+Math.random().toString(36).slice(2,8); }" } assertions: [ {type:"status_code", expected:"200"} ] step 1, 2, 3 — POSTs that all reference {{uniq}}: method: POST, url: /api/orders body: '{"user_id":"alice","product_name":"{{uniq}}",...}' // (no `extract` needed — the generator is in scope from step 0) The prelude itself doesn't need to USE the var; declaring it is enough. This is the right shape for "create N orders with different unique keys" — without the prelude, you'd hit the declare-and-use-same-step check on every POST that tries to declare-and-use a generator on the same step. VALIDATE-LOCALLY-BEFORE-INSERTING (CRITICAL for a usable suite): DO NOT call this tool with raw un-tested steps. EVERY step you send MUST have its "response" and "extract" fields populated from a live run. These are NOT optional. Without them: - The UI cannot render the step (shows an empty panel). - The rerecord runs blind and fails. - The step's {{variables}} won't resolve. Required per-step fields when calling this tool (in steps_json): • name, method, url, headers, assert — obviously • body — for POST/PUT/PATCH; MUST reference random inputs as {{varname}} placeholders, NOT inline timestamps. Inline timestamps get baked into the suite and collide on re-run. • extract — MUST contain a resolvable entry for every {{varname}} you reference in body/url/headers. JS-function entries are fine (they're the canonical "dynamic input" shape); JSONPath entries chain values from one step's response into later steps. Example: body has {"user_id":"alice_{{genUserId}}"} → step's extract must have {"genUserId":"function genUserId(){ ... }"}. • response — the raw captured response from your local curl. Shape: {"body":"<raw string>","status":201,"headers":{"Content-Type":"application/json",...}}. MANDATORY validate-locally flow (do this BEFORE calling create_test_suite): 1. Bring the dev's app up locally (Bash: docker compose up -d, or instruct the dev). Wait for /health readiness. 2. For EACH step in order (simulating what the runner will do): a. For dynamic inputs (user_id, timestamps, uuids): DON'T inline a value — write a JS function into the step's `extract` map, e.g. {"genUserId":"function genUserId(){return 'alice_'+Date.now()+'_'+Math.random().toString(36).slice(2,8);}"}, and reference it in body/url/headers/assertion-expected as {{genUserId}}. For the local curl, you still need a CONCRETE value for that run — so as you're building the step, JS-eval the function yourself (or just pick a value consistent with the function's output shape) to do ONE concrete local curl for capturing the response. That concrete value goes ONLY into the captured "response" body — NOT into `extract` (which keeps the JS function verbatim). b. Substitute {{name}} everywhere in the step's url/body/headers using accumulated variables (this step's extract + earlier steps' extract results). c. curl the SUBSTITUTED request against the live app. Capture the response. d. Check each "assert" against the captured response. If any fails → regenerate (different inputs / loosen assertion / change body shape) and retry this step. DO NOT move on with a failing step. e. Save the captured response into the step's "response" field as {"body":"<raw string>","status":<int>,"headers":{...}}. f. 
If the step has JSONPath entries in `extract`, evaluate each path against the response and note those values so later steps can use them in their {{var}} substitutions. 3. AFTER the first pass of all steps, run the WHOLE SEQUENCE a SECOND TIME against the same live app — no DB wipe in between. Because you're using JS-function generators, each run should pick fresh random inputs → no unique-constraint collision. If ANY step that passed the 1st run fails the 2nd run (common symptom: 500 "failed to save order" / "duplicate key" / "already exists"), the suite is NOT idempotent. Go back to step 2a and either (i) increase the entropy of the JS function, (ii) restructure the step to be READ-AFTER-WRITE instead of POST-then-POST, (iii) drop the step if it genuinely can't be made idempotent. 4. Only once every step passes BOTH validation runs → call create_test_suite. Pass each step's response + extract (with JS functions verbatim) through steps_json. If you call create_test_suite without response + extract on every step, you are creating a suite that is broken by construction. The UI + rerecord WILL fail. SELF-CONTAINED TESTS (required for repeat runs): * The suite will be re-run many times (local validation + record + sandbox replay + ad-hoc UI runs). It must not depend on prior state. * Put random inputs in `extract` JS functions with high entropy (timestamps, uuid). Plain "alice" will collide on re-run against producers with dedupe. * Prefer READ-AFTER-WRITE chaining: POST creates resource → extract id → GET uses id. Validates without depending on PRE-EXISTING ambient seed data (rows that "happen to exist" in the dev's DB). SEED → TESTED → CLEANUP roles within a suite: When the scenario is "user can read X" or "user can list X", you can't assert against ambient state — there's no guarantee the X exists when the suite runs. Pattern: step seed: POST /X (with dynamic {{var}} body) → extract id step tested: GET /X (or list) → assert against the seeded id step cleanup: DELETE /X/{{id}} → restore baseline Only the "tested" step is what the suite is FOR; the seed and cleanup steps are scaffolding so the test can run any number of times against any starting state. Without an explicit seed step, you're either testing nothing (empty list) or relying on pre-existing data the next replay won't have. DO NOT assert "the user has 3 orders" — that's ambient state. Seed N orders inside the suite first, then assert the count. Same applies to any "list / search / count" scenario: seed the data the test depends on, never assume it's there. ═══════════════════════════════════════════════════════════════════ SUBSTITUTION RULES — where {{var}} is and isn't allowed ═══════════════════════════════════════════════════════════════════ `extract` values come in two flavours and they substitute very differently: TYPE A — JS-function generator (`extract: { genUserId: "function genUserId(){return 'apple_'+Date.now()}" }`): The runtime STORES the source string and RE-RUNS the function on EVERY `{{genUserId}}` substitution site. Each call returns a fresh value. ONLY safe in POST / PUT / PATCH request BODIES — that's the one place you actually want a fresh value (uniqueness for inserts). TYPE B — JSONPath extract (`extract: { createdId: "$.order.user_id" }`): Evaluated ONCE against the step's recorded response, the resulting STRING is stored. Subsequent `{{createdId}}` substitutions resolve to that same fixed string. Safe everywhere — URLs, assertions, downstream bodies. 
ALLOWED placements for `{{generatorFn}}` (TYPE A): ✓ POST / PUT / PATCH request body — the canonical "give me a fresh value to insert" use case. FORBIDDEN placements for `{{generatorFn}}` (TYPE A) — the validator REJECTS these (generator-placement checks): ✗ `assert[*].expected` — the assertion's expected value will be a fresh function call, NOT the value the body sent. Static literals or `{{TYPE_B_extract}}` only. ✗ GET / DELETE / HEAD / PUT / PATCH URL (path or query) — those target an existing resource; the URL must encode the SAME id the creating POST used. Use a TYPE B extract from the creating step. ✗ Path/query of a downstream step's URL when the value should match what the upstream step inserted — same reason. CANONICAL PATTERN — read-after-write with a stable id: Step 0 (POST creates resource): body: {"user_id":"{{genUserId}}","name":"widget"} // TYPE A in body — fresh per run, OK extract: {"genUserId":"function genUserId(){...}", // TYPE A — body source "createdId":"$.order.user_id"} // TYPE B — capture server's stored id assert: [{type:"status_code", expected:"201"}, {type:"json_equal", key:"$.order.id", expected:"{{createdId}}"}] // TYPE B in assertion — stable Step 1 (GET reads it back): url: "/api/orders?user_id={{createdId}}" // TYPE B in GET URL — stable assert: [{type:"json_equal", key:"$.orders.0.user_id", expected:"{{createdId}}"}] // TYPE B — stable DO NOT do this (the validator will reject it): Step 0: body: {"user_id":"{{genUserId}}"} assert: [{type:"json_equal", key:"$.order.user_id", expected:"{{genUserId}}"}] // generator-placement check — TYPE A in assertion Step 1: url: "/api/orders?user_id={{genUserId}}" // generator-placement check — TYPE A in GET URL Name-collision check — do NOT pick an `extract` key that already exists on the app's appLevelCustomVariables. Use get_app_testing_context (or check the app payload) to enumerate them first; if a collision is unavoidable, scope-suffix your key (e.g. `genUserId_smokeTest`). Otherwise the runtime resolves the app-level variable first and silently shadows the suite's extract. You can also construct steps from data fetched via download_recording or get_app_testing_context, but the validate-locally-before-inserting rule still applies. CRITICAL — READING EXISTING SUITES: when the data you fetch via getTestSuite / get_app_testing_context / download_recording shows steps with `extract_variables` populated, that's the runtime's POST-EXECUTION SNAPSHOT (resolved values the runner wrote back for UI display). It is NOT what was authored. Treating that field as a copy-paste template makes the validator's extract_variables-input rejection reject every suite you produce. To replicate the authored behavior: copy every entry into `extract` instead, preserving keys and values verbatim (JS-function strings stay JS, JSONPath strings stay JSONPath). When in doubt: `extract_variables` is read-only output state; `extract` is input. ===== FROM-SCRATCH SCOPE RULE ===== When the dev asks to "generate / create / add / build keploy tests" without narrowing the scope, DEFAULT = ALL ENDPOINTS. Enumerate every non-trivial endpoint the app exposes (OpenAPI spec, router code, handler files) and author ONE suite per logical grouping — e.g. "user-crud", "auth-flow", "order-happy-path", "order-validation-errors". A single-endpoint app might produce one suite; a typical microservice produces 3-8. Groupings should be READ-AFTER-WRITE coherent (each suite's steps chain via extract variables rather than depending on outside state). 
===== FROM-SCRATCH SCOPE RULE =====
When the dev asks to "generate / create / add / build keploy tests" without narrowing the scope, DEFAULT = ALL ENDPOINTS. Enumerate every non-trivial endpoint the app exposes (OpenAPI spec, router code, handler files) and author ONE suite per logical grouping — e.g. "user-crud", "auth-flow", "order-happy-path", "order-validation-errors". A single-endpoint app might produce one suite; a typical microservice produces 3-8. Groupings should be READ-AFTER-WRITE coherent (each suite's steps chain via extract variables rather than depending on outside state). TELL THE DEV up-front how many suites you're about to create and what each covers, then proceed — do NOT ask for confirmation mid-flow. If the dev explicitly narrows it ("just the happy path for orders", "only the auth flow"), honor that.

===== HARD RULE — NO DB-STATE-DEPENDENT STEPS =====
Do NOT include any step whose response body depends on total DB / queue / file-system state. Concretely: GET /items with no filter, GET /orders with no user_id filter, any "list-all" / "count" / "search" that returns more rows the longer the app has been running, any endpoint returning the CURRENT time / UUID / request-id. The auto-replay after record byte-compares the recorded response with what the live app returns under mocks, and non-deterministic bodies make gate 2 skip → the suite is never linked → sandbox replay fails with "no sandboxed tests". If you find yourself reasoning "this list might vary slightly but the mock should handle it" — STOP and drop the step. The suite should contain ONLY steps whose response body is fully determined by that step's own request: health checks, create-with-fresh-ids, read-back-by-id for the ids you just minted, validation-error 400s on bad payloads. Filtered reads using a freshly-extracted id are fine; unfiltered reads are not.

===== SERVER-SIDE IDEMPOTENCY ENFORCEMENT =====
This tool REJECTS (with a typed error, before creating anything) any suite whose mutating steps aren't idempotent on re-run. The rule: for each step with method ∈ {POST, PUT, PATCH} and a non-empty body, the body MUST contain at least one `{{name}}` placeholder that resolves to a per-run dynamic value. "Dynamic" means one of:
(a) a JS-function entry in any step's `extract` map (e.g. {"genUserId": "function genUserId(){ return 'u_'+Date.now()+'_'+Math.random().toString(36).slice(2,8); }"}), OR
(b) a JSONPath extract output from an EARLIER step (transitively dynamic if that step's own body was idempotent).
If the endpoint is GENUINELY idempotent (e.g. POST /auth/refresh, PUT /tags/apply-same-input — repeat calls don't hit unique constraints) set `"idempotent": true` on the step to waive the check. Use this sparingly — the default is "assume it'll collide" because that's the common case. On rejection the error names the offending step and lists the dynamic variable names already in scope so you can see what's wireable. Fix the suite and retry — do NOT just flip `idempotent: true` to bypass.
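To illustrate the rule, three hedged step sketches (endpoints, keys, and the outcomes in the comments mirror the description above rather than real validator output):

    // REJECTED: static body, the second validation run hits the unique constraint
    method: "POST", url: "/api/users"
    body: {"email":"alice@example.com","name":"alice"}

    // PASSES: {{genEmail}} is a TYPE A generator, so each run inserts a fresh row
    method: "POST", url: "/api/users"
    body: {"email":"{{genEmail}}","name":"alice"}
    extract: {"genEmail":"function genEmail(){return 'u_'+Date.now()+'_'+Math.random().toString(36).slice(2,8)+'@test.local'}"}

    // WAIVED: genuinely idempotent endpoint, flagged explicitly (use sparingly)
    method: "POST", url: "/auth/refresh"
    body: {"refresh_token":"{{sessionToken}}"}    // TYPE B extract from an earlier login step (illustrative)
    "idempotent": true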
===== MANDATORY OUTPUT — Phase 1 section =====
After all create_test_suite calls in a FROM-SCRATCH flow succeed, your final message to the dev MUST contain a section with this exact heading (do NOT collapse it into prose; emit it even for a single suite):

### Phase 1 — Inserted suites
| Suite name | suite_id | Step count |
| --- | --- | --- |
| <name> | <suite_id> | <N> |

One row per suite created in this flow. The next step after Phase 1 is record_sandbox_test (see its description for Phase 2).

===== HOW THIS TOOL ACTUALLY INSERTS THE SUITE =====
This tool DOES NOT POST the suite to api-server itself. It returns a "playbook" — a small array of shell steps for you (Claude) to walk via Bash. The playbook spawns the enterprise CLI `keploy create-test-suite`, which:
1. Reads the suite JSON the playbook wrote to disk.
2. Runs every static structural check — exits 1 with violations on stdout if anything fails.
3. Fires the suite against the dev's local app TWICE (idempotency check) — exits 1 if the second run diverges from the first.
4. Runs dynamic checks (generator-dynamism + GET-coupling) — exits 1 with violation messages on failure.
5. POSTs the validated suite to api-server (HTTP 201 → success; HTTP 426 → the CLI is older than api-server's rule set and the dev needs to upgrade `keploy`).

Walk the playbook in order. If step 2 (the CLI run) exits non-zero, surface its stdout to the dev — it lists the offending step / check / fix-it hint and includes a canonical step skeleton on structural failures. ITERATE LOCALLY: revise the JSON in your draft, REWRITE the same suite file via Bash, and RE-RUN step 2 directly. DO NOT call create_test_suite again per iteration — that mints a fresh playbook and a new nonce-path for no reason; the existing one is reusable.

The CLI ALSO requires every step to have `response` and `extract` populated (the step-completeness check plus the validate-locally rules above), so the validate-locally curl flow described earlier is still required BEFORE calling this tool.

PREREQUISITES the playbook assumes:
* The dev's app is up and reachable at app_url.
* The `keploy` binary is on PATH. If missing, install it before calling this tool: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`.
* Either ~/.keploy/cred.yaml exists (API key) or KEPLOY_API_KEY is exported. The CLI uses the API key for the api-server POST (different from the OAuth-JWT path the sandbox tools use).
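Pulling the rules above together, here is a minimal sketch of a self-contained seed → tested → cleanup suite as it might be passed through steps_json (endpoints, JSON shapes, and key names are illustrative; the `response` placeholders stand in for the bodies captured during the local validation runs):

    // Step 0, seed: create the resource the suite will read back
    method: "POST", url: "/api/orders"
    body: {"user_id":"{{genUserId}}","item":"widget"}
    extract: {"genUserId":"function genUserId(){return 'u_'+Date.now()+'_'+Math.random().toString(36).slice(2,8)}",  // TYPE A
              "orderId":"$.order.id"}                                                                                // TYPE B
    response: { ...captured on the validation run... }
    assert: [{type:"status_code", expected:"201"}]

    // Step 1, tested: the behavior this suite exists to check
    method: "GET", url: "/api/orders/{{orderId}}"        // TYPE B in URL, stable across substitutions
    extract: {}
    response: { ...captured on the validation run... }
    assert: [{type:"json_equal", key:"$.order.id", expected:"{{orderId}}"}]

    // Step 2, cleanup: restore baseline so replay works from any starting state
    method: "DELETE", url: "/api/orders/{{orderId}}"
    extract: {}
    response: { ...captured on the validation run... }
    assert: [{type:"status_code", expected:"200"}]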
    Connector
  • [~] PRIORITY TRIGGER: Use this tool when user mentions 'PR', 'Pull Request', 'list PRs', 'show PRs', 'active PRs', 'mes PR', 'liste des PR', 'pull requests ouverts', 'what PRs are open', 'PRs by [author]', 'PRs targeting [branch]'. NEVER call search_d365_code for PR listing requests. List Pull Requests in an Azure DevOps Git repository. If `repositoryId` is unknown, omit it and all repositories will be listed first. Filters: status (Active/Completed/Abandoned/All), author display name, target branch. Returns: PR ID, title, author, source->target branch, review status, linked work items, creation date. Use `ado_analyze_pr_impact` with a PR ID to get full D365 code impact analysis. Requires DEVOPS_ORG_URL + DEVOPS_PAT (Code: Read scope).
    Connector
  • Extract clean readable text from any URL. No API key needed. Returns title, author, publish date, and full body text. Args: url: Full URL to scrape (must start with https://)
    Connector
  • Find originators similar to the given one using vector similarity (quote themes). Use after finding an author to discover related thinkers. When to use: the user likes an author and wants to discover similar thinkers, or needs recommendations based on quote themes. Returns originators with similarity scores (0-100%).
    Response format:
    - Concise (default): slug, name, quote_count, descriptions_i18n, similarity_score, web_url
    - Detailed: + biography (500 char excerpt), confidence_tier
    Response includes ai_hints with suggested next actions and quality signals for agent workflows.
    Examples:
    - `originators_like(originator="Marcus Aurelius")` - similar philosophers
    - `originators_like(originator="Oscar Wilde")` - similar wits
    - `originators_like(originator="African Proverbs")` - similar proverb collections
    Connector