
"namespace:com.westrock.dws-portal-dev" matching MCP tools:

  • Replay the sandbox test for one or more suites against captured mocks — re-runs the suite's steps against the dev's locally-running app while keploy serves outbound calls (DB, downstream HTTP, etc.) from the captured mocks. Use this when the dev says "replay", "run my sandbox tests", "integration-test", "check if mocks still match" — keywords "sandbox" / "replay" / "mocks" / "integration-test" all map here. Also the REPLAY STEP of FROM-SCRATCH: call this LAST (after create_test_suite + record_sandbox_test) to give the dev the whole-app regression picture against the freshly captured mocks. Output produces a SANDBOX RUN REPORT — it answers "does the suite still hold up against its captured baseline?". ═══════════════════════════════════════════════════════════════════ DISAMBIGUATION — pick this tool vs. replay_test_suite: ═══════════════════════════════════════════════════════════════════ USE replay_sandbox_test (THIS TOOL) when the dev says: * "run my sandbox tests" / "replay my sandbox tests" * "integration-test my app" / "run the integration tests" * "check if my mocks still match" / "replay against the captured mocks" * "rerun my sandbox suite" (with the word "sandbox") Trigger keyword: an explicit "sandbox" / "replay" / "mocks" / "integration-test" — silent signal that the dev wants captured-mock replay, NOT live-app execution. USE replay_test_suite INSTEAD when the dev says: * "run the test suite" / "run my test suites" (bare — no "sandbox") * "execute test suite X" / "run suite 810d3ebe…" * "test the suite again" / "smoke test against the live app" Bare verbs ("run / test / execute") applied to "the suite" without the word "sandbox" mean LIVE-APP execution, NOT captured-mock replay. replay_test_suite hits the dev's running localhost app directly via HTTP — no docker spin-up, no mocks. After a record_sandbox_test run, the natural next step is THIS tool (replay against the just-captured mocks). After create_test_suite / update_test_suite, the natural next step is replay_test_suite (validate against the live app). When the dev's verb is bare and the prior turn doesn't make the intent obvious, ASK rather than picking sandbox-replay silently — code-change regressions can hide under "mock didn't match" failures. ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has NO on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name. 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app via list_branches → getTestSuite. Then try main (branch_id omitted). 
If still nothing → ASK the dev for the {app_id, branch_id} pair. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action. SCOPE — whole-app vs single-suite: * Default: LEAVE suite_ids UNSET → the tool resolves "every suite for the app that has a sandbox test (test_set_id populated)" and replays them all. Use this for "run my sandbox tests" / "check if my tests still pass" — whole-app regression. New suites auto-pick up. * Single / subset: PASS suite_ids when the dev names specific suites — "replay sandbox test for suite 810d3ebe-…", "replay only the auth suite", "run suite X and Y". The tool validates each requested id is actually a suite with a sandbox test (has test_set_id); an unlinked id gets a precise "record first" error instead of an opaque downstream CLI failure. This tool resolves the app, picks the suite set per the rule above, and returns a single playbook that drives the replay for them. It does NOT record. WHAT THIS TOOL DOES INTERNALLY (so you don't have to): 1. Resolves app_id — use the explicit app_id if the caller has one; otherwise pass app_name_hint (usually the cwd basename) and the server does listApps with a substring match. Multiple matches → error listing them; zero matches → error suggesting the dev generate a suite first. 2. Lists test suites for the app, keeps only those with a non-empty test_set_id. Zero linked → typed "no linked sandbox tests" error. 3. If suite_ids was passed, validates every requested id is in the linked-suites set; unlinked ids → typed error pointing to record_sandbox_test. 4. Returns the headless playbook — walk it exactly: spawn CLI in background, tail the progress file (PID-alive guard built in), read the terminal event, fetch the report. No separate cleanup step — the CLI exits on its own. ===== PREREQUISITES ===== (Same as record_sandbox_test — if you just recorded, you already have them. Same docker-compose network rule applies: use the same compose file + service, stop the app service before calling, leave deps running.) - app_command: shell command that starts the dev's app (e.g. "docker compose up producer"). - app_url: base URL the app listens on, e.g. http://localhost:8080. - app_dir: absolute path to repo root. - container_name if app_command is docker-compose. - keploy binary on PATH. If `which keploy` returns nothing, install it before calling this tool with: `curl --silent -O -L https://keploy.io/ent/install.sh && source install.sh` (the enterprise installer; the OSS install.sh build has no sandbox subcommand). ===== AFTER CALLING — walk the playbook ===== Same headless playbook shape as record_sandbox_test: spawn `keploy test sandbox --cloud-app-id …` in the background via Bash, poll `tail -n 1 $PROGRESS_FILE` repeatedly (no sleep loops; the wait_for_done step has a built-in `kill -0 $KEPLOY_PID` guard so the loop exits if the CLI dies silently), read the terminal NDJSON event (phase=done, data.ok, data.test_run_id), and — if ok=true — call get_session_report(app_id, test_run_id) with verbose=true at the end. No separate cleanup step needed; the CLI exits cleanly once phase=done is written.
===== MANDATORY OUTPUT — Phase 3 section ===== Your final message to the dev MUST contain a section with this exact heading (do NOT merge with Phase 2; do NOT compress the failed-steps table even when failures are homogeneous): ### Phase 3 — Sandbox run report Under it, emit the uniform three-subsection format owned by get_session_report: (i) per-suite table — one row per suite in per_suite, passing suites included, columns = Suite name | passed/total steps. (ii) failed-steps table — ONE ROW per entry in failed_steps[], columns = Suite | Step name | Method + URL | Expected → Actual status | mock_mismatch y/n. Never collapse rows. (iii) Diagnosis + Recommendation (see get_session_report description for case-specific rules around mock_mismatch_dominant, repo-diff inspection, and the SKIP / FIX-CODE / FIX-TEST branching for fix-it follow-ups). Do NOT print aggregate step totals across suites — they mix unrelated suites and hide where damage actually is. ===== ROLLUP LINE ===== Close the message with a final one-line rollup paragraph (no heading), in addition to the three phase sections. Mention the TOTAL number of suites replayed (which may exceed the count created in this session, because replay_sandbox_test covers every linked suite the app has). Example: "_Rollup: inserted 4 suites, 4/4 with sandbox tests after record, 3/4 suites passed sandbox replay across the app's 6 linked suites — 1 failure is likely keploy egress-hook, file an issue with the IDs above._" ===== DO NOT ===== * DO NOT call update_test_suite or record_sandbox_test after this. The dev said RUN, not REFRESH. * DO NOT fall back to raw keploy CLI (`keploy test …`) if the MCP tool drops mid-flow — CLI runs test-sets directly and does NOT write results back to the MCP-visible TestSuiteRun. See MCP DISCONNECT RECOVERY in the top-level instructions.
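To make the polling step concrete, here is one poll iteration as a minimal shell sketch, assuming the $PROGRESS_FILE and $KEPLOY_PID values produced by the playbook's spawn step; `jq` is an assumed convenience for reading the NDJSON envelope, not something the playbook mandates:

```
# One poll iteration. The playbook repeats this via separate Bash calls
# (the MCP round-trip paces the loop); it is NOT wrapped in a sleep loop.
kill -0 "$KEPLOY_PID" 2>/dev/null || echo "CLI exited early; surface its log file"
line=$(tail -n 1 "$PROGRESS_FILE")
phase=$(printf '%s' "$line" | jq -r '.phase')
if [ "$phase" = "done" ]; then
  # Terminal event: success flag, run id, and failure summary live under .data
  printf '%s' "$line" | jq '{ok: .data.ok, test_run_id: .data.test_run_id, error: .data.error}'
fi
```

On ok=true, the next move is the get_session_report(app_id, test_run_id) call the description mandates; on ok=false, surface data.error instead.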
  • Generate the exact CI workflow YAML to add keploy sandbox tests to a pull-request pipeline, and tell you where to write it. Use this when the dev asks to "add keploy sandbox tests to my pipeline" / "wire keploy into CI" / "run keploy on PR" / "add a CI job for keploy" — the server emits the file contents verbatim so you don't have to compose the flag list yourself. ===== GOAL ===== Write a CI workflow file that runs `keploy test sandbox --cloud-app-id <uuid> --app-url <url>` on pull requests and gates the PR on the result. NEVER kick off an actual test run in this flow — it is pure file authoring, ends with the file on disk. DO NOT fire replay_sandbox_test, record_sandbox_test, replay_test_suite, or any other run-starting MCP tool here. ===== HOW (absolute) ===== Call this tool. It returns { file_path, content, summary }. Write the "content" to "file_path" VERBATIM via your Write tool — NO flag renames, NO flag removals, NO step reordering, NO synthesis. The server owns the YAML template; your job is only to (1) resolve the inputs from the repo and api-server and (2) Write the returned content. Do NOT compose the YAML yourself from general knowledge — flag drift (missing --cloud-app-id, inventing --app) is the most common bug when Claude improvises. DO NOT ASK the dev for confirmation before writing. Resolve everything from the repo + api-server, pick the GitHub Actions default, call this tool, Write the file. The dev's prompt is already the go-ahead. ===== STEPS ===== 1. DETECT THE CI SYSTEM: * Default = GitHub Actions (biggest share). File = .github/workflows/keploy-sandbox.yml. * If .gitlab-ci.yml exists → GitLab (not yet supported by this tool; tell the dev and stop). * If .circleci/config.yml exists → Circle (not yet supported; tell the dev and stop). * Otherwise → GitHub Actions. 2. RESOLVE VALUES by calling MCP tools + reading the repo: * app_id: call listApps({q: "<cwd basename>"}). Exactly one → use its id. Multiple → pick the one whose name most specifically matches the repo's primary service (e.g. "orderflow.producer" wins over "orderflow" when there's a ./producer directory); mention which you picked in the final message. Zero → stop and tell the dev to create the app + rerecord first. * suite_ids: DO NOT pass this arg by default. An empty suite_ids means the CLI resolves "every linked sandbox suite for the app" at CI run time — which is what you want (new suites auto-pick up without workflow edits). The tool still verifies there's ≥1 linked suite at scaffold time so the first PR run doesn't fail empty-handed. Only pass suite_ids when the dev explicitly narrows ("run only the auth suite in CI"); don't pin "all current suites" — that's staleness waiting to happen. * compose_file: READ THE REPO. Default is docker-compose.yml. AVOID passing a docker-compose-keploy.yaml variant that has `networks: default: external: true` — those variants only work locally, where another compose run has already created the external network. In CI the runner starts clean and `external: true` fails with "network not found". If the primary docker-compose.yml brings up the full app (deps + app service), use it end-to-end. * app_service, container_name, app_port: read from the SAME compose_file you picked above. app_service = the service key (e.g. "producer"); container_name = that service's container_name: field in that same compose file (e.g. 
"orderflow-producer" if compose_file=docker-compose.yml, but "producer" if compose_file=docker-compose-keploy.yaml — THESE DIFFER, pick consistently); app_port = the host-side of its ports: mapping. * app_url = http://localhost:<app_port>. The tool derives this; you don't pass it separately. 3. CALL THIS TOOL with app_id, app_service, container_name, app_port, compose_file (and suite_ids only if the dev explicitly narrowed scope). It returns { file_path, content, summary }. Write the "content" to the "file_path" VERBATIM. ===== FLAG NAME RULES (absolute, do not drift when reviewing the output) ===== * `--cloud-app-id` ← NOT `--app-id`. The OSS config has an `appId` uint64 field that viper maps `--app-id` into; passing a UUID there fails with "invalid syntax" before RunE runs. * `keploy test sandbox --cloud-app-id <uuid> --app-url <url>` ← the CI form. NOT `keploy test --cloud-app-id` (must be `test sandbox` — the headless flags live on the sandbox subcommand only), NOT `keploy test-suite run` (that command doesn't exist). There is NO `--pipeline` flag. * Install URL = `https://keploy.io/ent/install.sh` ← NOT `https://keploy.io/install.sh` (OSS; no sandbox subcommand at all), NOT a github.com/keploy/keploy release tarball. If the server-emitted content ever disagrees with these rules, trust the server output and file a bug — don't edit the YAML. ===== RESOLUTION ARGS ===== * Pass either app_id (explicit UUID) or app_name_hint (substring; server does listApps and requires exactly one match). * Pass app_service (docker-compose service name), container_name (from compose container_name: field read from the SAME compose_file arg), and app_port (HTTP port the service exposes). * compose_file is optional, defaults to "docker-compose.yml". If the repo has a -keploy.yaml variant with `external: true` networks, do NOT point compose_file at it — it won't work in CI. * suite_ids is optional and should be LEFT BLANK by default — the CLI resolves every linked suite at run time. Only pin an explicit list when the dev narrows scope. ===== FINAL RESPONSE — three short sections, no questions ===== ### Created | File | Lines | | --- | --- | | .github/workflows/keploy-sandbox.yml | N | ### Summary - App: <name> (<app_id>), <N> linked suites replayed on every PR - Trigger: pull_request → main, + manual workflow_dispatch - Failure on any suite gates the PR (non-zero exit from the CLI) ### Before the first run, add this GitHub secret - `KEPLOY_API_KEY` — at https://github.com/<owner>/<repo>/settings/secrets/actions/new (self-hosted users — point at your own api-server by building the enterprise binary with -X main.api_server_uri=<url>; there is no runtime env override on the released binary.) This tool does NOT run anything. It only generates file contents.
  • Delete a test suite on a Keploy branch — synchronous, no playbook to walk. USE THIS when: * The dev's update_test_suite call was rejected with "preserves no steps from the existing suite — that's a full rewrite, not an edit". Delete the existing suite and re-author from scratch via create_test_suite. The error message itself routes here. * The dev explicitly says "delete the suite", "remove suite X", "wipe my orderflow suite". * A genuine wholesale redesign — every step changed in shape — that the audit trail shouldn't try to reconcile as edits. DO NOT USE THIS when: * The dev wants a real edit (one assertion, one step's body). Use update_test_suite + preserve existing step IDs instead — keeps audit history intact. * The dev wants to "redo" a single failed run. Test runs are independent of suite state; just rerun via replay_test_suite. INPUT * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to delete * branch_id (required) — Keploy branch UUID. The delete creates a branch-scoped DeleteTestSuite audit event so reads on the same branch see the suite as gone. Direct main writes are blocked. OUTPUT * On success: {"deleted": true} — suite is tombstoned at the branch overlay; subsequent reads (getTestSuite / listTestSuites) on this branch return 404 / exclude it. * 404 if the suite_id doesn't exist on this app/branch (verify via getTestSuite or listTestSuites first if you're unsure). After delete, the standard re-create flow is: (1) call create_test_suite with a freshly authored steps_json. The new suite gets a fresh suite_id; the old id is tombstoned, not reusable. ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name (don't invent one). 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action.
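The first two discovery steps are plain shell; the remaining steps are MCP calls, shown here as comments. A minimal sketch (variable names are illustrative):

```
# Step 1: the dev's git branch (ask the dev if this is empty or "HEAD")
git_branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
[ -z "$git_branch" ] || [ "$git_branch" = "HEAD" ] && echo "ask the dev for the Keploy branch name"
# Step 2: candidate apps from the cwd basename
app_hint=$(basename "$(pwd)")
# Steps 3-5 are MCP tool calls, not shell:
#   listApps({q: app_hint})                       -> candidate app_ids
#   list_branches({app_id})                       -> branch whose name == git_branch
#   getTestSuite({app_id, suite_id, branch_id})   -> STOP on the first 200
```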
  • Edit an existing test suite — change one or more step bodies, assertions, headers, or remove/add steps. Returns a playbook that delegates to `keploy update-test-suite`, which validates the new state (static structural checks + 2 live runs for idempotency + GET-coupling check) and snapshot-replaces the suite via api-server. POST-EDIT BEHAVIOUR: any structural change here (step method/url/body/headers/extract/assert, or add/delete steps) AUTOMATICALLY clears the suite's sandbox test server-side — the suite comes back as linked=false. Call record_sandbox_test on the updated suite before any sandbox replay; otherwise replay_sandbox_test will 400 with "no sandboxed tests". Cosmetic-only edits (name, description, labels) preserve the sandbox test. ═══════════════════════════════════════════════════════════════════ FETCH-FIRST RULE — required for the edit to be accepted: ═══════════════════════════════════════════════════════════════════ The api-server's replace handler rejects updates that preserve ZERO step IDs from the existing suite ("full rewrite, not an edit"). To make a real edit: 1. Call getTestSuite first (or use download_recording / get_app_testing_context if you already have the suite). Capture each existing step's "id" field. 2. Compose your new steps_json INCLUDING the existing "id" on every step you want to KEEP or EDIT. Omit "id" only on steps you're ADDING. Drop a step entirely from steps_json to DELETE it. 3. Call this tool with that merged steps_json. If you author a fresh JSON without the existing step IDs, the server rejects it with "preserves no steps from the existing suite". When that happens, your two options are: (a) re-author with IDs preserved (preferred — keeps history), or (b) call delete_test_suite then create_test_suite (loses history, fresh suite_id). ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name. 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. The getTestSuite call in step 4 is the one whose response you also use to capture every step's existing "id" for the FETCH-FIRST RULE above — so step 4 is actually a 2-for-1: discovery AND fetch-first happen on the same call. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action. 
═══════════════════════════════════════════════════════════════════ INPUTS ═══════════════════════════════════════════════════════════════════ * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to update * branch_id (required) — Keploy branch UUID (resolve via the two-step flow before calling) * steps_json (required) — JSON array of the FULL desired step list. Each kept step MUST carry the existing "id". Same step shape as create_test_suite (response, extract, assert, etc — all static structural checks apply). * name / description / labels (optional) — overrides for top-level suite metadata * app_url (required) — base URL of the dev's running local app, e.g. http://localhost:8080. The CLI fires the new state TWICE against this for the idempotency check + GET-coupling check. * app_dir (optional) — repo root the CLI cd's into; defaults to "." ═══════════════════════════════════════════════════════════════════ HOW THIS TOOL WORKS ═══════════════════════════════════════════════════════════════════ This tool DOES NOT call api-server itself. It returns a 3-step playbook for you (Claude) to walk via Bash — same shape as create_test_suite: 1. Write merged JSON to a temp file. 2. Run `keploy update-test-suite --suite-id <id> --file <path> --branch-id <uuid> --base-url <url>` — runs every static structural check, fires the new state twice locally, applies the GET-coupling check, then POSTs the snapshot-replace. 3. Cleanup the temp file. Walk the playbook in order. If step 2 exits non-zero, surface stdout to the dev — it has the rule violation / failure detail. OUTCOMES the AI should recognize: * Exit 0 + stdout has "✓ suite updated:" + "View:" line → success. Surface the View URL to the dev. * Exit 1 + "preserves no steps from the existing suite" → fetch-first rule was missed. Re-author with step IDs preserved (or call delete_test_suite + create_test_suite as the documented escape hatch). * Exit 1 + structural-check violations → fix the suite per the violation messages, then REWRITE the suite file via Bash and RE-RUN this CLI command directly. DO NOT call update_test_suite again to retry — the playbook + file path are already valid; only the JSON content needs revision. The validator output includes a canonical step skeleton on structural failures. * Exit 2 + "couldn't reach the dev's app" → ensure the app is up at app_url and retry. PREREQUISITES the playbook assumes: * The dev's app is up and reachable at app_url. * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/ent/install.sh && source install.sh` (enterprise installer). * Either ~/.keploy/cred.yaml exists or KEPLOY_API_KEY is exported.
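A minimal sketch of the fetch-first merge, assuming `jq`, a suite.json holding the getTestSuite response, and illustrative field values (the .steps/.name/.assert shape and the step name "create order" are assumptions about the suite, not documented guarantees):

```
# Emit the FULL desired step list, keeping every existing "id" (fetch-first
# rule); only the matched step's assert array changes.
jq '.steps | map(if .name == "create order"
                 then .assert = [{path: "$.order.status", expected: "created"}]
                 else . end)' suite.json > /tmp/steps_merged.json
# Playbook step 2, exactly as stated above:
keploy update-test-suite --suite-id "$SUITE_ID" --file /tmp/steps_merged.json \
  --branch-id "$BRANCH_ID" --base-url http://localhost:8080
rm /tmp/steps_merged.json            # playbook step 3: cleanup
```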
  • Updates fields on an existing automation. Pass a partial updates object with only the fields you want to change; omitted fields are preserved. Toggling enabled or changing schedule/channel/condition takes effect on the next scheduled run. Behavior: - Saves the change to the same automation record. Scheduled automations with an active workflow are restarted on update so the next run picks up the latest config. - Errors when the perspective or automation is not found, or you do not have access. - Webhook URLs in updates are validated. For HubSpot, the workspace's HubSpot connection is re-checked — errors with "Could not resolve HubSpot portal ID — please reconnect HubSpot" if disconnected. - For scheduled automations: changes to channel, condition, execution mode, instruction, or message template apply starting from the next run, not the one currently in flight. When to use this tool: - Toggling enabled on or off (also pauses/resumes scheduled sends). - Changing schedule, channel, condition, instruction, or message_template on a live automation. When NOT to use this tool: - Removing the automation entirely — use automation_delete. - Verifying a config change actually delivers — follow up with automation_test. - Listing what's configured — use automation_list.
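As an illustration of the partial-update shape, pausing an automation without touching its schedule or channel; the argument envelope below is a hypothetical sketch (only the field names the description lists are grounded):

```
{
  "perspective_id": "...",
  "automation_id": "...",
  "updates": { "enabled": false }
}
```

Omitted fields are preserved; flipping enabled back to true resumes scheduled sends from the next run.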
  • Replay an existing test suite live against the dev's LOCAL APP (no mocks, no docker spin-up). Returns a playbook that delegates to the enterprise CLI `keploy test-suite`, which walks each suite's steps, fires HTTP requests at base_path, evaluates assertions, and uploads per-suite results to api-server. The CLI prints a final pass/fail summary table plus a "Report:" URL to stdout. Output produces a TEST SUITE REPORT — it answers "does the suite hold up against the actual current system?". ═══════════════════════════════════════════════════════════════════ DISAMBIGUATION — pick this tool vs. replay_sandbox_test: ═══════════════════════════════════════════════════════════════════ USE replay_test_suite (THIS TOOL) when the dev says: * "run the test suite" / "run my test suites" * "execute test suite X" / "run suite 810d3ebe…" * "test the suite again" / "rerun the suite" * "validate the suite changes" (after editing a suite) * "smoke test against the live app" Default reading: bare verbs "run" / "execute" / "test" applied to "the suite" mean LIVE-APP execution, NOT replay against captured mocks. USE replay_sandbox_test INSTEAD when the dev says: * "run my sandbox tests" / "replay my sandbox tests" * "integration-test my app" / "check if my mocks still match" * "replay the captured tests" / "run against the recorded mocks" Trigger keyword: "sandbox" / "replay" / "mocks" / "integration-test" — explicit signal that the dev wants captured-mock replay, not live-app. After a record_sandbox_test run, the natural next step is replay_sandbox_test (replay against the freshly captured mocks). After create_test_suite / update_test_suite, the natural next step is replay_test_suite (validate the new/edited suite against the live app). When the dev's verb is bare ("run the suite") and the prior turn was create/update, prefer THIS tool. When the prior turn was record, ASK the dev if unsure — the verbs overlap and silently picking sandbox-replay can mask code-change failures with mock-replay noise. USE THIS for: re-running previously-created suites against a running local app — verifying a regression after a code change, smoke-testing a branch, re-validating after editing a suite. DO NOT USE this for: validating a NEW suite that hasn't been inserted yet (use create_test_suite — it runs the suite twice as part of validation), or for running suites against the captured-mock copy of the app (use replay_sandbox_test — captured-mock replay flow). ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name (don't invent one). 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename> (case-insensitive substring match). Usually 1–2 candidates (e.g. "orderflow" → matches "orderflow" and "orderflow.producer"). If 0 → ASK the dev for the app_id; if >1 → walk every candidate in step 4. 3. 
For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id, status}. If no match → that app's not the owner; try the next candidate. If status is closed/merged → ask the dev whether to use this branch anyway. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app, try next candidate. 5. If steps 2–4 exhaust without a hit, the suite is on a branch whose name doesn't match the git branch (the dev created it with a custom name, or it's on main). Then: call list_branches on each candidate app and try every OPEN branch's branch_id with getTestSuite, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. The reverse "look up suite_id globally" path doesn't exist — auditing is branch-scoped, so resolution starts from a branch context. After resolving once in a session, REUSE the {app_id, branch_id} for any subsequent suite-targeting call (delete_test_suite / update_test_suite / replay_test_suite); don't re-walk discovery for every action. ═══════════════════════════════════════════════════════════════════ INPUTS ═══════════════════════════════════════════════════════════════════ * app_id (required) — Keploy app ID. Same value used for create_test_suite / list_branches. * branch_id (required) — Keploy branch UUID. Resolve via the explicit two-step flow BEFORE calling: (1) Bash `git rev-parse --abbrev-ref HEAD` in app_dir; (2) call create_branch tool with {app_id, name: <git branch>} — find-or-create returns {branch_id, ...}; pass it here. Direct main writes are blocked. * base_path (required) — base URL of the dev's local app, e.g. http://localhost:8080. Each suite step's relative path is appended to this. * suite_ids (optional) — list of suite IDs to run. Omit / empty = run every suite registered for app_id on the branch. * header (optional) — single header to inject into every request, e.g. "Cookie: session=…". Same shape as the CLI's -H flag. * app_dir (optional) — absolute path to the dev's repo root (where the app is running). Defaults to '.' (cwd). The CLI invocation cd's here. ═══════════════════════════════════════════════════════════════════ HOW THIS TOOL WORKS ═══════════════════════════════════════════════════════════════════ This tool DOES NOT execute the suite itself. It returns a "playbook" — a small array of shell steps for you (Claude) to walk via Bash. The playbook spawns the enterprise CLI `keploy test-suite` in the foreground; the CLI: 1. Validates the branch exists + is writable (fails fast with a clear message if not). 2. Loads suites from api-server (filtered by --suite-id when supplied; otherwise every suite on the branch). 3. For each suite: fires step requests at base_path, evaluates assertions, records per-step results. 4. Uploads a TestSuiteRun + TestSuiteReport entry to api-server (?branch_id=<uuid>). 5. Prints a summary table to stdout, exits 0 on all-pass / 1 on any failure. Walk the playbook in order. Surface the CLI's stdout to the dev — the table shows which suites passed / failed / were "buggy" (suite-level verdict separate from individual step failures). PREREQUISITES the playbook assumes: * The dev's app is up and reachable at base_path. * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/ent/install.sh && source install.sh` (enterprise installer; `keploy test-suite` is the enterprise CLI). * Either ~/.keploy/cred.yaml exists (API key) or KEPLOY_API_KEY is exported.
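A quick pre-flight sketch covering the two prerequisites above (the base_path value is illustrative):

```
# Is the app up at base_path? (any HTTP response within 5s counts as reachable)
base_path=http://localhost:8080
curl --silent --max-time 5 "$base_path" >/dev/null \
  || echo "app not reachable at $base_path; start it before replaying"
# Is the keploy binary on PATH? Install the enterprise build if not.
command -v keploy >/dev/null \
  || { curl --silent -O -L https://keploy.io/ent/install.sh && source install.sh; }
```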


Matching MCP Connectors

  • Cultural intelligence for AI: protect brands from reputational damage across 195 countries.

  • Connect AI assistants to Subotiz: use Subotiz's external capabilities through natural language

  • Creates an automation on a perspective. Triggers: per_interview (fires on every completed conversation) or scheduled (daily/weekly digest). Channels: webhook, email, slack, hubspot. Execution modes: direct (fast, deterministic) or agent (LLM-powered). Behavior: - Each call creates a new automation — even if name/config matches an existing one. - Once enabled, the automation starts firing on real events: per_interview sends on every completed conversation going forward; scheduled sends a real message on the configured cadence (daily/weekly). - Webhook URLs are validated. For HubSpot, the workspace's HubSpot connection is required — errors with "Could not resolve HubSpot portal ID — please reconnect HubSpot" if not connected. - Errors when the perspective is not found or you do not have access. When to use this tool: - The user wants ongoing notifications on every completed conversation (per_interview). - Building a daily/weekly digest delivered to Slack, email, HubSpot, or a webhook (scheduled). When NOT to use this tool: - Trying a one-off send before going live — create the automation, then use automation_test (use override_email / override_webhook to avoid hitting real recipients). - Editing or toggling an existing automation — use automation_update. - Connecting Slack or HubSpot — use integration_manage first; the provider must be connected before slack/hubspot channels work. Example — per-conversation Slack notify: ``` { "perspective_id": "...", "automation": { "name": "Notify Slack", "trigger": { "type": "per_interview" }, "execution_mode": "agent", "channel": { "type": "composio", "delivery_config": { "provider": "slackbot", "tool_slug": "SLACKBOT_SEND_MESSAGE", "params": { "channel": "#research" }, "resource_id": "...", "resource_name": "..." } } } } ``` Typical flow: 1. integration_manage (operation: "list"/"connect") → ensure Slack / HubSpot is connected (only needed for those channels) 2. automation_create → create the automation 3. automation_test (with overrides) → verify delivery before relying on it
  • Spawn a new on-chain $fomox402 round. You become the creator. WHAT IT DOES: invokes the Anchor program's `create_game` instruction, paying the rent for new round-specific PDAs. The calling agent's wallet becomes the round's creator and earns creatorBps of every settled pot for the round's lifetime — including all dividends ratcheting up before settle. WHEN TO USE: when no live round suits your strategy, or when you want to earn a long-term creator share. Each round costs ~0.005 SOL in rent (refunded to the creator on settle). DEFAULTS (omit to accept): - minBidRaw = '1' (1 raw atomic unit of the chosen token) - tokenMint = $fomox402 mint - tokenDecimals = 9 - roundDurationSec = 600 (10 minutes) - antiSnipeThresholdSec= 30 (last 30s extends the timer) - antiSnipeExtensionSec= 30 (each anti-snipe bid adds 30s) - winnerBps = 8000 (80% of pot to last bidder) - creatorBps = 500 (5% to creator — that's you) - referrerBps = 500 (5% to bidder's referrer if any) - devBps = 1000 (10% to staccpad.fun dev wallet) Splits MUST sum to 10000 bps. RETURNS: { gameId, creator, tx (Solana sig), config: { ...effective defaults } }. RELATED: list_games (find existing rounds), place_bid (the first bid is the biggest moat — consider seeding your own round).
  • Record (or refresh) the sandbox test for one or more existing test suites — captures the request/response per step plus the outbound mocks (DB, downstream HTTP, etc.) against the dev's locally-running app, then links the captures onto the suite. Use this when the dev says "record", "rerecord", "re-record", "refresh the recordings", "capture mocks", or as the RECORD step in FROM-SCRATCH (after create_test_suite). This tool resolves the app (if only a hint is given), resolves ONE OR MORE suites to record (by exact ids OR case-insensitive name substring match), and delegates to a headless playbook. Output produces a RERECORD REPORT — it answers "did the sandbox test get created and linked successfully?". ╔═══ PRE-CHECK — DID YOU ARRIVE HERE FROM A FAILED REPLAY? ═══╗ This tool refreshes the CAPTURED BASELINE (mocks + recorded request/response per step). It does NOT modify the suite's authored assert array or response.body — those are the contract as defined when the suite was created/updated. If the contract changed and you re-record without updating the suite first, the new rerecord fires the suite's stale assertions against the live app, gate-1-fails on the same diff, and the suite comes back unlinked. Before calling THIS tool in response to a failed replay_sandbox_test or replay_test_suite, walk these checks: 1. Read failed_steps[].authored_assertions and authored_response_body in the most recent get_session_report (kind=sandbox_run / test_suite_run). The fields are inlined — no second tool call needed unless the report predates the inlined fields. 2. For each failing step: does any authored assertion pin the diverging value? (e.g. assert {path: "$.order.status", expected: "created order"} where the diff says "expected 'created order', got 'created'".) * YES → call update_test_suite FIRST to update that assertion + the response.body field, THEN call this tool. * NO → safe to call this tool directly; the captured baseline drifts but no authored assertion blocks the rerecord. 3. If you can't find authored_assertions in the report (older format) AND don't already know the suite's shape, call getTestSuite({app_id, suite_id, branch_id}) to inspect the assert array before deciding. Don't guess. REFUSE-RULE: if the dev confirms a contract change is intentional and the failing step has a pinned authored assertion on the diverging value, you MUST run update_test_suite before this tool. Calling record_sandbox_test FIRST in that case is the bug this pre-check exists to prevent — don't justify it as "let's just refresh the baseline first". The order is update → record → replay; never record → update. ╚═══════════════════════════════════════════════════════════════╝ ===== BEFORE CALLING — one-time setup ===== (a) APP_ID RESOLUTION (skip if app_id is already known): * Derive a likely app name from the cwd's basename (e.g. cwd=/home/dev/orderflow → "orderflow"). Lowercase it. * Call listApps({q: "<cwd-basename>"}) — the server does a case-insensitive server-side substring match, so you don't paginate the full tenant list (can be hundreds of apps on shared accounts). * Exactly one match → use its id. Multiple → list them and ASK the dev which one (a wrong app_id silently routes traffic + suite creates into the wrong app). Exception: if the compose file / repo layout unambiguously pins one candidate (e.g. compose has service "producer" and one candidate is "<folder>.producer" while others are unrelated siblings), you may pick it AND tell the dev up-front so they can correct. 
* Zero matches → ASK permission to create a new Keploy app with the derived name; on yes, call createApp({name, endpoint}) and use the returned id. * Alternatively pass app_name_hint to THIS tool and the server resolves it (same rules; multiple/zero → typed error). (b) KEPLOY BINARY VERIFICATION: * Bash: "keploy --version" (or "~/.keploy/bin/keploy --version"). If it exits non-zero the binary is missing. * If missing OR older than this MCP server was built against, install/upgrade: curl --silent -O -L https://keploy.io/ent/install.sh && source install.sh * Re-verify with "keploy --version"; fail loudly if still absent (tell the dev where keploy put the binary so they can add it to PATH). ===== DOCKER-COMPOSE NETWORK RULE (absolute) ===== Use the SAME compose file + service that was used in the validate-curl phase. Do NOT point keploy at a second "keploy-only" compose file — docker-compose isolates each file into its own project + network, so the app container spawned by keploy cannot reach the DB/Kafka containers that validate brought up (and the network-name collision blocks keploy from starting). Correct flow: (i) Validate phase: "docker compose up -d" (brings up app + deps on network <project>_default). (ii) Before calling record_sandbox_test, Bash: "docker compose stop <app_service> && docker compose rm -f <app_service>" — stop ONLY the app service; leave deps running so keploy's new app container can reach them on the existing network. (iii) Pass app_command = "docker compose up <app_service>" (same compose file, same project → same network). container_name = the actual name set by compose (e.g. "orderflow-producer", not "producer"). ===== RESOLUTION RULES (server-side, no guessing) ===== 1. App: caller provides app_id OR app_name_hint. With a hint, the server does listApps({q: hint}). Zero matches → typed error; multiple → typed error listing them so Claude asks the dev. 2. Suites: DEFAULT IS "ALL LINKED". When the dev says "record my sandbox tests" / "rerecord everything" / "refresh my recordings" with no specific suite named, LEAVE BOTH suite_ids AND suite_name_hint UNSET. Do NOT list suites first and pass a comma-joined UUID list back — the CLI resolves "every linked suite for the app" itself, cleaner and less brittle. Only pass a narrower selector when the dev explicitly names suites: - suite_ids (comma-separated, exact) — when you already have the IDs. - suite_name_hint (case-insensitive substring match) — when the dev names suites by human phrasing like "the auth suite" or "deterministic". Every suite whose name contains the substring is recorded. If the dev asks to record suites that don't exist yet (zero match) → typed error. Any ≥1 match is fine. DO NOT prompt the dev for which suites to record — default to all linked if they didn't name any. ===== DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id ===== Suites live on a (app_id, branch_id) tuple. A bare suite_id has NO on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name. 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. 
For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust without a hit, walk every OPEN branch on each candidate app via list_branches → getTestSuite. Then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. The standard pattern when "search the suite by id" returns nothing is NOT "give up and ask the dev which app" — it's "the suite exists on a BRANCH, walk discovery". Suites created via create_test_suite + rerecord on a Keploy branch are INVISIBLE to a main-view listTestSuites; you have to scope each call to a branch. After resolving once in a session, REUSE the {app_id, branch_id} for any subsequent suite-targeted call (replay_sandbox_test, update_test_suite, replay_test_suite); don't re-walk discovery for every action. ===== PREREQUISITES ===== - app_command: shell command that starts the dev's app (e.g. "docker compose up producer"). - app_url: base URL the app listens on, e.g. http://localhost:8080. - app_dir: absolute path to repo root. - container_name if app_command is docker-compose. - keploy binary on PATH. If `which keploy` returns nothing, install it before calling this tool with: `curl --silent -O -L https://keploy.io/ent/install.sh && source install.sh` (the enterprise installer, per the KEPLOY BINARY VERIFICATION section above; the OSS install.sh build has no sandbox subcommand). ===== AFTER CALLING — walk the playbook ===== The response includes a "playbook" array; execute its steps in order. The flow is HEADLESS — one background process, NDJSON progress events on a local file, no separate HTTP surface to bind. THERE IS NO SEPARATE CLEANUP STEP — the CLI exits on its own once phase=done is written. 1. Spawn the `keploy record sandbox --cloud-app-id …` process via Bash (run_in_background). Capture its PID into $KEPLOY_PID. 2. Poll progress by repeatedly calling Bash with `tail -n 1 $PROGRESS_FILE`. Each call returns instantly; the MCP round-trip between calls paces the loop. DO NOT wrap in a sleep loop — Claude Code's Bash rejects standalone `sleep N` and chained-sleep patterns. Read .phase off each line; stop when phase=done. The wait_for_done step's built-in `kill -0 $KEPLOY_PID` check is the safety-net for silent early-exit (CLI died before writing the terminal event) — it lets the loop exit instead of spinning forever on a dead process. 3. Read the terminal event (last line of $PROGRESS_FILE). It carries data.ok, data.error (on failure), data.test_run_id (on success). 4. On data.ok=true: call get_session_report(app_id, test_run_id) with verbose=true to surface the rerecord report. On data.ok=false: show data.error to the dev directly (optionally tail the log_file for stderr context) and SKIP get_session_report (there's no run to fetch). Auto-replay + linkTestSetToSuite run INSIDE the CLI process before it writes phase=done — if the terminal event says ok=true, linkage already happened. You do NOT need to wait for a separate post-success window; the CLI doesn't exit until it's fully done. INTERRUPTED FLOWS: if your conversation dies between step 1 and step 2 (Claude crashes, connection drops, dev cancels), the CLI keeps running in the background. It's not orphaned — it'll finish its run and write phase=done. To abort early, the dev can `pkill -f "keploy.*sandbox"` manually; otherwise just let it complete and resume by re-reading the progress file on the next turn.
===== NDJSON SCHEMA — the contract ===== Every line in the progress_file is one JSON object with this envelope: { "ts": "<RFC3339-nano>", "command": "record" | "test", "phase": "<phase-name>", "message": "<optional human-readable>", "data": { ... phase-specific ... } // optional } The phase vocabulary is intentionally extensible — new lifecycle phases get added over time as the CLI grows (started, agent_up, app_starting, suites_running_start, record_done, auto_replay_skipped, upload_done, linking_done, etc.). There are only TWO phases the AI must handle programmatically; everything else is informational and you should NOT switch on phase names you don't recognize: * phase != "done" → keep polling. Optional: surface message/data to the dev as ambient progress ("agent is starting...", "suites uploading..."), but never branch on a specific intermediate phase name. * phase == "done" → terminal event. Stop polling. The data envelope carries: - data.ok bool true on success, false on failure - data.error string (only on ok=false) one-line failure summary - data.test_run_id string (only on ok=true) pass to get_session_report - data.app_id string echo of the app_id passed to the tool - data.artifact_dir string local path to captured/replayed artifacts - data.dashboard_url string UI link to drill into the run If you observe a phase you don't recognize, IGNORE it and keep polling. If "done" itself is renamed by a future CLI version, the wait_for_done step's PID-alive guard is your safety net (the poll loop exits when the CLI dies); surface log_file contents to the dev. ===== "ALL SUITES FAILED CAPTURE" — special signal ===== If you see a `phase: "auto_replay_skipped"` event with `message: "all suites failed during rerecord; skipping replay + linking"` ahead of the terminal `done` event, every suite failed at the CAPTURE phase (before auto-replay even ran). The CLI fails closed in this case — auto-replay and suite linking are SKIPPED, so every per_suite entry comes back linked=false. Watch for this trap: the terminal `data.ok=true` because the CLI itself completed cleanly (it didn't crash; it just had nothing to record successfully). DO NOT read data.ok=true as "rerecord succeeded" — read `<linked>/<total>`. If linked == 0, this is a HARD failure that needs diagnosis, not a partial-linkage case. ALWAYS surface the dashboard URL on this case. The terminal `done` event still carries `data.dashboard_url` and `data.test_run_id` (atg's TestSuiteRun was created during the capture phase); emit them verbatim so the dev can drill into per-step failures in the UI: "0/N suites have a sandbox test — every suite failed during the capture phase, so auto-replay and linking were skipped. Dashboard: <data.dashboard_url> (test_run_id=<id>)" EDGE CASE: if `data.test_run_id` is empty, atg never inserted a TestSuiteRun (typically a pre-flight validation failure — branch-id rejection, app unreachable, etc.). The dashboard URL won't resolve. Skip the URL, surface the log_file contents instead so the dev can read the early-stage failure. Recovery is the same as WHEN linked=false below — read failed_steps for each suite and pick route B (fix code) / C (update suite + record again) / SKIP. Don't infra-retry; capture-phase failures across every suite usually mean the app is broken, the suite shapes are stale, or the dev's local app isn't reachable. 
===== LINKAGE VERIFICATION ===== After get_session_report returns, for EVERY suite that went into this record, call getTestSuite({suite_id}) and check whether the suite has a sandbox test (linked=true / non-empty test_set_id). A suite without a sandbox test cannot be replayed — replay_sandbox_test will 400 on it with "no sandboxed tests" until a successful record produces one. ===== WHEN linked=false — recovery rules ===== A suite with linked=false after record_sandbox_test means the record process couldn't produce a sandbox test for that suite. The SUITE ITSELF still exists; it just has no sandbox test. Diagnose WHY by reading the rerecord report's failed_steps for that suite: * No failed_steps OR pure infra error (link-commit / upload failed, no step diverged) → call record_sandbox_test AGAIN scoped to just the unlinked suite_ids. The tool is idempotent on the suite; safe to re-run. * failed_steps with assertion diffs (response shape, body fields, status code shifted from what the suite expected) → the suite is stale relative to current app behavior. The CONTRACT changed: - Change is INTENTIONAL (new field, renamed key, different status code is the new normal) → call update_test_suite to update the affected step's response / assertions to match the new contract, THEN call record_sandbox_test on the updated suite. - Change is UNINTENTIONAL (app regressed) → fix the app code first, then call record_sandbox_test. No suite update needed; the original test was correct. * failed_steps with 500s / handler crashes / connection refused → the app is broken at the wire level. Fix the app, then call record_sandbox_test. Don't update_test_suite to absorb a real failure. NEVER: * Don't call create_test_suite to "redo" the suite — it already exists; re-creating authors a duplicate (see BEFORE CREATING in create_test_suite). * Don't blindly loop record_sandbox_test without diagnosing failed_steps first; if the cause is suite-vs-app mismatch, retries won't help. ===== MANDATORY OUTPUT — Phase 2 section ===== Your final message to the dev MUST contain a section with this exact heading (do NOT collapse into a single pass/fail table with the rerecord report; do NOT merge with Phase 1 or Phase 3): ### Phase 2 — Sandbox-test linkage **<linked>/<total> suites have a sandbox test** _Suites with a sandbox test_ | Suite name | suite_id | test_set_id | Capture pass/total | | --- | --- | --- | --- | | <name> | <suite_id> | <test_set_id> | <p>/<t> | (emit even if zero — one row per linked suite, or "_(none)_" in place of rows) _Suites without a sandbox test_ (omit ONLY if every suite linked) | Suite name | suite_id | Likely cause | | --- | --- | --- | | <name> | <suite_id> | gate1 / gate2 / infra | Likely-cause decoding: assertion diffs → gate 1 upstream-replay failure; upstream-passing + mock-replay-diff → gate 2 mock-determinism mismatch; zero failures + still unlinked → infra link-commit issue. Then proceed to replay_sandbox_test ONLY for the suites that DID link; the unlinked ones will 400 on replay. ===== DO NOT ===== * DO NOT fall back to raw keploy CLI (`keploy rerecord -t …`) if the MCP tool drops mid-flow — the CLI subcommand runs test-sets directly and does NOT update the suite's test_set_id. See MCP DISCONNECT RECOVERY in the top-level instructions.
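The docker-compose network rule above, condensed into the three shell moves it prescribes; "producer" / "orderflow-producer" are the description's own example names, so substitute your compose service key and its container_name:

```
# (i) validate phase: app + deps come up together on <project>_default
docker compose up -d
# (ii) stop ONLY the app service; deps stay up on the existing network
docker compose stop producer && docker compose rm -f producer
# (iii) then call record_sandbox_test with:
#   app_command    = "docker compose up producer"   # same file + project = same network
#   container_name = "orderflow-producer"           # compose-assigned name, not the service key
```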
  • Search the Nova Scotia Open Data catalog (data.novascotia.ca) for datasets by keyword, category, or tag. Returns dataset names, IDs, descriptions, column names, and direct portal links. Use list_categories first to see valid category and tag names. Use the returned dataset ID with query_dataset or get_dataset_metadata for further exploration.
  • Generate a Ricardian Contract from a template. Creates a dual-format contract (human-readable legal text + machine-parsable JSON) using AI, linked by SHA-256 hash. The contract is stored on Ambr and accessible via the Reader Portal. Requires a valid API key (X-API-Key header on the HTTP request) with available credits. Use ambr_list_templates first to discover templates and their required parameters. Args: - template (string, required): Template slug (e.g. "c1-agent-delegation") - parameters (object, required): Template-specific parameters matching the schema - principal_declaration (object, required): { agent_id, principal_name, principal_type } - parent_contract_hash (string, optional): SHA-256 hash of parent contract for amendments - amendment_type (string, optional): "original" | "amendment" | "extension" Returns: - contract_id: Unique ID (e.g. "amb-2026-0042") - sha256_hash: SHA-256 hash for verification - status: Contract status - reader_url: URL to view in Reader Portal - credits_remaining: Remaining API credits Legibility: Output is dual-format by construction and replayable to the original SHA-256 hash — the basis of Ambr's legibility guarantee.
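A sketch of the argument object implied by the Args list; every value below is a placeholder, and the real parameters schema should be discovered via ambr_list_templates first:

```
{
  "template": "c1-agent-delegation",
  "parameters": { "...": "..." },
  "principal_declaration": {
    "agent_id": "...",
    "principal_name": "...",
    "principal_type": "..."
  },
  "amendment_type": "original"
}
```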
  • Place a $fomox402 bid on a game round. Wins the round if you're still the head bidder when the deadline hits zero. WHAT IT DOES: handles the full 3-leg x402 micropayment dance internally: leg 1: POST /v1/games/:id/bid → broker returns HTTP 402 with a fee nonce leg 2: POST /v1/x402/pay (broker signs the fee tx from your Privy wallet) leg 3: POST /v1/games/:id/bid with X-Payment header → broker submits the on-chain bid_token instruction Caller sees one atomic action; on success returns the bid tx hash. WHEN TO USE: any time you want to be the head bidder. Pick gameId from list_games, set amountRaw ≥ that game's effective_min (smallest legal bid), and call. FEES: ~0.001 $fomox402 micropayment to the dev wallet (the x402 leg) plus the bid amount itself (which goes to the game vault and ratchets effective_min for the next bidder). Solana network fees ~0.00001 SOL/tx. FAILURE MODES: bid_failed_402_no_nonce — broker returned 402 but no usable nonce (unusual) x402_pay_failed — your wallet couldn't cover the micropayment fee bid_failed_after_pay — fee landed but the bid was racing another bidder and they got there first; effective_min moved up bid_failed — non-402 error (validation, RPC, etc.) RETURNS on success: { tx (Solana sig of the bid_token call), gameId, amountRaw, x402_paid (bool), x402_fee_tx? (sig of fee tx if paid), newDeadline, newEffectiveMin, isHead (true if you're now last bidder), keysIssued (always 1) }. MINTS 1 KEY: every successful bid mints you one key on the round. Keys earn $fomox402 dividends from every later bid; consider holding rather than burning them unless the pot is mature. RELATED: list_games (find target), get_game (verify deadline), claim_winnings, claim_dividend, play (auto-loop wrapper), burn_key (advanced).
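The 3-leg dance sketched with curl, purely for illustration; the tool performs all of this internally, and the broker host, auth, and body fields shown are assumptions:

```
# leg 1: bid with no payment attached; broker answers HTTP 402 + a fee nonce
curl -s -X POST "$BROKER/v1/games/$GAME_ID/bid" -d "{\"amountRaw\":\"$AMOUNT\"}"
# leg 2: pay the micropayment fee (broker signs the fee tx from your Privy wallet)
curl -s -X POST "$BROKER/v1/x402/pay" -d "{\"nonce\":\"$NONCE\"}"
# leg 3: re-submit the bid carrying the X-Payment header; broker submits bid_token on-chain
curl -s -X POST "$BROKER/v1/games/$GAME_ID/bid" \
  -H "X-Payment: $PAYMENT_PROOF" -d "{\"amountRaw\":\"$AMOUNT\"}"
```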
  • Lookup the meaning of a specific angel number by its sequence. Supported: 000, 111–999 (single repeating digit), 911, 1010, 1111, 1122, 1212, 1234, 2222–9999 (double repeating digit). SECTION: WHAT THIS TOOL COVERS Returns the theme, primary message, actionable guidance, and associated life areas for a specific angel number sequence. Each sequence carries distinct meaning in modern numerological tradition. 111 = manifestation portal. 444 = angelic protection. 999 = cycle completion. 1111 = awakening gateway. 555 = transformation in progress. Pass the number as a string exactly as it appears (e.g. '444' not 444). SECTION: WORKFLOW BEFORE: None — standalone. AFTER: None. SECTION: INPUT CONTRACT number: string — the angel number sequence to look up. Examples: '111', '444', '1111', '911'. SECTION: OUTPUT CONTRACT data.number (string) data.theme (string) data.message (string) data.guidance (string) data.areas[] (string array) SECTION: RESPONSE FORMAT response_format=json — structured JSON. response_format=markdown — human-readable. Both return identical data. SECTION: COMPUTE CLASS FAST_LOOKUP SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): Unsupported number → 404, surfaces as MCP INTERNAL_ERROR. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR SECTION: DO NOT CONFUSE WITH asterwise_get_angel_number_today — today's collective daily angel number. asterwise_get_angel_number_personal — personal angel number from birth date. asterwise_get_number_meaning — Pythagorean numerology meaning for 1–33; different tradition.
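A sketch of the json response implied by the OUTPUT CONTRACT; the 444 theme comes from the description above, and the elided strings are left as placeholders:

```
{
  "data": {
    "number": "444",
    "theme": "angelic protection",
    "message": "...",
    "guidance": "...",
    "areas": ["..."]
  }
}
```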
  • Request a signed URL to upload a datasheet PDF for a component whose datasheet we don't have. Use this when search_parts / get_part_details / prefetch_datasheets return datasheet_status='no_source' (and a retry didn't help) or 'unsupported'. Free — the upload fee is only charged on confirm_datasheet_upload after we validate the file. Flow (3 steps): 1. Call request_datasheet_upload with the MPN, the file's SHA-256, and its byte size. You get back an upload_url, upload_method ('PUT'), upload_headers, and an opaque upload_token. 2. Upload the PDF directly to the returned URL with curl: `curl -X PUT -H 'Content-Type: application/pdf' --data-binary @file.pdf "$UPLOAD_URL"` (add any headers from upload_headers). 3. Call confirm_datasheet_upload with the upload_token. Server verifies the bytes, re-hashes, checks for the MPN on the first page, charges the upload fee (50¢), and queues extraction. Returns document_id + status='pending'. Validation rules (checked at confirm time, refunded on failure): - File must be a valid PDF (magic bytes + parseable). - Actual SHA-256 must match expected_sha256. - Actual byte size must match size_bytes (±0). - MPN or its core stem must appear in the first page text (catches wrong-file uploads). Scanned image-only PDFs will fail this check — upload a text-based PDF. - Max 50MB per file. No dev-kit manuals / BOB schematics / app-notes as datasheets — use the matching MPN's actual datasheet. Uploaded datasheets are scoped to your organization (private). They satisfy read_datasheet, search_datasheets, check_design_fit, and analyze_image for your org's tokens only. Tokens expire after 15 minutes. If upload fails or times out, just call request_datasheet_upload again.
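The 3-step flow end to end; step 2's curl is verbatim from the description, the hash and size commands are standard shell, and the MCP legs appear as comments:

```
# Inputs for step 1
expected_sha256=$(sha256sum file.pdf | cut -d' ' -f1)
size_bytes=$(wc -c < file.pdf)       # must match exactly at confirm time
# Step 1 (MCP): request_datasheet_upload({mpn, expected_sha256, size_bytes})
#   -> { upload_url, upload_method: "PUT", upload_headers, upload_token }
# Step 2: upload the PDF directly (add any headers from upload_headers)
curl -X PUT -H 'Content-Type: application/pdf' --data-binary @file.pdf "$UPLOAD_URL"
# Step 3 (MCP): confirm_datasheet_upload({upload_token})
#   -> verifies bytes + MPN, charges the 50-cent fee, queues extraction;
#      returns document_id + status='pending'. Tokens expire after 15 minutes.
```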
  • WHEN: user asks whether a D365 requirement is standard, needs config, needs an extension, or is a full gap. Also triggered by gap analysis or fit/gap classification of a Work Item. GAP / FIT CLASSIFIER -- Analyse a D365 F&O requirement (from an ADO Work Item OR plain text) and classify it as one of four verdicts: (1) Standard Fit -- D365 covers this out-of-the-box, no dev needed; (2) Config Fit -- D365 has it but requires parameter / profile setup; (3) Extension Fit -- standard process exists; a CoC/event-handler is enough; (4) Gap -- no standard coverage; custom development required. For each requirement block the tool returns: detected D365 domain (Settlement, PaymentJournal, DataImport, ...); standard objects found in KB and their process step; existing extensions in the custom model (if D365_CUSTOM_MODEL_PATH is set); effort estimate (hours) and a one-paragraph reasoning. Triggers: 'analyse the requirement', 'is this a gap or fit', 'gap analysis WI #N', 'standard or custom for WI #N', 'does D365 cover this'. NOTE: when a WI has already been analysed by `ado_analyze_workitem` in the same turn, pass the requirement text directly via `requirementText` -- do NOT re-fetch with `workItemId`.
  • Generate domain name ideas from a keyword and check their availability. Uses common prefix/suffix patterns to generate 10-15 domain candidates across .com, .io, .ai, .dev, .co and checks all of them via fast RDAP lookups. Returns available domains with affiliate registration links. Args: keyword: A keyword or short business name (e.g. "taskflow").
  • List all dataset categories and themes with counts per portal. Great first step to discover what data types are available before searching with search_datasets. Returns total datasets, count per portal and category list with counts. No parameters required.
  • Rank active AI/ML jobs against a candidate profile (skills, salary range, workplace, level). Scoring combines tag overlap (+2 per match), salary overlap (+3), workplace/level/type/location matches, and description keyword hits. Use this when an agent is choosing which role to surface to its user — it returns pre-ranked matches with scoring explanations.
  • Find the planning portal URL for a UK postcode. Returns council info and portal search URLs. Does not scrape planning applications -- use the returned URLs to search directly.
  • Returns all dataset categories and popular tags available on the Nova Scotia Open Data portal. Use this first to discover valid category names before calling search_datasets with a category filter.