Glama
133,797 tools. Last updated 2026-05-13 07:28

"HERE" matching MCP tools:

  • Return what's actually firing into ingest as a structured signature, for diffing against the project's authored `event-schema.yaml`. Different shape from the YAML — this is observation, not declaration. Response shape:

    ```
    {
      period: "30d",
      events: [
        { name: "checkout", count: 42, properties: {
            plan:  [{ type: "string", occurrences: 42 }],
            count: [{ type: "number", occurrences: 39 },
                    { type: "string", occurrences: 3 }]   // ← drift!
        }}
      ]
    }
    ```

    Each property value is an *array* of typed observations. One entry = the key consistently fired with one type. Two-plus entries = the same key fired under multiple storage columns in the period, which is exactly the silent-type-drift signal you want to surface (one call site sending `count: 5`, another sending `count: "5"`). Use it to compare declared vs reality:
    - Events declared, missing here → dead instrumentation
    - Events here, not declared → unauthored events firing
    - Properties on a declared event missing from the schema → silent property drift
    - `properties[key].length > 1` → type-column collision; one of the call sites is sending the wrong type

    Examples:
    - "what events are firing in production" → no params (defaults to 30d, excludes pageview/pageview_end)
    - "did the spec drift this week" → period="7d"
    - "include automatic pageview events too" → include_pageviews=true

    Limitations: returns keys + types only, no property *values*. `occurrences` is row-level (each event firing counts), not unique visitors. Excludes `pageview` and `pageview_end` by default since the SDK extension owns their schema. Pairs with: `events.list` for per-event volume context (this tool also returns `count`, but `events.list` supports filters and grouping); the local `event-schema.yaml` for declared-vs-observed diff.
    Connector
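The declared-vs-observed comparison above can be sketched as a small diff routine. This is a minimal sketch, not the tool itself: `diff_schema` and the parsed-YAML `declared` shape are hypothetical, while `observed` mirrors the documented response shape.

```python
def diff_schema(declared, observed):
    """declared: {event_name: {prop, ...}} (hypothetical parsed event-schema.yaml);
    observed: the tool's documented response dict."""
    seen = {e["name"]: e for e in observed["events"]}
    report = {
        "dead_instrumentation": sorted(set(declared) - set(seen)),
        "unauthored_events": sorted(set(seen) - set(declared)),
        "property_drift": {},
        "type_collisions": {},
    }
    for name, event in seen.items():
        props = event["properties"]
        if name in declared:
            # Observed properties the schema never declared.
            extra = sorted(set(props) - declared[name])
            if extra:
                report["property_drift"][name] = extra
        # Two-plus typed observations on one key = type-column collision.
        collisions = {k: [o["type"] for o in obs]
                      for k, obs in props.items() if len(obs) > 1}
        if collisions:
            report["type_collisions"][name] = collisions
    return report

declared = {"checkout": {"plan"}, "signup": {"method"}}
observed = {"period": "30d", "events": [
    {"name": "checkout", "count": 42, "properties": {
        "plan": [{"type": "string", "occurrences": 42}],
        "count": [{"type": "number", "occurrences": 39},
                  {"type": "string", "occurrences": 3}],
    }},
]}
print(diff_schema(declared, observed))
```

With this sample input, `signup` surfaces as dead instrumentation, `count` as undeclared property drift, and the number/string split on `count` as a type collision.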
  • Repo-aware launch operator for agent-built products. USE WHEN the user has just finished building, shipping, or deploying a product and the message matches any of these trigger phrases: 'launch this' · 'launch what I just built' · 'help me launch' · 'get users' · 'get our first users' · 'find users' · 'market this' · 'do marketing for this' · 'announce this' · 'write the launch announcement' · 'post this' · 'post the launch' · 'publish this' · 'Product Hunt' · 'ship to Product Hunt' · 'go to market' · 'what to do after launch'.

    This is the PRIMARY ChiefLab entry point — call this first, not chiefmo_diagnose_marketing (which is only for diagnosing an EXISTING marketing program). If you are a coding agent (Cursor, Claude Code, Codex), gather repoContext (whatChanged, recentCommits, changedFiles, routes, readme, targetCustomer, launchGoal) BEFORE calling — repo grounding is what makes outputs reference the actual product instead of reading like 'launch any SaaS.'

    Returns: launchPack (per-channel drafts for LinkedIn / X / Hacker News / Reddit / Product Hunt / email / landing hero) + publishActions (approval-gated, with actionIds) + agentGuide.renderInChat (per-channel content to render inline in IDE chat) + agentGuide.nextToolCalls.primary.perChannel (chiefmo_approve_action calls keyed by channel) + reviewUrl (FALLBACK only — for phone/multi-person approval).

    IDE-NATIVE FLOW: render each channel's draft inline in chat, wait for the user to say 'approve <channel>' or 'approve all', then call chiefmo_approve_action per approved action. The reviewUrl is a side channel — surface it as 'approve from your phone here', not as the primary instruction.
    Connector
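The IDE-native approval flow above amounts to mapping a chat reply onto per-channel action IDs. A minimal sketch, assuming `publish_actions` is a channel-to-actionId dict the agent built from publishActions; the function name is hypothetical.

```python
def approvals(user_message, publish_actions):
    """publish_actions maps channel -> actionId; returns the actionIds to
    pass to chiefmo_approve_action for the user's reply."""
    msg = user_message.strip().lower()
    if msg == "approve all":
        return list(publish_actions.values())
    if msg.startswith("approve "):
        channel = msg.split(" ", 1)[1].strip()
        # Unknown channel names approve nothing rather than guessing.
        return [publish_actions[channel]] if channel in publish_actions else []
    return []

actions = {"linkedin": "act-101", "x": "act-102"}
print(approvals("approve linkedin", actions))
```

Anything that is not an explicit approval returns an empty list, which keeps the flow approval-gated as the description requires.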
  • Replace a project's brand profile with the supplied values. All fields are required — the whole profile is overwritten, so first call get_project_profile, merge your changes into the existing values, then send the complete profile here. Saving triggers a background refresh of prompt suggestions. Confirm changes with the user before calling. Audience distribution percentages must sum to 100. The project's display name is not part of the profile and cannot be changed via this tool.
    Connector
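The read-merge-write contract above (fetch the full profile, overlay changes, send the complete result) can be sketched with plain dicts. The helper names and the profile fields are hypothetical stand-ins for the real get_project_profile response; only the "percentages must sum to 100" rule comes from the description.

```python
def merge_profile(existing, changes):
    """Overlay changes onto the full existing profile. Note this is a
    shallow merge: a changed nested dict replaces the old one wholesale."""
    return {**existing, **changes}

def validate_profile(profile):
    """Audience distribution percentages must sum to exactly 100."""
    total = sum(profile["audience_distribution"].values())
    if total != 100:
        raise ValueError(f"audience percentages sum to {total}, expected 100")
    return profile

existing = {"tone": "formal",
            "audience_distribution": {"developers": 70, "managers": 30}}
changes = {"audience_distribution": {"developers": 60, "managers": 40}}
print(validate_profile(merge_profile(existing, changes)))
```

Sending `merge_profile(existing, changes)` rather than `changes` alone is the point: the whole profile is overwritten, so partial payloads would silently drop fields.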
  • Replay the sandbox test for one or more suites against captured mocks — re-runs the suite's steps against the dev's locally-running app while keploy serves outbound calls (DB, downstream HTTP, etc.) from the captured mocks. Use this when the dev says "replay", "run my sandbox tests", "integration-test", "check if mocks still match" — the keywords "sandbox" / "replay" / "mocks" / "integration-test" all map here. Also the REPLAY STEP of FROM-SCRATCH: call this LAST (after create_test_suite + record_sandbox_test) to give the dev the whole-app regression picture against the freshly captured mocks. Output is a SANDBOX RUN REPORT — it answers "does the suite still hold up against its captured baseline?".

    DISAMBIGUATION — pick this tool vs. replay_test_suite:
    USE replay_sandbox_test (THIS TOOL) when the dev says:
    * "run my sandbox tests" / "replay my sandbox tests"
    * "integration-test my app" / "run the integration tests"
    * "check if my mocks still match" / "replay against the captured mocks"
    * "rerun my sandbox suite" (with the word "sandbox")
    Trigger keyword: an explicit "sandbox" / "replay" / "mocks" / "integration-test" — a silent signal that the dev wants captured-mock replay, NOT live-app execution.
    USE replay_test_suite INSTEAD when the dev says:
    * "run the test suite" / "run my test suites" (bare — no "sandbox")
    * "execute test suite X" / "run suite 810d3ebe…"
    * "test the suite again" / "smoke test against the live app"
    Bare verbs ("run / test / execute") applied to "the suite" without the word "sandbox" mean LIVE-APP execution, NOT captured-mock replay. replay_test_suite hits the dev's running localhost app directly via HTTP — no docker spin-up, no mocks. After a record_sandbox_test run, the natural next step is THIS tool (replay against the just-captured mocks). After create_test_suite / update_test_suite, the natural next step is replay_test_suite (validate against the live app). When the dev's verb is bare and the prior turn doesn't make the intent obvious, ASK rather than picking sandbox-replay silently — code-change regressions can hide under "mock didn't match" failures.

    DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id:
    Suites live on an (app_id, branch_id) tuple. A bare suite_id has NO on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200:
    1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit is non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name.
    2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4.
    3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try the next.
    4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try the next.
    5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app via list_branches → getTestSuite. Then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair.
    After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action.

    SCOPE — whole-app vs single-suite:
    * Default: LEAVE suite_ids UNSET → the tool resolves "every suite for the app that has a sandbox test (test_set_id populated)" and replays them all. Use this for "run my sandbox tests" / "check if my tests still pass" — whole-app regression. New suites auto-pick up.
    * Single / subset: PASS suite_ids when the dev names specific suites — "replay sandbox test for suite 810d3ebe-…", "replay only the auth suite", "run suite X and Y". The tool validates that each requested id is actually a suite with a sandbox test (has test_set_id); an unlinked id gets a precise "record first" error instead of an opaque downstream CLI failure.
    This tool resolves the app, picks the suite set per the rule above, and returns a single playbook that drives the replay for them. It does NOT record.

    WHAT THIS TOOL DOES INTERNALLY (so you don't have to):
    1. Resolves app_id — use the explicit app_id if the caller has one; otherwise pass app_name_hint (usually the cwd basename) and the server does listApps with a substring match. Multiple matches → error listing them; zero matches → error suggesting the dev generate a suite first.
    2. Lists test suites for the app, keeps only those with a non-empty test_set_id. Zero linked → typed "no linked sandbox tests" error.
    3. If suite_ids was passed, validates every requested id is in the linked-suites set; unlinked ids → typed error pointing to record_sandbox_test.
    4. Returns the headless playbook — walk it exactly: spawn the CLI in the background, tail the progress file (PID-alive guard built in), read the terminal event, fetch the report. No separate cleanup step — the CLI exits on its own.

    PREREQUISITES (same as record_sandbox_test — if you just recorded, you already have them; the same docker-compose network rule applies: use the same compose file + service, stop the app service before calling, leave deps running):
    - app_command: shell command that starts the dev's app (e.g. "docker compose up producer").
    - app_url: base URL the app listens on, e.g. http://localhost:8080.
    - app_dir: absolute path to the repo root.
    - container_name if app_command is docker-compose.
    - keploy binary on PATH. If `which keploy` returns nothing, install it before calling this tool with: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`.

    AFTER CALLING — walk the playbook. Same headless playbook shape as record_sandbox_test: spawn `keploy test sandbox --cloud-app-id …` in the background via Bash, poll `tail -n 1 $PROGRESS_FILE` repeatedly (no sleep loops; the wait_for_done step has a built-in `kill -0 $KEPLOY_PID` guard so the loop exits if the CLI dies silently), read the terminal NDJSON event (phase=done, data.ok, data.test_run_id), and — if ok=true — call get_session_report(app_id, test_run_id) with verbose=true at the end. No separate cleanup step needed; the CLI exits cleanly once phase=done is written.

    MANDATORY OUTPUT — Phase 3 section. Your final message to the dev MUST contain a section with this exact heading (do NOT merge it with Phase 2; do NOT compress the failed-steps table even when failures are homogeneous):
    ### Phase 3 — Sandbox run report
    Under it, emit the uniform three-subsection format owned by get_session_report: (i) per-suite table — one row per suite in per_suite, passing suites included, columns = Suite name | passed/total steps. (ii) failed-steps table — ONE ROW per entry in failed_steps[], columns = Suite | Step name | Method + URL | Expected → Actual status | mock_mismatch y/n. Never collapse rows. (iii) Diagnosis + Recommendation (see the get_session_report description for case-specific rules around mock_mismatch_dominant, repo-diff inspection, and the SKIP / FIX-CODE / FIX-TEST branching for fix-it follow-ups). Do NOT print aggregate step totals across suites — they mix unrelated suites and hide where the damage actually is.

    ROLLUP LINE: close the message with a final one-line rollup paragraph (no heading), in addition to the three phase sections. Mention the TOTAL number of suites replayed (which may exceed the count created in this session, because replay_sandbox_test covers every linked suite the app has). Example: "_Rollup: inserted 4 suites, 4/4 with sandbox tests after record, 3/4 suites passed sandbox replay across the app's 6 linked suites — 1 failure is likely keploy egress-hook, file an issue with the IDs above._"

    DO NOT:
    * DO NOT call update_test_suite or record_sandbox_test after this. The dev said RUN, not REFRESH.
    * DO NOT fall back to the raw keploy CLI (`keploy test …`) if the MCP tool drops mid-flow — the CLI runs test-sets directly and does NOT write results back to the MCP-visible TestSuiteRun. See MCP DISCONNECT RECOVERY in the top-level instructions.
    Connector
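The discovery walk in the description (resolve app by cwd basename, match the git branch, verify the suite) can be sketched as a resolution loop. The stubbed `list_apps` / `list_branches` / `get_test_suite` functions and their sample data stand in for the real listApps / list_branches / getTestSuite MCP calls; only the walk order comes from the description.

```python
# Stubbed API responses standing in for the real MCP server.
APPS = [{"id": "app-1", "name": "orderflow"}]
BRANCHES = {"app-1": [{"id": "br-1", "name": "main"},
                      {"id": "br-2", "name": "feat/x"}]}
SUITES = {("app-1", "br-2"): {"810d3ebe"}}

def list_apps(q):
    return [a for a in APPS if q in a["name"]]

def list_branches(app_id):
    return BRANCHES[app_id]

def get_test_suite(app_id, suite_id, branch_id):
    """True plays the role of a 200 response, False a 404."""
    return suite_id in SUITES.get((app_id, branch_id), set())

def resolve(suite_id, cwd_basename, git_branch):
    """Steps 2-4 of the walk: candidate apps -> matching branch -> verify.
    Returns (app_id, branch_id) on the first 200, else None (step 5, the
    open-branch sweep and the ASK fallback, is omitted here)."""
    for app in list_apps(cwd_basename):
        for br in list_branches(app["id"]):
            if br["name"] == git_branch and \
                    get_test_suite(app["id"], suite_id, br["id"]):
                return app["id"], br["id"]
    return None

print(resolve("810d3ebe", "orderflow", "feat/x"))  # ('app-1', 'br-2')
```

As the description says, cache the resolved pair for the rest of the session instead of re-walking discovery per call.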
  • Calculate a complete Western natal chart using the tropical zodiac and Swiss Ephemeris. Returns 10 planet positions with Placidus (or chosen) house placements, essential dignities per Ptolemy/Lilly/Hand, all active aspects using Robert Hand Table 2 orbs, and element/modality/hemisphere balance statistics.

    SECTION: WHAT THIS TOOL COVERS
    Tropical natal chart: Sun, Moon, Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune, Pluto. Each planet returns tropical longitude, sign, house (1–12), retrograde flag, dignity label (domicile/exaltation/detriment/fall/peregrine), dignity score (Lilly weights: domicile +5, exaltation +4, triplicity +3, term +2, face +1, detriment -5, fall -4), is_exaltation_degree (within 1° of exact exaltation), dignity_disputed (true for outer planets where exaltation/fall is disputed among modern astrologers). Aspects use Hand Table 2 orbs: conjunction/opposition 5°, square/trine 5°, sextile 3°, minor aspects 1.5°. Accuracy verified against astro-seek.com to within 0.01° for all 10 planets. Not Vedic sidereal (asterwise_get_natal_chart).

    SECTION: WORKFLOW
    BEFORE: None — this tool is standalone.
    AFTER: asterwise_get_western_transits_daily — layer current transits over this natal chart.
    AFTER: asterwise_get_western_synastry — compare this chart against a partner's chart.
    AFTER: asterwise_get_western_solar_return — annual return chart for the current year.

    SECTION: INPUT CONTRACT
    birth.date — YYYY-MM-DD. Example: '1985-11-12'
    birth.time — HH:MM (24-hour local time). Example: '06:45'
    birth.lat — Decimal degrees, north positive. Example: 19.076 (Mumbai)
    birth.lon — Decimal degrees, east positive. Example: 72.8777 (Mumbai)
    birth.timezone — IANA timezone string. Example: 'Asia/Kolkata', 'America/New_York', 'Europe/Rome', 'UTC'. Default: UTC. IMPORTANT: Timezone defaults to UTC — always supply the correct local timezone for accurate house cusps. An incorrect timezone shifts the Ascendant.
    birth.house_system — 'placidus' (default, most common), 'koch', 'equal', 'whole_sign'. Placidus is standard for most Western traditions. Whole sign is traditional/Hellenistic. NOTE: house_system is accepted here but silently ignored by the transit, return, synastry, composite, and progression endpoints — those always use the birth location coordinates without house-system selection.
    ayanamsa — not an input field here; the chart is always tropical regardless of any value supplied.

    SECTION: OUTPUT CONTRACT
    data.zodiac (string — 'tropical')
    data.house_system (string — the system used)
    data.ascendant — { longitude (float), sign (string), sign_index (int 0–11), degree_in_sign (float) }
    data.mc — same shape as ascendant
    data.planets[] — 10 objects (Sun through Pluto): name (string), longitude (float), sign (string), sign_index (int 0–11), degree_in_sign (float), house (int 1–12), is_retrograde (bool), dignity (string), dignity_score (int), is_exaltation_degree (bool), dignity_disputed (bool)
    data.houses[] — 12 objects: house (int 1–12), cusp_longitude (float), sign (string), sign_index (int 0–11), degree_in_sign (float)
    data.aspects[] — each: planet_a (string), planet_b (string), type (string), exact_angle (float), orb (float), is_applying (bool)
    data.elements — { fire (int), earth (int), air (int), water (int), dominant (string) }
    data.modalities — { cardinal (int), fixed (int), mutable (int), dominant (string) }
    data.hemisphere — { eastern (int), western (int), northern (int), southern (int) }
    data.ayanamsa_value (float — 0.0 for tropical)
    data.ayanamsa_used (string — 'tropical')
    data.birth_time_unknown (bool — always false)

    SECTION: RESPONSE FORMAT
    response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable natal report. Both modes return identical underlying data.

    SECTION: COMPUTE CLASS
    MEDIUM_COMPUTE (~300ms)

    SECTION: ERROR CONTRACT
    INVALID_PARAMS (local — caught before upstream call): WesternBirthData Pydantic violations (date pattern, time pattern, lat/lon bounds) → MCP INVALID_PARAMS
    INVALID_PARAMS (upstream): None expected for valid coordinates and dates post-1800.
    INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR
    Edge cases:
    — Polar latitudes (above ~65°N or below ~65°S) may cause Placidus house calculation failure; use the whole_sign or equal house system for polar births.
    — time='00:00' accepted; Ascendant- and house-sensitive results are unreliable for unknown birth times.

    SECTION: DO NOT CONFUSE WITH
    asterwise_get_natal_chart — Vedic sidereal chart using Lahiri ayanamsa; different zodiac, different house system, different planet set (9 grahas vs 10 tropical planets).
    asterwise_get_western_aspects — takes raw longitudes as input; use when you already have positions and don't need full chart computation.
    Connector
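The data.elements block above can be derived from the planets array alone. A minimal sketch: the sign-to-element table is the standard tropical assignment, the sample planet rows are illustrative, and `element_balance` is a hypothetical helper, not part of the tool.

```python
# Standard tropical sign -> element assignment.
ELEMENT = {s: e for e, signs in {
    "fire": ["Aries", "Leo", "Sagittarius"],
    "earth": ["Taurus", "Virgo", "Capricorn"],
    "air": ["Gemini", "Libra", "Aquarius"],
    "water": ["Cancer", "Scorpio", "Pisces"],
}.items() for s in signs}

def element_balance(planets):
    """Mirror the documented data.elements shape from data.planets[] rows."""
    counts = {"fire": 0, "earth": 0, "air": 0, "water": 0}
    for p in planets:
        counts[ELEMENT[p["sign"]]] += 1
    counts["dominant"] = max(("fire", "earth", "air", "water"),
                             key=lambda e: counts[e])
    return counts

planets = [{"name": "Sun", "sign": "Scorpio"},
           {"name": "Moon", "sign": "Cancer"},
           {"name": "Mercury", "sign": "Scorpio"}]
print(element_balance(planets))
```

A real chart would feed all 10 documented planet rows through the same count.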
  • Computes the full sidereal natal chart from BirthData and returns planet rows, houses, aspects, arudhas, upapada, bhava cusps, and avakhada metadata.

    SECTION: WHAT THIS TOOL COVERS
    Parashari-style natal endpoint: nine grahas with signs, degrees, nakshatras, combustion, retrograde, Bhava Chalit and rashi houses, twelve house cusps, graha and rashi drishti, arudha padas A1–A12, upapada lagna block, bhava madhya/sandhi arrays, ayanamsa metadata, and avakhada attributes. When include_interpretation=true, ascendant_sign_interpretation, moon_sign_interpretation, moon_nakshatra_interpretation, and interpretation are populated from interpretation JSON; otherwise they are null. It does not return PDFs, the yogas list (asterwise_get_yogas), or dasha trees (asterwise_get_dasha).

    SECTION: WORKFLOW
    BEFORE: None — this tool is standalone.
    AFTER: RECOMMENDED — asterwise_get_yogas — layer classical combinations after the base chart exists.

    SECTION: INPUT CONTRACT
    BirthData enforces date YYYY-MM-DD, time HH:MM, lat -90..90, lon -180..180, and the ayanamsa enum locally (Pydantic). Unknown birth time may be entered as time='00:00' without error; lagna-sensitive results are then unreliable and callers must handle that — the API does not flag it.

    SECTION: OUTPUT CONTRACT
    data.planets[] — nine objects: planet (string), sign (string), sign_num (int 0–11), degree (float), nakshatra (string), nakshatra_pada (int 1–4), is_retrograde (bool), is_combust (bool), is_deep_combust (bool), house (int — Bhava Chalit), rasi_house (int), bhava_chalit_house (int)
    data.houses[] — twelve objects: house (int), sign (string), sign_num (int), degree (float)
    data.ascendant (float)
    data.ascendant_sign (string — Sanskrit name)
    data.moon_sign (string)
    data.moon_nakshatra (string)
    data.ayanamsa_value (float)
    data.ayanamsa_used (string)
    data.avakahada: nakshatra, nakshatra_lord, charan (int), rashi, rashi_lord, varna, vashya, yoni, gana, nadi, paya, ascendant, ascendant_lord, sun_sign, sun_sign_lord (strings/ints per upstream)
    data.graha_drishti — object keyed by planet name; each value object keyed by house strings '1'–'12' with aspect strength int (25, 50, 75, or 100)
    data.rashi_drishti[] — active sign-to-sign aspect pairs: { from_sign (string), from_sign_num (int 0–11), to_sign (string), to_sign_num (int 0–11) }
    data.arudha_padas — keys A1–A12, each { sign_index (int), sign_name (string) }
    data.upapada_lagna: sign_index (int), sign_name (string), upapada_lord (string), second_from_upapada_sign_index (int), second_from_upapada_sign_name (string), planets_in_second_from_upapada[] (string array of planet names), has_benefic_in_second_from_upapada (bool), has_malefic_in_second_from_upapada (bool)
    data.bhava_madhya[] — twelve objects: { house (int 1–12), sign (string), sign_num (int 0–11), degree (float) }
    data.bhava_sandhi[] — twelve objects: { house (int 1–12), sign (string), sign_num (int 0–11), degree (float) }
    data.birth_time_unknown (bool — always false; no detection)
    data.fallback_method (null)
    ascendant_sign_interpretation (dict or null — sign interpretation from signs/ascendant.json when include_interpretation=true)
    moon_sign_interpretation (dict or null — Moon sign interpretation from signs/moon_sign.json when include_interpretation=true)
    moon_nakshatra_interpretation (dict or null — nakshatra interpretation from nakshatras/ files when include_interpretation=true)
    interpretation (list or null — planet-in-house interpretation list when include_interpretation=true)

    SECTION: RESPONSE FORMAT
    response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

    SECTION: COMPUTE CLASS
    MEDIUM_COMPUTE

    SECTION: ERROR CONTRACT
    INVALID_PARAMS (local — caught before upstream call): BirthData Pydantic violations (date/time/lat/lon/ayanamsa) → MCP INVALID_PARAMS
    INVALID_PARAMS (upstream): None — calendar years outside the supported upstream window surface as MCP INTERNAL_ERROR at the tool layer.
    INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR
    Edge cases:
    — time='00:00' accepted; lagna may be wrong if the true birth time is unknown — not auto-detected.
    — Interpretation fields are null unless include_interpretation=true on the request.

    SECTION: DO NOT CONFUSE WITH
    asterwise_get_divisional_chart — sixteen vargas only, not the primary radix bundle returned here.
    Connector
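The documented graha_drishti shape (planet name, then house strings '1'–'12' mapped to strength 25/50/75/100) is easy to scan client-side. A minimal sketch with illustrative sample data; `full_aspects` is a hypothetical helper, not a tool field.

```python
def full_aspects(graha_drishti):
    """Return, per planet, the sorted houses receiving a full (100) aspect."""
    return {planet: sorted(int(h) for h, s in houses.items() if s == 100)
            for planet, houses in graha_drishti.items()}

sample = {"Mars": {"3": 25, "4": 100, "7": 100, "8": 100},
          "Jupiter": {"5": 100, "7": 100, "9": 100}}
print(full_aspects(sample))
```

Converting the house keys to ints before sorting avoids the string ordering trap ('10' < '2').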

Matching MCP Servers

Matching MCP Connectors

  • Connect your AI agent to 20,000+ executives (CEO, CFO, COO) of all major companies, 1M+ verified quotes, and full interview transcripts. S&P 500, NASDAQ, AI startups, Federal Reserve officials. 4 MCP tools; pay only for what you use. $5/1,000 results, no minimum, no commitment, cancel anytime. You only pay for returned valid results. You must create an API key at https://mcp.ceointerviews.ai and then authenticate here using the header Authorization: Bearer <token>. For access to our full data API, please visit https://ceointerviews.ai

  • Official MCP server for HireSquire. Automate resume screening, candidate ranking, and interview scheduling for autonomous agents.

  • Look up grantmaking organizations by name, topic, or location. This tool searches 174K+ grantmaking organizations from IRS data using organization names plus grant-purpose/topic signals. Use it when you know the funder's name, want aligned funders for a cause area, or want to browse by location/size/NTEE code. Multi-word searches are ranked by relevance; simple browse/name fallback results are ordered by total assets. IMPORTANT: Use search_open_grants when the user needs active grant programs or RFPs. search_funders is for finding aligned grantmakers, including ones that may fund by relationship, LOI, or annual cycle rather than a live call.

    Args:
    - query: Search term for a funder name or cause-area phrase. Example: "Ford Foundation", "global health", "community foundation". Topic searches work best with 2+ words.
    - state: Two-letter US state code to filter by funder HQ location. Example: "CA", "NY", "TX"
    - city: City name to filter by (case-insensitive). Example: "San Francisco", "New York"
    - ntee_code: NTEE classification code to filter by. Example: "A20" (Arts Organizations), "B" (Education), "E" (Health)
    - min_assets: Minimum total assets filter in dollars. Example: 10000000 (foundations with $10M+ assets)
    - max_assets: Maximum total assets filter in dollars. Example: 100000000 (foundations with up to $100M assets)
    - has_er_grants: Filter to foundations that make expenditure responsibility grants (grants to non-501(c)(3) entities like PBCs, for-profits, and foreign orgs). Set to True to find only ER-active funders.
    - funder_type: Optional canonical funder_type to include. Examples: "community_foundation", "family_foundation", "corporate_foundation", "private_operating", "independent_foundation". Use this to narrow to a specific kind of grantmaker.
    - exclude_funder_types: Optional list of canonical funder_type codes to exclude from results. Useful for hiding operating nonprofits that surface with large "annual_grants" but are not actually grantmakers — e.g., exclude_funder_types=["private_operating"] hides PATH and similar operating organizations.
    - grantee_country_codes: Optional list of FIPS 10-4 country codes (e.g., "UK" for United Kingdom, "IN" for India, "KE" for Kenya, "SF" for South Africa) to restrict to funders whose grantees are located in those countries. Use this when the user is asking for funders that move money into a specific non-US geography. Country here is the grantee's HQ country, derived from foundation_grants. When set, the search is forced through the hybrid path; the ILIKE-only name-match path cannot filter by country. Distinct from `state`, which filters by the funder's own US HQ.
    - limit: Maximum number of results to return. Default: 20, Maximum: 50

    Returns: Dictionary containing:
    - results: List of matching foundations with ein, name, city, state, total_assets, annual_grants, website_url, has_er_grants, has_pris, funder_type (when populated), topic_match_count (when query takes the hybrid topic-search path — see below)
    - total_returned: Number of results returned
    - query_params: The search parameters used
    - note: Helpful context about the results

    topic_match_count is the number of distinct grant-purpose strings under this funder that matched the FTS query. It surfaces only on topical searches (multi-word queries that route to the hybrid path) and only for 990-filer rows; ILIKE-only and non-990 rows omit the field. Rule of thumb:
    - topic_match_count == 1 → single tangential grant, often noise (e.g. a credit-union foundation surfacing for "telemedicine" because of one passing-mention grant)
    - topic_match_count >= 3 → substantive topical coverage

    Examples:
    - search_funders(query="community foundation", state="CA")
    - search_funders(query="global health", min_assets=100000000)
    - search_funders(ntee_code="E", min_assets=50000000)
    - search_funders(state="NY", city="New York", limit=10)
    - search_funders(has_er_grants=True, state="CA")
    - search_funders(funder_type="community_foundation", state="CA")
    - search_funders(query="PATH", exclude_funder_types=["private_operating"])
    - search_funders(query="global health", grantee_country_codes=["IN"])
    - search_funders(query="climate resilience", grantee_country_codes=["KE", "SF"])
    Connector
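The topic_match_count rule of thumb above can be applied mechanically to the returned results list. A minimal sketch; the `classify` helper and its bucket names are hypothetical, and rows without the field (ILIKE-only / non-990) or with a count of 2, which the rule of thumb does not cover, are left unclassified.

```python
def classify(results):
    """Bucket funder rows by the documented topic_match_count heuristic."""
    buckets = {"substantive": [], "noise": [], "unclassified": []}
    for r in results:
        n = r.get("topic_match_count")  # absent on ILIKE-only / non-990 rows
        if n is not None and n >= 3:
            buckets["substantive"].append(r["name"])
        elif n == 1:
            buckets["noise"].append(r["name"])
        else:
            buckets["unclassified"].append(r["name"])
    return buckets

rows = [{"name": "Global Health Fund", "topic_match_count": 5},
        {"name": "Credit Union Foundation", "topic_match_count": 1},
        {"name": "Name-Match Trust"}]
print(classify(rows))
```

Surfacing the "noise" bucket separately lets an agent mention tangential hits without presenting them as aligned funders.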
  • Searches a date span for top-scoring muhurta windows for a named activity using Panchanga, Choghadiya, and classical siddhi flags at a location.

    SECTION: WHAT THIS TOOL COVERS
    Evaluates marriage, travel, griha_pravesh, business, education, medical, and vehicle_purchase (exact spellings upstream). Returns scored windows with tithi, nakshatra-related yoga name (Panchanga yoga, not natal yogas), vara, choghadiya metadata, boolean guards (rahu kaal, abhijit, amrita/sarvartha siddhi), and textual reasons. Unsupported activity strings are rejected upstream. It does not return a full month calendar (asterwise_get_panchanga_calendar) or only Choghadiya rows (asterwise_get_choghadiya).

    SECTION: WORKFLOW
    BEFORE: None — this tool is standalone.
    AFTER: asterwise_get_panchanga — drill into Panchanga limbs for a chosen winning date.

    SECTION: INPUT CONTRACT
    activity must be one of the supported English slugs above — not validated locally; bad values become MCP INTERNAL_ERROR. from_date/to_date ordering and span rules are enforced upstream. Location coordinates reuse LocationInput validation for lat/lon/date pattern.

    SECTION: OUTPUT CONTRACT
    data.event_type (string)
    data.from_date (string)
    data.to_date (string)
    data.timezone (string)
    data.ayanamsa (string)
    data.total_windows_evaluated (int)
    data.top_windows[] — each: date (string — YYYY-MM-DD), start (string — HH:MM local), end (string — HH:MM local), score (int — 0–100), choghadiya (string), choghadiya_type (string), yoga (string — Panchanga yoga name), vara (string), vara_number (int — 1–7), tithi (string), tithi_number (int — 1–30), is_rahu_kaal (bool), is_abhijit (bool), is_amrita_siddhi (bool), is_sarvartha_siddhi (bool), reason (string)

    SECTION: RESPONSE FORMAT
    response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

    SECTION: COMPUTE CLASS
    MEDIUM_COMPUTE

    SECTION: ERROR CONTRACT
    INVALID_PARAMS (local — caught before upstream call): Invalid LocationInput date/lat/lon → MCP INVALID_PARAMS
    INVALID_PARAMS (upstream): None — a bad activity, range, or ordering surfaces as MCP INTERNAL_ERROR at the tool layer.
    INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR
    Edge cases:
    — Panchanga yoga names here are not asterwise_get_yogas natal yogas.

    SECTION: DO NOT CONFUSE WITH
    asterwise_get_choghadiya — enumerates all Choghadiya for one day without activity scoring across a span.
    asterwise_get_panchanga — single-day limb detail, not ranked muhurta search.
    Connector
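One way a caller might post-process data.top_windows[] using the documented boolean guards: skip rahu kaal windows outright, then prefer the highest score, breaking ties toward windows with a siddhi flag. This ranking policy is an assumption, not part of the tool; the sample windows are illustrative.

```python
def pick_window(top_windows):
    """Best non-rahu-kaal window by (score, any siddhi flag); None if all
    returned windows fall in rahu kaal."""
    candidates = [w for w in top_windows if not w["is_rahu_kaal"]]
    if not candidates:
        return None
    return max(candidates,
               key=lambda w: (w["score"],
                              w["is_amrita_siddhi"] or w["is_sarvartha_siddhi"]))

windows = [
    {"date": "2025-03-01", "score": 90, "is_rahu_kaal": True,
     "is_amrita_siddhi": False, "is_sarvartha_siddhi": False},
    {"date": "2025-03-02", "score": 85, "is_rahu_kaal": False,
     "is_amrita_siddhi": True, "is_sarvartha_siddhi": False},
]
print(pick_window(windows)["date"])
```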
  • Returns all crystals associated with a specific Vedic planet. Results are sorted with primary Navaratna gems first, then Uparatna substitutes. Only classically verified Vedic assignments are returned — crystals with no classical text support are excluded.

    SECTION: WHAT THIS TOOL COVERS
    Filters the crystal database by the vedic_planet field. Only returns crystals where vedic_correspondence is 'navaratna' or 'uparatna' — none_classical crystals are not returned here because they have no actual Vedic planetary assignment. Useful for Jyotish practitioners recommending remedial gems. Navaratna gems appear first. Valid planets: Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn, Rahu, Ketu.

    SECTION: WORKFLOW
    BEFORE: RECOMMENDED — asterwise_get_natal_chart — identify the planet needing remediation.
    AFTER: asterwise_get_gemstone_recommendations — for chart-specific gem safety assessment.

    SECTION: INPUT CONTRACT
    planet: One of Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn, Rahu, Ketu.

    SECTION: OUTPUT CONTRACT
    data.total (int)
    data.crystals[] — same shape as asterwise_get_crystals, sorted Navaratna first.

    SECTION: RESPONSE FORMAT
    response_format=json — filtered crystal array. response_format=markdown — formatted list. Both return identical data.

    SECTION: COMPUTE CLASS
    FAST_LOOKUP

    SECTION: ERROR CONTRACT
    INVALID_PARAMS (upstream): Unknown planet → 404.
    INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

    SECTION: DO NOT CONFUSE WITH
    asterwise_get_gemstone_recommendations — natal-chart Ratna Shastra recommendation with contraindications; use for actual gem prescription, not just listing.
    asterwise_get_crystals — all 50 crystals including Western-only ones.
    Connector
  • Recent error events with full context. One row per occurrence, returned newest-first. Each row carries the error itself (message, type, stack, fingerprint, handled flag) plus the standard event context (url, browser/OS/device, country, anonymous_id, session_id) — the same shape ingest enriches every other event with, so an agent can correlate "errors here, traffic there" without joining a second tool. Errors are written to the events table with name = "$error" by the SDKs' captureError() / window.onerror auto-capture. The server adds a stable `error.fingerprint` at ingest (sha256 of normalized message + first stack frame), so the same bug groups across occurrences regardless of which session or SDK reported it.

    Examples:
    - "what errors fired today" → period="today" (no other filters)
    - "show me all TypeError occurrences this week" → message="<known message>", or use errors.groups first to find the fingerprint
    - "errors on Safari only" → browser="Safari"
    - "errors on the same fingerprint" → fingerprint="<from errors.groups>"
    - "only the auto-captured ones, not manual reports" → handled="false"

    Limitations: returns up to `limit` rows (default 50, max 200). Stacks are stored verbatim from the SDK with no source-map resolution — production stacks will be minified for users on a build pipeline. For aggregate counts and dedup, use errors.groups; for breadcrumbs leading to one error, use errors.context. Pairs with: `errors.groups` (find a noisy fingerprint, then list its occurrences here); `errors.context` (drill from one error row into the events from the same session that led to it); `users.journey` (full multi-session view of a user who hit an error).
    Connector
  • Generate the exact CI workflow YAML to add keploy sandbox tests to a pull-request pipeline, and tell you where to write it. Use this when the dev asks to "add keploy sandbox tests to my pipeline" / "wire keploy into CI" / "run keploy on PR" / "add a CI job for keploy" — the server emits the file contents verbatim so you don't have to compose the flag list yourself. ===== GOAL ===== Write a CI workflow file that runs `keploy test sandbox --cloud-app-id <uuid> --app-url <url>` on pull requests and gates the PR on the result. NEVER kick off an actual test run in this flow — it is pure file authoring, ends with the file on disk. DO NOT fire replay_sandbox_test, record_sandbox_test, replay_test_suite, or any other run-starting MCP tool here. ===== HOW (absolute) ===== Call this tool. It returns { file_path, content, summary }. Write the "content" to "file_path" VERBATIM via your Write tool — NO flag renames, NO flag removals, NO step reordering, NO synthesis. The server owns the YAML template; your job is only to (1) resolve the inputs from the repo and api-server and (2) Write the returned content. Do NOT compose the YAML yourself from general knowledge — flag drift (missing --cloud-app-id, inventing --app) is the most common bug when Claude improvises. DO NOT ASK the dev for confirmation before writing. Resolve everything from the repo + api-server, pick the GitHub Actions default, call this tool, Write the file. The dev's prompt is already the go-ahead. ===== STEPS ===== 1. DETECT THE CI SYSTEM: * Default = GitHub Actions (biggest share). File = .github/workflows/keploy-sandbox.yml. * If .gitlab-ci.yml exists → GitLab (not yet supported by this tool; tell the dev and stop). * If .circleci/config.yml exists → Circle (not yet supported; tell the dev and stop). * Otherwise → GitHub Actions. 2. RESOLVE VALUES by calling MCP tools + reading the repo: * app_id: call listApps({q: "<cwd basename>"}). Exactly one → use its id. 
Multiple → pick the one whose name most specifically matches the repo's primary service (e.g. "orderflow.producer" wins over "orderflow" when there's a ./producer directory); mention which you picked in the final message. Zero → stop and tell the dev to create the app + rerecord first. * suite_ids: DO NOT pass this arg by default. An empty suite_ids means the CLI resolves "every linked sandbox suite for the app" at CI run time — which is what you want (new suites auto-pick up without workflow edits). The tool still verifies there's ≥1 linked suite at scaffold time so the first PR run doesn't fail empty-handed. Only pass suite_ids when the dev explicitly narrows ("run only the auth suite in CI"); don't pin "all current suites" — that's staleness waiting to happen. * compose_file: READ THE REPO. Default is docker-compose.yml. AVOID passing a docker-compose-keploy.yaml variant that has `networks: default: external: true` — those variants only work locally, where another compose run has already created the external network. In CI the runner starts clean and `external: true` fails with "network not found". If the primary docker-compose.yml brings up the full app (deps + app service), use it end-to-end. * app_service, container_name, app_port: read from the SAME compose_file you picked above. app_service = the service key (e.g. "producer"); container_name = that service's container_name: field in that same compose file (e.g. "orderflow-producer" if compose_file=docker-compose.yml, but "producer" if compose_file=docker-compose-keploy.yaml — THESE DIFFER, pick consistently); app_port = the host-side of its ports: mapping. * app_url = http://localhost:<app_port>. The tool derives this; you don't pass it separately. 3. CALL THIS TOOL with app_id, app_service, container_name, app_port, compose_file (and suite_ids only if the dev explicitly narrowed scope). It returns { file_path, content, summary }. Write the "content" to the "file_path" VERBATIM. 
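The compose-file reads in step 2 can be sketched as below. This assumes the compose file has already been parsed into a dict (e.g. with PyYAML's `yaml.safe_load`); the service name, container name, and port values are illustrative, not from a real repo:

```python
# Assumes docker-compose.yml has already been parsed into a dict
# (e.g. yaml.safe_load(...)). All names/values here are illustrative.
compose = {
    "services": {
        "producer": {
            "container_name": "orderflow-producer",
            "ports": ["8080:3000"],  # host:container
        },
        "postgres": {"image": "postgres:16"},
    }
}

app_service = "producer"                     # the service key
svc = compose["services"][app_service]
container_name = svc["container_name"]       # from the SAME compose file
# app_port is the HOST side of the ports: mapping.
app_port = int(svc["ports"][0].split(":")[0])
app_url = f"http://localhost:{app_port}"     # derived by the tool, not passed

assert (container_name, app_port) == ("orderflow-producer", 8080)
```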
===== FLAG NAME RULES (absolute, do not drift when reviewing the output) ===== * `--cloud-app-id` ← NOT `--app-id`. The OSS config has an `appId` uint64 field that viper maps `--app-id` into; passing a UUID there fails with "invalid syntax" before RunE runs. * `keploy test sandbox --cloud-app-id <uuid> --app-url <url>` ← the CI form. NOT `keploy test --cloud-app-id` (must be `test sandbox` — the headless flags live on the sandbox subcommand only), NOT `keploy test-suite run` (that command doesn't exist). There is NO `--pipeline` flag. * Install URL = `https://keploy.io/ent/install.sh` ← NOT `https://keploy.io/install.sh` (OSS; no sandbox subcommand at all), NOT a github.com/keploy/keploy release tarball. If the server-emitted content ever disagrees with these rules, trust the server output and file a bug — don't edit the YAML. ===== RESOLUTION ARGS ===== * Pass either app_id (explicit UUID) or app_name_hint (substring; server does listApps and requires exactly one match). * Pass app_service (docker-compose service name), container_name (from compose container_name: field read from the SAME compose_file arg), and app_port (HTTP port the service exposes). * compose_file is optional, defaults to "docker-compose.yml". If the repo has a -keploy.yaml variant with `external: true` networks, do NOT point compose_file at it — it won't work in CI. * suite_ids is optional and should be LEFT BLANK by default — the CLI resolves every linked suite at run time. Only pin an explicit list when the dev narrows scope. 
===== FINAL RESPONSE — three short sections, no questions ===== ### Created | File | Lines | | --- | --- | | .github/workflows/keploy-sandbox.yml | N | ### Summary - App: <name> (<app_id>), <N> linked suites replayed on every PR - Trigger: pull_request → main, + manual workflow_dispatch - Failure on any suite gates the PR (non-zero exit from the CLI) ### Before the first run, add this GitHub secret - `KEPLOY_API_KEY` — at https://github.com/<owner>/<repo>/settings/secrets/actions/new (self-hosted users — point at your own api-server by building the enterprise binary with -X main.api_server_uri=<url>; there is no runtime env override on the released binary.) This tool does NOT run anything. It only generates file contents.
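The verbatim-write step from the STEPS section, as a sketch. The tool result shown is hypothetical (the real `content` is the server-owned YAML template); the point is that the write is byte-for-byte, with no reflow or flag edits:

```python
from pathlib import Path
import tempfile

# Hypothetical tool result; the real content is the server-owned
# YAML template and must be written exactly as returned.
result = {
    "file_path": ".github/workflows/keploy-sandbox.yml",
    "content": "name: keploy-sandbox\non: [pull_request]\n",
}

repo_root = Path(tempfile.mkdtemp())          # stand-in for the repo root
target = repo_root / result["file_path"]
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(result["content"])          # no edits, no reordering

assert target.read_text() == result["content"]  # byte-for-byte
```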
    Connector
  • Claim an API key using a claim token from the container. After calling request_api_key(), read the claim token from ~/.borealhost/.claim_token on your container and pass it here. The token is single-use — once claimed, it cannot be used again. The API key is automatically activated for this MCP session. Args: claim_token: The claim token string read from the container file Returns: {"api_key": "bh_...", "key_prefix": "bh_...", "site_slug": "my-site", "scopes": ["read", "write"], "message": "API key created and activated..."} Errors: VALIDATION_ERROR: Invalid, expired, or already-claimed token
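Reading the claim token before calling can be sketched as below. The file path comes from the description; the downstream claim call is shown only as a comment because its exact client shape is not specified here:

```python
from pathlib import Path

def read_claim_token(path: str = "~/.borealhost/.claim_token") -> str:
    """Read the single-use claim token written by request_api_key().
    Strips surrounding whitespace; raises if the file is empty."""
    token = Path(path).expanduser().read_text().strip()
    if not token:
        raise ValueError(f"no claim token found at {path}")
    return token

# The token is then passed to the claim tool (call shape is hypothetical):
# claim_api_key(claim_token=read_claim_token())
```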
    Connector
  • The "always start here" premium call for autonomous agents. Composes 13 upstream sources into a curated world-state snapshot: BTC ticker, Fear and Greed, VIX, Fed funds rate, USD-base forex (EUR/JPY/GBP/CHF), HN front page top 5, significant earthquakes 24h, upcoming space launches, top Polymarket markets, and infrastructure status (GitHub, Cloudflare, OpenAI, Anthropic). Returns BOTH a structured JSON `context` object for parsers AND a pre-formatted `system_prompt` string (~350 tokens) the agent pastes verbatim into its LLM context. Saves the agent from making 13 separate calls and writing a formatter. Curation choice (which signals matter, how to compress them) is the moat. Costs 2 credits ($0.04 USDC). 5-min cache. Bearer auth required.
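A sketch of consuming the dual payload. The top-level field names (`context`, `system_prompt`) come from the description; everything inside them is illustrative:

```python
# Top-level field names are from the tool description; the inner keys
# and values are made up for illustration.
snapshot = {
    "context": {"btc_usd": 97000.0, "fear_and_greed": 44, "vix": 15.2},
    "system_prompt": "World state (5-min cache): BTC $97,000; F&G 44; VIX 15.2",
}

# Parsers read the structured object...
assert snapshot["context"]["vix"] < 20

# ...while the agent pastes the pre-formatted string verbatim into context.
messages = [{"role": "system", "content": snapshot["system_prompt"]}]
assert messages[0]["content"].startswith("World state")
```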
    Connector
  • Authenticate with your saved API key. Read your key from ~/.agents-overflow-key and pass it here. Call this at the START of every session before using any other tools.
    Connector
  • Add an item to the caller's personal inventory. Authenticated. Required OAuth scope: `inventory:write`. One creation tool covers all lifecycle states — set ``status`` based on the user's intent: "I bought" → ``owned``, "I want" → ``wanted``, "I'm selling" → ``for_sale``. Either ``product_id`` (linked to an existing Partle product) or ``name`` (freeform) must be set. **Not idempotent** — each call creates a new row. Args: name: Freeform name for items not yet linked to a Partle product. Either ``name`` or ``product_id`` must be set. product_id: Link to a canonical Partle product. status: Lifecycle. One of: ``owned``, ``wanted``, ``for_sale``, ``sold``, ``discarded``. Default ``owned``. quantity: How many. Fractional allowed. Default 1. notes: Freeform multi-line text — the dumping ground for anything not modeled as a column: extra URLs, comments, where stored, condition narrative, purpose, source, history, log entries. Markdown is fine. **Put extra URLs here, not in another field.** acquisition_price: What the user paid. acquisition_currency: Currency of acquisition_price. purchased_at: ISO date (YYYY-MM-DD) when it was acquired. asking_price: When status=for_sale, asking price. asking_currency: Currency of asking_price. condition: Free string — typical: ``new``, ``like_new``, ``good``, ``fair``, ``poor``. external_link: **Primary** click-through URL only (source listing, vendor page, manufacturer page). Exactly one. Additional URLs go in ``notes`` as markdown links. external_id: Stable identifier from the source system, used as a **dedup key**. Per-user unique when set — same external_id can't appear twice for one user. Format is up to you (e.g. ``aliexpress:1005004714348221``, ``amazon:order/3024.../line/1``, content hash). Leave null for handwritten items. project: Tag for grouping (e.g. "kitchen-renovation"). api_key: Legacy/fallback auth. 
Returns: The newly-created inventory row (with embedded `product` if linked), or ``{"error": ...}`` on auth/validation failure.
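The intent-to-status mapping and the name/product_id rule above can be sketched as argument-building code. The helper and its defaults are hypothetical; the status values, the `external_id` example, and the "extra URLs go in notes" convention come from the description:

```python
# Sketch of building arguments for the add-item call. The helper is
# hypothetical; status values and the external_id format follow the
# description above.
INTENT_STATUS = {"I bought": "owned", "I want": "wanted", "I'm selling": "for_sale"}

def build_add_item_args(intent: str, *, name=None, product_id=None, **extra):
    if name is None and product_id is None:
        raise ValueError("either name or product_id must be set")
    args = {"status": INTENT_STATUS.get(intent, "owned"), "quantity": 1}
    if name is not None:
        args["name"] = name
    if product_id is not None:
        args["product_id"] = product_id
    args.update(extra)  # notes, external_id, project, prices, ...
    return args

args = build_add_item_args(
    "I bought",
    name="M3x10 hex bolts",
    external_id="aliexpress:1005004714348221",  # dedup key, per-user unique
    notes="Stored in drawer B. Extra URLs go here as markdown links.",
)
assert args["status"] == "owned" and args["quantity"] == 1
```

Note the call is not idempotent, so the same `external_id` is what prevents a re-run from silently creating a duplicate row.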
    Connector
  • Delete a test suite on a Keploy branch — synchronous, no playbook to walk. USE THIS when: * The dev's update_test_suite call was rejected with "preserves no steps from the existing suite — that's a full rewrite, not an edit". Delete the existing suite and re-author from scratch via create_test_suite. The error message itself routes here. * The dev explicitly says "delete the suite", "remove suite X", "wipe my orderflow suite". * A genuine wholesale redesign — every step changed in shape — that the audit trail shouldn't try to reconcile as edits. DO NOT USE THIS when: * The dev wants a real edit (one assertion, one step's body). Use update_test_suite + preserve existing step IDs instead — keeps audit history intact. * The dev wants to "redo" a single failed run. Test runs are independent of suite state; just rerun via replay_test_suite. INPUT * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to delete * branch_id (required) — Keploy branch UUID. The delete creates a branch-scoped DeleteTestSuite audit event so reads on the same branch see the suite as gone. Direct main writes are blocked. OUTPUT * On success: {"deleted": true} — suite is tombstoned at the branch overlay; subsequent reads (getTestSuite / listTestSuites) on this branch return 404 / exclude it. * 404 if the suite_id doesn't exist on this app/branch (verify via getTestSuite or listTestSuites first if you're unsure). After delete, the standard re-create flow is: (1) call create_test_suite with a freshly authored steps_json. The new suite gets a fresh suite_id; the old id is tombstoned, not reusable. ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. 
A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name (don't invent one). 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 come up empty, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action.
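The per-app resolution loop (steps 3–4) can be sketched as below. The three callables are stand-ins for the real git/MCP calls, and their return shapes are assumptions, not the actual API:

```python
# Sketch of the suite-resolution walk (steps 3-4). The injected
# callables stand in for list_branches / getTestSuite; return shapes
# are assumed for illustration.
def resolve_suite(suite_id, git_branch, candidate_apps,
                  list_branches, get_test_suite):
    """Return (app_id, branch_id) for the first pair where getTestSuite
    answers 200, else None (fall through to the wider walk / ASK)."""
    for app in candidate_apps:                      # per candidate app
        branches = list_branches(app["id"])
        match = next((b for b in branches if b["name"] == git_branch), None)
        if match is None:
            continue                                # not this app, try next
        if get_test_suite(app["id"], suite_id, match["id"]) == 200:
            return app["id"], match["id"]           # resolved — STOP
    return None

# Stubbed example:
apps = [{"id": "app-1"}, {"id": "app-2"}]
branches = {"app-1": [{"name": "main", "id": "br-m"}],
            "app-2": [{"name": "feat/x", "id": "br-f"}]}
status = {("app-2", "s-9", "br-f"): 200}
found = resolve_suite("s-9", "feat/x", apps,
                      lambda a: branches[a],
                      lambda a, s, b: status.get((a, s, b), 404))
assert found == ("app-2", "br-f")
```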
    Connector
  • Resolves a place mention (free-text name, address, or lat/lng) to the protocol's cell64 identifier, and returns the topic-grouped inventory of bands and algorithms available at that location. When to use: Use whenever the input refers to a real-world location and the next step needs the cell64 identifier or wants to know which bands are available before recalling. The response carries `data_at_this_cell` with three sub-fields: `live_bands_by_topic` (every band recallable here, grouped by topic such as flood_water_event_window, vegetation_condition, built_up_human_geography), `algorithms_for_topic` (composition recipes that fuse those bands into named scores), and `declared_but_no_materializer_at_this_responder` (cube slots reserved without a live connector). For the single-shot path that runs the full chain server-side and returns one packaged answer, use `emem_ask` instead.
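Navigating the response can be sketched as below. The `data_at_this_cell` sub-field names come from the description; the cell identifier, topic keys, and band names are made up for illustration:

```python
# Sub-field names under data_at_this_cell are from the description;
# the cell64 value, topics, and band names are illustrative.
response = {
    "cell64": "0x89c25a31ffffffff",
    "data_at_this_cell": {
        "live_bands_by_topic": {
            "flood_water_event_window": ["sar_vv", "water_extent"],
            "vegetation_condition": ["ndvi"],
        },
        "algorithms_for_topic": {
            "flood_water_event_window": ["flood_score_v2"],
        },
        "declared_but_no_materializer_at_this_responder": ["soil_moisture"],
    },
}

cell = response["data_at_this_cell"]
# Every band recallable here, flattened across topics:
recallable = sorted(b for bands in cell["live_bands_by_topic"].values()
                    for b in bands)
assert recallable == ["ndvi", "sar_vv", "water_extent"]
# Cube slots reserved without a live connector are listed separately:
assert "soil_moisture" in cell["declared_but_no_materializer_at_this_responder"]
```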
    Connector
  • Complete Disco signup using an email verification code. Call this after discovery_signup returns {"status": "verification_required"}. The user receives a 6-digit code by email — pass it here along with the same email address used in discovery_signup. Returns an API key on success. Args: email: Email address used in the discovery_signup call. code: 6-digit verification code from the email.
    Connector
  • Broadcast a pre-signed Ethereum transaction via eth_sendRawTransaction on the source chain RPC. Use this as the canonical broadcast path for calldata produced by lz_send_message (EndpointV2.send), lz_oft_send (OFT.send), lz_stargate_send (StargatePoolNative.sendToken), and lz_transfer_build (Value Transfer API steps). Returns the transaction hash on success. The caller must construct calldata, sign locally with msg.value = nativeFee from the corresponding quote tool, then submit the RLP-encoded signed tx hex here.
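The underlying `eth_sendRawTransaction` call takes the RLP-encoded signed transaction as a single 0x-prefixed hex parameter. A minimal sketch of the JSON-RPC envelope, with the hex value truncated for illustration (signing, with msg.value set to the quote's nativeFee, happens locally beforehand):

```python
import json

def send_raw_tx_payload(signed_tx_hex: str, request_id: int = 1) -> str:
    """JSON-RPC envelope for eth_sendRawTransaction. The single param
    is the RLP-encoded signed transaction as 0x-prefixed hex."""
    if not signed_tx_hex.startswith("0x"):
        raise ValueError("expected 0x-prefixed RLP-encoded signed tx hex")
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_sendRawTransaction",
        "params": [signed_tx_hex],
        "id": request_id,
    })

payload = send_raw_tx_payload("0x02f86b01...")  # hex truncated for illustration
assert json.loads(payload)["method"] == "eth_sendRawTransaction"
```

A successful response carries the transaction hash in its `result` field, which is what this tool returns to the caller.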
    Connector
  • One-shot decision tool. Returns the coordination breakdown, use-case-specific interpretation, and (if raw_sentiment is provided) a coordination-adjusted sentiment score in a single call. Prefer this over chaining get_coordination_breakdown + manual sentiment dampening — the math here matches the canonical filter_sentiment endpoint. Cost: 5u per call (~$0.05 via x402, deducts 5 from daily quota).
    Connector