Glama
134,097 tools. Last updated 2026-05-16 00:14

"Fresh" matching MCP tools:

  • Runs a free one-off security scan of the given domain and returns its grade (A–F), scan timestamp, and up to three top-priority issues with a permalink to the full report on siteguardian.io. Use this when the user asks for a quick security check of a domain that is NOT yet under SiteGuardian monitoring, or when they want a fresh assessment before subscribing. Results are cached for two hours, so repeated calls about the same domain return the same snapshot and mark it with cached=True. Do NOT use this for domains already under monitoring by the user — call get_domain_status instead for the account-scoped view with framework tags. Do NOT use this to batch-scan many domains as a competitive-intelligence tool; per-source-IP and per-target rate limits bound usage. This tool does not require authentication.
    Connector
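The two-hour snapshot cache described above can be sketched from the client side. A minimal Python sketch, assuming a hypothetical `run_scan` stand-in for the real scan backend; the `cached` flag mirrors the `cached=True` marker the description promises for repeat calls within the window:

```python
import time

CACHE_TTL_SECONDS = 2 * 60 * 60  # results are cached for two hours

_cache: dict[str, dict] = {}

def scan_domain(domain: str, run_scan) -> dict:
    """Return the cached snapshot when one exists and is still fresh."""
    now = time.time()
    entry = _cache.get(domain)
    if entry and now - entry["scanned_at"] < CACHE_TTL_SECONDS:
        # repeated calls about the same domain return the same snapshot
        return {**entry["result"], "cached": True}
    result = run_scan(domain)  # e.g. {"grade": "B", "top_issues": [...]}
    _cache[domain] = {"scanned_at": now, "result": result}
    return {**result, "cached": False}
```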
  • Trigger a fresh HIPAA compliance scan for a healthcare practice. Always dispatches a new 70+ control scan via VPS — never returns cached results. Returns a job_id for polling via get_scan_status. Optionally specify notification_email to receive the PDF report when the scan completes. Cost: 150 credits.
    Connector
  • Return who the server sees you as on this MCP session. Use this when you're unsure whether you're authenticated — typically right after register_agent_poll returns approved, to confirm that the current session is now bound to the new agent without having to poke a write tool. Also useful as a first-call diagnostic on any fresh MCP connection. Response: auth: 'anonymous' | 'authenticated' auth_kind: 'mcp_session_binding' | 'bearer' | 'session' | 'signature' | 'none' user_id?: string agent?: { slug, display_name, description?, profile_url } account_type?: 'agent' | 'human'
    Connector
  • Regenerate the logo for a WebZum site using AI. Creates a new version with a fresh logo and reassembles. Use the optional userMessage to steer the design — "make it more minimal", "use a serif typeface", "incorporate a coffee bean shape", etc. Required: businessId, versionId, pageId. Returns { versionId, status: 'completed' | 'in_progress', ...extra }. If status is 'in_progress', poll get_site_status with the returned versionId every 5-10s until isComplete is true. Concurrency: edits on the same businessId MUST be serial. Never fire parallel edit calls on the same site.
    Connector
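Both WebZum edit tools above end with the same poll-until-complete loop. A minimal sketch of that loop, assuming a caller-supplied `get_site_status` callable and an injectable `sleep` (the function name and parameters are illustrative, not the connector's API):

```python
import time

def wait_for_site(version_id, get_site_status, interval=5.0,
                  timeout=600.0, sleep=time.sleep):
    """Poll get_site_status every 5-10s until isComplete is true."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_site_status(version_id)
        if status.get("isComplete"):
            return status
        sleep(interval)
    raise TimeoutError(f"version {version_id} not complete after {timeout}s")
```

Because edits on the same businessId must be serial, a caller would run this to completion before issuing the next edit call.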
  • Get the complete profile of a single Chinese apparel supplier by ID. PREREQUISITE: You MUST first call search_suppliers or recommend_suppliers to obtain a valid supplier_id. Do not guess IDs. USE WHEN user asks: - "tell me more about [supplier]" / "show full details for sup_XXX" - "what certifications does this factory hold" - "what's their monthly capacity / worker count / equipment list" - "can [supplier] export to US / EU / Japan / Korea" - "give me the full profile / dossier / fact sheet for [supplier]" - "how verified is this supplier's data" (returns coverage_pct + 8 dimensions) - "what's their ownership type — own factory or broker" - "show payment terms / lead time / sample turnaround for sup_XXX" - "这家供应商具体情况 / 详细资料 / 工厂档案" - "[供应商] 的合规 / 认证 / 出口资质" Returns 60+ fields including: monthly capacity (lab-verified), equipment list, certifications (BSCI/OEKO-TEX/GRS/SA8000), ownership type (own factory vs subcontractor vs broker), market access (US/EU/JP/KR), chemical compliance (ZDHC/MRSL), traceability depth, and verified_dimensions breakdown showing exactly which of the 8 dimensions (basic_info, geo_location, production, compliance, market_access, export, financial, contact) have data. WORKFLOW: search_suppliers → pick supplier_id → get_supplier_detail → optionally get_supplier_fabrics (fabric catalog) OR check_compliance (market export readiness) OR find_alternatives (backup pool) OR compare_suppliers (side-by-side evaluation). RETURNS: { data: { supplier_id, company_name_cn/en, type, province, city, product_types, worker_count, certifications, compliance_status, quality_score, verified_dimensions: { verified_dims: "5/8", coverage_pct, dimensions: {...} } } } EXAMPLES: • User: "Show me the full profile for sup_001" → get_supplier_detail({ supplier_id: "sup_001" }) • User: "What certifications does Texhong hold and can they export to EU?" 
→ get_supplier_detail({ supplier_id: "sup_texhong_042" }) — then inspect certifications + eu_market_ready; follow with check_compliance for formal verification • User: "我要看 sup_123 的完整档案" → get_supplier_detail({ supplier_id: "sup_123" }) ERRORS & SELF-CORRECTION: • "Supplier not found" → the supplier_id is invalid or outside free-tier access. Re-run search_suppliers to obtain a fresh valid ID. Do not guess sequential IDs. • Field returns null → that dimension is unverified for this supplier. Check verified_dimensions.coverage_pct before asserting data. If coverage_pct < 50, warn the user: "This supplier's record has limited verified data (X/8 dimensions). Consider find_alternatives for better-documented options." • "not available for public access" → this supplier is in the reserve pool (paid tier only). Use search_suppliers filters data_confidence=verified to stay in public tier. • Rate limit 429 → wait 60 seconds; do not retry immediately. AVOID: Do not call this for multiple suppliers in a loop — use compare_suppliers with up to 10 IDs at once. Do not call to browse the database — use search_suppliers or get_province_distribution for discovery. NOTE: Source: MRC Data (meacheal.ai). Every numeric field shows both declared and lab-verified values where available. 中文:按 ID 获取单个供应商的完整档案(含维度覆盖率详情)。
    Connector
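The self-correction rule above (warn when coverage_pct < 50) is mechanical enough to sketch. A hypothetical helper, assuming the `data.verified_dimensions` response shape quoted in the description:

```python
def coverage_warning(detail: dict):
    """Return the prescribed low-coverage warning string, or None if coverage is adequate."""
    vd = detail["data"]["verified_dimensions"]
    if vd["coverage_pct"] < 50:
        return (
            f"This supplier's record has limited verified data "
            f"({vd['verified_dims']} dimensions). "
            "Consider find_alternatives for better-documented options."
        )
    return None
```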
  • Regenerate the header (nav bar, logo placement, top-of-page) of a WebZum site. Creates a new version with a fresh AI-generated header and reassembles every page. Use when the user wants the nav restyled, links reordered, or the header redesigned. Required: businessId, versionId, pageId. Returns { versionId, status: 'completed' | 'in_progress', ...extra }. If status is 'in_progress', poll get_site_status with the returned versionId every 5-10s until isComplete is true. Concurrency: edits on the same businessId MUST be serial. Never fire parallel edit calls on the same site.
    Connector

Matching MCP Servers

  • License: A · Quality: A · Maintenance: B
    Data freshness verification for AI agents. Probes endpoints for HTTP cache staleness, latency percentiles, content fingerprinting, TLS certificate health, DNS timing, and redirect chains. Returns deterministic FRESH/STALE/UNKNOWN JSON verdicts with policy evaluation and NIST AI RMF mapping.
    Last updated: 3 · MIT license

Matching MCP Connectors

  • Freshness-aware AI retrieval with 21 MCP tools for timestamped, decay-ranked live signals.

  • Solana on-chain intelligence for AI agents. 13 tools for token analysis, wallet profiling, bundle & sniper detection, fresh-wallet clustering, dev profiling, cross-token patterns, and live PumpFun/Raydium/Meteora event streams.

  • Update an existing Blueprint's configuration in place. Only fields you pass are updated; fields you omit keep their current values. To clear a list field (e.g. remove all rules), pass an explicit empty list []. Existing API keys for this Blueprint are preserved — agents using those keys continue working after the update. Ownership stamps are also preserved; you cannot transfer Blueprint ownership. The workflow_name itself cannot be renamed. To rename, create a new Blueprint with the new name and delete the old one. Different from create_blueprint: create_blueprint creates a new Blueprint and mints a fresh API key. update_blueprint modifies an existing one and returns no new key. Args: api_key: GeodesicAI API key (starts with gai_) workflow_name: Name of the Blueprint to update (must already exist) customer_name: New customer/project name. Pass None to keep current. mode: "observe" or "enforce". Pass None to keep current. extracted_fields: New list of agent-extracted fields. Pass None to keep current; pass [] to clear. derived_fields: New list of platform-derived fields. None or []. derivation_rules: New list of derivation rules. See blueprint_guide prompt for schema. None or []. formal_constraints: New list of constraints. See blueprint_guide prompt for schema. None or []. semantic_checks: New list of semantic checks. None or []. require_math: Override math validation flag. None to keep current. require_consistency: Override consistency flag. None to keep. require_coherence: Override coherence flag. None to keep. require_provenance: Override provenance flag. None to keep. require_high_assurance: Override high-assurance flag. None to keep. enable_anomaly_detection: Override anomaly flag. None to keep. enable_drift_tracking: Override drift flag. None to keep. 
Returns: status: "ok" | "ERROR" blueprint: workflow_name that was updated fields_changed: list of config keys that were modified field_count: new total of extracted + derived fields rule_count: new total of derivation rules constraint_count: new total of formal constraints
    Connector
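The partial-update semantics above (None keeps the current value, an explicit [] clears a list field) can be sketched as a merge. A minimal Python sketch; `merge_blueprint_config` and its return shape are illustrative, not the connector's actual implementation:

```python
def merge_blueprint_config(current: dict, updates: dict):
    """Apply update_blueprint semantics: None means keep, any other value replaces.

    An explicit empty list [] therefore clears a list field rather than
    leaving it untouched."""
    merged = dict(current)
    changed = []
    for key, value in updates.items():
        if value is None:
            continue  # None: keep the current value
        if merged.get(key) != value:
            merged[key] = value
            changed.append(key)
    return merged, changed
```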
  • Creates participant invites for a perspective and returns 48-hour magic-link URLs, optionally sending invitation emails. Pass EITHER participants (creates new invites) OR invite_ids (reuses existing invites, minting a fresh 48h link) — never both. Behavior: - With participants: creates a new invite per participant (deduped by lowercased email *within the same call*; on duplicate emails, the LAST entry wins for both `name` and `context` — earlier entries are discarded). Calling again with the same email creates a separate invite record — there's no cross-call dedup. To re-issue a link for an existing participant without creating a new record, pass that participant's invite_id via invite_ids instead. - With invite_ids: reuses existing invites — no duplicates — but mints a new 48-hour link each call. Previously-issued links remain valid until they expire on their own. - Sends a real invitation email per participant when send_email=true. When send_email=false (default), no email is sent — distribute the URLs yourself. Errors with "Email sending is currently disabled." if email is turned off in this environment. - Errors when the perspective is not found or you do not have access. Errors with "This perspective is still in draft. Complete the outline before inviting participants." if the perspective has no outline yet. With invite_ids, errors with "Invite not found: <id>" (covers both malformed ids and ids that don't exist) or an access error per id. - Limits: 1–50 participants/ids per call ("Maximum 50 participants per call. Split into multiple calls."). participants and invite_ids are mutually exclusive. - context per participant (≤20 keys, ≤50-char keys, ≤2000-char values) is stored with the invite and passed to the perspective as trusted participant metadata. It cannot be changed after creation — create a new invite to update it. Ask the user whether they want to attach context before calling. 
When to use this tool: - Generating distributable conversation links for a list of participants. - Sending invitation emails directly (send_email=true with optional custom_message / custom_subject). - Re-issuing fresh links for previously-created invites (use invite_ids). When NOT to use this tool: - The perspective is still DRAFT — finish the design loop first (perspective_await_job until "ready", optionally perspective_update). - Public/anonymous links — use perspective_get_embed_options for share_url / embed snippets instead. - Internal smoke testing — use perspective_get_preview_link. Examples: - New invites, no email: `{ workspace_id, perspective_id, participants: [{ email: "alice@co.com", name: "Alice" }] }` - New invites, send emails: `{ workspace_id, perspective_id, participants: [...], send_email: true }` - Re-issue links for existing invites and email them: `{ workspace_id, perspective_id, invite_ids: ["abc123", "def456"], send_email: true }` - Re-issue links only (regenerate expired): `{ workspace_id, perspective_id, invite_ids: ["abc123"] }`
    Connector
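The within-call dedup rule above (lowercased email, last entry wins for both name and context) is worth internalizing before composing a participants list. A minimal sketch of that rule; the helper name is hypothetical:

```python
def dedupe_participants(participants: list) -> list:
    """Dedupe by lowercased email within one call; the LAST entry wins."""
    by_email: dict = {}
    for p in participants:
        by_email[p["email"].lower()] = p  # later entries overwrite earlier ones
    return list(by_email.values())
```

Remember that this dedup is per-call only: a second call with the same email creates a separate invite record.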
  • Delete a test suite on a Keploy branch — synchronous, no playbook to walk. USE THIS when: * The dev's update_test_suite call was rejected with "preserves no steps from the existing suite — that's a full rewrite, not an edit". Delete the existing suite and re-author from scratch via create_test_suite. The error message itself routes here. * The dev explicitly says "delete the suite", "remove suite X", "wipe my orderflow suite". * A genuine wholesale redesign — every step changed in shape — that the audit trail shouldn't try to reconcile as edits. DO NOT USE THIS when: * The dev wants a real edit (one assertion, one step's body). Use update_test_suite + preserve existing step IDs instead — keeps audit history intact. * The dev wants to "redo" a single failed run. Test runs are independent of suite state; just rerun via replay_test_suite. INPUT * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to delete * branch_id (required) — Keploy branch UUID. The delete creates a branch-scoped DeleteTestSuite audit event so reads on the same branch see the suite as gone. Direct main writes are blocked. OUTPUT * On success: {"deleted": true} — suite is tombstoned at the branch overlay; subsequent reads (getTestSuite / listTestSuites) on this branch return 404 / exclude it. * 404 if the suite_id doesn't exist on this app/branch (verify via getTestSuite or listTestSuites first if you're unsure). After delete, the standard re-create flow is: (1) call create_test_suite with a freshly authored steps_json. The new suite gets a fresh suite_id; the old id is tombstoned, not reusable. ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. 
A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name (don't invent one). 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action.
    Connector
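The DISCOVERY walk above (steps 2-4) reduces to a nested loop that stops at the first getTestSuite 200. A minimal sketch, assuming caller-supplied stand-ins for `list_branches` and `getTestSuite` (the real calls are made over MCP, not as Python functions):

```python
def resolve_suite(suite_id, git_branch, candidate_apps,
                  list_branches, get_test_suite):
    """Resolve the (app_id, branch_id) tuple a bare suite_id lives on."""
    for app_id in candidate_apps:  # from listApps with q=<cwd basename>
        for branch in list_branches(app_id):
            if branch["name"] != git_branch:
                continue  # no match: not this app's branch
            if get_test_suite(app_id, suite_id, branch["id"]) == 200:
                return app_id, branch["id"]  # STOP at the first 200
    return None  # exhausted: ask the dev for the (app_id, branch_id) pair
```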
  • Returns copy-paste-ready fix recommendations (nginx, Apache, DNS, shell) for the issues found on a domain the caller has already paid for — either an active Monitor/Compliance subscription covering the domain, OR a purchased one-off Report for the domain. Each recommendation carries a stable issue_id, a priority (high/medium/low), a title, prose instructions, one or more config snippets with the target domain already interpolated, a verify command, and a category tag. Use this when the user asks how to fix an issue, wants the exact config to apply, or needs to verify a fix worked. Pass the optional issue_id to scope the response to one specific finding. The response is read-only — this tool NEVER triggers a fresh scan; fixes are computed from the most recent stored scan (including the Report-included re-scan if that was used). Do NOT use this for domains the caller hasn't purchased coverage for — you'll get an upgrade_required error that links to the pricing page. Do NOT use this to run or trigger a scan; call scan_domain for anonymous checks. Requires a valid API key.
    Connector
  • Read a filing's content by `document_id` from `list_filings`. Numbers and prose live inside the document; `list_filings` metadata only locates filings. RESPONSE SHAPES: • `kind='embedded'` (under `max_bytes`, ~20 MB default) — full `bytes_base64`, `source_url_official` (evergreen registry URL), `source_url_direct` (short-TTL signed proxy URL). PDFs render as a native document block. • `kind='resource_link'` (oversized) — NO `bytes_base64`. Returns `reason`, `next_steps`, both source URLs, and `index_preview` `{page_count, text_layer, outline_present}`. Call `get_document_navigation` to locate pages, then re-call this tool with `pages='N-M'` and `format='pdf'|'text'|'png'`. When NOT to use: to enumerate a company's filings, use `list_filings`. To check size or available formats before deciding, use `get_document_metadata`. Never synthesize `document_id` — composite IDs will 404. CRITICAL: on failure (rate limit / 5xx / timeout) do NOT fabricate names, numbers, or dates — tell the user what failed and offer retry or `source_url_official`. Outline titles, previews, and navigation snippets are for LOCATING pages, never for quoting. `max_bytes` is a hard inline cutoff: raising it forces full proxy/R2 transfer (slower, costlier); the default returns `resource_link` for big PDFs so you can page-fetch. `fresh=true` bypasses the R2 cache and refetches from upstream — filings are immutable so it's rarely needed. `source_url_official` auto-resolves from the most recent `list_filings` call; `company_id` / `transaction_id` / `filing_type` / `filing_description` are overrides only when `document_id` did NOT come through `list_filings`.
    Connector
  • Get Helium's proprietary ML model-predicted price for a specific option contract. Helium trains per-symbol regression models on historical options data. This tool looks up the most recent available options chain for the symbol (today or up to 5 days back), finds the exact contract matching strike/expiration/type, and runs it through that model to produce a predicted fair-value price. Returns: - symbol: the ticker - strike: the strike price used - expiration: the expiration date used - option_type: 'call' or 'put' - predicted_price: Helium's model-predicted option price in dollars - prob_itm: probability of expiring in the money (0.0–1.0), or null if model unavailable - options_data_date: the date of the options chain snapshot the model was run on (so you know how fresh the underlying market data is) Throws an error if no options chain data is available for the symbol within the past 5 days, or if the exact contract (strike/expiration/type combination) does not exist in that chain. Args: symbol: Ticker symbol, e.g. 'AAPL', 'SPY'. strike: Strike price as a number, e.g. 150.0. expiration: Expiration date as 'YYYY-MM-DD', e.g. '2026-06-20'. option_type: Must be 'call' or 'put'.
    Connector
  • Edit an existing test suite — change one or more step bodies, assertions, headers, or remove/add steps. Returns a playbook that delegates to `keploy update-test-suite`, which validates the new state (static structural checks + 2 live runs for idempotency + GET-coupling check) and snapshot-replaces the suite via api-server. POST-EDIT BEHAVIOUR: any structural change here (step method/url/body/headers/extract/assert, or add/delete steps) AUTOMATICALLY clears the suite's sandbox test server-side — the suite comes back as linked=false. Call record_sandbox_test on the updated suite before any sandbox replay; otherwise replay_sandbox_test will 400 with "no sandboxed tests". Cosmetic-only edits (name, description, labels) preserve the sandbox test. ═══════════════════════════════════════════════════════════════════ FETCH-FIRST RULE — required for the edit to be accepted: ═══════════════════════════════════════════════════════════════════ The api-server's replace handler rejects updates that preserve ZERO step IDs from the existing suite ("full rewrite, not an edit"). To make a real edit: 1. Call getTestSuite first (or use download_recording / get_app_testing_context if you already have the suite). Capture each existing step's "id" field. 2. Compose your new steps_json INCLUDING the existing "id" on every step you want to KEEP or EDIT. Omit "id" only on steps you're ADDING. Drop a step entirely from steps_json to DELETE it. 3. Call this tool with that merged steps_json. If you author a fresh JSON without the existing step IDs, the server rejects it with "preserves no steps from the existing suite". When that happens, your two options are: (a) re-author with IDs preserved (preferred — keeps history), or (b) call delete_test_suite then create_test_suite (loses history, fresh suite_id). 
═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name. 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. The getTestSuite call in step 4 is the one whose response you also use to capture every step's existing "id" for the FETCH-FIRST RULE above — so step 4 is actually a 2-for-1: discovery AND fetch-first happen on the same call. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action. 
═══════════════════════════════════════════════════════════════════ INPUTS ═══════════════════════════════════════════════════════════════════ * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to update * branch_id (required) — Keploy branch UUID (resolve via the two-step flow before calling) * steps_json (required) — JSON array of the FULL desired step list. Each kept step MUST carry the existing "id". Same step shape as create_test_suite (response, extract, assert, etc — all static structural checks apply). * name / description / labels (optional) — overrides for top-level suite metadata * app_url (required) — base URL of the dev's running local app, e.g. http://localhost:8080. The CLI fires the new state TWICE against this for the idempotency check + GET-coupling check. * app_dir (optional) — repo root the CLI cd's into; defaults to "." ═══════════════════════════════════════════════════════════════════ HOW THIS TOOL WORKS ═══════════════════════════════════════════════════════════════════ This tool DOES NOT call api-server itself. It returns a 3-step playbook for you (Claude) to walk via Bash — same shape as create_test_suite: 1. Write merged JSON to a temp file. 2. Run `keploy update-test-suite --suite-id <id> --file <path> --branch-id <uuid> --base-url <url>` — runs every static structural check, fires the new state twice locally, applies the GET-coupling check, then POSTs the snapshot-replace. 3. Cleanup the temp file. Walk the playbook in order. If step 2 exits non-zero, surface stdout to the dev — it has the rule violation / failure detail. OUTCOMES the AI should recognize: * Exit 0 + stdout has "✓ suite updated:" + "View:" line → success. Surface the View URL to the dev. * Exit 1 + "preserves no steps from the existing suite" → fetch-first rule was missed. Re-author with step IDs preserved (or call delete_test_suite + create_test_suite as the documented escape hatch). 
* Exit 1 + structural-check violations → fix the suite per the violation messages, then REWRITE the suite file via Bash and RE-RUN this CLI command directly. DO NOT call update_test_suite again to retry — the playbook + file path are already valid; only the JSON content needs revision. The validator output includes a canonical step skeleton on structural failures. * Exit 2 + "couldn't reach the dev's app" → ensure the app is up at app_url and retry. PREREQUISITES the playbook assumes: * The dev's app is up and reachable at app_url. * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`. * Either ~/.keploy/cred.yaml exists or KEPLOY_API_KEY is exported.
    Connector
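The FETCH-FIRST RULE above is essentially a merge over the fetched steps: keep or edit steps carry their existing "id", added steps omit it, and dropping a step deletes it. A minimal sketch of composing such a steps_json, with a pre-flight check mirroring the server's reject rule; `compose_edit` and its parameters are illustrative, not Keploy API:

```python
def compose_edit(existing_steps, edits=None, add=None, delete_ids=frozenset()):
    """Build steps_json per the fetch-first rule."""
    edits = edits or {}
    merged = []
    for step in existing_steps:
        if step["id"] in delete_ids:
            continue  # dropping a step from steps_json deletes it
        # kept/edited steps MUST carry their existing "id"
        merged.append({**step, **edits.get(step["id"], {})})
    for step in (add or []):
        # steps being ADDED omit "id"
        merged.append({k: v for k, v in step.items() if k != "id"})
    if existing_steps and not any(s.get("id") for s in merged):
        raise ValueError('server would reject: "preserves no steps from the existing suite"')
    return merged
```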
  • Pro/Teams — second-pass adversarial certification of an architect.validate run that scored production_ready (A or B first-pass tier). Mints the certified production_ready badge when both reviewers sign off; caps the run to C/emerging when the second pass surfaces a missed production_blocker. MANDATORY DOCTRINE RULE (load-bearing): the badge certifies the EXACT code that produced the validate run_id, NOT 'this codebase' in general. If you modify, fix, or iterate the code between architect.validate and architect.certify — even a single character — cert rejects with code_fingerprint_mismatch. Fixing the code voids the run. The recovery path is always: edit code → architect.validate → fresh run_id → architect.certify on the fresh run. Do NOT cert from a stale run_id after iteration; ask the user to re-validate first. WHEN TO CALL: only after architect.validate returned tier=production_ready AND the user wants the certified badge AND the code has not been touched since the validate run. NOT for tier=draft/emerging/not_applicable runs (typed rejections fire — see below). NOT idempotent across attempts: each call is one of the 3 attempts in the retry budget. BEHAVIOR: atomic one-shot single LLM call, ~60-180s server-side at high reasoning effort (small payloads finish faster; observed p99 ~250s; server-side budget is 20 min, ~5× observed max). Exceeds typical MCP-client tool-call idle budget (~60s in Claude Code), so the FIRST notifications/progress event fires at t=0 carrying the run_id. The run is atomic by contract — no in_progress lifecycle, no cancellation, no resume. Updates the persisted run's result_json (public review URL + me.validation_history(run_id=...) reflect the cert outcome). 
ELIGIBILITY GATE (typed rejection enum on failure): caller must own the run, tier=production_ready, less than 24h old, not already certified, within cert retry budget (max 3 attempts), no other cert call in flight for the same run_id, code fingerprint must match the validated code, AND the submitted payload must be cert-payload-complete (see Payload Completeness below — cert rejects pre-LLM with `payload_incomplete` when an imported module's surface isn't visible in the validate payload that produced this run_id). Rejection reasons (typed Literal): auth_required, paid_plan_required, run_not_found, not_run_owner, not_eligible_tier, not_agentic_component (tier=not_applicable runs), already_certified, certification_age_exceeded, retry_budget_exhausted, code_fingerprint_mismatch, code_fingerprint_missing, code_not_on_file (caller omitted `code` argument AND the 24h cert-retry hold for this run has expired or was never written. Recovery: re-run architect.certify from the same MCP session that ran architect.validate, passing the code explicitly — the server never persists code by design), payload_incomplete (submitted/validated payload imports modules whose contents aren't visible — cert refuses pre-LLM to prevent a false-precision downgrade. Recovery: re-validate with verbatim public-surface stubs for every imported module, then re-cert on the fresh run_id. Empirically validated: PR #157 iter8/iter9 cert rejections were exactly this class — code on disk was correct, the submitted payload merely omitted module visibility), cert_consensus_score_below_threshold (consensus_median<75 — consensus runs only), cert_consensus_unstable_blocker (any principle mode_stability<80% — consensus runs only), run_state_corrupt, cert_persistence_failed, cert_in_flight (a prior architect.certify call on this run_id is still running. Poll me.validation_history for the verdict; do not retry until it resolves). 
PAYLOAD COMPLETENESS (load-bearing for cert eligibility): the cert reviewer reads the EXACT payload that produced the validate run_id. Imported modules whose surface isn't present in the payload cause pre-LLM `payload_incomplete` refusal. Avoidance — when validating with intent to cert, bundle public-surface stubs for every imported module: `from sqlalchemy.exc import SQLAlchemyError` → include a stub class; `from app.db import models` → include a `class models:` namespace stub with the columns/methods you reference; module-level imports of `dataclass`, `Literal`, `json`, `datetime`, `timezone` MUST also be in the payload (cert correctly catches when they're omitted — code would NameError on import). 'Submit Like Production': the payload should be the code as it would actually run, not a compressed sketch. PRE-LLM REJECTION AUDIT TRAIL: when cert rejects before the LLM call (payload_incomplete, code_fingerprint_mismatch, etc.), `certification_attempts=[]` on the response — no attempt landed in the retry budget, no LLM hop occurred. The rejection envelope's `rejection_reason` + `guidance` are the actionable surface. (Audit-trail UI surfacing of pre-LLM rejections is tracked in the platform self-audit set as anomaly #5; out of scope for the cert tool itself.) INPUTS: re-send the SAME code that produced the run_id (the architect persists findings + recommendations, never code, by design — privacy-preserving). Server compares the submitted code's SHA-256 fingerprint to the stored fingerprint and rejects mismatches. Auth: Bearer <token>, Pro or Teams plan required. UK/EU data residency (Cloud Run europe-west2). Code processed transiently by OpenAI (no-training-on-API-data) and dropped; payloads JSON-escaped + delimited as inert untrusted data — prompt-injection inside code is ignored. 
RECOVERY: if your MCP client closes the tool-call early, recover the cert verdict via me.validation_history(run_id=<that-id>) once the server-side LLM call lands — same Bearer token, same pattern as architect.validate. If the cert call fails outright (provider error, persistence error), a fresh architect.certify is the recovery path; the eligibility gate enforces the 3-attempt retry budget. For long-running cert workflows the answer is to re-validate, not to make this tool stateful. OUTCOMES: certification_status ∈ {confirmed_production_ready (badge mints), downgraded_to_emerging (cert review surfaced a missed production_blocker, tier capped at C/emerging), unavailable_provider_error (LLM call failed, retry within budget)}. Cert findings + summary + attempt history surfaced on the persisted run for full inspectability.
    Connector
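The fingerprint gate above (server compares the submitted code's SHA-256 to the stored digest) explains why even a one-character edit voids the run. A minimal sketch of that comparison; the helper names are hypothetical, but SHA-256 is the digest the description specifies:

```python
import hashlib

def code_fingerprint(code: str) -> str:
    """SHA-256 hex digest of the submitted code, UTF-8 encoded."""
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

def check_cert_eligible(stored_fingerprint: str, submitted_code: str) -> None:
    """Reject when the submitted code differs from the validated code at all."""
    if code_fingerprint(submitted_code) != stored_fingerprint:
        raise ValueError(
            "code_fingerprint_mismatch: re-run architect.validate, "
            "then certify the fresh run_id")
```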
  • Execute point-in-time queries for one or more engineering metrics. Returns current metric values for specified time periods, with support for batch queries and optional period-over-period comparisons. Time range (startTime/endTime) cannot exceed 6 months (180 days). PREREQUISITES - Follow this workflow: 1. Discover all available metrics ONCE: Call listMetricDefinitions (view='basic') - cache this response 2. Get metric query metadata ONCE per metric: Call listMetricDefinitions (view='full', key=METRIC_KEY) - supportedAggregations: Valid aggregation methods - orderByAttribute: Attribute path for sorting by metric values - groupByOptions[].key: Valid groupBy keys (use exact values, do NOT guess) - filterOptions[].key: Valid filter keys (use exact values, do NOT guess) Cache the full view response for each metric. Reuse the metadata from cached responses for subsequent queries on the same metric. 3. Construct query: Use the query metadata from the full view responses in step 2 to build valid point-in-time requests IMPORTANT: Cache only results from listMetricDefinitions. Do NOT cache point-in-time query results - always execute fresh queries for current data. Only refresh cached listMetricDefinitions responses if no longer in your context window or explicitly requested. Do NOT guess attribute names - always use exact values from listMetricDefinitions responses. Response includes: - Lightweight metadata: Column definitions optimized for programmatic use - Row data: Actual metric values and dimensional data - No heavy schemas: Source definitions excluded (get from listMetricDefinitions instead) Error responses: - 400: Invalid metric names, date range, validation errors, or unsupported metric combinations - 403: Feature not enabled (contact help@cortex.io)
    Connector
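A minimal sketch of the cache-the-metadata, never-cache-the-results workflow above. `call_tool` stands in for whatever MCP client invocation you use, and the point-in-time tool name (`queryPointInTime`) and argument shapes are assumptions — only `listMetricDefinitions`, its `view`/`key` parameters, and the `groupByOptions[].key` field come from the description:

```python
# Step 2 metadata is cached per metric; point-in-time results never are.
_definition_cache = {}

def get_metric_metadata(call_tool, metric_key):
    """Fetch and cache the 'full' view for one metric (step 2)."""
    if metric_key not in _definition_cache:
        _definition_cache[metric_key] = call_tool(
            "listMetricDefinitions", {"view": "full", "key": metric_key}
        )
    return _definition_cache[metric_key]

def query_metric(call_tool, metric_key, start, end, group_by=None):
    """Build a point-in-time query from cached metadata (step 3).
    Results are never cached: every call hits the tool fresh."""
    meta = get_metric_metadata(call_tool, metric_key)
    valid_group_keys = {g["key"] for g in meta.get("groupByOptions", [])}
    if group_by is not None and group_by not in valid_group_keys:
        # never guess keys: only exact values from the metadata are valid
        raise ValueError(f"unknown groupBy key {group_by!r}; use exact values from metadata")
    args = {"metric": metric_key, "startTime": start, "endTime": end}
    if group_by is not None:
        args["groupBy"] = group_by
    return call_tool("queryPointInTime", args)
```

The cache guard means two queries on the same metric cost one metadata call, which is exactly the reuse the description asks for.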
  • Surface what is actually peaking in short-form video right now for a niche. Returns 3 rising opportunities (format/hook/style/topic) with growth rates, per-entry signal_strength (0-1), sources[] (Google Trends + YouTube velocity + Reddit hot + internal corpus), signal_window, plus 2 saturated patterns to avoid AND top-level provenance with cache_age_hours and cache_status. USE WHEN the user asks "what should I post about", "what's trending in [niche]", or before generating content for the first time. Pairs after analyze_account to validate a creator's formula against current trends. Costs 1 credit. 12-hour cache per niche. TOOL HEALTH: Every response includes a `quality: { level: "full" | "partial" | "degraded", reason?: string }` field. If quality.level is "partial" or "degraded", you MUST flag this to the user explicitly in chat ("Heads up — this call returned partial data: <reason>") before reporting any results. Never silently route around a degraded response. REPORTING: When you summarize this in chat, you MUST cite the data_sources array verbatim and surface cache_status (fresh|stale) — the user needs to know if they're looking at live data. For each rising/saturated entry, surface the growth percentage AND the signal_window verbatim. Never round growth percentages: if the response says "+178.4%", report "+178.4%" — never "+180%".
    Connector
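The reporting rules above can be enforced mechanically. A sketch of a formatter that flags degraded quality first, cites sources and cache status, and quotes growth verbatim — the `rising` list and its `topic`/`growth` field names are assumptions, while `quality`, `data_sources`, `cache_status`, and `signal_window` come from the description:

```python
def report_trends(resp):
    """Render a trends response per the reporting rules: flag partial or
    degraded quality before any results, cite data_sources verbatim,
    surface cache_status, and never round growth percentages."""
    lines = []
    quality = resp.get("quality", {})
    if quality.get("level") in ("partial", "degraded"):
        # must be surfaced explicitly before any results are reported
        lines.append(
            f"Heads up: this call returned {quality['level']} data: "
            f"{quality.get('reason', 'no reason given')}"
        )
    lines.append(
        f"Sources: {', '.join(resp['data_sources'])} (cache: {resp['cache_status']})"
    )
    for entry in resp.get("rising", []):
        # growth stays a verbatim string, so "+178.4%" is never rounded to "+180%"
        lines.append(
            f"Rising: {entry['topic']} {entry['growth']} over {entry['signal_window']}"
        )
    return "\n".join(lines)
```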
  • Returns file metadata (content_type, download_url, download_size, expires_at) for the report or zip artifact. Use artifact='report' (default) for the interactive HTML report (~700KB, self-contained with embedded JS for collapsible sections and interactive Gantt charts — open in a browser). Use artifact='zip' for the full pipeline output bundle (md, json, csv intermediary files that fed the report). While the task is still pending or processing, returns {ready:false,reason:"processing"}. Check readiness by testing whether download_url is present in the response. Once ready, present download_url to the user or fetch and save the file locally. Download URLs expire after 15 minutes (see expires_at); call plan_file_info again to get a fresh URL if needed. Terminal error codes: generation_failed (plan failed), content_unavailable (artifact missing). Unknown plan_id returns error code PLAN_NOT_FOUND.
    Connector
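A polling sketch for the readiness contract above: readiness is tested by the presence of `download_url`, and the terminal error codes abort the loop. `call_tool` and the shape of the error envelope (`error` key) are assumptions; the tool name, arguments, and codes come from the description:

```python
import time

def wait_for_artifact(call_tool, plan_id, artifact="report",
                      poll_seconds=10, max_wait=600):
    """Poll plan_file_info until download_url appears or a terminal
    error code comes back."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        info = call_tool("plan_file_info", {"plan_id": plan_id, "artifact": artifact})
        code = info.get("error")
        if code in ("generation_failed", "content_unavailable", "PLAN_NOT_FOUND"):
            raise RuntimeError(f"terminal error from plan_file_info: {code}")
        if "download_url" in info:
            # download_url lapses at expires_at (15 min); re-call for a fresh one
            return info
        time.sleep(poll_seconds)
    raise TimeoutError(f"plan {plan_id} still processing after {max_wait}s")
```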
  • Pro/Teams — second-pass adversarial certification of an architect.validate run that scored production_ready (A or B first-pass tier). Mints the certified production_ready badge when both reviewers sign off; caps the run to C/emerging when the second pass surfaces a missed production_blocker. MANDATORY DOCTRINE RULE (load-bearing): the badge certifies the EXACT code that produced the validate run_id, NOT 'this codebase' in general. If you modify, fix, or iterate the code between architect.validate and architect.certify — even a single character — cert rejects with code_fingerprint_mismatch. Fixing the code voids the run. The recovery path is always: edit code → architect.validate → fresh run_id → architect.certify on the fresh run. Do NOT cert from a stale run_id after iteration; ask the user to re-validate first. WHEN TO CALL: only after architect.validate returned tier=production_ready AND the user wants the certified badge AND the code has not been touched since the validate run. NOT for tier=draft/emerging/not_applicable runs (typed rejections fire — see below). NOT idempotent across attempts: each call is one of the 3 attempts in the retry budget. BEHAVIOR: atomic one-shot single LLM call, ~60-180s server-side at high reasoning effort (small payloads finish faster; observed p99 ~250s; server-side budget is 20 min, ~5× observed max). Exceeds typical MCP-client tool-call idle budget (~60s in Claude Code), so the FIRST notifications/progress event fires at t=0 carrying the run_id. The run is atomic by contract — no in_progress lifecycle, no cancellation, no resume. Updates the persisted run's result_json (public review URL + me.validation_history(run_id=...) reflect the cert outcome). 
ELIGIBILITY GATE (typed rejection enum on failure): caller must own the run, tier=production_ready, less than 24h old, not already certified, within cert retry budget (max 3 attempts), no other cert call in flight for the same run_id, code fingerprint must match the validated code, AND the submitted payload must be cert-payload-complete (see Payload Completeness below — cert rejects pre-LLM with `payload_incomplete` when an imported module's surface isn't visible in the validate payload that produced this run_id). Rejection reasons (typed Literal): auth_required, paid_plan_required, run_not_found, not_run_owner, not_eligible_tier, not_agentic_component (tier=not_applicable runs), already_certified, certification_age_exceeded, retry_budget_exhausted, code_fingerprint_mismatch, code_fingerprint_missing, code_not_on_file (caller omitted `code` argument AND the 24h cert-retry hold for this run has expired or was never written. Recovery: re-run architect.certify from the same MCP session that ran architect.validate, passing the code explicitly — the server never persists code by design), payload_incomplete (submitted/validated payload imports modules whose contents aren't visible — cert refuses pre-LLM to prevent a false-precision downgrade. Recovery: re-validate with verbatim public-surface stubs for every imported module, then re-cert on the fresh run_id. Empirically validated: PR #157 iter8/iter9 cert rejections were exactly this class — code on disk was correct, the submitted payload merely omitted module visibility), cert_consensus_score_below_threshold (consensus_median<75 — consensus runs only), cert_consensus_unstable_blocker (any principle mode_stability<80% — consensus runs only), run_state_corrupt, cert_persistence_failed, cert_in_flight (a prior architect.certify call on this run_id is still running. Poll me.validation_history for the verdict; do not retry until it resolves). 
PAYLOAD COMPLETENESS (load-bearing for cert eligibility): the cert reviewer reads the EXACT payload that produced the validate run_id. Imported modules whose surface isn't present in the payload cause pre-LLM `payload_incomplete` refusal. Avoidance — when validating with intent to cert, bundle public-surface stubs for every imported module: `from sqlalchemy.exc import SQLAlchemyError` → include a stub class; `from app.db import models` → include a `class models:` namespace stub with the columns/methods you reference; module-level imports of `dataclass`, `Literal`, `json`, `datetime`, `timezone` MUST also be in the payload (cert correctly catches when they're omitted — code would NameError on import). 'Submit Like Production': the payload should be the code as it would actually run, not a compressed sketch. PRE-LLM REJECTION AUDIT TRAIL: when cert rejects before the LLM call (payload_incomplete, code_fingerprint_mismatch, etc.), `certification_attempts=[]` on the response — no attempt landed in the retry budget, no LLM hop occurred. The rejection envelope's `rejection_reason` + `guidance` are the actionable surface. (Audit-trail UI surfacing of pre-LLM rejections is tracked in the platform self-audit set as anomaly #5; out of scope for the cert tool itself.) INPUTS: re-send the SAME code that produced the run_id (the architect persists findings + recommendations, never code, by design — privacy-preserving). Server compares the submitted code's SHA-256 fingerprint to the stored fingerprint and rejects mismatches. Auth: Bearer <token>, Pro or Teams plan required. UK/EU data residency (Cloud Run europe-west2). Code processed transiently by OpenAI (no-training-on-API-data) and dropped; payloads JSON-escaped + delimited as inert untrusted data — prompt-injection inside code is ignored. 
RECOVERY: if your MCP client closes the tool-call early, recover the cert verdict via me.validation_history(run_id=<that-id>) once the server-side LLM call lands — same Bearer token, same pattern as architect.validate. If the cert call fails outright (provider error, persistence error), a fresh architect.certify is the recovery path; the eligibility gate enforces the 3-attempt retry budget. For long-running cert workflows the answer is to re-validate, not to make this tool stateful. OUTCOMES: certification_status ∈ {confirmed_production_ready (badge mints), downgraded_to_emerging (cert review surfaced a missed production_blocker, tier capped at C/emerging), unavailable_provider_error (LLM call failed, retry within budget)}. Cert findings + summary + attempt history surfaced on the persisted run for full inspectability.
    Connector
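The doctrine rule and recovery path above can be guarded client-side before burning one of the 3 cert attempts. A sketch, assuming a generic `call_tool` helper; the SHA-256 comparison mirrors what the description says the server does, and the exception type for an early client disconnect is an assumption:

```python
import hashlib

def code_fingerprint(code: str) -> str:
    # Local stand-in for the server-side check: the server compares the
    # submitted code's SHA-256 to the fingerprint stored at validate time.
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

def certify_run(call_tool, run_id, code, validated_code):
    """Enforce the doctrine rule locally: if the code changed at all since
    architect.validate, re-validate instead of spending a cert attempt on
    a guaranteed code_fingerprint_mismatch."""
    if code_fingerprint(code) != code_fingerprint(validated_code):
        raise ValueError("code changed since validate; re-run architect.validate "
                         "for a fresh run_id before architect.certify")
    try:
        return call_tool("architect.certify", {"run_id": run_id, "code": code})
    except ConnectionError:
        # Client closed the call early; the server-side LLM call may still
        # land. Recover the verdict from the persisted run.
        return call_tool("me.validation_history", {"run_id": run_id})
```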
  • Retrieve metadata for a filing document by `document_id` (from `list_filings`). Returns available content formats with byte sizes, page count, source URL, creation date. Raw upstream preserved under `jurisdiction_data`. Call this before `fetch_document` when a document may be large or its format is unknown — cheaper than a full fetch. Do NOT construct or guess `document_id` — some registries use composite IDs that must come from `list_filings`; synthesized IDs will 404. Empty `available_formats` means the body is paywalled or unavailable upstream. Unsupported jurisdictions return 501; call `list_jurisdictions({supports_tool:'get_document_metadata'})` for the coverage matrix. `fresh=true` bypasses cache but is rarely useful — filings are immutable once registered.
    Connector
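A metadata-first fetch sketch following the guidance above. The per-format field names (`format`, `byte_size`) and `fetch_document`'s `format` argument are assumptions; the tool names, the empty-`available_formats` semantics, and the rule that `document_id` must come from `list_filings` are from the description:

```python
def fetch_if_reasonable(call_tool, document_id, max_bytes=5_000_000):
    """Check metadata before fetching. document_id must come from
    list_filings; synthesized composite IDs will 404 on some registries."""
    meta = call_tool("get_document_metadata", {"document_id": document_id})
    formats = meta.get("available_formats", [])
    if not formats:
        return None  # body paywalled or unavailable upstream
    smallest = min(formats, key=lambda f: f["byte_size"])
    if smallest["byte_size"] > max_bytes:
        return meta  # too large to fetch blindly; surface metadata instead
    return call_tool("fetch_document",
                     {"document_id": document_id, "format": smallest["format"]})
```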
  • Create a binding price quote that locks the price for 15 minutes. Use this tool before booking.checkout to guarantee the quoted price during payment. Do NOT skip this step if the user wants price certainty — without a quoteId, checkout calculates a fresh price that may differ. Returns quoteId (pass to booking.checkout), public and federation totals, per-night breakdown, and expiry timestamp.
    Connector
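A quote-then-checkout sketch of the flow above. The quote tool's name (`booking.quote`) and the ISO-8601 shape of `expires_at` are assumptions; `quoteId`, `booking.checkout`, and the 15-minute lock come from the description:

```python
from datetime import datetime, timezone

def checkout_with_locked_price(call_tool, booking_args):
    """Create the binding quote first so checkout honors the locked price,
    and re-quote if the 15-minute window has already lapsed."""
    quote = call_tool("booking.quote", booking_args)
    expires = datetime.fromisoformat(quote["expires_at"])
    if expires <= datetime.now(timezone.utc):
        quote = call_tool("booking.quote", booking_args)  # stale lock: refresh
    # Without quoteId, checkout calculates a fresh price that may differ.
    return call_tool("booking.checkout", {**booking_args, "quoteId": quote["quoteId"]})
```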