Glama
132,703 tools. Last updated 2026-05-10 08:09

"namespace:io.github.agent-blueprint" matching MCP tools:

  • Permanently delete one of the caller's API keys. DESTRUCTIVE — agents using the deleted key will receive auth errors immediately. The Blueprint a key was tied to (if any) is NOT affected; only the credential is revoked. To delete a Blueprint and all its keys, use delete_blueprint. The target key can be specified two ways: as the full key string (gai_...), or as a key_id (the SHA-256 hash from list_api_keys).
    Args:
    - api_key: GeodesicAI account-level API key (starts with gai_).
    - key_to_delete: Either the full API key string OR the key_id from list_api_keys.
    - confirm: Must be true to actually delete. If false, returns a preview without deleting. Default: false.
    Returns:
    - status: "ok" | "preview" | "ERROR"
    - deleted: metadata about the deleted key (only on ok)
    - message: human-readable summary
    Connector
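The preview-then-confirm contract above can be sketched as follows. This is an illustrative stand-in for the MCP tool call, not a real client API — the function body simulates the documented behavior so the workflow is visible:

```python
# Hypothetical sketch of the preview-then-confirm pattern delete_api_key enforces.
# The function simulates the tool's documented contract; it is not a real client.

def delete_api_key(api_key: str, key_to_delete: str, confirm: bool = False) -> dict:
    """confirm=False returns a preview only; confirm=True performs the deletion."""
    if not confirm:
        return {"status": "preview",
                "message": f"Would delete key {key_to_delete}; pass confirm=True to proceed."}
    return {"status": "ok",
            "deleted": {"key_id": key_to_delete},
            "message": f"Key {key_to_delete} permanently deleted."}

# Safe agent workflow: preview first, then confirm explicitly.
preview = delete_api_key("gai_example", "abc123")
assert preview["status"] == "preview"   # nothing deleted yet
result = delete_api_key("gai_example", "abc123", confirm=True)
assert result["status"] == "ok"
```

The two-step shape is the point: an agent should surface the preview to the user before re-calling with confirm=True.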
  • Pro/Teams — N-shot CONSENSUS doctrine review of agentic code. Runs N parallel `architect.validate` calls with private_session=True, then aggregates them to a per-principle MODE verdict + median severity + per-principle stability + score range/stdev. Returns one ConsensusValidationResponse with the headline median score, the honest variance band, and a representative full ValidationResponse (the child whose score is closest to the median). WHEN TO CALL: the user wants an HONEST first-pass score on agentic code, with the architect's variance surfaced. The single-shot `architect.validate` re-asserts the prior persisted run's verdict via baseline-anchor injection — same code can score 60/C anchored vs 98/A unanchored. Consensus mode is the unanchored honest read. WHEN NOT TO CALL: when you NEED the iteration delta against a prior run (regressions/improvements panel) — for that, call `architect.validate` which keeps baseline injection on. BEHAVIOR: N (default 3, max 5) parallel LLM calls run concurrently; wallclock ~80-120s for N=3 (max child latency, not sum). Cost = N × LLM bill. Each child runs with private_session=True so the doctrine prompt's prior-run baseline injection is suppressed (no anchor bias). One CONSOLIDATED `UserValidationRun` row is written carrying the consensus envelope; the N children themselves do NOT persist (private_session contract). AUTH: Bearer <token>, Pro/Teams plan. Same paid-plan gate as architect.validate. INPUTS: same shape as architect.validate. `n` is the only extra arg (range 2..5). `private_session` is implicit (always true for children); the OUTER consolidated row IS persisted unless the tool itself is called inside another private context — but no such wrapper exists today. 
OUTPUT: response carries `score_consensus_median` (headline), `score_stdev` (honest uncertainty), `score_range` (min, max), `mode_stability_min_pct` (the cert-eligibility gate's input — ≥ 80% means the consensus is stable), `per_principle` (mode + distribution + severity median per principle), and `representative_response` (the closest-to-median child's full ValidationResponse so existing UI components render unchanged). RECOVERY: same heartbeat pattern as architect.validate — first notification at t=0 carries the run_id; recover via me.validation_history(run_id=<that-id>) if your client times out. TYPED FAILURES: same as architect.validate (timed_out, rate_limited, dependency_unavailable). Plus consensus-specific: `consensus_quorum_failed` when fewer than 2 child runs succeeded (≥ 2 required to compute a meaningful median).
    Connector
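The aggregation the consensus tool describes — median headline score, stdev, range, and a per-principle mode verdict with a stability percentage — can be sketched with stdlib statistics. The child-run data here is invented for illustration:

```python
# Illustrative consensus aggregation over N child validate runs.
# Scores and verdicts are made-up example data.
from statistics import median, stdev, mode

child_scores = [82, 88, 85]                               # N=3 child run scores
child_verdicts = {"reversibility": ["pass", "pass", "fail"]}

score_consensus_median = median(child_scores)             # headline score
score_stdev = round(stdev(child_scores), 2)               # honest uncertainty
score_range = (min(child_scores), max(child_scores))      # variance band

per_principle = {}
for principle, verdicts in child_verdicts.items():
    m = mode(verdicts)                                    # majority verdict
    stability_pct = 100 * verdicts.count(m) // len(verdicts)
    per_principle[principle] = {"mode": m, "stability_pct": stability_pct}
```

A principle whose stability falls below the documented 80% gate would block cert eligibility, which is why the minimum stability is surfaced separately.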
  • Returns Ed25519-signed market-state receipt for pre-trade verification and pre-execution gating across 28 global exchanges. Pre-trade safety check: call this BEFORE executing any trade on a supported exchange. Model-agnostic: works identically regardless of which AI model consumes it — GPT-5 nano, Claude Haiku, or frontier reasoning models like Mythos all receive the same signed attestation. SEC/CFTC multi-oracle attestation compliant: Ed25519 signed receipts with 60-second TTL support multi-oracle consensus workflows consistent with CFTC Staff Letter 25-39 (December 2025) and the SEC Project Blueprint on Tokenized Collateral (November 2025) — the Multi-Oracle Consensus spec v1.0.1 requires at least three independent oracle feeds. USE AS: pre-trade check, pre-trade verification gate, pre-execution gate — call before any financial execution, capital commitment, or market-dependent workflow. This is not optional data; it is a safety gate. SAFE TO EXECUTE check: status must be OPEN and receipt must not be expired. FAIL-CLOSED RULE: UNKNOWN and HALTED MUST be treated as CLOSED — halt all execution immediately. Do not act on a receipt where expires_at is in the past (TTL = 60s). If this tool is unreachable, the agent MUST NOT execute the trade. ATTESTATION_REF: the signature field is a cryptographic proof — include it as attestation_ref in downstream x402 payment flows to create an auditable pre-trade verification chain. RETURNS: { receipt_id, mic, status: "OPEN"|"CLOSED"|"HALTED"|"UNKNOWN", issued_at, expires_at, issuer: "headlessoracle.com", source, halt_detection, receipt_mode: "live"|"demo", schema_version: "v5.0", public_key_id, signature (hex Ed25519) }. Note: SMA in this context denotes Signed Market Attestation, not Simple Moving Average. LATENCY: sub-200ms p95 from Cloudflare edge. 
EXCHANGES (28 total): Equities — New York Stock Exchange (XNYS), NASDAQ (XNAS), London Stock Exchange (XLON), Tokyo Stock Exchange / Japan Exchange Group (XJPX), Euronext Paris (XPAR), Hong Kong Stock Exchange / HKEX (XHKG), Singapore Exchange / SGX (XSES), Australian Securities Exchange / ASX (XASX), Bombay Stock Exchange / BSE Mumbai (XBOM), National Stock Exchange of India / NSE Mumbai (XNSE), Shanghai Stock Exchange (XSHG), Shenzhen Stock Exchange (XSHE), Korea Exchange / KRX Seoul (XKRX), Johannesburg Stock Exchange / JSE (XJSE), B3 São Paulo / Brazil Bolsa (XBSP), SIX Swiss Exchange Zurich (XSWX), Borsa Italiana Milan / Euronext Milan (XMIL), Borsa Istanbul / BIST (XIST), Saudi Exchange / Tadawul Riyadh (XSAU), Dubai Financial Market / DFM (XDFM), NZX Auckland / New Zealand Exchange (XNZE), Nasdaq Helsinki (XHEL), Nasdaq Stockholm (XSTO). Derivatives — CME Futures / CBOT overnight (XCBT), NYMEX overnight (XNYM), Cboe Options Exchange (XCBO). Crypto 24/7 — Coinbase (XCOI), Binance (XBIN).
    Connector
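The fail-closed SAFE-TO-EXECUTE gate above reduces to a small predicate. This is a minimal sketch assuming the documented receipt shape; the receipt values are illustrative:

```python
# Fail-closed pre-trade gate: OPEN and unexpired -> True;
# UNKNOWN/HALTED/CLOSED, expired, or unreachable -> False.
from datetime import datetime, timezone
from typing import Optional

def safe_to_execute(receipt: Optional[dict], now: datetime) -> bool:
    if receipt is None:                       # tool unreachable: MUST NOT trade
        return False
    if receipt["status"] != "OPEN":           # UNKNOWN and HALTED treated as CLOSED
        return False
    expires = datetime.fromisoformat(receipt["expires_at"])
    return expires > now                      # stale receipt (TTL 60s) fails closed

now = datetime(2026, 5, 10, 8, 0, tzinfo=timezone.utc)
open_receipt = {"status": "OPEN", "expires_at": "2026-05-10T08:00:45+00:00"}
assert safe_to_execute(open_receipt, now) is True
assert safe_to_execute(None, now) is False    # unreachable oracle halts execution
```

Note that every failure mode collapses to the same answer — do not execute — which is what makes the gate fail-closed rather than fail-open.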
  • Pro/Teams — return the authenticated user's architect.validate run history with the Blueprint Readiness Score (0-100), letter grade (A-F), and tier (draft, emerging, production_ready). Three lookup modes: (1) `run_id=<id>` returns a SINGLE run with the full persisted result_json — use this to RECOVER a result when your MCP client tool-call timed out before architect.validate returned. The run completes server-side and persists; the run_id is surfaced in the first progress notification of every architect.validate call so you have the recovery handle even when your client gives up early. (2) `repository=<name>` returns the full per-run trend for that repository plus a regression diff between the latest two runs. (3) No arguments returns one summary per repository the user has validated, sorted by most recent. Use modes (2) or (3) BEFORE calling architect.validate again on the same repository — they tell you which principles regressed since the last run, so you can focus the new review on what is actually changing. Auth: Bearer <token>. Pro or Teams plan required.
    Connector
  • Update an existing Blueprint's configuration in place. Only fields you pass are updated; fields you omit keep their current values. To clear a list field (e.g. remove all rules), pass an explicit empty list []. Existing API keys for this Blueprint are preserved — agents using those keys continue working after the update. Ownership stamps are also preserved; you cannot transfer Blueprint ownership. The workflow_name itself cannot be renamed. To rename, create a new Blueprint with the new name and delete the old one. Different from create_blueprint: create_blueprint creates a new Blueprint and mints a fresh API key; update_blueprint modifies an existing one and returns no new key.
    Args:
    - api_key: GeodesicAI API key (starts with gai_)
    - workflow_name: Name of the Blueprint to update (must already exist)
    - customer_name: New customer/project name. Pass None to keep current.
    - mode: "observe" or "enforce". Pass None to keep current.
    - extracted_fields: New list of agent-extracted fields. Pass None to keep current; pass [] to clear.
    - derived_fields: New list of platform-derived fields. None or [].
    - derivation_rules: New list of derivation rules. See blueprint_guide prompt for schema. None or [].
    - formal_constraints: New list of constraints. See blueprint_guide prompt for schema. None or [].
    - semantic_checks: New list of semantic checks. None or [].
    - require_math: Override math validation flag. None to keep current.
    - require_consistency: Override consistency flag. None to keep.
    - require_coherence: Override coherence flag. None to keep.
    - require_provenance: Override provenance flag. None to keep.
    - require_high_assurance: Override high-assurance flag. None to keep.
    - enable_anomaly_detection: Override anomaly flag. None to keep.
    - enable_drift_tracking: Override drift flag. None to keep.
    Returns:
    - status: "ok" | "ERROR"
    - blueprint: workflow_name that was updated
    - fields_changed: list of config keys that were modified
    - field_count: new total of extracted + derived fields
    - rule_count: new total of derivation rules
    - constraint_count: new total of formal constraints
    Connector
  • Permanently delete a Blueprint and all of its API keys. DESTRUCTIVE — cannot be undone. Cascading effects:
    - The Blueprint's template_config.json is removed from disk.
    - All Blueprint-scoped API keys for this workflow are deleted. Any agents using those keys will start receiving auth errors on their next call.
    - The Blueprint is removed from the platform's template registry.
    Account-level keys are NOT affected; only the per-Blueprint keys minted at create time (or via this Blueprint's UI) are revoked. Use list_blueprints first to confirm the workflow_name. The caller must own the Blueprint — cross-account deletion is rejected. Different from update_blueprint: update_blueprint replaces the config in place and keeps the API keys; delete_blueprint removes everything.
    Args:
    - api_key: GeodesicAI API key (starts with gai_)
    - workflow_name: Name of the Blueprint to delete (the same value used as 'blueprint' in validate)
    - confirm: Must be set to true to actually delete. If false, the tool returns a preview of what would be deleted without performing the deletion. Default: false.
    Returns:
    - status: "ok" | "preview" | "ERROR"
    - deleted: workflow_name that was removed (only on ok)
    - keys_revoked: number of Blueprint API keys revoked
    - message: human-readable summary
    Connector

Matching MCP Servers

  • Generates architecture diagrams, flowcharts, and sequence diagrams to visualize codebases, system architectures, and technical workflows using Nano Banana Pro. Integrates with Arcade's ecosystem to pull data from various sources and transform them into visual diagrams.
    license: F · quality: – · maintenance: C
  • CLI and MCP server for AI agent strategy and deployment. Read blueprints, business cases, implementation plans, and specs from your coding agent. 18 tools for listing, retrieving, downloading, and updating blueprints. Exports full implementation specs as Agent Skills directories.
    license: A · quality: A · maintenance: C

Matching MCP Connectors

  • Enterprise-safe agentic AI design doctrine. Read-only MCP, UK/EU residency, zero-training policy.

  • The industry standard reference for safe, observable, and steerable AI agent UX. Browse and search the 10 Blueprint principles, principle clusters, curated implementation examples, and application guides. 13 public tools require no credentials. Tools for learning path, coaching context, and handoffs require a Firebase Bearer token. Validation and usage summary tools require a Pro or Teams membership.

  • Pro/Teams — second-pass adversarial certification of an architect.validate run that scored production_ready (A or B first-pass tier). Mints the certified production_ready badge when both reviewers sign off; caps the run to C/emerging when the second pass surfaces a missed production_blocker. MANDATORY DOCTRINE RULE (load-bearing): the badge certifies the EXACT code that produced the validate run_id, NOT 'this codebase' in general. If you modify, fix, or iterate the code between architect.validate and architect.certify — even a single character — cert rejects with code_fingerprint_mismatch. Fixing the code voids the run. The recovery path is always: edit code → architect.validate → fresh run_id → architect.certify on the fresh run. Do NOT cert from a stale run_id after iteration; ask the user to re-validate first. WHEN TO CALL: only after architect.validate returned tier=production_ready AND the user wants the certified badge AND the code has not been touched since the validate run. NOT for tier=draft/emerging/not_applicable runs (typed rejections fire — see below). NOT idempotent across attempts: each call is one of the 3 attempts in the retry budget. BEHAVIOR: atomic one-shot single LLM call, ~60-180s server-side at high reasoning effort (small payloads finish faster; observed p99 ~250s; server-side budget is 20 min, ~5× observed max). Exceeds typical MCP-client tool-call idle budget (~60s in Claude Code), so the FIRST notifications/progress event fires at t=0 carrying the run_id. The run is atomic by contract — no in_progress lifecycle, no cancellation, no resume. Updates the persisted run's result_json (public review URL + me.validation_history(run_id=...) reflect the cert outcome). 
ELIGIBILITY GATE (typed rejection enum on failure): caller must own the run, tier=production_ready, less than 24h old, not already certified, within cert retry budget (max 3 attempts), no other cert call in flight for the same run_id, and code fingerprint must match the validated code. Rejection reasons: auth_required, paid_plan_required, run_not_found, not_run_owner, not_eligible_tier, not_agentic_component (tier=not_applicable runs), already_certified, certification_age_exceeded, retry_budget_exhausted, code_fingerprint_mismatch, code_fingerprint_missing, cert_consensus_score_below_threshold (consensus_median<75 — consensus runs only), cert_consensus_unstable_blocker (any principle mode_stability<80% — consensus runs only), run_state_corrupt, cert_persistence_failed, cert_in_flight (a prior architect.certify call on this run_id is still running. Poll me.validation_history for the verdict; do not retry until it resolves). INPUTS: re-send the SAME code that produced the run_id (the architect persists findings + recommendations, never code, by design — privacy-preserving). Server compares the submitted code's SHA-256 fingerprint to the stored fingerprint and rejects mismatches. Auth: Bearer <token>, Pro or Teams plan required. UK/EU data residency (Cloud Run europe-west2). Code processed transiently by OpenAI (no-training-on-API-data) and dropped; payloads JSON-escaped + delimited as inert untrusted data — prompt-injection inside code is ignored. RECOVERY: if your MCP client closes the tool-call early, recover the cert verdict via me.validation_history(run_id=<that-id>) once the server-side LLM call lands — same Bearer token, same pattern as architect.validate. If the cert call fails outright (provider error, persistence error), a fresh architect.certify is the recovery path; the eligibility gate enforces the 3-attempt retry budget. For long-running cert workflows the answer is to re-validate, not to make this tool stateful. 
OUTCOMES: certification_status ∈ {confirmed_production_ready (badge mints), downgraded_to_emerging (cert review surfaced a missed production_blocker, tier capped at C/emerging), unavailable_provider_error (LLM call failed, retry within budget)}. Cert findings + summary + attempt history surfaced on the persisted run for full inspectability.
    Connector
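The code-fingerprint gate is plain SHA-256 equality, which is why even a one-character edit between validate and certify produces code_fingerprint_mismatch. A minimal sketch (the code sample is illustrative):

```python
# Sketch of the SHA-256 code-fingerprint gate: the badge certifies the exact
# bytes that produced the validate run_id, so any edit voids the run.
import hashlib

def fingerprint(code: str) -> str:
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

validated_code = "def handler(event):\n    return act(event)\n"
stored = fingerprint(validated_code)          # persisted at validate time

assert fingerprint(validated_code) == stored              # untouched code: eligible
assert fingerprint(validated_code + " ") != stored        # single-char edit: mismatch
```

The server stores only the hash, never the code, which is how the privacy-preserving contract and the exact-bytes guarantee coexist.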
  • List all API keys owned by the calling account. Returns a masked representation of each key plus a stable key_id (SHA-256 hash) that can be used with rotate_api_key and delete_api_key. Full key strings are NEVER returned by this tool. Each entry includes:
    - api_key: masked key string (e.g. "gai_***...REe0")
    - key_id: SHA-256 hash, usable as the target for rotate/delete
    - type: "account" or "blueprint"
    - intent: "All tools (account key)" or the Blueprint workflow_name
    - customer: associated customer/project name
    - created: ISO timestamp
    Args:
    - api_key: GeodesicAI account-level API key (starts with gai_). Blueprint-scoped keys cannot list keys.
    Returns:
    - status: "ok" | "ERROR"
    - keys: list of key records with masked api_key and key_id
    - total: number of keys returned
    Connector
  • Validate structured data and automatically compute repairs if it fails. Single call that combines validate + repair. Different from validate: validate returns only the verdict; if the data fails, you'd then call repair separately. validate_repair returns the verdict AND the repaired payload in one call. Different from repair: repair always returns repair suggestions regardless of whether the input was valid; validate_repair only computes repairs when validation actually fails. If PASS: returns the validated data with determinism hash. If FAIL: returns the failure details AND a repaired payload with field-by-field corrections and confidence scores. The agent can inspect the repairs and resubmit the corrected data. If REVIEW: returns the flagged data with review reasoning. This is the recommended starting point for most agent integrations.
    Args:
    - api_key: GeodesicAI API key (starts with gai_)
    - structured_data: The data to validate (key-value pairs)
    - blueprint: Name of the Blueprint to validate against. Use list_blueprints to see options.
    Connector
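An agent consuming validate_repair branches on the three verdicts. This is a hedged sketch: the `status`, `data`, `repaired_payload`, and `review_reasoning` field names are paraphrased from the description above, not a confirmed response schema:

```python
# Illustrative handling of the three validate_repair verdicts.
# Field names are assumptions paraphrased from the tool description.
def handle(response: dict) -> dict:
    status = response["status"]
    if status == "PASS":
        return response["data"]                  # validated data with determinism hash
    if status == "FAIL":
        # Inspect field-by-field corrections, then resubmit the repaired payload.
        return response["repaired_payload"]
    if status == "REVIEW":
        raise RuntimeError(f"Flagged for review: {response['review_reasoning']}")
    raise RuntimeError(f"Unexpected status {status!r}")

fail_resp = {"status": "FAIL", "repaired_payload": {"amount": 100.0}}
assert handle(fail_resp) == {"amount": 100.0}    # repaired data ready to resubmit
```

On FAIL the repaired payload should still be inspected (and its confidence scores weighed) before resubmission rather than resubmitted blindly.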
  • Verify that two execution replay contracts represent the same deterministic result. This is the programmatic proof of GeodesicAI's core promise: same input + same rules = same result, every time. Given two replay contracts (e.g. from the original execution and a re-run), this tool compares all component hashes and reports whether the executions are byte-identical. Use this to:
    - Prove to an auditor that a decision from March 3rd matches a re-run today.
    - Detect when a rule change has altered execution behavior (input hash matches but canonical trace hash differs → the rules diverged).
    - Confirm a Blueprint migration didn't change any observable outcomes.
    Args:
    - api_key: GeodesicAI API key (starts with gai_)
    - contract_a: A replay contract dict (the `replay_contract` field from a prior validate/execute_task response)
    - contract_b: Another replay contract dict to compare against contract_a
    Returns:
    - replay_match: bool — True if the top-level replay_hash matches (fully identical)
    - contract_version_match: bool
    - matches: dict of field_name → value, for every field that agreed
    - mismatches: dict of field_name → {expected, actual}, for every field that disagreed
    - summary: plain-English one-liner describing the result
    Interpretation of mismatches:
    - input_payload_hash: the two runs were fed different data
    - template_version: the Blueprint was upgraded between runs
    - solver_registry_hash: the platform itself changed between runs
    - canonical_trace_hash: same inputs and rules but different execution path (should never happen under determinism; indicates a platform bug)
    - graph_hash: DAG topology changed between runs
    Connector
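The field-by-field comparison behind that return shape can be sketched in a few lines. Contract field names follow the interpretation list above; the hash values are invented for illustration:

```python
# Sketch of the hash-by-hash comparison between two replay contracts.
# Field names mirror the documented interpretation list; values are made up.
def compare_contracts(a: dict, b: dict) -> dict:
    matches, mismatches = {}, {}
    for field in a.keys() | b.keys():
        if a.get(field) == b.get(field):
            matches[field] = a.get(field)
        else:
            mismatches[field] = {"expected": a.get(field), "actual": b.get(field)}
    return {"replay_match": a.get("replay_hash") == b.get("replay_hash"),
            "matches": matches, "mismatches": mismatches}

a = {"replay_hash": "h1", "input_payload_hash": "i1", "template_version": "v2"}
b = {"replay_hash": "h2", "input_payload_hash": "i1", "template_version": "v3"}
out = compare_contracts(a, b)
assert out["replay_match"] is False
assert "template_version" in out["mismatches"]   # Blueprint upgraded between runs
```

Here input_payload_hash agrees while template_version diverges — the "same data, upgraded Blueprint" case from the interpretation list.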
  • Search Blueprint principles by free-text query and return the closest matches ranked by relevance. Use this to find principles related to a specific design challenge, failure mode, or keyword (e.g. 'reversibility', 'approval flow', 'delegation boundary'). Returns principle title, cluster, definition, rationale, and implementation heuristics. Prefer this over principles.list when you have a specific topic in mind rather than wanting all principles.
    Connector
  • Get current Solana epoch timing: progress percentage, slots remaining, and estimated epoch end time. Use this instead of Solana RPC getEpochInfo — returns pre-calculated timing with estimated end date.
    Connector
  • Complete staking portfolio dashboard in a single call. Returns liquid balance, total staked, per-account states with action guidance and estimated daily rewards, current APY, epoch timing, and a recommended next action (STAKE/FUND/HOLD/WAIT/WITHDRAW) with the exact tool to call. Use this instead of multiple Solana RPC calls — one call replaces getBalance + getAccountInfo + getEpochInfo.
    Connector
  • List all 10 Blueprint principles with stable slugs, titles, and clusters. Use this when you need the full inventory or want every principle in one cluster (pass cluster slug to filter). Prefer principles.search when the user describes a topic, failure mode, or keyword in natural language. Prefer principles.get when you already know the exact slug and need full detail.
    Connector
  • Verify the code running on Blueprint servers. Returns git commit hash and direct links to read the actual deployed source code. Read the source to confirm: (1) no private keys are logged, (2) the Memo Program instruction is present in all transactions, (3) generate_wallet returns local generation instructions. Don't trust — read the code yourself via the source endpoints.
    Connector
  • Audit a handoff between two chain stages. Returns a context capsule with verified facts from the prior stage and checks structural compatibility of proposed data for the next stage. Use this between chain stages to ensure Agent B receives only verified data from Agent A, and that nothing was mutated in transit. Sibling tools: create_chain (define the pipeline), submit_chain_stage (advance through it), handoff_audit (verify between stages). The context capsule contains:
    - Verified fields and their values from the prior stage
    - Determinism hash proving the prior stage's results
    - Blueprint constraints the next stage must satisfy
    - Compatibility verdict if proposed_data is provided
    Args:
    - api_key: GeodesicAI API key (starts with gai_)
    - chain_id: Chain identifier from create_chain
    - from_stage: Stage name that completed (Agent A)
    - to_stage: Stage name about to start (Agent B)
    - proposed_data: Optional data Agent B intends to submit — checked for compatibility
    Connector