Glama
135,983 tools. Last updated 2026-05-17 12:46

"namespace:dev.agent-module" matching MCP tools:

  • Pro/Teams — second-pass adversarial certification of an architect.validate run that scored production_ready (A or B first-pass tier). Mints the certified production_ready badge when both reviewers sign off; caps the run to C/emerging when the second pass surfaces a missed production_blocker. MANDATORY DOCTRINE RULE (load-bearing): the badge certifies the EXACT code that produced the validate run_id, NOT 'this codebase' in general. If you modify, fix, or iterate the code between architect.validate and architect.certify — even a single character — cert rejects with code_fingerprint_mismatch. Fixing the code voids the run. The recovery path is always: edit code → architect.validate → fresh run_id → architect.certify on the fresh run. Do NOT cert from a stale run_id after iteration; ask the user to re-validate first. WHEN TO CALL: only after architect.validate returned tier=production_ready AND the user wants the certified badge AND the code has not been touched since the validate run. NOT for tier=draft/emerging/not_applicable runs (typed rejections fire — see below). NOT idempotent across attempts: each call is one of the 3 attempts in the retry budget. BEHAVIOR: atomic one-shot single LLM call, ~60-180s server-side at high reasoning effort (small payloads finish faster; observed p99 ~250s; server-side budget is 20 min, ~5× observed max). Exceeds typical MCP-client tool-call idle budget (~60s in Claude Code), so the FIRST notifications/progress event fires at t=0 carrying the run_id. The run is atomic by contract — no in_progress lifecycle, no cancellation, no resume. Updates the persisted run's result_json (public review URL + me.validation_history(run_id=...) reflect the cert outcome). ELIGIBILITY GATE (typed rejection enum on failure): caller must own the run, tier=production_ready, less than 24h old, not already certified, within cert retry budget (max 3 attempts), no other cert call in flight for the same run_id, code fingerprint must match the validated code, AND the submitted payload must be cert-payload-complete (see Payload Completeness below — cert rejects pre-LLM with `payload_incomplete` when an imported module's surface isn't visible in the validate payload that produced this run_id). Rejection reasons (typed Literal): auth_required, paid_plan_required, run_not_found, not_run_owner, not_eligible_tier, not_agentic_component (tier=not_applicable runs), already_certified, certification_age_exceeded, retry_budget_exhausted, code_fingerprint_mismatch, code_fingerprint_missing, code_not_on_file (caller omitted `code` argument AND the 24h cert-retry hold for this run has expired or was never written. Recovery: re-run architect.certify from the same MCP session that ran architect.validate, passing the code explicitly — the server never persists code by design), payload_incomplete (submitted/validated payload imports modules whose contents aren't visible — cert refuses pre-LLM to prevent a false-precision downgrade. Recovery: re-validate with verbatim public-surface stubs for every imported module, then re-cert on the fresh run_id. 
Empirically validated: PR #157 iter8/iter9 cert rejections were exactly this class — code on disk was correct, the submitted payload merely omitted module visibility), cert_consensus_score_below_threshold (consensus_median<75 — consensus runs only), cert_consensus_unstable_blocker (any principle mode_stability<80% — consensus runs only), run_state_corrupt, cert_persistence_failed, cert_in_flight (a prior architect.certify call on this run_id is still running. Poll me.validation_history for the verdict; do not retry until it resolves). PAYLOAD COMPLETENESS (load-bearing for cert eligibility): the cert reviewer reads the EXACT payload that produced the validate run_id. Imported modules whose surface isn't present in the payload cause pre-LLM `payload_incomplete` refusal. Avoidance — when validating with intent to cert, bundle public-surface stubs for every imported module: `from sqlalchemy.exc import SQLAlchemyError` → include a stub class; `from app.db import models` → include a `class models:` namespace stub with the columns/methods you reference; module-level imports of `dataclass`, `Literal`, `json`, `datetime`, `timezone` MUST also be in the payload (cert correctly catches when they're omitted — code would NameError on import). 'Submit Like Production': the payload should be the code as it would actually run, not a compressed sketch. PRE-LLM REJECTION AUDIT TRAIL: when cert rejects before the LLM call (payload_incomplete, code_fingerprint_mismatch, etc.), `certification_attempts=[]` on the response — no attempt landed in the retry budget, no LLM hop occurred. The rejection envelope's `rejection_reason` + `guidance` are the actionable surface. (Audit-trail UI surfacing of pre-LLM rejections is tracked in the platform self-audit set as anomaly #5; out of scope for the cert tool itself.) INPUTS: re-send the SAME code that produced the run_id (the architect persists findings + recommendations, never code, by design — privacy-preserving). Server compares the submitted code's SHA-256 fingerprint to the stored fingerprint and rejects mismatches. Auth: Bearer <token>, Pro or Teams plan required. UK/EU data residency (Cloud Run europe-west2). Code processed transiently by OpenAI (no-training-on-API-data) and dropped; payloads JSON-escaped + delimited as inert untrusted data — prompt-injection inside code is ignored. RECOVERY: if your MCP client closes the tool-call early, recover the cert verdict via me.validation_history(run_id=<that-id>) once the server-side LLM call lands — same Bearer token, same pattern as architect.validate. If the cert call fails outright (provider error, persistence error), a fresh architect.certify is the recovery path; the eligibility gate enforces the 3-attempt retry budget. For long-running cert workflows the answer is to re-validate, not to make this tool stateful. OUTCOMES: certification_status ∈ {confirmed_production_ready (badge mints), downgraded_to_emerging (cert review surfaced a missed production_blocker, tier capped at C/emerging), unavailable_provider_error (LLM call failed, retry within budget)}. Cert findings + summary + attempt history surfaced on the persisted run for full inspectability.
    Connector
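For agents driving the validate-then-certify flow above, a minimal client-side sketch of the fingerprint rule, assuming the server hashes the raw UTF-8 bytes of the submitted payload (the entry only says "SHA-256 fingerprint", so the exact normalization is an assumption; the helper names are illustrative, not part of the architect API):

```python
import hashlib

def payload_fingerprint(code: str) -> str:
    # Assumption: plain SHA-256 over the UTF-8 bytes of the exact payload string.
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

def safe_to_certify(current_payload: str, fingerprint_at_validate: str) -> bool:
    # Pre-flight check: a mismatch here means architect.certify would reject with
    # code_fingerprint_mismatch, and the code must be re-validated first.
    return payload_fingerprint(current_payload) == fingerprint_at_validate

# Record the fingerprint of the payload you send to architect.validate, then check
# it before architect.certify(run_id, code) so a stale run_id never burns one of
# the 3 cert attempts.
```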
  • Use this tool when a user wants cost or sizing for specific deliverables they've already listed. Trigger phrases: 'how much would it cost to build X, Y, and Z', 'estimate the price for these features', 'how many Delivery Units / weeks would these modules take', 'budget for this work', 'price out this scope', 'I need a ballpark for the following'. Use this INSTEAD OF plan_vdc when the user has already decomposed the work into specific modules — don't make them go through pod/role generation again. If the user only describes a goal without modules, prefer plan_vdc. What this tool does: takes 1-30 module descriptions, returns Delivery Units per module, total Delivery Units, project-rate USD cost, and the recommended Delivery Pack (Starter 10 DUs/$2K, Small 60 DUs/$10K, Scale 250 DUs/$40K, or Enterprise).
    Connector
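As a rough illustration of the published pack tiers, a sketch that maps a total Delivery Unit count onto the smallest listed pack (pack names, capacities, and prices are quoted from the entry; the selection logic itself is illustrative, not the tool's own recommendation algorithm):

```python
# Smallest published Delivery Pack that covers a given Delivery Unit total.
PACKS = [("Starter", 10, 2_000), ("Small", 60, 10_000), ("Scale", 250, 40_000)]

def recommend_pack(total_delivery_units: int) -> str:
    for name, capacity_dus, price_usd in PACKS:
        if total_delivery_units <= capacity_dus:
            return f"{name} ({capacity_dus} DUs / ${price_usd:,})"
    return "Enterprise (custom quote)"

print(recommend_pack(85))  # -> "Scale (250 DUs / $40,000)"
```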
  • Submit a solution to Push Realm (agents only - no manual paste/copy flow exists). WHEN TO USE - check all that apply: ✓ You searched Push Realm and solved a problem (ALWAYS offer when you searched) ✓ You discovered deprecated APIs, breaking changes, or new best practices ✓ The solution took meaningful debugging effort (5+ minutes) ✓ It's generic enough to help other agents (not company-specific code) WORKFLOW: 1. Call this tool with your draft solution 2. You'll receive a pending_id and preview 3. Show the preview to the user like this: "Ready to post to Push Realm: 📁 Category: [category_path] 📝 Title: [title] 📄 Content: [first 200 chars]... By posting, you agree to Push Realm's Terms at pushrealm.com/terms.html Post this? [Yes/No]" 4. If user approves → call confirm_learning(pending_id) 5. If user declines → call reject_learning(pending_id) NEVER assume approval - always wait for explicit user confirmation before calling confirm_learning. SEO-OPTIMIZED TITLES (IMPORTANT): Learnings are indexed by search engines. Use titles that match what developers will search for: GOOD titles (include error messages, specific issues): • "crypto.getRandomValues() not supported - React Native UUID fix" • "Connection unexpectedly closed - Mailgun EU region SMTP error" • "ModuleNotFoundError: No module named 'cv2' - Docker OpenCV fix" • "CUDA out of memory - PyTorch batch size optimization" BAD titles (too generic, won't rank in search): • "UUID generation issue" • "Email not working" • "Docker problem solved" • "Fixed memory error" Format: "[Exact error message or problem] - [Framework/Tool] [context]" SAFETY REQUIREMENTS: • NEVER include PII (names, emails, addresses, phone numbers) • NEVER include secrets (API keys, tokens, passwords, credentials) • NEVER include proprietary code or company-specific logic • NEVER include internal paths, hostnames, or project names • Use placeholders like YOUR_API_KEY, YOUR_PROJECT_NAME, /path/to/your/file If unsure whether something is safe to share, ask the user first or use a generic placeholder.
    Connector
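A sketch of that approval loop, with stubbed stand-ins for the actual tool calls (the function signatures are assumptions; only the pending_id, preview, confirm_learning, and reject_learning flow comes from the entry):

```python
# Stub stand-ins for the Push Realm tools; real calls go through your MCP client.
def submit_learning(title: str, category_path: str, content: str) -> dict:
    return {"pending_id": "pending-123", "category_path": category_path,
            "title": title, "preview": content[:200]}

def confirm_learning(pending_id: str) -> None: ...
def reject_learning(pending_id: str) -> None: ...

draft = submit_learning(
    title="crypto.getRandomValues() not supported - React Native UUID fix",
    category_path="mobile/react-native",  # illustrative category
    content="Install react-native-get-random-values and import it before uuid ...",
)
print(f"Ready to post to Push Realm:\n"
      f"📁 Category: {draft['category_path']}\n"
      f"📝 Title: {draft['title']}\n"
      f"📄 Content: {draft['preview']}...\n"
      "By posting, you agree to Push Realm's Terms at pushrealm.com/terms.html\n"
      "Post this? [Yes/No]")

user_approved = False  # never assume approval; wait for an explicit Yes
if user_approved:
    confirm_learning(draft["pending_id"])
else:
    reject_learning(draft["pending_id"])
```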
  • Trigger background datasheet extraction for multiple parts at once (up to 20). Non-blocking — returns immediately with the status of each part. Use this to warm up datasheets for a BOM before calling read_datasheet. Example: prefetch_datasheets(['TPS54302', 'ADS1115', 'LP5907']) If a part comes back 'no_source' on the first call, retry prefetch for that MPN once after 10-30s — the URL resolver is retriable and often finds a source on the second pass. If still 'no_source', use request_datasheet_upload + confirm_datasheet_upload to attach your own PDF (org-private). Part numbers must be specific MPNs (e.g. 'STM32F446RCT6', 'TPS54302DDCR') or LCSC numbers (e.g. 'C2837938'). Do NOT pass bare values ('100nF', '10K'), descriptions, BOM reference designators, test points, or board/module names — see the server instructions for the full rule set. When a BOM has values-only rows, use search_parts first to resolve each to an MPN. DATASHEET STATUS VALUES: - 'ready' — extracted and indexed; call read_datasheet, search_datasheets, or analyze_image. - 'extracting' / 'in_progress' / 'queued' / 'pending' — extraction running or scheduled. Poll check_extraction_status every 5-10s until 'ready' or 'failed'. Typical time: 30s-2min. - 'not_extracted' — known part but datasheet hasn't been fetched yet. Trigger it via prefetch_datasheets (cheapest) or by calling read_datasheet (auto-triggers on first read). - 'no_source' — we couldn't find a public datasheet URL for this MPN. First, retry prefetch_datasheets in 10-30s (the URL resolver re-runs and often finds a source on the second pass). If still 'no_source', the agent can upload the PDF manually via request_datasheet_upload + confirm_datasheet_upload (see those tools). Org-uploaded datasheets are private to the org. - 'unsupported' — PDF exists but can't be extracted (scanned image-only, encrypted, or corrupted). Upload a clean text-based PDF via request_datasheet_upload to override. - 'failed' / 'error' — extraction errored. The response includes the error reason. Retry via prefetch_datasheets or escalate to support. - 'rejected' — input wasn't a real MPN (bare value like '100nF', description, or reference designator). Fix the input and re-call. - 'deduplicated' — another part in the family already has this datasheet; same content is returned under the primary MPN.
    Connector
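A polling sketch built on the status values above, using stand-in functions for the prefetch_datasheets and check_extraction_status calls (the 5-10s poll interval and the single no_source retry window are quoted from the entry; everything else is illustrative):

```python
import time

def prefetch_datasheets(mpns: list[str]) -> dict:  # stand-in for the MCP tool call
    raise NotImplementedError

def check_extraction_status(mpn: str) -> str:  # stand-in for the MCP tool call
    raise NotImplementedError

def wait_for_datasheet(mpn: str, poll_s: int = 10, timeout_s: int = 180) -> str:
    """Poll until 'ready', retrying a 'no_source' result exactly once."""
    retried_no_source = False
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = check_extraction_status(mpn)
        if status == "ready":
            return status
        if status == "not_extracted":
            prefetch_datasheets([mpn])  # cheapest way to trigger extraction
        elif status == "no_source" and not retried_no_source:
            time.sleep(20)              # wait 10-30s, then retry the resolver once
            prefetch_datasheets([mpn])
            retried_no_source = True
            continue
        elif status in ("no_source", "unsupported", "rejected", "failed", "error"):
            return status               # upload a PDF, fix the input, or escalate
        time.sleep(poll_s)              # extracting / queued / pending: keep polling
    return "timeout"
```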
  • Fetch a sanitized public sample section from Refpro's reference deal library. Inputs: deal_type (FF | BRRRR | NC) and section (summary | financials | risk_notes | full). Returns sanitized example markdown content for the requested section, plus a deep-link URL to the canonical version on refpro.ai. The 'full' section stitches summary, financials, and risk_notes in order. All content is sanitized example data — not a real customer deal — and is safe to surface verbatim to end users. No network calls; samples are loaded once at module init.
    Connector

Matching MCP Servers

Matching MCP Connectors

  • Agent Module provides structured, validated knowledge bases engineered for autonomous agent consumption at runtime. Agents retrieve deterministic knowledge instead of scanning unstructured web content — eliminating hallucinated citations in regulated domains.

  • Deterministic compliance and vertical knowledge bases for autonomous agents. Free 24hr trial.

  • Batch-score multiple npm, PyPI, Cargo, or Go packages for supply chain risk. Takes a list of package names and returns a risk table sorted by commitment score (lowest = highest risk first). Risk flags: - CRITICAL: single publisher + >10M weekly downloads (publish-access concentration risk) - HIGH: new package (<1yr) + high downloads (unproven, rapid adoption = supply chain risk) - WARN: low publisher count + high downloads Perfect for auditing a full package.json, requirements.txt, Cargo.toml, or go.mod — paste your dependency list and get a prioritized risk report. For Go: pass full module paths (e.g., "github.com/gin-gonic/gin", "golang.org/x/net") and set ecosystem="golang". The "maintainers" column shows GitHub contributor count since Go has no centralized publisher concept. Examples: score all deps in a project, compare two similar packages, identify abandonware before it becomes a CVE.
    Connector
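Restated as code, the published risk flags look roughly like this (the single-publisher and >10M weekly download thresholds for CRITICAL and the <1 yr age for HIGH are quoted from the entry; the cut-offs used for "high downloads" and "low publisher count" in the HIGH and WARN branches are assumptions):

```python
def risk_flag(publishers: int, weekly_downloads: int, age_years: float) -> str:
    # CRITICAL: single publisher + >10M weekly downloads (publish-access concentration)
    if publishers == 1 and weekly_downloads > 10_000_000:
        return "CRITICAL"
    # HIGH: new package (<1 yr) with rapid adoption; the download cut-off is assumed
    if age_years < 1 and weekly_downloads > 1_000_000:
        return "HIGH"
    # WARN: low publisher count + high downloads; both cut-offs are assumed
    if publishers <= 2 and weekly_downloads > 1_000_000:
        return "WARN"
    return "OK"

print(risk_flag(publishers=1, weekly_downloads=25_000_000, age_years=6.0))  # CRITICAL
```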
  • Produce a Due Diligence Statement per Regulation (EU) 2023/1115 for one or more plots. Each plot carries operator-supplied geometry (GeoJSON Polygon for >4 ha, Point for ≤4 ha non-cattle per Article 2(28)), country of production (ISO3), Combined Nomenclature code (HS-6+), and quantity in kg. The endpoint applies the regulation's 10 % canopy / 0.5 ha / 5 m height forest definition (Article 2(4)) using the EU Commission's expected JRC GFC2020 V3 baseline plus Hansen GFC v1.12 loss-year confirmation; Sims et al. 2025 driver attribution and a RADD SAR fallback will layer on once those connectors are wired (both are absent today). The response is an Annex II-shaped envelope with per-plot verdict (pass/fail/not_in_scope/indeterminate/fail_below_de_minimis), failing-cell fraction, and signed fact CIDs for every per-cell verdict — operators quote them in the company's Article 12 record. Article 9(1)(b) legality (land tenure, FPIC, country-of-origin laws) is structurally out of EO scope; the response carries an explicit `legality_disclaimer` for that reason. When to use: call when a commodity supplier or EU importer needs to evidence due diligence under Regulation (EU) 2023/1115. Use the plot-level signed receipts as evidence inside the operator's company record; pair with a partner legality module before submitting the final DDS to the EU Information System (TRACES NT). For a single plot, pass one entry in `plots`. For batch supply-chain audits, pass up to a few dozen plots in one call — the endpoint fans out per plot. Surface the failing-cell fraction, the chosen forest baseline, and the legality disclaimer in the user-facing response so the operator understands what the engine claims (and does not).
    Connector
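A sketch of what a `plots` payload might look like under those rules; the field names are assumptions, and only the geometry rule, ISO3 country, CN code, and kg quantity requirements come from the entry:

```python
# Hypothetical request body; field names are illustrative, not the tool's schema.
dds_request = {
    "plots": [
        {   # plot >4 ha: operator-supplied GeoJSON Polygon
            "geometry": {"type": "Polygon", "coordinates": [[
                [-52.30, -3.10], [-52.28, -3.10], [-52.28, -3.08],
                [-52.30, -3.08], [-52.30, -3.10],
            ]]},
            "country_of_production": "BRA",  # ISO3
            "cn_code": "090111",             # HS-6+: coffee, not roasted
            "quantity_kg": 18_000,
        },
        {   # plot of 4 ha or less, non-cattle: a Point suffices per Article 2(28)
            "geometry": {"type": "Point", "coordinates": [-52.41, -3.22]},
            "country_of_production": "BRA",
            "cn_code": "090111",
            "quantity_kg": 2_400,
        },
    ],
}
# The Annex II-shaped response carries per-plot verdicts, the failing-cell fraction,
# signed fact CIDs, and the legality_disclaimer; surface those to the operator.
```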
  • Fetch Bitrix24 app development documentation by exact title (use `bitrix-search` with doc_type app_development_docs). Returns labeled plain-text fields (Title, URL, Module, Category, Description, Content) without Markdown.
    Connector
  • Get the canonical steps for installing petal_components in a Phoenix project. Call this when the user asks to install petal_components, when you are setting up a new Phoenix project that needs UI components, or when verifying an existing installation. Returns step-by-step instructions covering mix.exs, mix deps.get, Tailwind v4 CSS config, and the web module import. Steps are idempotent - safe to follow on a project that is partially configured.
    Connector
  • Enable or disable an AI module on a site. The module must be in the plan's available module list. Requires: API key with write scope. Args: slug (site identifier), module_name (module to toggle). Available modules: "chatbot" (AI chat widget), "seo" (SEO optimization), "translation" (content translation), "content" (AI content generation). Returns: {"module": "chatbot", "enabled": true, "message": "Module enabled"}. Errors: NOT_FOUND (unknown slug or module not in plan), VALIDATION_ERROR (invalid module name).
    Connector
  • Get authoritative Senzing SDK reference data for flags, migration, and API details. Use this instead of search_docs when you need precise SDK method signatures, flag definitions, or V3→V4 migration mappings. Topics: 'migration' (V3→V4 breaking changes, function renames/removals, flag changes), 'flags' (all V4 engine flags with which methods they apply to), 'response_schemas' (JSON response structure for each SDK method), 'functions' / 'methods' / 'classes' / 'api' (search SDK documentation for method signatures, parameters, and examples — use filter for method or class name), 'all' (everything). Use 'filter' to narrow by method name, module name, or flag name
    Connector
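For example, the topic and filter pairing described above could be exercised like this (parameter names mirror the description; the filter values are illustrative):

```python
# Illustrative argument shapes only; the exact parameter spelling is an assumption.
queries = [
    {"topic": "migration"},                                     # V3 to V4 breaking changes
    {"topic": "flags", "filter": "SZ_ENTITY"},                  # narrow to one flag family
    {"topic": "methods", "filter": "get_entity_by_record_id"},  # one method's signature
    {"topic": "response_schemas", "filter": "search_by_attributes"},
]
```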
  • Request a free 24-hour trial key. Unlocks all 4 content layers on the chosen vertical. 500-call cap. No payment required.
    Connector
  • Pro/Teams — first-pass doctrine review of agentic code/workflow against the 10-principle Agentic AI Blueprint. Returns code_classification (autonomous_agentic_workflow vs non_agentic_component), per-principle findings (verdict, severity_score 0-100, severity_class, code-cited evidence, recommendation), severity-weighted readiness (score|null, grade|null, tier ∈ {production_ready, emerging, draft, not_applicable}), recommended examples, reproducibility envelope (model, seed, doctrine_fingerprint, prompt_template_fingerprint), persistence_status with shareable run_id/badge_url/review_url. WHEN TO CALL: the user wants a governance audit, readiness score, or production_ready badge on an agent/workflow they just built or changed. WHEN NOT TO CALL: non-agentic plumbing (math utilities, type aliases, event-loop helpers, single-shot request/response handlers) returns tier=not_applicable with score=null/grade=null — that's not a failure, the doctrine simply doesn't grade non-agentic code, and architect.certify will refuse with not_agentic_component. Submit the OWNING agentic workflow instead. BEHAVIOR: long-running LLM call (~60-180s typical at high reasoning effort, single-pass; server-side budget 20 min). Mints run_id at t=0; first notifications/progress event carries run_id as recovery handle; keepalive every 30s. Persists ValidationRun + UserValidationRun + AIValidationRunLog + LLMUsageLog atomically; on rollback, badge/review URLs are stripped. Auth: Bearer <token>, Pro/Teams plan. UK/EU residency; transient OpenAI processing (no-training); prompt-injection in code is inert. INPUTS: send FULL file contents verbatim as `implementation_context` (NO truncation, NO `...` placeholders, NO comment removal — the architect treats your `...` as literal code and hallucinates bugs that don't exist). If too large, split into MULTIPLE calls scoped by file/module; never truncate one call. Pass repository="<name>" to group runs into a project trend. Pass private_session=true to bypass server-side logging (persistence + recovery disabled). focus_area narrows scope; unmatched focus_area fails explicitly rather than silently widening. PAYLOAD COMPLETENESS (load-bearing if you intend to architect.certify this run): the validate first-pass is permissive — it scores on doctrine alignment + structural patterns visible in the submitted code. Cert's adversarial second-pass is rigorous — it scores on cert-payload-completeness as well as code correctness. A run that scores 100/A at validate can cert-reject pre-LLM with `payload_incomplete` when imported modules' surfaces aren't visible. To validate with INTENT TO CERT, also bundle verbatim public-surface stubs for every imported module: `from sqlalchemy.exc import SQLAlchemyError` → include a stub class; `from app.db import models` → include a `class models:` namespace stub with the columns/methods the code references; module-level imports of `dataclass`, `Literal`, `json`, `datetime`, `timezone` MUST also be in the payload (cert correctly catches when they're omitted — the module would NameError on import as submitted). 'Submit Like Production': the payload should be the code as it would actually run. Empirically reconfirmed PR #157 iter8 → iter9 cert downgrades. SCORE VARIANCE DISCLOSURE (anomaly #10 — empirically documented): validate scores are POINT ESTIMATES with an observed empirical variance band of ~20-67 pts on BYTE-IDENTICAL input. 
Runs against the same repository, same code, same deterministic seed (the seed is derived from input — same input → same seed) can produce materially different scores AND different top-blocker rankings, because OpenAI's reasoning models at reasoning_effort=high are not strictly deterministic even with the seed parameter pinned. Empirical evidence: PR #157 iter1 33/F vs iter2 100/A on the byte-identical baseline-race primitives (+67 spread); invoice-payment-manager #158 38/F vs #159 74/C (+36 spread). The `reproducibility_mode='best_effort'` field on every response is the platform's honest disclosure of this property. For decisions where stability matters more than speed, call `architect.validate_consensus` (N=3-5 aggregated, median verdict + per-principle stability metrics) instead — collapses the variance, surfaces unstable principles explicitly. A single validate run is a single roll; consensus is the right tool when one score isn't enough. VERIFICATION LAYERS (the two-layer doctrine this platform practices on itself): validate verifies DOCTRINE ALIGNMENT against the 10-principle Blueprint — design patterns, hand-off explicitness, operational-state inspectability, race/blocker handling at the architectural level. validate does NOT guarantee runtime correctness. cert verifies PAYLOAD COMPLETENESS and runs an adversarial second pass over the submitted code — catches production_blockers the first pass missed, name-errors on import, missing module surfaces, etc. cert does NOT verify runtime correctness either. Passing validate is a NECESSARY condition for production_ready, not a sufficient one. Runtime correctness (does this actually execute and behave?) is verified at the THIRD layer — your tests, types, walks. The platform's own recursive-integrity practice: every PR runs validate against its own primitives, then cert. Real bugs surfaced via this practice in PR #157 — NULL-UUID false-positive (iter3) and tie-breaker mismatch (iter5) — that 25 unit tests had missed. Two-layer verification is the discipline, not 'either/or'. RECOVERY: if your MCP client closes the tool-call early, fetch the result via me.validation_history(run_id=<that-id>) once the run completes server-side — same Bearer token (per-user auth). Unavailable when private_session=true. TYPED FAILURES: timed_out, rate_limited, dependency_unavailable, schema_mismatch (each carries retryable + next_action). NEXT STEP: if tier=production_ready (A or B grade), the response carries certification_status='not_evaluated' — call architect.certify(run_id, code) to mint the certified production_ready badge (separate ~60-150s adversarial review, eligibility-gated). See Payload Completeness above for the common pre-cert pitfall.
    Connector
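A minimal sketch of the "Submit Like Production" guidance: prepend verbatim public-surface stubs for imported modules to the payload so a later architect.certify pass sees every surface the code touches (the file name and stub members are illustrative; only the stub-bundling pattern comes from the entry):

```python
from pathlib import Path

# Public-surface stubs for the modules the agent code imports: only the members
# the code actually touches. The class names below are illustrative.
STUBS = '''
class SQLAlchemyError(Exception):
    """Stub for sqlalchemy.exc.SQLAlchemyError."""

class models:
    """Namespace stub for app.db.models."""
    class Invoice:
        id: str
        status: str
        amount_cents: int
'''

# Full file contents, verbatim: no truncation, no "..." placeholders.
agent_code = Path("invoice_payment_agent.py").read_text()

implementation_context = STUBS + "\n" + agent_code
# architect.validate(implementation_context=implementation_context,
#                    repository="invoice-payment-manager")
# If tier=production_ready, certify the SAME bytes:
# architect.certify(run_id, code=implementation_context)
```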
  • Is it safe to deploy these changes? Cross-references your changed modules against active constraints, recent incidents, knowledge freshness, and active alerts. Returns a composite verdict (ready/caution/block) with per-module breakdown and actionable recommendations. Use BEFORE deploying to catch constraint violations, recent regressions in the same area, stale knowledge that needs verification, and active alerts that might interact with your changes.
    Connector
  • Given a Svelte component or module, returns a list of suggestions to fix any issues it has. This tool MUST be used whenever the user asks to write Svelte code, before the code is sent back to the user.
    Connector
  • What went wrong last time we touched this module? Returns past incidents, deploy failures, gotchas, and active constraints for a module or system. Use BEFORE modifying infrastructure code, deploy scripts, or any module with a history of fragility. Surfaces the kind of tribal knowledge that prevents repeat failures — Docker bind mount traps, Vault agent write patterns, stale dist/ artifacts, port conflicts, and similar operational landmines.
    Connector
  • Get a behavioral commitment profile for any Go module on proxy.golang.org. Takes a full module path (e.g., "github.com/gin-gonic/gin", "golang.org/x/net", "k8s.io/client-go", "gopkg.in/yaml.v3") and returns real signals: module age, version count, publish cadence, GitHub contributors (the closest equivalent to "publishers" since Go has no centralized publisher concept — git push access is the publish equivalent), GitHub stars, OpenSSF Scorecard score. The Go ecosystem has no centralized download counter, so this profile is GitHub-primary — the linked source repository's activity, contributor count, and Scorecard carry more weight than for npm/PyPI/Cargo. Stars are used as the popularity proxy. Useful for: vetting Go dependencies before adding to go.mod, identifying abandonware, supply chain risk assessment. Examples: "github.com/gin-gonic/gin", "golang.org/x/crypto", "github.com/spf13/cobra", "k8s.io/api"
    Connector
  • Submit a Proof of Value assessment after exploring the AI Compliance trial. The assessment includes quality scoring and subscription intent. Include a contact channel so we can reach you about membership activation.
    Connector
  • Retrieve structured knowledge from Agent Module verticals. Returns deterministic, validated knowledge nodes. The index layer is always free; all 4 content layers are available via a trial key on the ethics vertical.
    Connector