agent-bom
agent-bom is a comprehensive AI supply chain security scanner and runtime enforcement MCP server for discovering, assessing, and remediating vulnerabilities across AI agent infrastructure, MCP servers, and dependencies.
Core Scanning & Discovery
- `scan` – Full AI supply chain scan: auto-discovers MCP configs (Claude Desktop, Cursor, Windsurf, VS Code Copilot, etc.), extracts packages, queries OSV.dev for CVEs, assesses credential exposure, computes blast radius, and returns a structured report. Supports Docker image scanning, policy evaluation, SBOM ingestion, and NVD/EPSS/CISA KEV enrichment.
- `inventory` – Fast discovery and package extraction without CVE scanning; a quick inventory of MCP configs, servers, packages, and transport types.
- `where` – List all MCP client config discovery paths and show which files exist on the current system.
- `check` – Check a specific package (npm, PyPI, Go, Cargo, Maven, NuGet) for known CVEs before installing, with severity, CVSS score, and fix version.
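Under the hood, a `check`-style pre-install lookup reduces to a single OSV.dev query. A minimal sketch of the request body (the payload shape is OSV's public `/v1/query` contract; agent-bom's actual client code may differ):

```python
import json

def osv_query_payload(name: str, version: str, ecosystem: str) -> dict:
    """Build the request body for a POST to https://api.osv.dev/v1/query."""
    # OSV expects the ecosystem label as the registry spells it,
    # e.g. "PyPI" for Python packages, "npm" for Node packages.
    return {"package": {"name": name, "ecosystem": ecosystem}, "version": version}

payload = osv_query_payload("flask", "2.0.0", "PyPI")
print(json.dumps(payload))
```

Posting that payload returns a `vulns` array; an empty array means no known advisories for that exact version.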
Risk Analysis
- `blast_radius` – Map the full attack chain for a CVE: affected packages → MCP servers → agents → exposed credentials and tools.
- `context_graph` – Build an agent context graph with lateral movement analysis (BFS paths) to answer "if agent X is compromised, what else is reachable?"
- `runtime_correlate` – Cross-reference scan results with proxy runtime audit logs to identify which vulnerable tools were actually called in production.
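The lateral-movement question is a plain graph reachability problem. A minimal BFS sketch over a hypothetical context graph (node names and schema are illustrative, not agent-bom's internal model):

```python
from collections import deque

# Hypothetical context graph: an edge means "access flows from left to right".
graph = {
    "agent:cursor": ["mcp:sqlite-mcp", "mcp:github-mcp"],
    "mcp:sqlite-mcp": ["cred:DB_URL", "tool:query_db"],
    "mcp:github-mcp": ["cred:GITHUB_TOKEN"],
    "cred:DB_URL": [],
    "tool:query_db": [],
    "cred:GITHUB_TOKEN": [],
}

def reachable(start: str) -> set[str]:
    """BFS: everything an attacker can touch if `start` is compromised."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(reachable("agent:cursor")))
```

Compromising the agent reaches both MCP servers plus every credential and tool behind them, while compromising a single MCP server reaches only its own slice.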
Policy, Compliance & Remediation
- `policy_check` – Evaluate security policy rules (severity thresholds, CISA KEV, AI risk flags, denied packages) against scan results; returns pass/fail with violations.
- `compliance` – Map findings to 47 controls across OWASP LLM Top 10, OWASP MCP Top 10, MITRE ATLAS, and NIST AI RMF with per-control status and an overall score.
- `remediate` – Generate actionable fix commands (npm/pip upgrades), credential scope reduction guidance, and flag unfixable vulnerabilities.
- `cis_benchmark` – Run CIS Foundations Benchmark checks against AWS (18 checks) or Snowflake (12 checks) with per-check pass/fail results.
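A policy check of this shape is a pass/fail fold over findings. A minimal sketch, assuming illustrative field names (`severity`, `kev`, `package`) rather than agent-bom's real finding schema:

```python
SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def policy_check(findings, max_severity="HIGH", deny_kev=True, denied_packages=()):
    """Return (passed, violations); rule names are illustrative."""
    violations = []
    for f in findings:
        if SEVERITY_RANK[f["severity"]] >= SEVERITY_RANK[max_severity]:
            violations.append(f"{f['id']}: severity {f['severity']} >= {max_severity}")
        if deny_kev and f.get("kev"):
            violations.append(f"{f['id']}: listed in CISA KEV")
        if f["package"] in denied_packages:
            violations.append(f"{f['id']}: denied package {f['package']}")
    return (not violations, violations)

findings = [
    {"id": "CVE-2025-1234", "package": "better-sqlite3", "severity": "CRITICAL", "kev": True},
    {"id": "CVE-2024-0001", "package": "left-pad", "severity": "LOW", "kev": False},
]
ok, why = policy_check(findings)
print(ok, why)
```

The CRITICAL + KEV finding trips two rules, the LOW finding none, so the overall check fails with two violations.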
Trust & Integrity
- `skill_trust` – Assess SKILL.md/instruction files across 5 trust categories with a benign/suspicious/malicious verdict.
- `verify` – Verify package integrity via SHA-256/SRI hashes and SLSA build provenance attestations against npm/PyPI registries.
- `marketplace_check` – Pre-install trust check for an MCP server package: download count, CVE status, registry verification, and trust signals.
- `registry_lookup` – Query the built-in threat intelligence registry (109+ MCP servers) for risk level, known tools, credential requirements, and verification status.
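The hash side of integrity verification is mechanical: compute a digest over the downloaded artifact and compare it to the pinned value. A self-contained sketch of SRI-style SHA-256 checking (the registry lookup and SLSA attestation steps are omitted):

```python
import base64
import hashlib

def sri_sha256(data: bytes) -> str:
    """Subresource-Integrity-style digest: 'sha256-' + base64(sha256(data))."""
    return "sha256-" + base64.b64encode(hashlib.sha256(data).digest()).decode()

def verify_integrity(data: bytes, expected_sri: str) -> bool:
    return sri_sha256(data) == expected_sri

tarball = b"fake package bytes"   # stand-in for a downloaded artifact
pin = sri_sha256(tarball)         # what a lockfile or registry would record
print(verify_integrity(tarball, pin), verify_integrity(b"tampered bytes", pin))
```

Any byte-level tampering changes the digest, so the second check fails.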
Advanced Capabilities
- `generate_sbom` – Generate a standards-compliant SBOM in CycloneDX 1.6 or SPDX 3.0 format.
- `diff` – Compare a fresh scan against a baseline to identify new/resolved vulnerabilities and package inventory changes.
- `code_scan` – Run SAST via Semgrep on source code to detect SQL injection, XSS, command injection, hardcoded credentials, and more.
- `fleet_scan` – Batch-scan a list of MCP server names against the security registry for fleet-wide risk assessment.
- `analytics_query` – Query vulnerability trends, posture history, and runtime event summaries from ClickHouse.
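To see what `generate_sbom` output looks like structurally, here is the skeleton of a CycloneDX 1.6 JSON BOM. This is a minimal sketch of the public spec's shape; the real output also carries `metadata`, `serialNumber`, hashes, and (for ML components) the ML BOM extensions:

```python
import json

def minimal_cyclonedx(components):
    """Skeleton CycloneDX 1.6 JSON BOM from (ecosystem, name, version) tuples."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.6",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                # package-url ties the component to its registry coordinates
                "purl": f"pkg:{ecosystem}/{name}@{version}",
            }
            for (ecosystem, name, version) in components
        ],
    }

bom = minimal_cyclonedx([("npm", "better-sqlite3", "9.0.0")])
print(json.dumps(bom, indent=2))
```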
Additional features: real-time runtime enforcement proxy with behavioral attack pattern detection, MCP config drift watching, SIEM integration (Splunk, Datadog, Elasticsearch), output in JSON/SARIF/HTML/Mermaid formats, and AI-specific scanning for GPU/ML packages and model provenance (HuggingFace, Ollama, MLflow, W&B).
Scans AWS cloud infrastructure and Amazon Q configurations to identify security vulnerabilities and ensure compliance with CIS benchmarks.
Integrates with ClickHouse to provide security scan analytics, visualization, and posture scoring for AI infrastructure.
Performs security scanning of Databricks environments to detect misconfigurations and dependency vulnerabilities.
Scans Docker images and Docker-based MCP servers for security risks, tool poisoning, and dependency vulnerabilities.
Integrates as a CI/CD gate to automate security scans and enforce compliance policies during the development lifecycle.
Supports deployment and fleet-wide security scanning of AI agent infrastructure within Kubernetes using Helm charts.
Discovers and analyzes JetBrains AI configurations to identify potential credential leaks and security risks.
Enables dispatching security alerts and vulnerability findings to Jira for incident management and remediation tracking.
Scans Kubernetes clusters to map vulnerability propagation and assess the security posture of AI agent deployments.
Discovers and scans MLflow platforms to identify security risks and verify the provenance of AI models.
Provides integration with OpenTelemetry for monitoring and tracing the security scan pipeline and execution.
Dispatches real-time security alerts and scan reports to Slack channels via webhooks for immediate notification.
Provides governance and security scanning for Snowflake instances, including compliance checks against CIS Snowflake benchmarks.
Generates standardized Software Bill of Materials (SBOM) reports in the SPDX format for security compliance and transparency.
Analyzes security risks and maps the blast radius for AI agent tools and MCP servers utilizing SQLite databases.
CVE-2025-1234 (CRITICAL · CVSS 9.8 · CISA KEV)
├── better-sqlite3@9.0.0 (npm)
├── sqlite-mcp (MCP Server · unverified · root)
├── Cursor IDE (Agent · 4 servers · 12 tools)
├── ANTHROPIC_KEY, DB_URL, AWS_SECRET (Credentials exposed)
└── query_db, read_file, write_file, run_shell (Tools at risk)
Fix: upgrade better-sqlite3 → 11.7.0

Blast radius is the core idea: CVE → package → MCP server → agent → credentials → tools. CWE-aware impact keeps a DoS from being reported like credential compromise.
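One way to make scoring CWE-aware is to bucket CWE IDs into impact classes and weight the blast-radius contribution accordingly. The mapping and weights below are purely illustrative assumptions, not agent-bom's shipped taxonomy:

```python
# Illustrative CWE → impact buckets; the real taxonomy is richer.
CWE_IMPACT = {
    "CWE-522": "credential-compromise",  # insufficiently protected credentials
    "CWE-78": "code-execution",          # OS command injection
    "CWE-400": "denial-of-service",      # uncontrolled resource consumption
}
IMPACT_WEIGHT = {
    "credential-compromise": 1.0,
    "code-execution": 0.9,
    "denial-of-service": 0.4,
    "unknown": 0.6,
}

def blast_weight(cwe_id: str) -> float:
    """Scale a CVE's blast-radius contribution by what it can actually do."""
    return IMPACT_WEIGHT[CWE_IMPACT.get(cwe_id, "unknown")]

print(blast_weight("CWE-400"), blast_weight("CWE-522"))
```

With weights like these, a resource-exhaustion bug contributes far less to the attack-path score than a credential-compromise bug in the same position.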
Try the demo
agent-bom agents --demo --offline

The demo uses a curated sample so the output stays reproducible across releases. Every CVE shown is a real OSV/GHSA match against a genuinely vulnerable package version — no fabricated findings (locked in by tests/test_demo_inventory_accuracy.py). For a real scan, run agent-bom agents, or add -p . to fold project manifests and lockfiles into the same result.
Pick your entrypoint
Goal | Run | What you get |
Find what is installed and reachable | | Agent discovery, MCP mapping, project dependency findings, blast radius |
Turn findings into a fix plan | | Prioritized remediation with fix versions and reachable impact |
Check a package before install | | Machine-readable pre-install verdict |
Scan a container image | | OS and package CVEs with fixability |
Audit IaC or cloud posture | | Misconfigurations, manifest hardening, optional live cluster posture |
Review findings in a persistent graph | | API, dashboard, unified graph, current-state and diff views |
Inspect live MCP traffic | | Inline runtime inspection, detector chaining, response/argument review |
Quick start
pip install agent-bom # CLI
# pipx install agent-bom # isolated global install
# uvx agent-bom --help # ephemeral run
agent-bom agents # discover + scan local AI agents and MCP servers
agent-bom agents -p . # add project lockfiles + manifests
agent-bom check flask@2.0.0 --ecosystem pypi # pre-install CVE gate
agent-bom image nginx:latest # container image scan
agent-bom iac Dockerfile k8s/ infra/main.tf # IaC scan, optionally `--k8s-live`

After the first scan:
agent-bom agents -p . --remediate remediation.md # fix-first plan
agent-bom agents -p . --compliance-export fedramp -o evidence.zip # tamper-evident evidence bundle
pip install 'agent-bom[ui]' && agent-bom serve # API + dashboard

Product views
These come from the live product path, using the built-in demo data pushed through the API. See docs/CAPTURE.md for the canonical capture protocol.
Dashboard — Risk overview
The landing page is the Risk overview: a letter-grade gauge, the four headline counters (actively exploited · credentials exposed · reachable tools · top attack-path risk), the security-posture grade with sub-scores (policy + controls, open evidence, packages + CVEs, reach + exposure, MCP configuration), and the score breakdown for each driver.

Dashboard — Attack paths and exposure
The second dashboard frame focuses on the fix-first path list and the coverage / backlog KPIs below it, so the attack-path drilldown stays readable without a tall stitched screenshot.

Fix-first remediation
Risk, reach, fix version, and framework context in one review table — operators act without jumping between pages.

Agent mesh
Agent-centered shared-infrastructure graph — selected agents, their shared MCP servers, tools, packages, and findings.

Inside the engine: parsers, taint, call graph, blast-radius scoring.
External calls are limited to package metadata, version lookups, and CVE enrichment.
Enterprise self-hosted deployment
agent-bom runs end-to-end inside your infrastructure — your AWS account, your VPC, your EKS cluster, your Postgres / ClickHouse / Snowflake, your SSO, your KMS. No hosted control plane. No mandatory vendor backend. No telemetry.
This section is deployment-first: what runs in your infrastructure, what the
data path looks like, which stores hold state, and how a focused pilot narrows
that same architecture without inventing a different product. The detailed
rollout runbooks live under site-docs/deployment/.
Default self-hosted deployment shape
agent-bom is easiest to reason about as three layers:
entry points: local CLI scans, GitHub Action CI/CD gates, endpoint fleet sync, proxy sidecars/wrappers, and an optional central gateway
operator plane: the self-hosted API + UI, scan/fleet/gateway/compliance routes, and job orchestration in your EKS cluster or self-managed compute
data plane: Postgres/Supabase for transactional state, with ClickHouse or Snowflake added only when your deployment actually needs them
flowchart LR
subgraph entry["Entry points in your environment"]
cli["CLI scans<br/>agents · image · iac"]
gha["GitHub Action<br/>CI/CD gate + SARIF"]
fleet["Endpoint fleet<br/>--push-url sync"]
proxy["Proxy / sidecar<br/>stdio or HTTP/SSE"]
gateway["Central gateway<br/>agent-bom gateway serve"]
end
subgraph targets["Targets under review"]
local["Local repos + stdio MCPs"]
remote["Remote MCPs + SaaS + cluster workloads"]
end
subgraph control["Self-hosted operator plane"]
api["API + UI<br/>findings · graph · remediation"]
routes["Fleet / policy / compliance routes<br/>tenant-scoped API"]
jobs["Scan jobs + ingest workers"]
end
subgraph data["Your data stores"]
pg["Postgres / Supabase<br/>jobs · fleet · graph · audit"]
ch["ClickHouse (optional)<br/>analytics + long-retention events"]
snow["Snowflake (optional)<br/>warehouse-native deployment"]
end
cli --> local
gha --> local
fleet --> jobs
proxy --> remote
gateway --> remote
proxy --> routes
gateway --> routes
jobs --> api
routes --> api
api --> pg
api -. optional analytics .-> ch
api -. optional warehouse path .-> snow

This is the architecture. A pilot is just a narrower rollout profile over the same surfaces and stores.
Rollout profiles
Profile | Turn on first | Keep optional until needed |
Local + CI/CD gate | CLI scans + GitHub Action + HTML/SARIF output | fleet, proxy, gateway, ClickHouse |
Focused pilot | scan + fleet + proxy + API/UI | ClickHouse, Snowflake, full gateway rollout |
Standard self-hosted | scan + fleet + proxy + gateway + API/UI | ClickHouse |
Regulated / zero-trust | standard self-hosted + Istio/Kyverno/ExternalSecret | Snowflake |
The gateway closes the biggest deployment gap for remote MCP usage: one central URL in your EKS fronts N remote MCP upstreams, so laptops do not each need their own proxy config. See the multi-MCP gateway design and the focused EKS rollout.
Core surfaces and entry points, one shared graph
Surface | CLI / route | What it does | Runs as |
scan |
| Discovery, inventory, CVE enrichment, blast-radius scoring | CLI + CronJob |
CI/CD gate | GitHub Action | Pull-request and release gating, SARIF, policy-driven exits | GitHub Actions runner |
fleet |
| Endpoint + collector fleet ingest with tenant scoping | API endpoint |
proxy / runtime |
| Inline MCP JSON-RPC inspection + policy enforcement | K8s sidecar or laptop wrapper |
gateway |
| Central HTTP traffic plane plus shared policy/audit plane | Service + API routes |
API + UI |
| Findings, graph, remediation, compliance, posture | 2 Deployments + HPA |
By default, findings, fleet data, audit logs, graph state, and remediation outputs stay in your infrastructure. Optional egress (OSV lookups, NVD enrichment, Slack / Jira / Vanta / Drata webhooks, SIEM / OTLP) is operator-controlled.
Two enforcement shapes, one control plane
Pilot teams pick per workload:
- `agent-bom gateway serve` — central multi-upstream HTTP gateway. One service in your EKS fronts N MCP upstreams (SaaS MCPs, Snowflake-hosted MCPs, in-cluster MCPs), and every laptop points at `/mcp/{server-name}` over HTTP/SSE. Fleet-driven auto-discovery via `--from-control-plane` means the upstream list comes from the scans your team already runs, not a blank YAML. Source: `src/agent_bom/gateway_server.py`, CLI: `src/agent_bom/cli/_gateway.py`, tests: `tests/test_gateway_server.py`.
- `agent-bom proxy` — per-MCP sidecar or stdio wrapper (`proxy.py:527` stdio, `proxy.py:258` HTTP/SSE). One instance per server. The honest mode for stdio-only MCPs and for workload-local enforcement where a shared traffic plane would hairpin.
Both modes pull the same gateway policy (/v1/gateway/policies) and push to the same audit sink (/v1/proxy/audit). Central control, edge enforcement, no hairpinning.
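The inline decision both modes make is small: inspect a JSON-RPC message against the pulled policy and allow or deny. A minimal sketch — the JSON-RPC shape is standard MCP, but the policy schema here is an illustrative assumption:

```python
# Illustrative pulled policy; the real /v1/gateway/policies schema may differ.
policy = {"denied_tools": {"run_shell"}, "max_arg_bytes": 1024}

def check_policy(rpc: dict) -> tuple[bool, str]:
    """Allow or deny a single MCP JSON-RPC message inline."""
    if rpc.get("method") != "tools/call":
        return True, "pass-through"          # only tool calls are enforced here
    tool = rpc["params"]["name"]
    if tool in policy["denied_tools"]:
        return False, f"tool {tool} denied by policy"
    if len(str(rpc["params"].get("arguments", ""))) > policy["max_arg_bytes"]:
        return False, "argument payload over limit"
    return True, "allowed"

call = {"jsonrpc": "2.0", "method": "tools/call",
        "params": {"name": "run_shell", "arguments": {"cmd": "rm -rf /"}}}
print(check_policy(call))
```

In the real deployment the same decision also emits an audit event to `/v1/proxy/audit`, which is what makes central control with edge enforcement work.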
Backend matrix — pick what fits your data
agent-bom does not treat every backend as interchangeable. Pick per capability — full detail in backend-parity.md.
Capability | SQLite | Postgres / Supabase (default) | ClickHouse (analytics) | Snowflake (warehouse-native) |
Scan jobs + fleet agents + gateway policies + audit log | ✓ | ✓ | n/a (not a transactional store) | ✓ |
Exceptions, schedules, graph | ✓ (SQLite stores ship in repo) | ✓ | n/a | n/a (not yet ported) |
API keys + trend store | Postgres-only | ✓ | n/a | n/a (not yet ported) |
Row-level tenant isolation | ✓ | ✓ | ✓ | ✓ (governance-oriented) |
High-volume OLAP / time-series | n/a | n/a | ✓ | ✓ (via Snowpark) |
Best for | laptops, single-node | standard EKS pilot | audit + analytics at scale | you already live in Snowflake |
Source: src/agent_bom/api/store.py, postgres_store.py, clickhouse_store.py, snowflake_store.py. Parity roadmap: backend-parity.md.
Common deployment shapes:
Pilot default — Postgres (or Supabase) control plane. Everything works, fastest install.
Analytics-heavy — Postgres + ClickHouse. Postgres stays transactional; ClickHouse ingests the audit/event firehose.
Snowflake-native (unified stack) — Snowflake as the primary and analytics store. Uses Hybrid Tables for transactional writes (scan / fleet / policy / audit), columnar tables for analytics, Snowpipe Streaming for real-time ingest, and the Postgres-compatible protocol where clients need it. Cross-cloud replication lets EKS read/write the same tables your Cortex MCPs read, regardless of region. Best when you already govern data there. See snowflake-backend.md.
Ready-made Helm values files
Three shipped examples in deploy/helm/agent-bom/examples/:
File | Shape | Use when |
| Postgres + MCP-focused scanner CronJob + restricted ingress | Pilot scope, MCP + agents + fleet + proxy |
| Postgres pool tuned + HPA + pod anti-affinity + PriorityClass | Production rollout |
| Istio mTLS + Kyverno policy + PSA restricted | Regulated / zero-trust environments |
| Snowflake as primary backend via key-pair auth | You already govern data in Snowflake |
The scoped product stack
Most self-hosted teams start with the surfaces below. The focused pilot simply turns on a narrower subset first; it does not use a different architecture. Every one of them maps to code in this repo and ships today.
- scan — discovery, inventory, CVE, image, IaC, Kubernetes, cloud analysis (`src/agent_bom/cli/agents/`)
- CI/CD gate — GitHub Action packaging of the scan surface for pull-request and release workflows with SARIF output
- fleet — endpoint + collector inventory pushed into the control plane (`POST /v1/fleet/sync`)
- proxy / runtime — per-MCP sidecar or stdio wrapper — the honest mode for stdio MCPs and workload-local enforcement (`src/agent_bom/proxy.py`)
- gateway — two things, same namespace:
  - central policy + audit plane (`/v1/gateway/*`) that every enforcement point pulls from and pushes to (`src/agent_bom/api/routes/gateway.py`)
  - central HTTP traffic plane (`agent-bom gateway serve`) that fronts N remote MCP upstreams behind one URL with fleet-driven auto-discovery, bearer + OAuth2 client-credentials auth injection, inline `check_policy`, and audit push (`src/agent_bom/gateway_server.py`, `src/agent_bom/cli/_gateway.py`)
- API + UI — operator plane for findings, graph, remediation, audit, policy, compliance (`src/agent_bom/api/server.py`, `ui/`)
1. External flow — where the data comes from
flowchart LR
clients["Cursor · Claude · VS Code<br/>Codex · Cortex · Continue"]
cli["agent-bom agents --push"]
prx["agent-bom proxy <mcp>"]
cp(["agent-bom control plane<br/>in your EKS cluster"])
clients -.-> cli
clients -.-> prx
cli -->|HTTPS push| cp
prx -->|policy pull · audit push| cp

2. Inside your EKS cluster — what actually deploys
The Helm chart installs a single namespace with the control plane, its backup job, and the operator surface. Selected MCP workloads run alongside with an agent-bom-proxy sidecar that pulls gateway policy and pushes audit events back.
flowchart TB
subgraph ns["namespace: agent-bom"]
direction TB
api["Deployment: agent-bom-api<br/>3 replicas · HPA · /readyz drain"]
ui["Deployment: agent-bom-ui<br/>2 replicas"]
cron["CronJob: controlplane-backup<br/>pg_dump → S3 (SSE-KMS)"]
es[("ExternalSecret<br/>API keys · HMAC key · DB URL")]
obs["PrometheusRule + Grafana dashboard ConfigMap"]
end
subgraph work["Selected MCP workloads (same or adjacent ns)"]
direction LR
mcpsvc["MCP server pod"]
proxy["Sidecar: agent-bom-proxy"]
mcpsvc -.- proxy
end
api --- ui
api --- es
api -. scrape / alert .- obs
api --- cron
proxy -->|policy pull · audit push| api

Outside the namespace but in your VPC: Postgres (primary state), ClickHouse (optional analytics), External Secrets wired to KMS, and Prometheus + Grafana + OTel scraping the API. The restore round-trip is exercised in CI (backup-restore.yml).
3. How a request flows through the control plane
flowchart TB
REQ([HTTP request])
BODY[Body size + read timeout]
TRACE[Trust headers + W3C trace]
AUTH["Auth — API key · OIDC · SAML"]
RBAC[RBAC role check]
TENANT[Tenant context propagation]
QUOTA[Tenant quota + rate limit]
ROUTE[Route handler]
AUDIT[(HMAC audit log)]
STORE[(Postgres · ClickHouse · Snowflake<br/>KMS at rest)]
REQ --> BODY --> TRACE --> AUTH --> RBAC --> TENANT --> QUOTA --> ROUTE
ROUTE --> AUDIT
ROUTE --> STORE

Every layer is testable on its own; failures emit Prometheus metrics. Operators introspect a live request via GET /v1/auth/debug and see rotation status via GET /v1/auth/policy.
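The layered flow above is just function composition: each middleware either rejects the request or passes an enriched context to the next. A toy sketch with illustrative layer names (the real chain also includes body limits, tracing, and tenant propagation):

```python
def auth(ctx):
    # Illustrative API-key check; the real plane also supports OIDC and SAML.
    if ctx.get("api_key") != "secret":
        raise PermissionError("bad key")
    ctx["principal"] = "svc-scanner"
    return ctx

def rbac(ctx):
    if "read:findings" not in ctx.get("roles", []):
        raise PermissionError("missing role")
    return ctx

def quota(ctx):
    ctx["remaining"] = ctx.get("remaining", 100) - 1
    if ctx["remaining"] < 0:
        raise RuntimeError("rate limited")
    return ctx

def handle(request):
    ctx = dict(request)
    for layer in (auth, rbac, quota):  # BODY / TRACE / TENANT elided for brevity
        ctx = layer(ctx)
    return {"status": 200, "principal": ctx["principal"]}

print(handle({"api_key": "secret", "roles": ["read:findings"]}))
```

Because every layer has a single input and output, each one can be unit-tested alone and instrumented with its own failure metric, which is exactly the property the request-flow diagram claims.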
4. Day-1 install on EKS (scripted)
Inside the control plane: OIDC + SAML SSO with RBAC, enforced API-key rotation policy, tenant-scoped quotas + rate limits, HMAC-chained audit log with signed export, KMS-encrypted Postgres backups with a verified restore round-trip in CI (backup-restore.yml), and signed compliance evidence bundles with Ed25519 asymmetric signing (/v1/compliance/{framework}/report — key pinned via /v1/compliance/verification-key, verification cookbook at docs/COMPLIANCE_SIGNING.md).
Pilot teams run:
# 1. Pick your backend shape (postgres default; snowflake / istio / production also shipped)
helm install agent-bom deploy/helm/agent-bom \
-n agent-bom --create-namespace \
-f deploy/helm/agent-bom/examples/eks-mcp-pilot-values.yaml
# 2. Smoke-test the install end-to-end — health + auth + fleet + scan + evidence bundle
kubectl -n agent-bom port-forward svc/agent-bom-api 8080:8080 &
./scripts/pilot-verify.sh http://localhost:8080 "$API_KEY"
# 3. Sync endpoint fleet
agent-bom agents --preset enterprise --introspect \
--push-url https://agent-bom.example.com/v1/fleet/sync
# 4. Wrap one MCP server with the runtime proxy (per-MCP today — see roadmap note above)
agent-bom proxy --policy ./policy.json -- <editor-mcp-command>
# 5. Pull an auditor-ready evidence bundle
curl -sD headers.txt -o soc2.json \
"https://agent-bom.example.com/v1/compliance/soc2/report" \
-H "Authorization: Bearer $API_KEY"See docs/ENTERPRISE_SECURITY_PLAYBOOK.md for the full enterprise trust story — every capability mapped to a code path and a test, with the scripted EKS pilot install at the end. Also: site-docs/deployment/eks-mcp-pilot.md for the focused pilot runbook and docs/COMPLIANCE_SIGNING.md for offline signature verification.
Operator guides by scenario:
Scenario | Guide |
Enterprise trust story (start here for pilots) | |
Own AWS / EKS end-to-end | |
Enterprise pilot scope | |
Focused EKS MCP pilot | |
Endpoint fleet on laptops | |
Snowflake-native backend | |
Istio + Kyverno zero-trust | |
Backend parity matrix | |
Grafana dashboards | |
SIEM / OCSF integration | |
Metrics catalog + SLOs | |
Performance + sizing |
Self-hosted SSO uses OIDC or SAML; SAML admins fetch SP metadata at /v1/auth/saml/metadata. Control-plane API keys follow an enforced lifetime policy (AGENT_BOM_API_KEY_DEFAULT_TTL_SECONDS, AGENT_BOM_API_KEY_MAX_TTL_SECONDS); rotate in place at /v1/auth/keys/{key_id}/rotate.
Trust & transparency
agent-bom is a read-only scanner. It never writes configs, never executes MCP servers, never stores credential values. No telemetry. No analytics. Releases are Sigstore-signed with SLSA provenance and self-published SBOMs.
When | What's sent | Where | Opt out |
Default CVE lookups | Package names + versions | OSV API | |
Floating version resolution | Names + requested version | npm / PyPI / Go proxy | |
| CVE IDs | NVD, EPSS, CISA KEV | omit |
| Package names + versions | deps.dev | omit |
| Package + version | PyPI / npm integrity endpoints | don't run |
Optional integrations | Finding summaries | Slack / Jira / Vanta / Drata | don't pass those flags |
Full trust model: SECURITY_ARCHITECTURE.md · PERMISSIONS.md · SUPPLY_CHAIN.md · RELEASE_VERIFICATION.md.
Compliance
Bundled mappings for FedRAMP, CMMC, NIST AI RMF, ISO 27001, SOC 2, OWASP LLM Top-10, MITRE ATLAS, and EU AI Act. Export tamper-evident evidence packets in one command.
agent-bom agents -p . --compliance-export fedramp -o fedramp-evidence.zip
agent-bom agents -p . --compliance-export nist-ai-rmf -o evidence.zip

The audit log itself is HMAC-chained and exportable as a signed JSON/JSONL bundle at GET /v1/audit/export.
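HMAC chaining is what makes the audit log tamper-evident: each record's MAC covers both the event and the previous record's MAC, so editing or deleting any earlier entry breaks every later one. A minimal sketch (key handling and serialization details are assumptions; in the real deployment the key comes from KMS / ExternalSecrets):

```python
import hashlib
import hmac
import json

KEY = b"demo-key"  # illustrative; production keys come from KMS

def append(chain: list, event: dict) -> list:
    """Append an event whose MAC covers the previous record's MAC."""
    prev = chain[-1]["mac"] if chain else "genesis"
    msg = (prev + json.dumps(event, sort_keys=True)).encode()
    chain.append({"event": event,
                  "mac": hmac.new(KEY, msg, hashlib.sha256).hexdigest()})
    return chain

def verify(chain: list) -> bool:
    """Recompute the chain; any edit or deletion breaks verification."""
    prev = "genesis"
    for rec in chain:
        msg = (prev + json.dumps(rec["event"], sort_keys=True)).encode()
        expect = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(rec["mac"], expect):
            return False
        prev = rec["mac"]
    return True

log = []
append(log, {"actor": "svc", "action": "scan.start"})
append(log, {"actor": "svc", "action": "scan.finish"})
print(verify(log))                       # intact chain verifies
log[0]["event"]["action"] = "tampered"
print(verify(log))                       # edited record breaks the chain
```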
Install & deploy
pip install agent-bom # CLI
docker run --rm agentbom/agent-bom agents # Docker

Mode | Best for |
CLI ( | local audit + project scan |
Endpoint fleet ( | employee laptops pushing into self-hosted fleet |
GitHub Action ( | CI/CD + SARIF |
Docker ( | isolated scans, containerized self-hosting |
Kubernetes / Helm ( | self-hosted API + dashboard, scheduled discovery |
REST API ( | platform integration, self-hosted control plane |
MCP server ( | Claude Desktop, Claude Code, Cursor, Codex, Windsurf, Cortex |
Runtime proxy ( | MCP traffic enforcement |
Shield SDK ( | in-process protection |
Backend choices stay explicit and optional:
- SQLite for local and single-node use
- Postgres / Supabase for the primary transactional control plane
- ClickHouse for analytics and event-scale persistence
- Snowflake for warehouse-native governance and selected backend paths
Run locally, in CI, in Docker, in Kubernetes, as a self-hosted API + dashboard, or as an MCP server — no mandatory hosted control plane, no mandatory cloud vendor.
References: PRODUCT_BRIEF.md · PRODUCT_METRICS.md · ENTERPRISE.md · How agent-bom works.
- uses: msaad00/agent-bom@v0.78.1
with:
scan-type: scan
severity-threshold: high
upload-sarif: true
enrich: true
fail-on-kev: true

Container image gate, IaC gate, air-gapped CI, MCP scan, and the SARIF / SBOM examples are documented in site-docs/getting-started/quickstart.md.
MCP server
36 security tools available inside any MCP-compatible AI assistant:
{
"mcpServers": {
"agent-bom": {
"command": "uvx",
"args": ["agent-bom", "mcp", "server"]
}
}
}

Also on Glama, Smithery, MCP Registry, and OpenClaw.
Extra | Command |
Cloud providers | |
MCP server | |
REST API | |
Dashboard | |
SAML SSO | |
JSON · SARIF · CycloneDX 1.6 (with ML BOM) · SPDX 3.0 · HTML · Graph JSON · Graph HTML · GraphML · Neo4j Cypher · JUnit XML · CSV · Markdown · Mermaid · SVG · Prometheus · Badge · Attack Flow · plain text. OCSF is used for runtime / SIEM event delivery, not as a general report format.
Contributing
git clone https://github.com/msaad00/agent-bom.git && cd agent-bom
pip install -e ".[dev-all]"
pytest && ruff check src/

CONTRIBUTING.md · docs/CLI_DEBUG_GUIDE.md · SECURITY.md · CODE_OF_CONDUCT.md
Apache 2.0 — LICENSE