Orihime
A cross-repository code knowledge graph for Java/Kotlin/JavaScript/TypeScript codebases. Orihime indexes your source code into an embedded KuzuDB graph database using tree-sitter and exposes the graph through an MCP server (for AI assistants), a local web UI, and a CLI.
Mythology: Orihime (織姫) is Vega — the weaving princess who weaves the fabric of the cosmos. She weaves connections. The tool that weaves your codebase into a single graph.
What It Does
Call graph across repositories — who calls what, across service boundaries, including REST calls resolved to the endpoint they target
Cross-repo taint analysis — track user-controlled data from HTTP/Kafka/JMS entry points through the call graph to dangerous sinks (SQL injection, path traversal, XXE, deserialization, SSRF, log injection, …)
Security reports — OWASP Top 10, CWE, PCI DSS, STIG frameworks; second-order injection detection; custom sources/sinks via YAML
Entry-point reachability filtering — suppress false positives from dead code; only surface findings reachable from real entry points (HTTP handlers, @KafkaListener, @Scheduled, @JmsListener, @RabbitListener)
Complexity hints — static O(n²) loop detection, N+1 JPA risk, unbounded queries, recursive calls — no profiler needed
Performance correlation — ingest Gatling/JMeter load test results; correlate with the call graph to find confirmed hotspots and Little's Law capacity ceilings per endpoint
License compliance — scan Maven/Gradle dependencies against SPDX identifiers; flag GPL/AGPL/LGPL in commercial projects
Incremental re-index — git blob-hash-based skip; only changed files are re-parsed on subsequent runs
Multi-language — Java, Kotlin, JavaScript, TypeScript (Next.js, Express, React)
Quick Start — AI-first (Claude Code)
The primary way to use Orihime is through an AI assistant via MCP. You index once, then ask questions in natural language — no Cypher, no grep, no reading source files.
1. Install
git clone https://github.com/srinivasan-sundaresan95/orihime.git
cd orihime
pip install -e .
2. Register with Claude Code (one-time setup)
python -m orihime register # writes MCP server entry to ~/.claude/settings.json
python -m orihime install-skills # copies Claude Code skills to ~/.claude/skills/
Restart Claude Code. The orihime MCP tools and skills (/orihime-call-flow, /orihime-security-audit, /orihime-perf-analysis, /orihime-change-impact) are now active.
3. Index your repositories
python -m orihime index --repo /path/to/your/service-a --name service-a
python -m orihime index --repo /path/to/your/service-b --name service-b
4. Ask questions
Trace the call flow for GET /api/orders in service-a
Find SQL injection risks in service-b
What breaks if I change OrderService.processPayment?
Which endpoints are approaching saturation?
No source file reads. No grep. Claude uses the graph directly — typically 5–8 tool calls vs 30+ for source-only analysis.
CLI alternative: All operations above are also available as Python commands (python -m orihime index, python -m orihime ui, etc.) if you prefer working outside an AI assistant. See CLI Reference below.
Feature Comparison
| Capability | Orihime | GitNexus | SonarQube Community | SonarQube Developer | SonarQube Enterprise |
|---|---|---|---|---|---|
| Cross-repo call graph | ✓ | ✓ | ✗ | ✗ | ✗ |
| REST endpoint resolution | ✓ | ✓ | ✗ | ✗ | ✗ |
| MCP integration (AI assistants) | ✓ | ✓ | ✓¹ | ✓¹ | ✓¹ |
| Claude Code hooks + skills | ✓ | ✓ | ✗ | ✗ | ✗ |
| Cross-file taint (SAST / injection) | ✓ | ✗ | ✗ | ✓ | ✓ |
| Second-order injection | ✓ | ✗ | ✗ | ✗ | ✗ |
| Entry-point reachability filter | ✓ | ✗ | ✗ | ✗ | ✗ |
| Custom sources/sinks (YAML) | ✓ | ✗ | ✗ | ✗ | ✓² |
| OWASP/CWE/PCI/STIG compliance reports | ✓ | ✗ | ✗ | ✗ | ✓ |
| Argument-level taint (value-flow) | ✓ | ✗ | ✗ | ✗ | ✗ |
| Complexity hints (O(n²), N+1) | ✓ | ✗ | partial | partial | partial |
| I/O fan-out + serial/parallel analysis | ✓ | ✗ | ✗ | ✗ | ✗ |
| Perf ingestion + capacity model | ✓ | ✗ | ✗ | ✗ | ✗ |
| Cross-service cascade risk | ✓ | ✗ | ✗ | ✗ | ✗ |
| License compliance | ✓ | ✗ | ✗ | ✗ | ✓³ |
| Embedded DB (no server daemon) | ✓ | ✓ | ✗ | ✗ | ✗ |
| Indexes Java / Kotlin | ✓ | ✓ | ✓ | ✓ | ✓ |
| Indexes JS / TS | ✓ | ✓ | ✓ | ✓ | ✓ |
| License | MIT | PolyForm NC | LGPL | Commercial | Commercial |
¹ Via the official sonarqube-mcp-server (SonarSource, production-ready). Works with all SonarQube editions. ² Custom taint sources/sinks require the Advanced Security add-on (Enterprise+). ³ License compliance (SBOM + policy enforcement) requires the Advanced Security add-on (Enterprise+).
GitNexus (PolyForm Non-Commercial) provides cross-repo call graphs and MCP integration across 14 languages including Java and Kotlin. It does not cover SAST, perf analysis, or compliance reporting.
MCP Tools Reference
Call Graph
| Tool | Description |
|---|---|
| | All methods that call the given method |
| | All methods called by the given method |
| | Transitive set of callers up to N hops |
| | Trace back from an HTTP endpoint to its callers |
| | All classes implementing an interface |
| | Inheritance chain |
| | All calls to methods outside the indexed repo |
Discovery
| Tool | Description |
|---|---|
| | Full-text search across class/method FQNs |
| | File path and line number for any class or method |
| | All indexed repositories |
| | All indexed branches for a repo |
| | All HTTP endpoints in a repo |
| | REST calls that couldn't be matched to an endpoint |
| | Cross-service DEPENDS_ON edges |
ORM / JPA
| Tool | Description |
|---|---|
| | All JPA entity relationships — also used in design review (Phase 1.5) |
| | EAGER-fetched collections (N+1 risk) |
Security (SAST)
| Tool | Description |
|---|---|
| | All taint sinks reachable in the call graph |
| | Value-flow taint: argument → parameter across CALLS edges |
| | Taint that crosses service boundaries via REST |
| | Taint stored to DB then re-read and used as sink |
| | All HTTP/Kafka/Scheduled/JMS/RabbitMQ entry points |
| | Taint sinks filtered to those reachable from entry points only |
| | Report in OWASP / CWE / PCI / STIG format |
| | Show active sources, sinks, and sanitizers from YAML config |
Complexity & Performance
| Tool | Description |
|---|---|
| | Methods flagged with O(n²), N+1, unbounded-query, recursive |
| | Load Gatling simulation.log, JMeter XML, or JSON perf data |
| | Complexity hints × p99 latency, sorted by risk score |
| | Little's Law capacity per endpoint; flags near-saturation |
| | Cross-service cascade: upstream endpoints limited by downstream saturation |
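The arithmetic behind a Little's Law capacity ceiling (L = λ·W: average concurrency equals throughput times time-in-system) fits in a few lines. This is an illustrative sketch, not Orihime's actual signature or thresholds:

```python
def capacity_ceiling_rps(max_concurrency: int, avg_latency_ms: float) -> float:
    """Little's Law: L = lambda * W, so lambda_max = L_max / W."""
    return max_concurrency / (avg_latency_ms / 1000.0)


def near_saturation(observed_rps: float, ceiling_rps: float,
                    threshold: float = 0.8) -> bool:
    """Flag endpoints whose observed throughput is close to the ceiling."""
    return observed_rps / ceiling_rps >= threshold
```

For example, an endpoint limited to 200 concurrent requests with 50 ms average latency tops out around 4,000 rps; observed load above ~3,200 rps would be flagged under the 0.8 threshold assumed here.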
License Compliance
| Tool | Description |
|---|---|
| | Flag GPL/AGPL/LGPL dependencies via Maven Central |
Index
| Tool | Description |
|---|---|
| | Trigger an index from within the MCP session |
CLI Reference
All operations are also accessible directly without an AI assistant:
python -m orihime index --repo PATH --name NAME [--db PATH] [--force] [--branch NAME]
python -m orihime ui [--port 7700] [--db PATH]
python -m orihime serve
python -m orihime serve-sse [--port 7702] [--db PATH]
python -m orihime resolve [--db PATH]
python -m orihime write-server [--port 7701] [--db PATH]
python -m orihime register [--db PATH] [--python PATH]
python -m orihime install-skills

| Command | Description |
|---|---|
| | Parse a repository and write its graph into KuzuDB |
| | Start the local web UI on port 7700 |
| | Start the MCP server on stdio (for Claude Code, Claude Desktop, any MCP client) |
| | Start the MCP server with SSE transport (for CI runners and remote clients) |
| | Match RestCall URL patterns against Endpoints across all indexed repos |
| | Start the write-serialization server for team/server deployments |
| | Write the Orihime MCP server entry to |
| | Copy bundled skills to the target AI assistant's config dir ( |
Web UI
http://localhost:7700

| Page | Description |
|---|---|
| | Call graph explorer: search methods, trace callers/callees, visualize CALLS graph |
| | Security + complexity findings table — filter by OWASP category, severity, file |
| | JSON endpoints backing the UI (also usable directly) |
Configuration
Environment Variables
| Variable | Default | Description |
|---|---|---|
| | | Path to KuzuDB database directory |
| | (unset) | URL of the write-serialization server (team mode) |
Custom Sources and Sinks
Create ~/.orihime/security_config.yaml (or set ORIHIME_SECURITY_CONFIG):
sources:
- method_pattern: ".*getCustomUserInput"
description: "Custom input source"
sinks:
- method_pattern: ".*legacyExec"
sink_type: "COMMAND_INJECTION"
description: "Legacy shell executor"
sanitizers:
- method_pattern: ".*sanitizeForLegacy"
The built-in config covers HttpServletRequest, @RequestParam, @PathVariable, @RequestBody, JDBC execute*, JPA native queries, Runtime.exec, ProcessBuilder, XML parsers, ObjectInputStream, Files.get, Paths.get, new URL, logging calls, and more.
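Assuming `method_pattern` values are standard regular expressions matched against fully qualified method names (consistent with the examples above, but an assumption rather than a confirmed spec), a quick way to sanity-check a pattern before adding it:

```python
import re


def pattern_hits(method_pattern: str, fqn: str) -> bool:
    """Would this security_config.yaml pattern flag the given method FQN?
    The FQNs below are hypothetical, for illustration only."""
    return re.fullmatch(method_pattern, fqn) is not None
```

A pattern like `.*legacyExec` would then flag `com.example.shell.LegacyRunner.legacyExec` but not an unrelated method, which is worth verifying before re-indexing a large codebase.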
Documentation
| Doc | Description |
|---|---|
| | All MCP tools with parameters and examples |
| | How Java/Kotlin/JS/TS are parsed; ExtractResult schema |
| | Custom sources, sinks, sanitizers — YAML reference |
| | GitHub Actions PR review workflow setup |
| | Docker Compose setup for server deployments |
| | How to add a new language extractor |
| | How REST calls are matched to endpoints across repos |
Team / Server Mode
KuzuDB has a single-writer constraint. In team deployments where multiple developers re-index simultaneously, run the write-serialization server:
# On the shared server — owns the KuzuDB connection
python -m orihime write-server --port 7701 --db /shared/orihime.db
# Each developer's indexer sends writes to the server
ORIHIME_SERVER_URL=http://server:7701 python -m orihime index --repo /path --name my-service
Developers running locally without ORIHIME_SERVER_URL open KuzuDB directly as always. The web UI and MCP server always read directly from KuzuDB (reads do not go through the write server).
Architecture
Source files
│
▼ tree-sitter (Java, Kotlin, JS, TS)
ParseResult (plain Python dicts, picklable)
│
▼ ProcessPoolExecutor (parallel parse workers)
Phase 2: KuzuDB writes (batched by table, 500-edge transactions)
│
▼
KuzuDB embedded graph ←──────────────────────────────┐
│ │
├── MCP server (FastMCP, stdio) │
├── Web UI (Starlette, port 7700) │
└── Write server (FastAPI, port 7701, team mode) ──┘

Graph schema (SCHEMA_VERSION 10):
| Node | Key fields |
|---|---|
| | id, name, root_path |
| | path, language, blob_hash, branch_name |
| | fqn, annotations, is_interface |
| | fqn, line_start, annotations, is_entry_point, complexity_hint |
| | http_method, path, path_regex |
| | http_method, url_pattern |
| | source_class, target_class, fetch_type, relation_type |
| | endpoint_fqn, p50_ms, p99_ms, rps, source |
| | endpoint_fqn, saturation_rps, ceiling_concurrency, risk_level |
| Relationship | Description |
|---|---|
| | Method → Method; carries callee_name, caller_arg_pos, callee_param_pos |
| | Method → Endpoint (resolved cross-service call) |
| | Method → RestCall (not yet resolved) |
| | File → Class |
| | Class → Method |
| | Repo → Endpoint |
| | Repo → Repo (cross-service dependency) |
| | Class → Class |
| | Class → Class |
| | Class → EntityRelation |
| | Method → PerfSample |
Performance
Query performance (graph DB)
Benchmarked on an 845-file Java/Kotlin service:
| Operation | Time |
|---|---|
| Cold index | ~67s |
| Incremental re-index (no changes) | ~34s |
| | <5ms |
| | <15ms |
| | <25ms |
Batch write speedup vs naive per-row writes: 12×.
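The batching behind that speedup is plain fixed-size chunking, with each chunk written in one transaction. A minimal sketch (the 500 default matches the transaction size noted under Architecture; the actual KuzuDB write call is omitted):

```python
from itertools import islice
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")


def batches(edges: Iterable[T], size: int = 500) -> Iterator[list[T]]:
    """Group edge rows into fixed-size batches, one transaction per batch."""
    it = iter(edges)
    while batch := list(islice(it, size)):
        yield batch
```

Amortizing transaction overhead across ~500 rows instead of committing per row is where a double-digit speedup over naive per-row writes typically comes from.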
AI assistant benchmark — tracing a single call flow
Java/Kotlin codebase (845 + 224 files, measured)
Benchmarked on an 845-file Kotlin service and a 224-file Java service, tracing one controller endpoint through service → repositories → upstream APIs. GitNexus v1.6.3, Orihime v1.9, and a grep+source-read baseline were all measured on the same codebase on the same hardware (WSL2/Ubuntu, Intel i7, 2026-04-30).
| Approach | Cold index | Query latency | Avg tokens/query | Files read |
|---|---|---|---|---|
| Baseline — Claude reads source files directly | — | ~4–5 min | ~14,000 | 27 |
| GitNexus v1.6.3 | 51.4s | 2–10s⁴ | ~1,490 | 0 |
| Orihime v1.9 | 66.6s | 3–22ms | ~683 | 0 |
Orihime vs baseline: 95% fewer tokens · 200–1,400× faster queries
Orihime vs GitNexus: 2.2× fewer tokens · 200–1,400× faster queries · MCP-native
The 7 Orihime tool calls produced ~80% of the structural picture (full controller→service→repo→upstream chain, 27 test methods surfaced, resilience wiring discovered automatically). The remaining ~20% — upstream API URLs, auth headers, branch-level control flow — requires targeted source reads, scoped to ~5 specific files rather than 27.
GitNexus's cold index is ~1.3× faster on NTFS (Node.js parse throughput advantage). On native Linux this gap narrows to near parity.
⁴ GitNexus query latency is dominated by live GitHub API round trips (1–3 per query × 500–2,000ms each, rate-limit dependent). Blast radius returned results in the wrong direction (upstream imports rather than downstream dependents).
License
MIT