CodeGraph CLI MCP Server

by Jakedismo
.env.example (4.88 kB)
# CodeGraph env example (copy to .env). Prefer env vars over committing secrets.

# --- Project identity ---
# CODEGRAPH_PROJECT_ID=my-project
# CODEGRAPH_ORGANIZATION_ID=my-org
# CODEGRAPH_REPOSITORY_URL=https://github.com/user/repo
# CODEGRAPH_DOMAIN=example.com

# --- Embeddings ---
CODEGRAPH_EMBEDDING_PROVIDER=ollama    # ollama | openai | jina | lmstudio | onnx
CODEGRAPH_EMBEDDING_DIMENSION=1024
CODEGRAPH_EMBEDDINGS_BATCH_SIZE=64     # batch per provider call
CODEGRAPH_EMBEDDING_MODEL=qwen3-embedding:0.6B
# CODEGRAPH_CHUNK_MAX_TOKENS=1024      # override model max tokens for chunking
# CODEGRAPH_EMBEDDING_SKIP_CHUNKING=0  # set 1 to disable chunking and embed nodes directly; faster and decently accurate with large-context-window embedding models

# --- Reranking (optional) ---
# CODEGRAPH_RERANKING_PROVIDER=jina    # jina
# CODEGRAPH_RERANKING_MODEL=jina-reranker-v3
# CODEGRAPH_RERANKING_CANDIDATES=256
# CODEGRAPH_RERANKING_TOP_N=10

# --- Performance / concurrency ---
CODEGRAPH_WORKERS=4                    # caps Rayon threads
CODEGRAPH_MAX_CONCURRENT=4             # concurrent embedding requests
# CODEGRAPH_SYMBOL_BATCH_SIZE=500
# CODEGRAPH_SYMBOL_MAX_CONCURRENT=4

# --- SurrealDB (graph storage) ---
CODEGRAPH_SURREALDB_URL=ws://localhost:3004
CODEGRAPH_SURREALDB_NAMESPACE=main
CODEGRAPH_SURREALDB_DATABASE=codegraph
CODEGRAPH_SURREALDB_USERNAME=root
CODEGRAPH_SURREALDB_PASSWORD=root
# CODEGRAPH_USE_GRAPH_SCHEMA=true      # enable the experimental graph schema if you set up the database with codegraph_experimental.surql
# CODEGRAPH_GRAPH_DB_DATABASE=codegraph_experimental  # the experimental full graph schema has a bug that degrades indexing performance; we're investigating the cause
CODEGRAPH_CHUNK_DB_BATCH_SIZE=512
CODEGRAPH_SURREAL_POOL_SIZE=2

# --- Provider endpoints & keys ---
CODEGRAPH_OLLAMA_URL=http://localhost:11434
# CODEGRAPH_LMSTUDIO_URL=http://localhost:1234
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=...
# JINA_API_KEY=...
# XAI_API_KEY=...
# OPENAI_API_BASE=https://api.openai.com/v1  # for OpenAI-compatible providers

# --- LLM (for agent responses) ---
CODEGRAPH_LLM_PROVIDER=ollama          # ollama | openai | anthropic | openai-compatible | xai | lmstudio
CODEGRAPH_MODEL=qwen2.5-coder:14b
CODEGRAPH_CONTEXT_WINDOW=32768
# MCP_CODE_AGENT_MAX_OUTPUT_TOKENS=52000  # hard cap; e.g. Claude Code doesn't support more than 64K output tokens and may crash on larger outputs

# --- Server / daemon ---
CODEGRAPH_HTTP_HOST=127.0.0.1
CODEGRAPH_HTTP_PORT=3003
# CODEGRAPH_DAEMON_AUTO_START=true
# CODEGRAPH_WATCH=1                    # enable file watching when supported

# --- Logging ---
RUST_LOG=info

# --- AutoAgents ---
# CODEGRAPH_AUTOAGENTS_EXPERIMENTAL=1
# MCP_CODE_AGENT_MAX_OUTPUT_TOKENS=4096  # hard cap for AutoAgents responses

# --- Jina specific (when embedding provider=jina) ---
# JINA_API_KEY=...
# JINA_API_BASE=https://api.jina.ai/v1
# CODEGRAPH_EMBEDDING_MODEL=jina-embeddings-v4
# JINA_ENABLE_RERANKING=true
# JINA_RERANKING_MODEL=jina-reranker-v3
# JINA_MAX_TOKENS=256
# JINA_MAX_TEXTS=24
# JINA_REQUEST_DELAY_MS=100
# JINA_RERANKING_TOP_N=10
# JINA_LATE_CHUNKING=false             # better embeddings with jina-embeddings-v4
# JINA_TRUNCATE=true
# JINA_API_TASK=code.passage
# CODEGRAPH_SYMBOL_BATCH_SIZE=64
# CODEGRAPH_SYMBOL_MAX_CONCURRENT=1
# JINA_REL_BATCH_SIZE=50
# JINA_REL_MAX_TEXTS=50

# --- Agent Configuration ---
CODEGRAPH_AGENT_ARCHITECTURE=rig       # rig (default) | react | lats | reflexion
# Note: the 'rig' backend automatically selects the best sub-architecture
# based on task complexity (LATS for deep tasks, ReAct for speed).
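# --- Example: hosted OpenAI setup (illustrative sketch) ---
# A hedged example of pointing both embeddings and the LLM at OpenAI; the
# model names and values below are assumptions, not project recommendations.
# Note that CODEGRAPH_EMBEDDING_DIMENSION must match the chosen model's
# output size (text-embedding-3-small emits 1536-dimensional vectors).
# CODEGRAPH_EMBEDDING_PROVIDER=openai
# CODEGRAPH_EMBEDDING_MODEL=text-embedding-3-small
# CODEGRAPH_EMBEDDING_DIMENSION=1536
# CODEGRAPH_LLM_PROVIDER=openai
# OPENAI_API_KEY=sk-...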
# Rig Agent Internal Tuning
# CODEGRAPH_AGENT_MAX_STEPS=8          # max tool calls per task (hard cap: 10, default: 8)
# CODEGRAPH_AGENT_MEMORY_WINDOW=40     # number of turns to keep in history

# Dynamic Context Throttling (Rig Agent)
CODEGRAPH_CONTEXT_WINDOW=128000        # set this to your model's actual token limit
# The Rig agent automatically downgrades its context tier (Detailed -> Terse)
# when context usage exceeds 80% to prevent overflow.

# AutoAgents specific (Legacy)
# CODEGRAPH_LATS_SELECTION_PROVIDER=openai
# CODEGRAPH_LATS_SELECTION_MODEL=gpt-5.1-codex-mini
# CODEGRAPH_LATS_EXPANSION_PROVIDER=openai
# CODEGRAPH_LATS_EXPANSION_MODEL=gpt-5.1-codex
# CODEGRAPH_LATS_EVALUATION_PROVIDER=openai
# CODEGRAPH_LATS_EVALUATION_MODEL=gpt-5.1
# CODEGRAPH_LATS_BEAM_WIDTH=3          # number of best paths to keep (default: 3)
# CODEGRAPH_LATS_MAX_DEPTH=5

# MCP Server Max Output Tokens
MCP_CODE_AGENT_MAX_OUTPUT_TOKENS=58000  # cap for final answers (Claude Code max ~64K)
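The dynamic context throttling note above reduces to a simple threshold rule. Below is a minimal Rust sketch of that rule; the Tier enum and select_tier function name are illustrative assumptions, and only the 80% threshold and the Detailed -> Terse downgrade come from the comments in the file.

    /// Context tiers, richest to most compact prompt format.
    /// Hypothetical names; the file only documents "Detailed -> Terse".
    #[derive(Debug, PartialEq)]
    enum Tier {
        Detailed,
        Terse,
    }

    /// Downgrade to the terse tier once usage crosses 80% of the window
    /// configured via CODEGRAPH_CONTEXT_WINDOW.
    fn select_tier(used_tokens: usize, context_window: usize) -> Tier {
        if (used_tokens as f64) / (context_window as f64) > 0.80 {
            Tier::Terse
        } else {
            Tier::Detailed
        }
    }

    fn main() {
        // With CODEGRAPH_CONTEXT_WINDOW=128000, throttling kicks in past 102,400 tokens.
        assert_eq!(select_tier(90_000, 128_000), Tier::Detailed); // ~70% used
        assert_eq!(select_tier(110_000, 128_000), Tier::Terse);   // ~86% used
    }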

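For clarity on how a server consumes a file like this: each key is typically resolved from the process environment with a fallback default. Below is a minimal, standard-library-only Rust sketch of that pattern; the helper name env_or and the fallback values are illustrative assumptions, not CodeGraph's actual loader.

    use std::env;

    /// Read an env var, falling back to a default when unset or unparsable.
    /// Illustrative only; CodeGraph's real loader may differ.
    fn env_or<T: std::str::FromStr>(key: &str, default: T) -> T {
        env::var(key)
            .ok()
            .and_then(|v| v.parse().ok())
            .unwrap_or(default)
    }

    fn main() {
        // String-valued keys fall back to the defaults shown in .env.example.
        let provider = env::var("CODEGRAPH_EMBEDDING_PROVIDER")
            .unwrap_or_else(|_| "ollama".to_string());
        let surreal_url = env::var("CODEGRAPH_SURREALDB_URL")
            .unwrap_or_else(|_| "ws://localhost:3004".to_string());

        // Numeric keys are parsed, with a default when missing or invalid.
        let dimension: usize = env_or("CODEGRAPH_EMBEDDING_DIMENSION", 1024);
        let workers: usize = env_or("CODEGRAPH_WORKERS", 4);

        println!("provider={provider} dim={dimension} workers={workers} db={surreal_url}");
    }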