default.yaml

```yaml
watch_directories: ["code_flow_graph"]
ignored_patterns: ["venv", "**/__pycache__"]
ignored_filenames: ["package-lock.json", "yarn.lock", "pnpm-lock.yaml", "uv.lock"] # Specific files to ignore
chromadb_path: "./code_vectors_chroma"
max_graph_depth: 3
embedding_model: "all-MiniLM-L6-v2"
max_tokens: 256
language: "python"

# Background cleanup configuration
cleanup_interval_minutes: 30

# Summary Generation (Meta-RAG)
summary_generation_enabled: false
llm_config:
  api_key_env_var: "OPENAI_API_KEY"
  base_url: "https://openrouter.ai/api/v1"
  model: "x-ai/grok-4.1-fast"
  max_tokens: 256 # Max tokens in LLM response per summary
  concurrency: 5

  # Smart filtering to reduce costs
  min_complexity: 3 # Only summarize functions with complexity >= 3
  min_nloc: 5 # Only summarize functions with >= 5 lines of code
  skip_private: true # Skip functions starting with _ (private)
  skip_test: true # Skip test functions (test_*, *_test)
  prioritize_entry_points: true # Summarize entry points first

  # Depth control
  summary_depth: "standard" # "minimal", "standard", "detailed"
  max_input_tokens: 2000 # Truncate function body if longer
```
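The "smart filtering" keys in the config describe which functions get summarized. As a rough sketch (the function name and signature below are hypothetical, not part of the project's actual API), the thresholds could combine like this:

```python
def should_summarize(name: str, complexity: int, nloc: int,
                     min_complexity: int = 3, min_nloc: int = 5,
                     skip_private: bool = True, skip_test: bool = True) -> bool:
    """Hypothetical sketch of the smart-filtering rules in default.yaml:
    summarize a function only if it clears the complexity and size
    thresholds and is neither private nor a test."""
    if complexity < min_complexity or nloc < min_nloc:
        return False  # too simple or too short to be worth an LLM call
    if skip_private and name.startswith("_"):
        return False  # private helper
    if skip_test and (name.startswith("test_") or name.endswith("_test")):
        return False  # test function
    return True

print(should_summarize("parse_file", complexity=4, nloc=12))  # True
print(should_summarize("_helper", complexity=6, nloc=20))     # False
print(should_summarize("test_parse", complexity=5, nloc=10))  # False
```

The defaults mirror the YAML values, so tightening `min_complexity` or `min_nloc` in the config directly shrinks the set of functions sent to the LLM, which is the cost-reduction lever the comments point at.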


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mrorigo/code-flow-mcp'
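The same request can be made from Python. The endpoint shape below is taken from the curl example above; the `server_url` helper is illustrative and assumes servers are addressed by an `owner/repo` slug:

```python
from urllib.parse import quote

BASE = "https://glama.ai/api/mcp/v1"

def server_url(slug: str) -> str:
    # Build the MCP directory API URL for a server identified by an
    # "owner/repo" slug, matching the curl example above.
    return f"{BASE}/servers/{quote(slug, safe='/')}"

print(server_url("mrorigo/code-flow-mcp"))
# → https://glama.ai/api/mcp/v1/servers/mrorigo/code-flow-mcp
```

Fetching that URL (e.g. with `urllib.request.urlopen` or `requests.get`) returns the directory's record for the server; the exact response schema is not documented on this page.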

If you have feedback or need assistance with the MCP directory API, please join our Discord server.