
Claude Context

by zilliztech
.env.example • 3.7 kB
# Claude Context Environment Variables Example
#
# Copy this file to ~/.context/.env as a global setting and modify the values as needed.
#
# Usage: cp .env.example ~/.context/.env
#
# Note: This file is not read from your codebase directory, because it could
# conflict with your project's own environment variables.

# =============================================================================
# Embedding Provider Configuration
# =============================================================================

# Embedding provider: OpenAI, VoyageAI, Gemini, Ollama
EMBEDDING_PROVIDER=OpenAI

# Embedding model (provider-specific)
EMBEDDING_MODEL=text-embedding-3-small

# Embedding batch size for processing (default: 100)
# Tune it to the throughput of your embedding model; a larger batch size
# generally means less indexing time.
EMBEDDING_BATCH_SIZE=100

# =============================================================================
# OpenAI Configuration
# =============================================================================

# OpenAI API key
OPENAI_API_KEY=your-openai-api-key-here

# OpenAI base URL (optional, for custom endpoints)
# OPENAI_BASE_URL=https://api.openai.com/v1

# =============================================================================
# VoyageAI Configuration
# =============================================================================

# VoyageAI API key
# VOYAGEAI_API_KEY=your-voyageai-api-key-here

# =============================================================================
# Gemini Configuration
# =============================================================================

# Google Gemini API key
# GEMINI_API_KEY=your-gemini-api-key-here

# Gemini API base URL (optional, for custom endpoints)
# GEMINI_BASE_URL=https://generativelanguage.googleapis.com/v1beta

# =============================================================================
# Ollama Configuration
# =============================================================================

# Ollama model name
# OLLAMA_MODEL=

# Ollama host (default: http://localhost:11434)
# OLLAMA_HOST=http://localhost:11434

# =============================================================================
# Vector Database Configuration (Milvus/Zilliz)
# =============================================================================

# Milvus server address. Optional if you use a Zilliz Personal API Key.
MILVUS_ADDRESS=your-zilliz-cloud-public-endpoint

# Milvus authentication token. See this guide to get a Zilliz Personal API Key
# to use as your Milvus token:
# https://github.com/zilliztech/claude-context/blob/master/assets/signup_and_get_apikey.png
MILVUS_TOKEN=your-zilliz-cloud-api-key

# =============================================================================
# Code Splitter Configuration
# =============================================================================

# Code splitter type: ast, langchain
SPLITTER_TYPE=ast

# =============================================================================
# Custom File Processing Configuration
# =============================================================================

# Additional file extensions to include beyond the defaults (comma-separated)
# Example: .vue,.svelte,.astro,.twig,.blade.php
# CUSTOM_EXTENSIONS=.vue,.svelte,.astro

# Additional ignore patterns to exclude files/directories (comma-separated)
# Example: temp/**,*.backup,private/**,uploads/**
# CUSTOM_IGNORE_PATTERNS=temp/**,*.backup,private/**

# Whether to use hybrid search mode.
# If true, both dense vector and BM25 search are used; if false, only dense
# vector search is used.
# HYBRID_MODE=true
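
To apply this file, install it as the global config and then edit the values. Below is a minimal sketch of a fully local configuration, assuming Ollama serving the nomic-embed-text embedding model and an unauthenticated self-hosted Milvus on its standard port 19530 (both are illustrative assumptions; substitute whatever you actually run):

# Install the example as the global config, per the usage note above
mkdir -p ~/.context
cp .env.example ~/.context/.env

# Illustrative local values for ~/.context/.env (assumptions, not defaults):
#   EMBEDDING_PROVIDER=Ollama
#   OLLAMA_MODEL=nomic-embed-text    # any embedding model pulled into Ollama
#   OLLAMA_HOST=http://localhost:11434
#   MILVUS_ADDRESS=localhost:19530   # standard Milvus port; MILVUS_TOKEN can
#                                    # stay unset for an unauthenticated local instance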

MCP directory API

All information about MCP servers is available via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/zilliztech/claude-context'
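
If you have jq available, pretty-printing the response makes it easier to read; this only formats whatever JSON the endpoint returns and assumes nothing about its schema:

curl -s 'https://glama.ai/api/mcp/v1/servers/zilliztech/claude-context' | jq .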

If you have feedback or need assistance with the MCP directory API, please join our Discord server.