# Server Configuration
Environment variables recognized by the server. All of them are optional.
| Name | Required | Description | Default |
|---|---|---|---|
| ASTLLM_WATCH | No | Watch the working directory for source-file changes and re-index automatically (`1` or `true` to enable); excluded directories are never watched | 0 |
| ASTLLM_PERSIST | No | Persist the index to `~/.astllm/{path}.json` after every index run and pre-load it on startup (`1` or `true` to enable) | 0 |
| ASTLLM_MAX_INDEX_FILES | No | Maximum number of files to index per repository | 500 |
| ASTLLM_MAX_FILE_SIZE_KB | No | Maximum file size to index, in KB | 500 |
| CODE_INDEX_PATH | No | Directory where the index is stored | `~/.code-index` |
| ANTHROPIC_API_KEY | No | Anthropic API key; enables Claude Haiku summaries | |
| GOOGLE_API_KEY | No | Google API key; enables Gemini Flash summaries | |
| OPENAI_BASE_URL | No | OpenAI-compatible base URL (e.g. Ollama); enables local LLM summaries | |
| OPENAI_MODEL | No | Model to use with the OpenAI-compatible base URL (e.g. `llama3`) | |
| GITHUB_TOKEN | No | GitHub API token (raises rate limits, allows access to private repos) | |
| ASTLLM_LOG_LEVEL | No | Log level: `debug`, `info`, `warn`, or `error` | `warn` |
| ASTLLM_LOG_FILE | No | Write logs to this file instead of stderr | |
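For example, a local setup with file watching, index persistence, and summaries from an OpenAI-compatible endpoint might be configured as below. The values are illustrative: the base URL assumes a default Ollama install (which exposes an OpenAI-compatible API under `/v1`), and the model name and log path are placeholders to adjust for your environment.

```shell
# Re-index automatically on source changes, and persist the index across restarts
export ASTLLM_WATCH=1
export ASTLLM_PERSIST=1

# Local LLM summaries via an OpenAI-compatible server
# (URL assumes a default Ollama install; model name is an example)
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3

# Verbose logging to a file instead of stderr
export ASTLLM_LOG_LEVEL=debug
export ASTLLM_LOG_FILE="$HOME/astllm.log"
```

Unset variables simply leave the corresponding feature disabled or at its default, so you only need to export the ones you use.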
# Capabilities
Server capabilities have not been inspected yet.
## Tools
Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| No tools | |
## Prompts
Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| No prompts | |
## Resources
Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| No resources | |