Server Configuration

Describes the environment variables used to configure the server; all of them are optional.

ASTLLM_WATCH (default: 0)
    Watch working directory for source file changes and re-index automatically ('1' or 'true' to enable). Excluded dirs are never watched.

GITHUB_TOKEN
    GitHub API token (higher rate limits, private repos)

OPENAI_MODEL
    The model to use with the OpenAI-compatible base URL (e.g. llama3)

ASTLLM_PERSIST (default: 0)
    Persist the index to ~/.astllm/{path}.json after every index, and pre-load it on startup ('1' or 'true' to enable)

GOOGLE_API_KEY
    Enable Gemini Flash summaries

ASTLLM_LOG_FILE
    Log to file instead of stderr

CODE_INDEX_PATH (default: ~/.code-index)
    Index storage directory

OPENAI_BASE_URL
    Enable local LLM summaries (OpenAI-compatible, e.g. Ollama)

ASTLLM_LOG_LEVEL (default: warn)
    Log level: debug, info, warn, error

ANTHROPIC_API_KEY
    Enable Claude Haiku summaries

ASTLLM_MAX_INDEX_FILES (default: 500)
    Max files to index per repo

ASTLLM_MAX_FILE_SIZE_KB (default: 500)
    Max file size to index (KB)
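
As a quick illustration, here is a minimal shell sketch that enables file watching, persistence, and local Ollama-backed summaries. The launch command on the last line is a placeholder, since this page does not show the server's actual entry point, and the base URL assumes Ollama's default OpenAI-compatible endpoint.

export ASTLLM_WATCH=1                              # re-index automatically on source changes
export ASTLLM_PERSIST=1                            # persist the index to ~/.astllm/{path}.json
export ASTLLM_LOG_LEVEL=info                       # default is warn
export OPENAI_BASE_URL=http://localhost:11434/v1   # Ollama's default OpenAI-compatible endpoint (assumption)
export OPENAI_MODEL=llama3                         # model name served by the local endpoint
astllm-mcp                                         # placeholder command; substitute the real entry point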

Capabilities

Server capabilities have not been inspected yet.

Tools

Functions exposed to the LLM so it can take actions

No tools

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/tluyben/astllm-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.