Glama

Server Configuration

Describes the environment variables used to configure the server. All of them are optional and have sensible defaults where noted.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `PORT` | No | Listening port for the HTTP server binary (`cortex-scout`). | `5000` |
| `RUST_LOG` | No | Log level. Keep `warn` for MCP stdio; `info` floods stderr and confuses MCP clients. | `warn` |
| `MAX_LINKS` | No | Max links followed per page crawl. | `100` |
| `LANCEDB_URI` | No | Directory path for persistent research memory. Omit to disable. | |
| `IP_LIST_PATH` | No | Optional path to `ip.txt` (one proxy per line: `http://`, `socks5://`). Leave unset to disable proxy support entirely, or point at an empty file to keep proxy tools available but inactive by default. | |
| `OPENAI_API_KEY` | No | API key for LLM synthesis. Omit for key-less local endpoints (Ollama). | |
| `OUTBOUND_LIMIT` | No | Max concurrent outbound HTTP connections. | `32` |
| `SEARCH_ENGINES` | No | Active engines (comma-separated). | `google,bing,duckduckgo,brave` |
| `MODEL2VEC_MODEL` | No | HuggingFace model ID or local path for embedding (e.g. `minishlab/potion-base-8M`). | |
| `OPENAI_BASE_URL` | No | OpenAI-compatible endpoint (OpenRouter, Ollama, LM Studio, etc.). | `https://api.openai.com/v1` |
| `CHROME_EXECUTABLE` | No | Override path to Chromium/Chrome/Brave binary. | |
| `CORTEX_SCOUT_PORT` | No | Listening port for the HTTP server binary (`cortex-scout`). | `5000` |
| `HTTP_TIMEOUT_SECS` | No | Per-request read timeout (seconds). | `30` |
| `MAX_CONTENT_CHARS` | No | Max characters returned per scraped page. | `10000` |
| `PROXY_SOURCE_PATH` | No | Optional path to `proxy_source.json` (used by `proxy_control` grab). | |
| `SEARCH_CDP_FALLBACK` | No | Retry search engine fetches via native Chromium CDP when blocked. | `true` |
| `DEEP_RESEARCH_ENABLED` | No | Set `0` to disable the `deep_research` tool at runtime. | `1` |
| `SEARCH_TIER2_NON_ROBOT` | No | Set `1` to allow `hitl_web_fetch` as last-resort search escalation. | |
| `DEEP_RESEARCH_LLM_MODEL` | No | Model identifier (must be supported by the endpoint). | `gpt-4o-mini` |
| `DEEP_RESEARCH_SYNTHESIS` | No | Set `0` to skip LLM synthesis (search+scrape only). | `1` |
| `HTTP_CONNECT_TIMEOUT_SECS` | No | TCP connect timeout (seconds). | `10` |
| `CORTEX_SCOUT_MEMORY_DISABLED` | No | Set `1` to disable memory even when `LANCEDB_URI` is set. | `0` |
| `SEARCH_MAX_RESULTS_PER_ENGINE` | No | Results per engine before merge/dedup. | `10` |
| `DEEP_RESEARCH_SYNTHESIS_MAX_TOKENS` | No | Max tokens for the synthesis response. Use 4096+ for large-context models. | `1024` |
| `DEEP_RESEARCH_SYNTHESIS_MAX_SOURCES` | No | Max source documents fed to LLM synthesis. | `8` |
| `DEEP_RESEARCH_SYNTHESIS_MAX_CHARS_PER_SOURCE` | No | Max characters extracted per source for synthesis. | `2500` |
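As one way to supply these variables, here is a minimal sketch of a stdio MCP client entry in the Claude Desktop `mcpServers` style. It assumes the `cortex-scout` binary is on `PATH`; the memory path and the local Ollama endpoint/model are placeholder assumptions, not defaults of the server:

```json
{
  "mcpServers": {
    "cortex-scout": {
      "command": "cortex-scout",
      "env": {
        "RUST_LOG": "warn",
        "LANCEDB_URI": "/home/user/.cortex-scout/memory",
        "OPENAI_BASE_URL": "http://localhost:11434/v1",
        "DEEP_RESEARCH_LLM_MODEL": "llama3.1"
      }
    }
  }
}
```

`RUST_LOG` stays at `warn` per the note above, and `OPENAI_API_KEY` is omitted because the example points at a key-less local endpoint.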

Capabilities

Server capabilities have not been inspected yet.

Tools

Functions the server exposes for the LLM to invoke and take actions.


No tools

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/cortex-works/cortex-scout'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.