Server Configuration

Describes the environment variables used to configure the server; all are optional.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| OLLAMA_API_KEY | No | Ollama Cloud API key for authenticated access. | |
| OLLAMA_CHAT_MODEL | No | Chat model used for cluster labeling. | llama3.2 |
| OLLAMA_EMBED_MODEL | No | Embedding model used for vector representations. | nomic-embed-text |
| CONTEXTPLUS_EMBED_NUM_CTX | No | Optional Ollama embed runtime num_ctx override. | |
| CONTEXTPLUS_EMBED_NUM_GPU | No | Optional Ollama embed runtime num_gpu override. | |
| CONTEXTPLUS_EMBED_TRACKER | No | Enable realtime embedding refresh on file changes. | true |
| CONTEXTPLUS_EMBED_LOW_VRAM | No | Optional Ollama embed runtime low_vram override. | |
| CONTEXTPLUS_EMBED_MAIN_GPU | No | Optional Ollama embed runtime main_gpu override. | |
| CONTEXTPLUS_EMBED_NUM_BATCH | No | Optional Ollama embed runtime num_batch override. | |
| CONTEXTPLUS_EMBED_BATCH_SIZE | No | Embedding batch size per GPU call, clamped to 5-10. | 8 |
| CONTEXTPLUS_EMBED_NUM_THREAD | No | Optional Ollama embed runtime num_thread override. | |
| CONTEXTPLUS_EMBED_CHUNK_CHARS | No | Per-chunk character count before merging, clamped to 256-8000. | 2000 |
| CONTEXTPLUS_MAX_EMBED_FILE_SIZE | No | Skip non-code text files larger than this many bytes. | 51200 |
| CONTEXTPLUS_EMBED_TRACKER_MAX_FILES | No | Max changed files processed per tracker tick, clamped to 5-10. | 8 |
| CONTEXTPLUS_EMBED_TRACKER_DEBOUNCE_MS | No | Debounce window in milliseconds before tracker refresh. | 700 |
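Several of these variables are documented as clamped to a fixed range (e.g. CONTEXTPLUS_EMBED_BATCH_SIZE to 5-10). A minimal Python sketch of how such a clamped integer setting could be resolved — the helper name and parsing details are illustrative, not the server's actual implementation:

```python
import os

def resolve_clamped_int(name: str, default: int, lo: int, hi: int) -> int:
    """Read an integer env var, fall back to the default, then clamp to [lo, hi]."""
    raw = os.environ.get(name)
    try:
        value = int(raw) if raw is not None else default
    except ValueError:
        value = default
    return max(lo, min(hi, value))

# An out-of-range value is clamped rather than rejected.
os.environ["CONTEXTPLUS_EMBED_BATCH_SIZE"] = "32"
print(resolve_clamped_int("CONTEXTPLUS_EMBED_BATCH_SIZE", 8, 5, 10))  # prints 10
```

With the variable unset, the documented default (8) applies unchanged, since it already sits inside the 5-10 range.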

Capabilities

Features and capabilities supported by this server

| Capability | Details |
|------------|---------|
| tools | `{ "listChanged": true }` |
| logging | `{}` |
| resources | `{ "listChanged": true }` |

Tools

Functions exposed to the LLM to take actions

get_context_tree

Get the structural tree of the project with file headers, function names, classes, enums, and line ranges. Automatically reads 2-line headers for file purpose. Dynamic token-aware pruning: Level 2 (deep symbols) -> Level 1 (headers only) -> Level 0 (file names only) based on project size.
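The level-selection logic described above could be sketched as follows; the token-budget thresholds here are assumptions for illustration, not the server's actual cutoffs:

```python
def pick_detail_level(estimated_tokens: int, budget: int = 8000) -> int:
    """Pick a context-tree detail level based on estimated output size.
    2 = deep symbols, 1 = headers only, 0 = file names only.
    The budget and multiplier are illustrative, not the real thresholds."""
    if estimated_tokens <= budget:
        return 2
    if estimated_tokens <= budget * 3:
        return 1
    return 0
```

A small project fits at Level 2; as the estimate grows past the budget, the tree degrades gracefully to headers and then bare file names instead of truncating arbitrarily.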

semantic_identifier_search

Search semantic intent at identifier level (functions, methods, classes, variables) with definition lines and ranked call sites. Uses embeddings over symbol signatures and source context, then returns line-numbered definition/call chains.

get_file_skeleton

Get detailed function signatures, class methods, and type definitions of a specific file WITHOUT reading the full body. Shows the API surface: function names, parameters, return types, and line ranges. Perfect for understanding how to use code without loading it all.

semantic_code_search

Search the codebase by MEANING, not just exact variable names. Uses Ollama embeddings over file headers and symbol names. Example: searching 'user authentication' finds files about login, sessions, JWT even if those exact words aren't used, with matched definition lines.
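Ranking by meaning typically reduces to cosine similarity between a query embedding and per-file embeddings. A minimal sketch — the tiny vectors below are toy stand-ins for real Ollama embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_files(query_vec, file_vecs):
    """Rank files by similarity of their header/symbol embeddings to the query."""
    scored = [(path, cosine(query_vec, vec)) for path, vec in file_vecs.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy example: the query vector is closest to the auth file.
ranked = rank_files([1.0, 0.0], {"src/auth.ts": [1.0, 0.0], "src/ui.ts": [0.0, 1.0]})
```

This is why 'user authentication' can surface a file about JWT sessions: the embeddings land near each other even when the literal words differ.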

get_blast_radius

Before deleting or modifying code, check the BLAST RADIUS. Traces every file and line where a specific symbol (function, class, variable) is imported or used. Prevents orphaned code. Also warns if usage count is low (candidate for inlining).

run_static_analysis

Run the project's native linter/compiler to find unused variables, dead code, type errors, and syntax issues. Delegates detection to deterministic tools instead of LLM guessing. Supports TypeScript, Python, Rust, Go.

propose_commit

The ONLY way to write code. Validates the code against strict rules before saving: 2-line header comments, no inline comments, max nesting depth, max file length. Creates a shadow restore point before writing. REJECTS code that violates formatting rules.

list_restore_points

List all shadow restore points created by propose_commit. Each point captures the file state before the AI made changes. Use this to find a restore point ID for undoing a bad change.

undo_change

Restore files to their state before a specific AI change. Uses the shadow restore point system. Does NOT affect git history. Call list_restore_points first to find the point ID.

semantic_navigate

Browse the codebase by MEANING, not directory structure. Uses spectral clustering on Ollama embeddings to group semantically related files into labeled clusters. Inspired by Gabriella Gonzalez's semantic navigator. Requires Ollama running with an embedding model and a chat model for labeling.

get_feature_hub

Obsidian-style feature hub navigator. Hub files are .md files containing [[path/to/file]] wikilinks that act as a Map of Content. Modes: (1) No args = list all hubs, (2) hub_path or feature_name = show hub with bundled skeletons of all linked files, (3) show_orphans = find files not linked to any hub. Prevents orphaned code and enables graph-based codebase navigation.
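Extracting [[path/to/file]] wikilinks from a hub file is a small parsing task. A sketch using a regular expression — the pattern and helper name are illustrative, not the server's actual parser:

```python
import re

# Capture the link target: stop at ']', '|' (alias), or '#' (heading anchor).
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def extract_wikilinks(markdown: str) -> list[str]:
    """Collect [[...]] link targets from a hub file's markdown."""
    return [m.strip() for m in WIKILINK.findall(markdown)]

hub = "## Auth feature\n- [[src/auth/login.ts]]\n- [[src/auth/session.ts]]"
```

Files that appear in no hub's link list are the orphans that show_orphans mode would report.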

upsert_memory_node

Create or update a memory node in the linking graph. Nodes represent concepts, files, symbols, or notes with auto-generated embeddings. If a node with the same label and type exists, it updates content and increments access count. Returns the node ID for use in create_relation.

create_relation

Create a typed edge between two memory nodes. Supports relation types: relates_to, depends_on, implements, references, similar_to, contains. Edges have weights (0-1) that decay over time via e^(-λt). Duplicate edges update weight instead of creating new ones.
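The e^(-λt) decay applied to edge weights is straightforward to express; the λ value below is an assumption for illustration, since the source does not state it:

```python
import math

def decayed_weight(weight: float, age_days: float, lam: float = 0.05) -> float:
    """Apply exponential decay w * e^(-lam * t) to an edge weight.
    lam = 0.05 is an illustrative rate, not the server's actual constant."""
    return weight * math.exp(-lam * age_days)
```

A fresh edge keeps its full weight; an untouched edge drifts toward zero, which is what later makes it a candidate for prune_stale_links.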

search_memory_graph

Search the memory graph by meaning with graph traversal. First finds direct matches via embedding similarity, then traverses 1st/2nd-degree neighbors to discover linked context. Returns both direct hits and graph-connected neighbors with relevance scores.

prune_stale_links

Remove stale memory graph edges whose weight has decayed below threshold via e^(-λt) formula. Also removes orphan nodes with no edges, low access count, and >7 days since last access. Keeps the graph lean.
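The orphan-node rule above combines three conditions. A direct sketch — the "low access count" cutoff is an assumption, since the source only says "low":

```python
def is_prunable_orphan(edge_count: int, access_count: int,
                       days_since_access: float, max_access: int = 2) -> bool:
    """Orphan rule from the description: no edges, low access count,
    and more than 7 days since last access. max_access is an assumed cutoff."""
    return (edge_count == 0
            and access_count <= max_access
            and days_since_access > 7)
```

All three conditions must hold: a well-connected or recently touched node survives even if it is rarely accessed.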

add_interlinked_context

Bulk-add multiple memory nodes with automatic similarity linking. Computes embeddings for all items, then creates similarity edges between any pair (new-to-new and new-to-existing) with cosine similarity ≥ 0.72. Ideal for importing related concepts, files, or notes at once.
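Pairwise linking at the stated 0.72 cosine threshold could be sketched like this (toy vectors stand in for real embeddings):

```python
import itertools
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def similarity_edges(vectors: dict, threshold: float = 0.72):
    """Create an edge for every pair of nodes whose cosine similarity
    meets the threshold (0.72, per the tool description)."""
    edges = []
    for (a, va), (b, vb) in itertools.combinations(vectors.items(), 2):
        score = cosine(va, vb)
        if score >= threshold:
            edges.append((a, b, score))
    return edges

edges = similarity_edges({"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [0.0, 1.0]})
```

Here only the near-parallel pair (a, b) is linked; the orthogonal pairs fall below 0.72 and get no edge.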

retrieve_with_traversal

Start from a specific memory node and traverse the graph outward. Returns the starting node plus all reachable neighbors within the depth limit, scored by edge weight decay and depth penalty. Use after search_memory_graph to explore a specific node's neighborhood.
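Depth-limited traversal with weight and depth penalties could be sketched as a BFS; the per-hop penalty factor is illustrative, not the server's actual constant:

```python
from collections import deque

def traverse(graph: dict, start: str, max_depth: int = 2,
             depth_penalty: float = 0.7) -> dict:
    """BFS outward from start. graph maps node -> list of (neighbor, edge_weight).
    Each neighbor's score is the parent's score times the edge weight times a
    per-hop penalty, so distant nodes rank lower than direct neighbors."""
    scores = {start: 1.0}
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for neighbor, weight in graph.get(node, []):
            score = scores[node] * weight * depth_penalty
            if neighbor not in scores or score > scores[neighbor]:
                scores[neighbor] = score
                queue.append((neighbor, depth + 1))
    return scores

result = traverse({"a": [("b", 1.0)], "b": [("c", 1.0)]}, "a")
```

With max_depth = 2 the start node scores 1.0, its direct neighbor 0.7, and the second-degree neighbor 0.49, matching the intuition that relevance falls off with graph distance.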

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client

contextplus_instructions


MCP directory API

All information about MCP servers is available via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ForLoopCodes/contextplus'
