Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| SUPABASE_KEY | No | The anon public API key for your Supabase project (required for session memory tools). | |
| SUPABASE_URL | No | The URL of your Supabase project (required for session memory tools). | |
| BRAVE_API_KEY | Yes | Brave Search Pro subscription API key. This is the only strictly required key for basic operation. | |
| GEMINI_API_KEY | No | Google AI Studio API key for research paper analysis and Gemini integration. | |
| DISCOVERY_ENGINE_LOCATION | No | GCP Discovery Engine location. | global |
| DISCOVERY_ENGINE_ENGINE_ID | No | Your Vertex AI Search / Discovery Engine app/engine ID. | |
| DISCOVERY_ENGINE_COLLECTION | No | GCP Discovery Engine collection. | default_collection |
| DISCOVERY_ENGINE_PROJECT_ID | No | GCP project ID with Discovery Engine enabled for hybrid search. | |
| DISCOVERY_ENGINE_SERVING_CONFIG | No | GCP Discovery Engine serving config. | default_serving_config |
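
A minimal environment for basic operation might look like the sketch below. Variable names come from the table above; the values are placeholders, and the optional groups can be omitted if you do not need the corresponding tools.

```shell
# Required for basic operation (Brave Search Pro key)
export BRAVE_API_KEY="your-brave-pro-key"

# Optional: enables the session memory tools
export SUPABASE_URL="https://your-project.supabase.co"
export SUPABASE_KEY="your-anon-public-key"

# Optional: enables research paper analysis / Gemini integration
export GEMINI_API_KEY="your-google-ai-studio-key"
```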

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | `{}` |
| prompts | `{}` |
| resources | `{ "subscribe": true }` |

Tools

Functions exposed to the LLM to take actions

brave_web_search

Performs a web search using the Brave Search API, ideal for general queries, news, articles, and online content. Use this for broad information gathering, recent events, or when you need diverse web sources. Supports pagination, content filtering, and freshness controls. Maximum 20 results per request, with offset for pagination.

brave_web_search_code_mode

Performs a web search using the Brave Search API, and then runs a custom JavaScript code string against the RAW API RESPONSE in a secure QuickJS sandbox. This drastically reduces context window usage by only returning the output of your script. Use this for broad information gathering, recent events, or when you need diverse web sources and only need specific parts of the result. Your script should read the 'DATA' global variable (a JSON string of the API response), process it, and use console.log() to print the desired output.

brave_local_search

Searches for local businesses and places using Brave's Local Search API. Best for queries related to physical locations, businesses, restaurants, services, etc. Returns detailed information including:

  • Business names and addresses

  • Ratings and review counts

  • Phone numbers and opening hours

Use this when the query implies 'near me' or mentions specific locations. Automatically falls back to web search if no local results are found.

brave_local_search_code_mode

Performs a local search using Brave APIs, and then runs a custom JavaScript code string against the RAW API RESPONSE in a secure QuickJS sandbox. This reduces context window usage by only returning the output of your script. Use this for local/business lookups when you only need specific fields from large local payloads. Your script should read the 'DATA' global variable (a JSON string payload) and use console.log() to print the desired output.

code_mode_transform

A universal code-mode transformer. Takes RAW TEXT or JSON output from ANY MCP tool (GitHub, Firecrawl, chrome-devtools, camoufox, codegraphcontext, videoMcp, arxiv, etc.) and runs a custom JavaScript code string against it in a secure QuickJS sandbox. Use this as a second step after calling any tool that returns large payloads — pass the raw output as 'data' and a JS extraction script as 'code'. Your script reads the 'DATA' global variable (a string of the tool output) and uses console.log() to print only the fields you need. NEW in v2.1: Pass 'template' instead of 'code' for instant extraction. Available templates: github_issues, github_prs, jira_tickets, dom_links, dom_headings, api_endpoints, slack_messages, csv_summary. Example: { data: '', template: 'github_issues' } — no custom code needed.

brave_answers

Returns direct AI answers grounded in Brave Search using Brave AI Grounding. Uses an OpenAI-compatible chat completions endpoint and is best for concise answer generation with live web grounding.

gemini_research_paper_analysis

Performs in-depth analysis of research papers using Google's Gemini-2.0-flash model. Ideal for academic research, literature reviews, and deep understanding of scientific papers. Can extract key findings, provide critical evaluation, summarize complex research, and place papers within the broader research landscape. Best for long-form academic content that requires expert analysis.

session_save_ledger

Save an immutable session log entry to the session ledger. Use this at the END of each work session to record what was accomplished. The ledger is append-only — entries cannot be updated or deleted. This creates a permanent audit trail of all agent work sessions.

session_save_handoff

Upsert the latest project handoff state for the next session to consume on boot. This is the 'live context' that gets loaded when a new session starts. Calling this replaces the previous handoff for the same project (upsert on project).

v5.4 CRDT Merge: On version conflict, a CRDT OR-Map engine automatically merges your changes with concurrent work (Add-Wins OR-Set for arrays, Last-Writer-Wins for scalars). Pass expected_version to enable concurrency control.

v0.4.0 OCC: If you received a version number from session_load_context, /resume_session prompt, or memory resource attachment, you MUST pass it as expected_version to prevent overwriting another session's changes.

session_load_context

Load session context for a project using progressive context loading. Use this at the START of a new session to recover previous work state. Three levels available:

  • quick: Just the latest project state — keywords and open TODOs (~50 tokens)

  • standard: Project state plus recent session summaries and decisions (~200 tokens, recommended)

  • deep: Everything — full session history with all files changed, TODOs, and decisions (~1000+ tokens)

knowledge_search

Search accumulated knowledge across all sessions by keywords, category, or free text. The knowledge base grows automatically as sessions are saved — keywords are extracted from every ledger and handoff entry. Use this to find related past work, decisions, and context from previous sessions.

Categories available: debugging, architecture, deployment, testing, configuration, api-integration, data-migration, security, performance, documentation, ai-ml, ui-frontend, resume

knowledge_forget

Selectively forget (delete) accumulated knowledge entries. Like a brain pruning bad memories — remove outdated, incorrect, or irrelevant session entries to keep the knowledge base clean and relevant.

Forget modes:

  • By project: Clear all knowledge for a specific project

  • By category: Remove entries matching a category (e.g. 'debugging')

  • By age: Forget entries older than N days

  • Full reset: Wipe everything (requires confirm_all=true)

⚠️ This permanently deletes ledger entries. Handoff state is preserved unless explicitly cleared.

session_compact_ledger

Auto-compact old session ledger entries by rolling them up into AI-generated summaries. This prevents the ledger from growing indefinitely and keeps deep context loading fast.

How it works:

  1. Finds projects with more entries than the threshold

  2. Summarizes old entries using Gemini (keeps recent entries intact)

  3. Inserts a rollup entry and archives the originals (soft-delete)

Use dry_run=true to preview what would be compacted without executing.

session_search_memory

Search session history semantically (by meaning, not just keywords). Uses vector embeddings to find sessions with similar context, even when the exact wording differs. Requires pgvector extension in Supabase.

Complements knowledge_search (keyword-based) — use this when keyword search returns no results or when the query is phrased differently from stored summaries.

memory_history

View the timeline of past memory states for this project. Use this BEFORE memory_checkout to find the correct version to revert to. Shows version numbers, timestamps, and summaries of each saved state.

memory_checkout

Time travel! Restores the project's memory to a specific past version. This overwrites the current handoff state with the historical snapshot, like a Git revert — the version number moves forward (no data is lost). Call memory_history first to find the correct target_version.

session_save_image

Save a local image file into the project's permanent visual memory. Use this to remember UI states, diagrams, architecture graphs, or bug screenshots. The image is copied into Prism's media vault and indexed in the handoff metadata. On the next session_load_context, the agent will see a lightweight index of available images.

session_view_image

Retrieve an image from visual memory using its ID. Returns the image as Base64 inline content for the LLM to analyze. Use session_load_context first to see available image IDs.

session_health_check

Run integrity checks on the agent's memory (like fsck for filesystems). Scans for missing embeddings, duplicate entries, orphaned handoffs, and stale rollups.

Checks performed:

  1. Missing embeddings — entries that can't be found via semantic search

  2. Duplicate entries — near-identical summaries wasting context tokens

  3. Orphaned handoffs — handoff state with no backing ledger entries

  4. Stale rollups — compaction artifacts with no archived originals

Use auto_fix=true to automatically repair missing embeddings and clean up orphans.

session_forget_memory

Forget (delete) a specific memory entry by its ID. Supports two modes:

  • Soft delete (default): Tombstones the entry — it stays in the database for audit trails but is excluded from all search results. Reversible.

  • Hard delete: Permanently removes the entry from the database. Irreversible. Use only when GDPR Article 17 requires complete erasure.

⚠️ Soft delete is recommended for most use cases. The entry can be restored in the future if needed.

knowledge_set_retention

Set an automatic data retention policy (TTL) for a project's memory. Entries older than ttl_days will be soft-deleted (archived) automatically on every server startup and every 12 hours while running.

Use cases:

  • Set ttl_days: 90 to auto-expire sessions older than 3 months

  • Set ttl_days: 0 to disable auto-expiry (default)

Note: Rollup/compaction entries are never expired — only raw sessions.

session_save_experience

Record a typed experience event. Unlike session_save_ledger (flat logs), this captures structured behavioral data for pattern detection.

Event Types:

  • correction: Agent was corrected by user

  • success: Task completed successfully

  • failure: Task failed

  • learning: New knowledge acquired

  • validation_result: Verification sandbox passed or failed

knowledge_upvote

Upvote a memory entry to increase its importance (graduation). Entries with importance >= 7 become 'graduated' insights that always surface in behavioral warnings.

knowledge_downvote

Downvote a memory entry to decrease its importance. Importance cannot go below 0.

knowledge_sync_rules

Auto-sync graduated insights (importance >= 7) into your project's IDE rules file (.cursorrules or .clauderules). This bridges behavioral memory with static IDE context — turning dynamic agent learnings into always-on rules.

How it works:

  1. Fetches graduated insights from the ledger

  2. Formats them as markdown rules inside sentinel markers

  3. Idempotently writes them into the target file at the project's configured repo_path

Requirements: The project must have a repo_path configured in the dashboard.

Idempotency: Uses <!-- PRISM:AUTO-RULES:START --> / <!-- PRISM:AUTO-RULES:END --> sentinel markers. Running this tool multiple times produces the same file. User-maintained content outside the sentinels is never touched.
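
A synced rules file might look like the sketch below. Only the sentinel markers come from the tool's description; the surrounding rule text is illustrative.

```markdown
# My project rules (user-maintained, never touched by the sync)
Always run tests before committing.

<!-- PRISM:AUTO-RULES:START -->
- Illustrative graduated insight: prefer retry-with-backoff for flaky API calls
- Illustrative graduated insight: never edit generated files by hand
<!-- PRISM:AUTO-RULES:END -->
```

Re-running the tool rewrites only the region between the markers, which is what makes the operation idempotent.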

deep_storage_purge

v5.1 Deep Storage Mode: Purge high-precision float32 embedding vectors for entries that already have TurboQuant compressed blobs, reclaiming ~90% of vector storage. Only affects entries older than the specified threshold (default: 30 days, minimum: 7). Entries without compressed blobs are NEVER touched. Use dry_run=true to preview the impact before executing.

When to use: After running TurboQuant backfill (session_backfill_embeddings), call this tool to reclaim disk space from legacy float32 vectors that are no longer needed for search.

Safety: Tier-2 search (TurboQuant) maintains 95%+ accuracy with compressed blobs. Tier-3 (FTS5 keyword) search is completely unaffected.

session_export_memory

Export all of a project's memory to a local file. Fulfills GDPR Article 20 (Right to Data Portability) and the 'local-first' portability promise.

What is exported:

  • All session ledger entries (summaries, decisions, TODOs, file changes)

  • Current handoff state (live project context)

  • System settings (API keys are "REDACTED" for security)

  • Visual memory index (descriptions, captions, timestamps; not the raw files)

Formats:

  • json — machine-readable, suitable for import into another Prism instance

  • markdown — human-readable, ideal for static archiving

  • vault — Prism-Port: exports a compressed .zip of interrelated Markdown files with proper Obsidian/Logseq YAML frontmatter and [[Wikilinks]]

⚠️ Output directory must exist and be writable. Filenames are auto-generated: prism-export-<project>-<date>.(json|md|zip)

session_backfill_links

Retroactively create graph edges (memory links) for all existing entries in a project. This builds the associative memory graph from your existing session history.

Three strategies are run:

  1. Temporal Chaining: Links consecutive entries within the same conversation

  2. Keyword Overlap: Links entries sharing ≥3 keywords (bidirectional)

  3. Provenance: Links rollup summaries to their archived originals

All strategies use INSERT OR IGNORE — safe to re-run multiple times.

When to use: Run once after upgrading to v6.0 to populate the graph for existing memories. New entries are auto-linked on save (no manual action needed).

session_synthesize_edges

Step 3A Edge Synthesis: Scans recent project entries with embeddings, finds high-similarity but currently disconnected entries, and creates inferred links as 'synthesized_from'.

On-Demand Graph Enrichment: Use this tool periodically to discover semantic relationships between structurally disconnected memory nodes. It batch processes the newest active entries.

session_cognitive_route

Resolve an HDC compositional state into a nearest semantic concept with policy-gated routing. Returns concept, confidence, distance, ambiguity, convergence steps, and route outcome. Use this for explainable cognitive recall decisions in v6.5.

maintenance_vacuum

Reclaim disk space after large purge operations by running VACUUM on the local SQLite database.

Best called after deep_storage_purge removes many entries — SQLite reclaims page allocations only when explicitly vacuumed, so the file size stays the same until you call this tool.

For remote (Supabase) backends, returns guidance on triggering maintenance via the dashboard.

Note: On large databases this may take up to 60 seconds. The tool runs synchronously so you will know when it is safe to proceed.

Prompts

Interactive templates invoked by user choice

resume_session

Load previous session context for a project. Automatically fetches handoff state and injects it before the LLM starts thinking, so no tool call is needed. Includes version tracking for concurrency control.

Resources

Contextual data attached and managed by the client


No resources
