
ContextStream MCP Server

Server Configuration

Describes the environment variables required to run the server.

  • CONTEXTSTREAM_API_KEY (required): Your API key from contextstream.io
  • CONTEXTSTREAM_API_URL (optional): API base URL. Default: https://api.contextstream.io
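The optional URL falls back to the default endpoint when unset. A minimal Python sketch of resolving this configuration (the load_config helper is hypothetical, not part of the server):

```python
import os

# Resolve ContextStream configuration from the environment.
# CONTEXTSTREAM_API_KEY is required; CONTEXTSTREAM_API_URL falls back
# to the documented default endpoint when unset.
def load_config(env=os.environ):
    api_key = env.get("CONTEXTSTREAM_API_KEY")
    if not api_key:
        raise RuntimeError("CONTEXTSTREAM_API_KEY is required")
    return {
        "api_key": api_key,
        "api_url": env.get("CONTEXTSTREAM_API_URL", "https://api.contextstream.io"),
    }
```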

Capabilities

Features and capabilities supported by this server

  • tools: { "listChanged": true }
  • prompts: { "listChanged": true }
  • resources: { "listChanged": true }
  • completions: {}

Tools

Functions exposed to the LLM to take actions

flash (C)

Alias of instruct. Session-scoped instruction cache operations. Actions: bootstrap, get, push, ack, clear, stats, checkpoint, verify.

instruct (A)

Session-scoped instruction cache operations. Actions: bootstrap, get, push, ack, clear, stats, checkpoint, verify. Compatibility aliases: ram, mem.

ram (C)

Alias of instruct. Session-scoped instruction cache operations. Actions: bootstrap, get, push, ack, clear, stats, checkpoint, verify.

mem (C)

Alias of instruct. Session-scoped instruction cache operations. Actions: bootstrap, get, push, ack, clear, stats, checkpoint, verify.

tool_search (A)

Search available tools and hidden operations, then call direct tools or use execute_operation for deferred capabilities.

execute_operation (A)

Execute a hidden or deferred capability returned by tool_search.

batch_operations (A)

Execute multiple independent read-only operations in one call. Rejects write or destructive operations.

init (A)

Initialize a new conversation session and automatically retrieve relevant context. This is the FIRST tool AI assistants should call when starting a conversation. Returns: workspace info, project info, recent memory, recent decisions, relevant context, high-priority lessons, and ingest_recommendation.

The ingest_recommendation field indicates if the project needs indexing for code search:

  • If [INGEST_RECOMMENDED] appears, ask the user if they want to enable semantic code search

  • Benefits: AI-powered code understanding, dependency analysis, better context retrieval

  • If user agrees, run: project(action="ingest_local", path="<project_path>")

IMPORTANT: Pass the user's FIRST MESSAGE as context_hint to get semantically relevant context! Example: init(folder_path="/path/to/project", context_hint="how do I implement auth?")

init already performs semantic search on the first message, so you only need the context tool on subsequent messages.

generate_rules (C)

Generate AI rule files for editors (Cursor, Cline, Kilo Code, Roo Code, Claude Code, GitHub Copilot, Aider). Defaults to the current project folder; no folder_path required when run from a project. Supported editors: codex, opencode, cursor, windsurf, cline, kilo, roo, claude, aider, antigravity, copilot

generate_editor_rules (B)

Generate AI rule files for editors (Cursor, Cline, Kilo Code, Roo Code, Claude Code, GitHub Copilot, Aider). These rules instruct the AI to automatically use ContextStream for memory and context. Supported editors: codex, opencode, cursor, windsurf, cline, kilo, roo, claude, aider, antigravity, copilot

context (A)

CALL THIS BEFORE EVERY AI RESPONSE to get relevant context.

This is the KEY tool for token-efficient AI interactions. It:

  1. Analyzes the user's message to understand what context is needed

  2. Retrieves only relevant context in a minified, token-efficient format

  3. Replaces the need to include full chat history in prompts

Format options:

  • 'minified': Ultra-compact D:decision|P:preference|M:memory (default, ~200 tokens)

  • 'readable': Line-separated with labels

  • 'structured': JSON-like grouped format

Type codes: W=Workspace, P=Project, D=Decision, M=Memory, I=Insight, T=Task, L=Lesson

Context Pack:

  • mode='pack' adds code context + distillation (higher credit cost)

Example usage:

  1. User asks "how should I implement auth?"

  2. AI calls context(user_message="how should I implement auth?")

  3. Gets: "W:Maker|P:contextstream|D:Use JWT for auth|D:No session cookies|M:Auth API at /auth/..."

  4. AI responds with relevant context already loaded

This saves ~80% tokens compared to including full chat history.
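The minified format is a plain pipe-delimited string, so a client can split it back into typed entries with the type codes listed above. A minimal Python sketch (not the official client; parse_minified is a hypothetical helper):

```python
# Type codes documented for the context tool:
# W=Workspace, P=Project, D=Decision, M=Memory, I=Insight, T=Task, L=Lesson
TYPE_CODES = {
    "W": "workspace", "P": "project", "D": "decision",
    "M": "memory", "I": "insight", "T": "task", "L": "lesson",
}

def parse_minified(context):
    """Split 'W:Maker|D:Use JWT for auth|...' into (type, text) pairs."""
    entries = []
    for part in context.split("|"):
        code, _, text = part.partition(":")
        entries.append((TYPE_CODES.get(code, "unknown"), text))
    return entries

parse_minified("W:Maker|P:contextstream|D:Use JWT for auth")
# → [("workspace", "Maker"), ("project", "contextstream"), ("decision", "Use JWT for auth")]
```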

search (A)

Search workspace memory and knowledge. Modes: auto (recommended), semantic (meaning-based), hybrid (legacy alias for auto), keyword (exact match), pattern (regex), exhaustive (all matches like grep), refactor (word-boundary matching for symbol renaming), team (cross-project team search - team plans only), crawl (deep multi-modal search).

Output formats: full (default, includes content), paths (file paths only - 80% token savings), minimal (compact - 60% savings), count (match counts only - 90% savings).
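The output formats trade detail for tokens. A hypothetical Python sketch of that trade-off, assuming each search result carries 'path' and 'content' fields (the result shape is an assumption, not the documented API):

```python
# Sketch of the four output formats, from most to least verbose.
# 'full' returns everything; 'count' returns only the match count.
def format_results(results, fmt="full"):
    if fmt == "count":
        return len(results)                                   # ~90% token savings
    if fmt == "paths":
        return [r["path"] for r in results]                   # ~80% savings
    if fmt == "minimal":
        return [f"{r['path']}: {r['content'][:40]}" for r in results]  # ~60% savings
    return results                                            # full content
```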

session (B)

Session and memory management — NOT for codebase/file search (use the 'search' tool for that).

  • LESSONS LIVE HERE: when a mistake or correction happens, call action='capture_lesson' (NEVER write lessons to ~/.claude/.../memory/, .cursorrules, or other local markdown — local files are invisible to [LESSONS_WARNING] auto-surfacing on future turns and across sessions).

  • PAST SESSIONS LIVE HERE: use action='recall' FIRST when the user references "last time", "previous", "yesterday", or is continuing prior work — full-text transcripts are indexed across every prior session.

  • context() may surface [GROUNDING]; use action='ground' with user_message for a one-shot bundle (recall + docs + decisions + lessons + skills) outside context().

Actions: capture (save decision/insight), capture_lesson (mistakes/corrections — title+trigger+impact+prevention), get_lessons (retrieve lessons), recall (retrieve past conversation context via ranked fusion of transcripts/snapshots/docs/decisions), ground (one-shot prior-work bundle), remember (quick save), user_context (get preferences), summary (workspace summary), compress (compress chat), delta (changes since timestamp), smart_search (searches MEMORY/conversation history only, not code), decision_trace (trace decision provenance), restore_context (restore state after compaction).

Plan actions: capture_plan, get_plan, update_plan, list_plans.

Suggested rules actions: list_suggested_rules, suggested_rule_action, suggested_rules_stats.

Team actions: team_decisions, team_lessons, team_plans.

entity (B)

Unified CRUD across taxonomy expansion entities. Kinds: ticket, handoff, backlog_view, incident, release, experiment, goal, key_result, sprint, review, risk. Actions: list, get, create, update, delete. Body is free-form JSON forwarded to the API; workspace_id/project_id default to active scope when omitted.

capsule (A)

ContextCapsule: portable, shareable, hydrate-on-demand snapshots of project context. Use capsule when the user pastes a /c/ link or capsule token, asks for a handoff/share/team/external-agent link, wants to bootstrap a fresh agent with project state, asks for a paste-ready handoff prompt (bootstrap prompt / prompt for another LLM), wants share-token graphs, or wants to list/audit capsules. Do not use capsule for normal turn-by-turn retrieval; use context instead. Team share links are authenticated and reusable by default; external_agent/public_link/support shares are token-gated and single-use by default.

memory (C)

Memory operations for events and nodes.

  • Event actions: create_event, get_event, update_event, delete_event, list_events, distill_event, import_batch (bulk import array of events)

  • Node actions: create_node, get_node, update_node, delete_node, list_nodes, supersede_node

  • Query actions: search, decisions, timeline, summary

  • Task actions: create_task (create task, optionally linked to plan), get_task, update_task (can link/unlink a task to a plan via plan_id), delete_task, list_tasks, reorder_tasks

  • Todo actions: create_todo, list_todos, get_todo, update_todo, delete_todo, complete_todo

  • Diagram actions: create_diagram, list_diagrams, get_diagram, update_diagram, delete_diagram

  • Doc actions: create_doc, list_docs, get_doc, update_doc, delete_doc, create_roadmap

  • Transcript actions: list_transcripts (list saved conversations), get_transcript (get full transcript by ID), search_transcripts (semantic search across conversations), search_archive (remote Atlas archive; local npm returns unavailable), delete_transcript

  • Team actions (team plans only): team_tasks, team_todos, team_diagrams, team_docs

graph (B)

Code graph analysis. Actions: dependencies (module deps), impact (change impact), call_path (function call path), related (related nodes), path (path between nodes), decisions (decision history), ingest (build graph), circular_dependencies, unused_code, contradictions, usages (reverse deps).
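Conceptually, the circular_dependencies action reports cycles in the module dependency graph. A standalone Python sketch of what such a check computes (not the server's implementation; the dependency map shape is an assumption):

```python
def find_cycle(deps):
    """Return one dependency cycle (a list ending where it starts), or None.

    deps maps each module name to the list of modules it depends on.
    Uses depth-first search, tracking the current path to spot back-edges.
    """
    visiting, done = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in deps.get(node, []):
            if dep in visiting:                     # back-edge: cycle found
                return path[path.index(dep):] + [dep]
            if dep not in done:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        done.add(node)
        path.pop()
        return None

    for node in deps:
        if node not in done:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None

find_cycle({"a": ["b"], "b": ["c"], "c": ["a"]})
# → ["a", "b", "c", "a"]
```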

project (C)

Project management. Actions: list, get, create, update, delete, index (trigger indexing), overview, statistics, files, index_status, index_history (audit trail of indexed files), ingest_local (index local folder), team_projects (list all team projects - team plans only), recent_changes (git log/diff for recent file changes).

workspace (C)

Workspace management. Actions: list, get, create, delete, associate (link folder to workspace), bootstrap (create workspace and initialize), team_members (list members with access - team plans only), index_settings (get/update multi-machine sync settings - admin only).

reminder (C)

Reminder management. Actions: list, active (pending/overdue), create, snooze, complete, dismiss.

media (A)

Media operations for video/audio/image assets. Enables AI agents to index, search, and retrieve media with semantic understanding, solving the "LLM as video editor has no context" problem for tools like Remotion.

Actions:

  • index: Index a local media file or external URL. Triggers ML processing (Whisper transcription, CLIP embeddings, keyframe extraction).

  • status: Check indexing progress for a content_id. Returns transcript_available, keyframe_count, duration.

  • search: Semantic search across indexed media. Returns timestamps, transcript excerpts, keyframe URLs.

  • get_clip: Get clip details for a time range. Supports output_format: remotion (frame-based props), ffmpeg (timecodes), raw.

  • list: List indexed media assets.

  • delete: Remove a media asset from the index.

Example workflow:

  1. media(action="index", file_path="/path/to/video.mp4") → get content_id

  2. media(action="status", content_id="...") → wait for indexed

  3. media(action="search", query="where John explains authentication") → get timestamps

  4. media(action="get_clip", content_id="...", start="1:34", end="2:15", output_format="remotion") → get Remotion props
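Step 4's Remotion output is frame-based rather than timecode-based. A hypothetical Python sketch of converting a mm:ss range into frame props (the exact get_clip payload shape is an assumption; the 'from' and 'durationInFrames' names follow Remotion's Sequence convention):

```python
def to_frame_props(start, end, fps=30):
    """Convert a 'mm:ss' time range into Remotion-style frame props."""
    def seconds(ts):
        minutes, secs = ts.split(":")
        return int(minutes) * 60 + int(secs)

    start_frame = seconds(start) * fps
    end_frame = seconds(end) * fps
    return {"from": start_frame, "durationInFrames": end_frame - start_frame, "fps": fps}

to_frame_props("1:34", "2:15")
# → {"from": 2820, "durationInFrames": 1230, "fps": 30}
```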

help (A)

Utility and help. Actions: tools (list available tools), auth (current user), version (server version), editor_rules (generate AI editor rules and install hooks for real-time file indexing), enable_bundle (enable tool bundle in progressive mode), team_status (team subscription info - team plans only).

Prompts

Interactive templates invoked by user choice

  • explore-codebase: Get an overview of a project codebase structure and key components
  • capture-decision: Document an architectural or technical decision in workspace memory
  • review-context: Build context for reviewing code changes
  • investigate-bug: Build context for debugging an issue
  • explore-knowledge: Navigate and understand the knowledge graph for a workspace
  • onboard-to-project: Generate onboarding context for a new team member
  • analyze-refactoring: Analyze a codebase for refactoring opportunities
  • build-context: Build comprehensive context for an LLM task
  • smart-search: Search across memory, decisions, and code for a query
  • recall-context: Retrieve relevant past decisions and memory for a query
  • session-summary: Get a compact summary of workspace/project context
  • capture-lesson: Record a lesson learned from an error or correction
  • capture-preference: Save a user preference to memory
  • capture-task: Capture an action item into memory
  • capture-bug: Capture a bug report into workspace memory
  • capture-feature: Capture a feature request into workspace memory
  • generate-plan: Generate a development plan from a description
  • generate-tasks: Generate actionable tasks from a plan or description
  • token-budget-context: Get the most relevant context that fits within a token budget
  • find-todos: Scan the codebase for TODO/FIXME/HACK notes and summarize
  • generate-editor-rules: Generate ContextStream AI rule files for your editor
  • index-local-repo: Ingest local files into ContextStream for indexing/search

Resources

Contextual data attached and managed by the client

  • contextstream-openapi: Machine-readable OpenAPI from the configured API endpoint
  • contextstream-workspaces: List of accessible workspaces

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/contextstream/mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.