# Reflection MCP Server
Part of the LXD MCP Suite — a cohesive set of MCP servers for learning experience design (coaching, Kanban, stories, and optional LLM adapters).
## What it is
Lightweight reflection MCP server (stdio) that detects available providers and stores short local memories.
## Why it helps
Provides optional tailoring and validation for the other servers in the suite while staying small and safe. Works fully offline with local memory only.
A lightweight reflection and differential diagnosis MCP server:

- Detects the provider from the environment/.env (OpenAI, Anthropic, Gemini, Ollama) and uses a lightweight local model if no network provider is available.
- Stores short, bounded memories per `key` in `.local_context/reflections/<key>.jsonl`.
- Exposes MCP tools over stdio (see the example call after this list):
  - `reflection_handshake(user_key, name)`
  - `reflect(key, input)`
  - `ask(key, question)`
  - `note(key, note)`
  - `memories(key, limit?)`
  - `summarize(key)`
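As a concrete example, MCP tools are invoked as JSON-RPC 2.0 messages over stdio; a `reflect` call might look like this (the key and input values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "reflect",
    "arguments": { "key": "demo", "input": "What patterns show up in my recent notes?" }
  }
}
```

Memories recorded for `demo` then accumulate in `.local_context/reflections/demo.jsonl`.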
## Quickstart
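A minimal sketch, assuming a local checkout with dependencies already installed; the module path follows the file layout below, and the checkout directory name is illustrative:

```bash
# Run the stdio server from a local checkout (path is illustrative).
cd reflection-mcp
# Assumption: the package is runnable as a module per the file layout.
python -m reflection_mcp.mcp_server
```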
## Register with an MCP client (example)
- Claude Desktop (config snippet):
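A plausible entry for `claude_desktop_config.json`, assuming the module invocation from the quickstart (the server name and `env` values are illustrative):

```json
{
  "mcpServers": {
    "reflection": {
      "command": "python",
      "args": ["-m", "reflection_mcp.mcp_server"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```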
## Environment variables
- OpenAI: `OPENAI_API_KEY`, `OPENAI_BASE_URL` (optional), `OPENAI_MODEL` (default: `gpt-4o-mini`)
- Anthropic: `ANTHROPIC_API_KEY`, `ANTHROPIC_BASE_URL` (optional), `ANTHROPIC_MODEL` (default: `claude-3-haiku-20240307`)
- Gemini: `GOOGLE_API_KEY`, `GEMINI_BASE_URL` (optional), `GEMINI_MODEL` (default: `gemini-1.5-flash`)
- Ollama: `OLLAMA_BASE_URL` or `OLLAMA_HOST`, `OLLAMA_MODEL` (default: `llama3.1:8b-instruct`)
If no provider key is found or requests fail, the server falls back to a local lightweight reflector.
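For example, to run against a local Ollama instance (both variables are listed above; the base URL is an assumption matching Ollama's conventional default):

```bash
# Point the server at a local Ollama instance.
export OLLAMA_BASE_URL=http://localhost:11434  # assumption: Ollama's default port
export OLLAMA_MODEL=llama3.1:8b-instruct       # the documented default model
```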
## File layout
- `reflection_mcp/mcp_server.py`: MCP stdio server
- `reflection_mcp/provider.py`: provider detection + HTTP client
- `utils/reflection_memory.py`: shared local memory store (JSONL)
## Install (local PATH)
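A minimal sketch, assuming you want a launcher on `~/.local/bin`; the `reflection-mcp` wrapper name is hypothetical, and the package must be importable by your `python`:

```bash
# Create a small wrapper script on your PATH (name is hypothetical).
mkdir -p ~/.local/bin
cat > ~/.local/bin/reflection-mcp <<'EOF'
#!/bin/sh
exec python -m reflection_mcp.mcp_server "$@"
EOF
chmod +x ~/.local/bin/reflection-mcp
```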
## Run at Login
### macOS (launchd)
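A sketch of a per-user LaunchAgent, assuming the `reflection-mcp` launcher from the install step; the label is illustrative:

```bash
# Install and load a LaunchAgent that starts the server at login.
cat > ~/Library/LaunchAgents/com.example.reflection-mcp.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.reflection-mcp</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/env</string>
    <string>reflection-mcp</string>
  </array>
  <key>RunAtLoad</key><true/>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/com.example.reflection-mcp.plist
```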
### Linux (systemd user)
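A sketch of a systemd user unit, under the same assumption about the launcher path:

```bash
# Install and enable a user unit that starts the server at login.
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/reflection-mcp.service <<'EOF'
[Unit]
Description=Reflection MCP server

[Service]
ExecStart=%h/.local/bin/reflection-mcp
Restart=on-failure

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload
systemctl --user enable --now reflection-mcp.service
```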
## License
Proprietary/internal by default. Add a license if open-sourcing.
Internal Use Only — not licensed for external distribution or hosting.