This server provides workflow management tools with specialized memory storage, task evaluation, environment verification, and reasoning capabilities.
- Memory Storage (`mem_save`): Record structured project memories with `what`, `why`, `outcome`, and optional context fields to `<projectPath>/.memory/memory.json`. Automatically handles FIFO eviction based on token count to maintain a ~1000-token budget.
- Task Evaluation (`evaluate_task`): Assess task complexity to determine whether a task is multi-step, has unclear requirements, can be broken into subtasks, or carries a high bug risk.
- Environment Verification (`env_verify`): A mandatory safety check before package installation that blocks unsafe operations and ensures proper environment validation.
- Thought Processing (`think`): Log complex reasoning processes or cache internal thoughts for reference without making external changes.
1. Click on "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Workflow MCP Server save that we installed pandas for data analysis and it worked with our current Python version".
4. That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Memory MCP
This repository implements a small MCP server that exposes a mem_save tool for recording structured memory entries to a per-project JSON file (.memory/memory.json). The server is intentionally simple: it stores entries provided by the AI and enforces a token-budgeted FIFO eviction policy.
Installation
Configuration
Add this server to your MCP client configuration. Example:
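A minimal sketch of what that configuration might look like. The server name, command, and path below are assumptions for illustration; adjust them to match how this server is actually built and launched:

```json
{
  "mcpServers": {
    "workflow-mcp": {
      "command": "node",
      "args": ["path/to/memory-mcp/index.js"]
    }
  }
}
```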
Features
This server does not summarize or decide what to store by itself; the user decides when to call `mem_save`.
- Tool: `mem_save` records memories for a project (JSON storage)
- Token estimation and FIFO eviction keep estimated tokens ≤ 1000
- Auto-creates `.memory/memory.json` if missing
mem_save behavior
Inputs:

- `projectPath` (string): absolute project path provided by the AI
- `entries` (array): list of memory entry objects. Each entry has:
  - `what` (string)
  - `why` (string)
  - `outcome` (string)
  - `task_context`, `constraints`, `dependencies` (optional strings)
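For illustration, a `mem_save` call's arguments might look like the following (the path and field values are hypothetical):

```json
{
  "projectPath": "/home/user/projects/my-app",
  "entries": [
    {
      "what": "Installed pandas for data analysis",
      "why": "Needed DataFrame support for the reporting scripts",
      "outcome": "Works with the project's current Python version",
      "dependencies": "pandas"
    }
  ]
}
```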
Storage:

Memory is stored at `<projectPath>/.memory/memory.json`:

```json
{
  "entries": [ ... ],
  "meta": {
    "total_entries": number,
    "estimated_tokens": number,
    "last_updated": "YYYY-MM-DD"
  }
}
```
Token estimation (approximate):

- Chinese characters ≈ 1.3 tokens each
- English letters ≈ 0.3 tokens each
- Other symbols ≈ 0.6 tokens each
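The weights above can be sketched as a simple per-character-class estimator. This is a rough approximation; the actual server's classification rules (e.g. how it detects Chinese characters) may differ:

```python
def estimate_tokens(text: str) -> float:
    """Rough token estimate using per-character-class weights."""
    total = 0.0
    for ch in text:
        if "\u4e00" <= ch <= "\u9fff":        # CJK Unified Ideographs
            total += 1.3
        elif ch.isascii() and ch.isalpha():   # English letters
            total += 0.3
        else:                                 # digits, punctuation, spaces, etc.
            total += 0.6
    return total
```

For example, `estimate_tokens("pandas")` weighs six letters at 0.3 each, about 1.8 tokens.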
Eviction policy:

- After appending new entries, the server computes an estimated total token usage.
- If the total exceeds 1000 tokens, it removes the oldest entries (FIFO) one by one until the total is ≤ 1000.
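The eviction loop above can be sketched as follows. Here `estimate` stands in for whatever per-entry token estimator the server uses; the function name and signature are illustrative, not the server's actual API:

```python
def evict_fifo(entries: list, estimate, budget: int = 1000) -> list:
    """Drop the oldest entries (front of the list) until the
    estimated total token count fits within the budget."""
    while entries and sum(estimate(e) for e in entries) > budget:
        entries.pop(0)  # FIFO: the oldest entry goes first
    return entries
```

For example, with two 600-token entries against the default 1000-token budget, the older entry is evicted and the newer one survives.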
Return value:

On success, the tool returns a JSON text message with `success: true` and a short summary.
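A success response might look like the following. The exact shape beyond `success` is not specified here, so the `summary` field name and its text are hypothetical:

```json
{
  "success": true,
  "summary": "Saved 1 entry; estimated total is now 420 tokens."
}
```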
License
MIT