Claude Prompts MCP Server
Hot-reloadable prompts with chains, gates, and structured reasoning for AI assistants.
Quick Start • Features • Syntax • Docs
Quick Start
Claude Code (Recommended)
# Step 1: Add marketplace (first time only)
/plugin marketplace add minipuft/minipuft-plugins
# Step 2: Install
/plugin install claude-prompts@minipuft
# Step 3: Try it
>>tech_evaluation_chain library:'zod' context:'API validation'

The plugin adds hooks that fix common issues:
| Problem | Hook Fix |
| --- | --- |
| Model ignores the `>>` syntax | Detects it, suggests the correct MCP call |
| Chain step forgotten | Injects a reminder for the next step |
| Gate review skipped | Reminds Claude to run the review |
Raw MCP works, but models sometimes miss the syntax. The hooks catch that. → hooks/README.md
Load plugin from local source for development:
git clone https://github.com/minipuft/claude-prompts ~/Applications/claude-prompts
cd ~/Applications/claude-prompts/server && npm install && npm run build
claude --plugin-dir ~/Applications/claude-prompts

Edit hooks/prompts → restart Claude Code. Edit TypeScript → rebuild first.
User Data: Custom prompts stored in ~/.local/share/claude-prompts/ persist across updates.
User Install — Add to ~/.config/opencode/opencode.json:
{
"mcp": {
"claude-prompts": {
"type": "local",
"command": ["npx", "-y", "claude-prompts@latest"]
}
}
}

Development Setup — Use the opencode-prompts plugin (includes hooks):
git clone https://github.com/minipuft/opencode-prompts ~/Applications/opencode-prompts
cd ~/Applications/opencode-prompts && npm install
ln -s ~/Applications/opencode-prompts ~/.config/opencode/plugin/opencode-prompts

Then point MCP to your local server in ~/.config/opencode/opencode.json:
{
"mcp": {
"claude-prompts": {
"type": "local",
"command": ["node", "~/Applications/opencode-prompts/server/dist/index.js", "--transport=stdio"],
"environment": { "MCP_RESOURCES_PATH": "~/Applications/opencode-prompts/server" }
}
}
}

User Install:

gemini extensions install https://github.com/minipuft/gemini-prompts

Development Setup — Link local source:
git clone https://github.com/minipuft/gemini-prompts ~/Applications/gemini-prompts
cd ~/Applications/gemini-prompts && npm install
gemini link .  # Links extension from current directory

To unlink: gemini unlink 'gemini-prompts'
Same tools (prompt_engine, resource_manager, system_control) with Gemini-optimized hooks.
Custom resources? See Custom Resources for MCP_RESOURCES_PATH setup.
Option A: GitHub Release (recommended)
1. Download claude-prompts-{version}.mcpb from Releases
2. Drag it into Claude Desktop Settings → MCP Servers
3. Done
The .mcpb bundle is self-contained (~5MB)—no npm required.
Option B: NPX (auto-updates)
Add to your config file:
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json

Windows:
%APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"claude-prompts": {
"command": "npx",
"args": ["-y", "claude-prompts@latest"]
}
}
}

Restart Claude Desktop and test: >>research_chain topic:'remote team policies'
Add to your MCP config file:
| Client | Config Location |
| --- | --- |
| Cursor | ~/.cursor/mcp.json |
| Windsurf | ~/.codeium/windsurf/mcp_config.json |
| Zed | ~/.config/zed/settings.json |
{
"mcpServers": {
"claude-prompts": {
"command": "npx",
"args": ["-y", "claude-prompts@latest"]
}
}
}

Restart and test: resource_manager(resource_type:"prompt", action:"list")
git clone https://github.com/minipuft/claude-prompts.git
cd claude-prompts/server
npm install && npm run build && npm test

Point your MCP config to server/dist/index.js. The esbuild bundle is self-contained.
Transport options: --transport=stdio (default), --transport=streamable-http (HTTP clients).
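For example, to serve HTTP clients, the flag from the list above can be passed at launch. A minimal sketch, assuming you are inside the cloned repo with a completed build:

```shell
# Serve over Streamable HTTP instead of the default stdio transport
# (port/host defaults are whatever the server ships with)
node server/dist/index.js --transport=streamable-http
```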
Custom Resources
Use your own prompts without cloning. Add MCP_RESOURCES_PATH to any MCP config:
{
"mcpServers": {
"claude-prompts": {
"command": "npx",
"args": ["-y", "claude-prompts@latest"],
"env": {
"MCP_RESOURCES_PATH": "/path/to/your/resources"
}
}
}
}

Your resources directory can contain: prompts/, gates/, methodologies/, styles/.
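A matching layout might look like this (only the four subdirectory names come from this README; each is optional):

```
/path/to/your/resources/
├── prompts/
├── gates/
├── methodologies/
└── styles/
```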
Fine-grained overrides (optional):
| Env Var | What It Overrides |
| --- | --- |
| MCP_RESOURCES_PATH | All resources (recommended) |
|  | Just prompts |
|  | Just gates |
|  | Just methodologies |
Note: With npx, paths resolve relative to npm cache. Always use absolute paths with
MCP_RESOURCES_PATH.
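One way to guarantee an absolute path is to expand it in the shell before pasting it into your config. A minimal sketch (the my-resources directory name is illustrative):

```shell
# Create an illustrative resources directory, then resolve its absolute path
mkdir -p ./my-resources/prompts
MCP_RESOURCES_PATH="$(cd ./my-resources && pwd)"
echo "$MCP_RESOURCES_PATH"
```

The echoed value (always starting with /) is what belongs in the "env" block of your MCP config.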
See CLI Configuration for all options.
What You Get
🔥 Hot Reload
Edit prompts, test immediately. Better yet—ask Claude to fix them:
User: "The code_review prompt is too verbose"
Claude: resource_manager(action:"update", id:"code_review", ...)
User: "Test it"
Claude: prompt_engine(command:">>code_review") # Uses updated version instantly

🔗 Chains
Break complex tasks into steps with -->:
analyze code --> identify issues --> propose fixes --> generate tests

Each step's output flows to the next. Add quality gates between steps.
🧠 Frameworks
Inject structured thinking patterns:
@CAGEERF Review this architecture # Context → Analysis → Goals → Execution → Evaluation → Refinement
@ReACT Debug this error # Reason → Act → Observe loops

🛡️ Gates
Quality criteria Claude self-checks:
Summarize this :: 'under 200 words' :: 'include key statistics'

Failed gates can retry automatically or pause for your decision.
✨ Judge Selection
Let Claude pick the right tools:
%judge Help me refactor this codebase

Claude analyzes available frameworks, gates, and styles, then applies the best combination.
📊 MCP Resources
Token-efficient read-only access for discovery and context recovery:
resource://prompt/ # List all prompts (4x fewer tokens than tool call)
resource://session/ # Active chains (recover context after compaction)
resource://metrics/pipeline # System health (lean aggregates, not raw samples)

Use chainId directly: resource://session/chain-quick_decision#1 → same ID used to resume.
Configuration (in config.json):
"resources": {
"registerWithMcp": false, // Master switch (default: off for token efficiency)
"prompts": { "enabled": true }, // resource://prompt/...
"gates": { "enabled": true }, // resource://gate/...
"methodologies": { "enabled": true }, // resource://methodology/...
"observability": { // resource://session/..., resource://metrics/...
"enabled": true,
"sessions": true,
"metrics": true
},
"logs": { "enabled": true, "maxEntries": 500, "defaultLevel": "info" }
}

Why disabled by default? Tools provide more efficient discovery:

- resource_manager(action:"list") returns a compact summary (~300 tokens)
- Use detail:"full" when you need descriptions
- MCP Resources bulk-loads everything (~5000+ tokens)
📜 Version History
Every update is versioned. Compare, rollback, undo:
resource_manager(action:"history", id:"code_review")
resource_manager(action:"rollback", id:"code_review", version:2, confirm:true)

🔄 Checkpoints
Save working directory state before risky changes. Restore instantly if something breaks:
# Checkpoint before refactoring
resource_manager(resource_type:"checkpoint", action:"create", name:"pre-refactor")
# Something broke? Rollback to checkpoint
resource_manager(resource_type:"checkpoint", action:"rollback", name:"pre-refactor", confirm:true)
# List all checkpoints
resource_manager(resource_type:"checkpoint", action:"list")

Uses git stash under the hood. Pairs with verification gates for safe autonomous loops.
✅ Verification Gates (Ralph Loops)
Ground-truth validation via shell commands—Claude keeps trying until tests pass:
# You say this:
>>implement-feature :: verify:"npm test" loop:true
# Claude does this:
# 1. Implements feature
# 2. Runs npm test → FAIL
# 3. Reads error, fixes code
# 4. Runs npm test → FAIL
# 5. Tries again...
# 6. Runs npm test → PASS ✓
# You get working code.

Context Isolation: After 3 failed attempts, spawns a fresh Claude instance with session context—no context rot, fresh perspective, automatic handoff.
| Preset | Max Tries | Timeout | Use Case |
| --- | --- | --- | --- |
|  | 1 | 30s | Quick iteration |
|  | 5 | 5 min | CI validation |
|  | 10 | 10 min | Large test suites |
Override options: max:15 (custom attempts), timeout:120 (custom seconds).
# Custom limits for stubborn tests
>>fix-flaky-test :: verify:"npm test" :full max:8 timeout:180 loop:true

See Ralph Loops Guide for autonomous verification patterns and cost tracking.
Syntax Reference
| Symbol | Name | What It Does | Example |
| --- | --- | --- | --- |
| >> | Prompt | Execute template | >>code_review |
| --> | Chain | Pipe to next step | analyze code --> identify issues |
|  | Repeat | Run prompt N times |  |
| @ | Framework | Inject methodology | @CAGEERF Review this architecture |
| :: | Gate | Add quality criteria | Summarize this :: 'under 200 words' |
| % | Modifier | Toggle behavior | %judge Help me refactor this codebase |
|  | Style | Apply formatting |  |
Modifiers:
- %clean — No framework/gate injection
- %lean — Gates only, skip framework
- %guided — Force framework injection
- %judge — Claude selects best resources
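As a sketch of how a modifier reads in practice, mirroring the %judge example above (the prompt text and the combination with other operators are illustrative assumptions, not confirmed by this README):

```
%clean Summarize this release note            # raw template, no framework or gate injection
%lean >>code_review :: 'no hardcoded secrets' # gates run, framework injection skipped
```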
Using Gates
# Inline (quick)
Research AI :: 'use recent sources' --> Summarize :: 'be concise'
# With framework
@CAGEERF Explain React hooks :: 'include examples'
# Programmatic
prompt_engine({
command: ">>code_review",
gates: [{ name: "Security", criteria: ["No hardcoded secrets"] }]
})

| Severity | Behavior |
| --- | --- |
| Critical/High | Must pass (blocking) |
| Medium/Low | Warns, continues (advisory) |
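Severity is presumably declared per gate. A hypothetical sketch extending the programmatic call above (the severity field name and values are assumptions, not confirmed by this README):

```
prompt_engine({
  command: ">>code_review",
  gates: [
    { name: "Security", criteria: ["No hardcoded secrets"], severity: "critical" },  // blocking (field name assumed)
    { name: "Style", criteria: ["Consistent naming"], severity: "low" }              // advisory (field name assumed)
  ]
})
```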
See Gates Guide for full schema.
Configuration
Customize via server/config.json:
| Section | Setting | Default | Description |
| --- | --- | --- | --- |
|  |  |  | Prompts directory (hot-reloaded) |
|  | enabled |  | Auto-inject methodology guidance |
|  |  |  | Quality gate definitions |
|  |  |  | Enable |
The Three Tools
| Tool | Purpose |
| --- | --- |
| prompt_engine | Execute prompts with frameworks and gates |
| resource_manager | CRUD for prompts, gates, methodologies, checkpoints |
| system_control | Status, analytics, health checks |
prompt_engine(command:"@CAGEERF >>analysis topic:'AI safety'")
resource_manager(resource_type:"prompt", action:"list")
resource_manager(resource_type:"checkpoint", action:"create", name:"backup")
system_control(action:"status")

How It Works
%%{init: {'theme': 'neutral', 'themeVariables': {'background':'#0b1224','primaryColor':'#e2e8f0','primaryBorderColor':'#1f2937','primaryTextColor':'#0f172a','lineColor':'#94a3b8','fontFamily':'"DM Sans","Segoe UI",sans-serif','fontSize':'14px','edgeLabelBackground':'#0b1224'}}}%%
flowchart TB
classDef actor fill:#0f172a,stroke:#cbd5e1,stroke-width:1.5px,color:#f8fafc;
classDef server fill:#111827,stroke:#fbbf24,stroke-width:1.8px,color:#f8fafc;
classDef process fill:#e2e8f0,stroke:#1f2937,stroke-width:1.6px,color:#0f172a;
classDef client fill:#f4d0ff,stroke:#a855f7,stroke-width:1.6px,color:#2e1065;
classDef clientbg fill:#1a0a24,stroke:#a855f7,stroke-width:1.8px,color:#f8fafc;
classDef decision fill:#fef3c7,stroke:#f59e0b,stroke-width:1.6px,color:#78350f;
linkStyle default stroke:#94a3b8,stroke-width:2px
User["1. User sends command"]:::actor
Example[">>analyze @CAGEERF :: 'cite sources'"]:::actor
User --> Example --> Parse
subgraph Server["MCP Server"]
direction TB
Parse["2. Parse operators"]:::process
Inject["3. Inject framework + gates"]:::process
Render["4. Render prompt"]:::process
Decide{"6. Route verdict"}:::decision
Parse --> Inject --> Render
end
Server:::server
subgraph Client["Claude (Client)"]
direction TB
Execute["5. Run prompt + check gates"]:::client
end
Client:::clientbg
Render -->|"Prompt with gate criteria"| Execute
Execute -->|"Verdict + output"| Decide
Decide -->|"PASS → render next step"| Render
Decide -->|"FAIL → render retry prompt"| Render
Decide -->|"Done"| Result["7. Return to user"]:::actor

The feedback loop: Command with operators → Parse and inject methodology/gates → Claude executes and self-evaluates → Route: next step (PASS), retry (FAIL), or return result (done).
Documentation
MCP Tooling Guide — Full command reference
Prompt Authoring — Tutorial
Chains — Multi-step patterns
Gates — Quality validation
Ralph Loops — Autonomous verification patterns
Architecture — System internals
Contributing
cd server
npm install
npm run build # esbuild bundles to dist/index.js
npm test # Run test suite
npm run validate:all # Full CI validation

The build produces a self-contained bundle (~4.5MB). server/dist/ is gitignored—CI builds fresh from source.
See CONTRIBUTING.md for workflow details.