Integrates with Daniel Miessler's Fabric repository on GitHub, syncing its patterns and strategies to expose powerful AI prompts and prompt-engineering strategies for tasks like extracting wisdom and summarizing.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Fabric MCP Server summarize this article with the cot strategy".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Fabric MCP Server (Docker)
A Model Context Protocol (MCP) server that exposes Daniel Miessler's Fabric patterns and strategies to MCP-compliant clients. It automatically syncs with the upstream Fabric repository, allowing you to use its powerful prompts directly within your AI workflow.
Features
- Dynamic Sync: Clones or updates the Fabric repository every time the server starts.
- Pattern Prompts: Automatically creates an MCP prompt for every folder in `patterns/`.
- Smart Descriptions: Extracts meaningful descriptions from each pattern's `system.md` file to help LLMs understand when to use each pattern.
- Strategy Support: Every prompt includes an optional `strategy` argument to prepend context from `strategies/`. Natural language phrases like "with strategy cot" or "using cot strategy" are recognized.
- User Input: Every prompt requires an `input` argument for user-provided content (text, URL, etc.).
- Execute Pattern Tool: Exposes an `execute_pattern` tool so AI agents can execute any Fabric pattern programmatically with an optional strategy.
- List Patterns Tool: Exposes a `list_patterns` tool so AI agents can discover available Fabric patterns programmatically.
- List Strategies Tool: Exposes a `list_strategies` tool to discover available strategies and their descriptions.
- Research Resources: Exposes research documents as MCP resources in both markdown and PDF formats, accessible via custom URIs (e.g., `markdown://researches/the-prompt-report`, `pdf://researches/the-prompt-report`).
- Resource Subscriptions: Supports subscribing to markdown resource changes, notifying clients when files are modified.
Project Structure
Prerequisites
Docker installed and running.
Build
Build the Docker image locally:
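```bash
docker build -t fabric-mcp-server .
```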
Running the Server
1. Manual Test (Stdio)
To test if the server starts and syncs the repository correctly:
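For example, a minimal run (assuming the `fabric-mcp-server` tag from the build step above):

```bash
docker run -i --rm fabric-mcp-server
```

The `-i` flag keeps stdin open so the MCP stdio transport can talk to the server.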
Note: The server communicates via JSON-RPC over stdin/stdout. You will see logs on stderr and can interact via the MCP Inspector.
2. Access via MCP Gateway (Claude Desktop, etc.)
To use this server with a gateway like Claude Desktop, add the following to your claude_desktop_config.json (usually located at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
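A minimal sketch, assuming the locally built `fabric-mcp-server` image and registering the server under the name `fabric` (adjust both to your setup):

```json
{
  "mcpServers": {
    "fabric": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "fabric-mcp-server"]
    }
  }
}
```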
3. Access via Gemini CLI
To use this server with the Gemini CLI, add the following to your ~/.gemini/settings.json:
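The shape is similar; a sketch assuming the same local image tag (merge it into your existing settings rather than replacing the file):

```json
{
  "mcpServers": {
    "fabric": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "fabric-mcp-server"]
    }
  }
}
```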
4. Usage with MCP Gateway "Run" Command
If you are using a gateway that supports running servers via a direct command string, use:
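For example (assuming the locally built image):

```bash
docker run -i --rm fabric-mcp-server
```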
5. Usage via Docker MCP Gateway
If you are using the Docker MCP Gateway CLI, you can run this server directly using:
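One plausible invocation, assuming the server has been registered in a local catalog as the `fabric` entry described in the next section:

```bash
docker mcp gateway run fabric
```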
6. Global Registration (Docker MCP Catalog)
To make this server visible to all Docker MCP clients (like the `docker mcp` CLI) using the configuration files in `~/.docker/mcp/`:

1. Build the Image:

   ```bash
   docker build -t fabric-mcp-server .
   ```

2. Create a Local Catalog: Create a file named `local.yaml` in your catalogs directory (usually `~/.docker/mcp/catalogs/local.yaml`):

   ```yaml
   version: 3
   name: local-catalog
   displayName: Local Catalog
   registry:
     fabric:
       title: Fabric
       description: Fabric patterns and strategies
       type: server
       image: fabric-mcp-server:latest
       tools:
         - execute_pattern
         - list_patterns
         - list_strategies
         - list_research_resources
         - read_research_resource
       prompts: [] # One prompt per pattern in the Fabric repository
       resources: {}
       metadata:
         category: productivity
         tags:
           - fabric
           - ai
           - prompts
         owner: local
   ```

3. Enable the Server: Edit your registry file (usually `~/.docker/mcp/registry.yaml`) and add the `fabric` entry under `registry:`:

   ```yaml
   registry:
     # ... other servers ...
     fabric:
       ref: ""
   ```

4. Verify: Run `docker mcp list` (if available) or simply try running it:

   ```bash
   docker mcp gateway run fabric
   ```
Debugging with MCP Inspector
The MCP Inspector is an interactive developer tool for testing and debugging MCP servers. It allows you to inspect resources, test prompts, execute tools, and monitor server notifications.
Running the Inspector
To debug this Fabric MCP server with MCP Inspector, run:
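One way to do this is to let the Inspector launch the Docker container itself (a sketch assuming the `fabric-mcp-server` image built above):

```bash
npx @modelcontextprotocol/inspector docker run -i --rm fabric-mcp-server
```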
This will launch the Inspector UI in your browser, connecting to the Fabric MCP server running in Docker.
Inspector Features
- Resources Tab: Browse and inspect research documents (markdown and PDF)
- Prompts Tab: Test Fabric patterns with custom inputs and strategies
- Tools Tab: Execute `list_patterns`, `list_strategies`, `execute_pattern`, and the research resource tools
- Notifications Pane: Monitor server logs and notifications
Development Workflow
1. Build the Docker image:

   ```bash
   docker build -t fabric-mcp-server .
   ```

2. Launch the Inspector with the command above.
3. Make changes to your server code.
4. Rebuild the image and reconnect the Inspector to test.
How it Works
- Startup: The server clones `https://github.com/danielmiessler/fabric` into the container.
- Prompts: It scans folders like `extract_wisdom`, `summarize`, etc. in `patterns/`.
- Resources: It scans the `resources/markdown/researches/` and `resources/pdf/researches/` folders for research documents and exposes them as MCP resources with custom URIs.
- Subscriptions: Clients can subscribe to markdown resource changes. The server monitors the markdown resources folder and sends notifications when files are modified (PDF subscriptions are not supported).
- Execution:
  - When you select a prompt (e.g., `extract_wisdom`), it reads the `system.md` file.
  - If you provide a `strategy` (e.g., `cot`), it fetches the strategy JSON, extracts the prompt content, and prepends it to the system message.
  - The `input` argument content is appended to the end of the prompt.
- Tools:
  - Use `execute_pattern` to run any Fabric pattern with content and an optional strategy (as shown in the example below).
  - Use `list_patterns` to discover all available Fabric patterns.
  - Use `list_strategies` to discover available strategies and their descriptions.
  - Use `list_research_resources` to discover all available research documents.
  - Use `read_research_resource` to read a specific research document by name (supports both human-readable names like "The Prompt Report" and slugs like "the-prompt-report").
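For illustration, an `execute_pattern` tool call over MCP might look like the following JSON-RPC request. This is a sketch: the `tools/call` envelope is standard MCP, but the exact argument key names (`pattern`, `input`, `strategy`) are assumptions based on the prompt arguments described below.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute_pattern",
    "arguments": {
      "pattern": "summarize",
      "input": "https://example.com/article",
      "strategy": "cot"
    }
  }
}
```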
MCP Prompts, Tools, and Resources
Important Note: In the Model Context Protocol (MCP):
- Prompts are templates exposed by servers for the client (you) to use.
- Tools are functions the AI agent can call directly.
- Resources are data sources (files, documents) that can be read by clients.

AI agents (LLMs) primarily see Tools; they may not have direct visibility into user-facing Prompts (like `analyze_bill_short` or `youtube_summary`) from connected MCP servers. Those are typically accessed via your client's UI (e.g., typing `/` in your chat interface with Claude Desktop). This is why we expose:

- Prompts: For clients that support the `/` command interface
- `execute_pattern`: So AI agents can run any pattern programmatically via natural language
- `list_patterns`: So AI agents can discover available patterns
- `list_research_resources`: So AI agents can discover available research documents
- `read_research_resource`: So AI agents can read research documents via natural language
- Resources: Research documents accessible via `markdown://researches/{title}` and `pdf://researches/{title}` URIs
Usage Examples
Using Prompts via Client UI
MCP Prompts are accessed through your client's UI. For example, in Claude Desktop, type `/` followed by the pattern name:

| Client Action | Result |
|---|---|
| Type `/` followed by a pattern name | Invokes that pattern with prompts for `input` and `strategy` |
| Type `/summarize` | Invokes the summarize pattern |
| Type `/extract_wisdom` | Invokes the extract wisdom pattern |
Using Tools via Natural Language
AI agents can use the available tools to discover and execute patterns:
| Natural Language Prompt | Tool Called | Arguments |
|---|---|---|
| "What Fabric patterns are available?" | `list_patterns` | - |
| "Show me the available strategies" | `list_strategies` | - |
| "What research documents are available?" | `list_research_resources` | - |
| "List all research papers" | `list_research_resources` | - |
| "Read the Prompt Report research paper" | `read_research_resource` | "The Prompt Report" |
| "Show me the content of the prompt report" | `read_research_resource` | "the prompt report" |
| "Summarize the key findings from the prompt report" | `read_research_resource` | "the-prompt-report" |
| "Create a micro summary of https://youtu.be/abc123" | `execute_pattern` | `create_micro_summary` pattern, the URL as input |
| "Summarize this article with strategy cot: https://example.com/article" | `execute_pattern` | `summarize` pattern, the URL as input, `cot` strategy |
| "Extract wisdom using chain-of-thought from this text: [text]" | `execute_pattern` | `extract_wisdom` pattern, the text as input, `cot` strategy |
Using Resources
MCP Resources are data sources exposed by the server. Research documents are available in both markdown and PDF formats via custom URIs:
| Resource URI | Description |
|---|---|
| `markdown://researches/the-prompt-report` | The Prompt Report (Markdown format) |
| `pdf://researches/the-prompt-report` | The Prompt Report (PDF format) |
Accessing Resources via Client UI
MCP Resources are accessed differently than Prompts. In Gemini CLI, use the @ prefix to reference and embed resource content into your chat:
| Feature | Access Method | Function |
|---|---|---|
| MCP Prompts | `/` command | Runs a pre-defined prompt template |
| MCP Resources | `@` prefix | Fetches and attaches data (files, docs) to your message |
| Client Action | Result |
|---|---|
| Type `@markdown://researches/the-prompt-report` | Embeds the markdown research document into your chat context |
| Type `@pdf://researches/the-prompt-report` | Embeds the PDF research document into your chat context |
Tip: Type @ in Gemini CLI to see auto-completion for all available resources from connected MCP servers.
Example usage:
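A sketch of what this could look like in a Gemini CLI session (the exact resource reference shown by auto-completion may differ):

```
> @markdown://researches/the-prompt-report What are the key findings of this paper?
```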
Adding New Resources
To add new research documents:
- Place markdown files in `resources/markdown/researches/`
- Place PDF files in `resources/pdf/researches/`
Files will be automatically discovered and exposed with a URL-friendly slug based on the filename.
Resource Subscriptions
Clients can subscribe to markdown resource changes to receive notifications when files are modified:
- Subscribe: Call `resources/subscribe` with a markdown resource URI to start receiving change notifications.
- Unsubscribe: Call `resources/unsubscribe` to stop receiving notifications.
When a subscribed markdown file is modified, the server sends a `notifications/resources/updated` notification to the client.
Note: PDF resources do not support subscriptions.
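For reference, a subscribe request uses the standard MCP `resources/subscribe` method with one of the markdown URIs above, e.g.:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/subscribe",
  "params": {
    "uri": "markdown://researches/the-prompt-report"
  }
}
```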
Prompt Arguments
When invoking a prompt (via / command), you can provide:
| Argument | Required | Description | Example |
|---|---|---|---|
| `input` | Yes | The content to analyze (text, URL, etc.) | `https://example.com/article` |
| `strategy` | No | Thinking strategy to enhance analysis | `cot` |
Tip: For best results when using strategies, include phrases like:
"with strategy cot"
"using strategy tot"
"apply cot strategy"