# doc-lib-mcp MCP server
A Model Context Protocol (MCP) server for document ingestion, chunking, semantic search, and note management.
## Components

### Resources

- Implements a simple note storage system with:
  - Custom `note://` URI scheme for accessing individual notes
  - Each note resource has a name, description, and `text/plain` mimetype
### Prompts

- Provides a prompt:
  - `summarize-notes`: Creates summaries of all stored notes
    - Optional "style" argument to control detail level (brief/detailed)
    - Generates a prompt combining all current notes with the style preference
### Tools

The server implements a wide range of tools:
- `add-note`: Add a new note to the in-memory note store
  - Arguments: `name` (string), `content` (string)
- `ingest-string`: Ingest and chunk a markdown or plain text string provided via message
  - Arguments: `content` (string, required), `source` (string, optional), `tags` (list of strings, optional)
- `ingest-markdown`: Ingest and chunk a markdown (.md) file
  - Arguments: `path` (string)
- `ingest-python`: Ingest and chunk a Python (.py) file
  - Arguments: `path` (string)
- `ingest-openapi`: Ingest and chunk an OpenAPI JSON file
  - Arguments: `path` (string)
- `ingest-html`: Ingest and chunk an HTML file
  - Arguments: `path` (string)
- `ingest-html-url`: Ingest and chunk HTML content from a URL (optionally using Playwright for dynamic content)
  - Arguments: `url` (string), `dynamic` (boolean, optional)
- `smart_ingestion`: Extracts all technically relevant content from a file using Gemini, then chunks it using robust markdown logic.
  - Arguments:
    - `path` (string, required): File path to ingest.
    - `prompt` (string, optional): Custom prompt to use for Gemini.
    - `tags` (list of strings, optional): Optional list of tags for classification.
  - Uses Gemini 2.0 Flash 001 to extract only code, configuration, markdown structure, and technical definitions (no summaries or commentary).
  - Passes the extracted content to a mistune 3.x-based chunker that preserves both code blocks and markdown/narrative content as separate chunks.
  - Each chunk is embedded and stored for semantic search and retrieval.
- `search-chunks`: Semantic search over ingested content
  - Arguments:
    - `query` (string): The semantic search query.
    - `top_k` (integer, optional, default 3): Number of top results to return.
    - `type` (string, optional): Filter results by chunk type (e.g., `code`, `html`, `markdown`).
    - `tag` (string, optional): Filter results by tag in chunk metadata.
  - Returns the most relevant chunks for a given query, optionally filtered by type and/or tag.
- `delete-source`: Delete all chunks from a given source
  - Arguments: `source` (string)
- `delete-chunk-by-id`: Delete one or more chunks by id
  - Arguments: `id` (integer, optional), `ids` (list of integers, optional)
  - You can delete a single chunk by specifying `id`, or delete multiple chunks at once by specifying `ids`.
- `update-chunk-type`: Update the type attribute for a chunk by id
  - Arguments: `id` (integer, required), `type` (string, required)
- `ingest-batch`: Ingest and chunk multiple documentation files (markdown, OpenAPI JSON, Python) in batch
  - Arguments: `paths` (list of strings)
- `list-sources`: List all unique sources (file paths) that have been ingested and stored in memory, with optional filtering by tag or semantic search.
  - Arguments:
    - `tag` (string, optional): Filter sources by tag in chunk metadata.
    - `query` (string, optional): Semantic search query to find relevant sources.
    - `top_k` (integer, optional, default 10): Number of top sources to return when using query.
- `get-context`: Retrieve relevant content chunks (content only) for use as AI context, with filtering by tag, type, and semantic similarity.
  - Arguments:
    - `query` (string, optional): The semantic search query.
    - `tag` (string, optional): Filter results by a specific tag in chunk metadata.
    - `type` (string, optional): Filter results by chunk type (e.g., `code`, `markdown`).
    - `top_k` (integer, optional, default 5): The number of top relevant chunks to retrieve.
- `update-chunk-metadata`: Update the metadata field for a chunk by id
  - Arguments: `id` (integer), `metadata` (object)
- `tag-chunks-by-source`: Adds specified tags to the metadata of all chunks associated with a given source (URL or file path). Merges with existing tags.
  - Arguments: `source` (string), `tags` (list of strings)
- `list-notes`: List all currently stored notes and their content.
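
These tools are called over MCP like any other server's tools. Below is a minimal sketch using the official MCP Python SDK; the launch command (`uv run doc-lib-mcp`) and the ingested file path are illustrative assumptions, not values taken from this README.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Assumed launch command; adjust to however you start the server.
    server = StdioServerParameters(command="uv", args=["run", "doc-lib-mcp"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Ingest a markdown file, then search the stored chunks.
            await session.call_tool("ingest-markdown", {"path": "docs/guide.md"})
            result = await session.call_tool(
                "search-chunks",
                {"query": "authentication flow", "top_k": 3, "type": "code"},
            )
            print(result.content)

asyncio.run(main())
```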
## Chunking and Code Extraction

- Markdown, Python, OpenAPI, and HTML files are split into logical chunks for efficient retrieval and search.
- The markdown chunker uses mistune 3.x's AST API and regex to robustly split content by code blocks and narrative, preserving all original formatting.
- Both code blocks and markdown/narrative content are preserved as separate chunks.
- The HTML chunker uses the `readability-lxml` library to extract the main content first, then extracts block code snippets from `<pre>` tags as dedicated "code" chunks. Inline `<code>` content remains part of the narrative chunks.
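
As a rough illustration of the splitting strategy (not the server's actual implementation, which also walks mistune's AST), a regex-only sketch that separates fenced code blocks from narrative might look like:

```python
import re

# Matches three-backtick fenced code blocks at line starts, non-greedily.
FENCE_RE = re.compile(r"^`{3}.*?\n[\s\S]*?^`{3}[ \t]*$", re.MULTILINE)

def chunk_markdown(text: str) -> list[dict]:
    """Split markdown into alternating narrative and code chunks,
    preserving the original formatting of each span."""
    chunks: list[dict] = []
    pos = 0
    for match in FENCE_RE.finditer(text):
        narrative = text[pos:match.start()].strip()
        if narrative:
            chunks.append({"type": "markdown", "content": narrative})
        chunks.append({"type": "code", "content": match.group(0)})
        pos = match.end()
    tail = text[pos:].strip()
    if tail:
        chunks.append({"type": "markdown", "content": tail})
    return chunks
```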
## Semantic Search

- The `search-chunks` tool performs vector-based semantic search over all ingested content, returning the most relevant chunks for a given query.
- Supports optional `type` and `tag` arguments to filter results by chunk type (e.g., `code`, `html`, `markdown`) and/or by tag in chunk metadata, before semantic ranking.
- This enables highly targeted retrieval, such as "all code chunks tagged with 'langfuse' relevant to 'cost and usage'".
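
To make the filter-then-rank semantics concrete, here is a hedged sketch; the server's actual storage layer and embedding model are not described in this README, so the chunk record shape (`embedding`, `type`, `metadata`) is assumed:

```python
import numpy as np

def rank_chunks(query_vec, chunks, top_k=3, chunk_type=None, tag=None):
    """Filter chunks by type/tag, then rank by cosine similarity.
    `chunk_type` and `tag` mirror the tool's `type` and `tag` arguments."""
    def keep(chunk):
        if chunk_type and chunk["type"] != chunk_type:
            return False
        if tag and tag not in chunk["metadata"].get("tags", []):
            return False
        return True

    def score(chunk):
        q = np.asarray(query_vec)
        v = np.asarray(chunk["embedding"])
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))

    return sorted(filter(keep, chunks), key=score, reverse=True)[:top_k]
```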
## Metadata Management

- Chunks include a `metadata` field for categorization and tagging.
- The `update-chunk-metadata` tool allows updating metadata for any chunk by its id.
- The `tag-chunks-by-source` tool allows adding tags to all chunks from a specific source in one operation. Tagging merges new tags with existing ones, preserving previous tags.
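
The merge behavior can be pictured as a set union on the `tags` key (assumed semantics based on the description above; other metadata keys are left untouched):

```python
def merge_tags(metadata: dict, new_tags: list[str]) -> dict:
    """Return a copy of `metadata` with `new_tags` merged into its tag list."""
    merged = dict(metadata)
    merged["tags"] = sorted(set(metadata.get("tags", [])) | set(new_tags))
    return merged

# Existing tags are preserved, new ones added, duplicates collapsed.
assert merge_tags({"tags": ["langfuse"]}, ["cost", "langfuse"])["tags"] == ["cost", "langfuse"]
```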
## Configuration
[TODO: Add configuration details specific to your implementation]
## Quickstart

### Install

#### Claude Desktop

On MacOS: `~/Library/Application\ Support/Claude/claude_desktop_config.json`

On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
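
A typical entry in that file looks like the sketch below; the server name, command, and path are assumptions to adapt to your checkout:

```json
{
  "mcpServers": {
    "doc-lib-mcp": {
      "command": "uv",
      "args": ["--directory", "/path/to/doc-lib-mcp", "run", "doc-lib-mcp"]
    }
  }
}
```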
## Development

### Building and Publishing

To prepare the package for distribution (the corresponding `uv` commands are sketched after this list):

1. Sync dependencies and update the lockfile.
2. Build package distributions. This will create source and wheel distributions in the `dist/` directory.
3. Publish to PyPI.

Note: You'll need to set PyPI credentials via environment variables or command flags:

- Token: `--token` or `UV_PUBLISH_TOKEN`
- Or username/password: `--username`/`UV_PUBLISH_USERNAME` and `--password`/`UV_PUBLISH_PASSWORD`
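
A sketch of those commands, assuming the standard `uv` packaging workflow:

```sh
uv sync      # 1. sync dependencies and update the lockfile
uv build     # 2. create source and wheel distributions in dist/
uv publish   # 3. upload to PyPI (requires the credentials noted above)
```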
### Debugging

Since MCP servers run over stdio, debugging can be challenging. For the best debugging experience, we strongly recommend using the MCP Inspector.

You can launch the MCP Inspector via `npm` with a command like the following (the run arguments are an assumption; match them to how you launch the server):
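
```sh
# Assumed launch command; point the Inspector at however you run doc-lib-mcp.
npx @modelcontextprotocol/inspector uv run doc-lib-mcp
```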
Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.