llms-txt-mcp
Fast documentation access for Claude Code via llms.txt parsing.
The Problem
Ever seen a "token limit exceeded" error when your documentation server tries to load a large llms.txt?
You're not alone. That's mcpdoc failing on AI SDK documentation.
mcpdoc fails at scale:
- 🐌 5+ second structure discovery
- 💣 1,500 tokens wasted just to list sections
- ❌ Timeouts on files like AI SDK's 30K+ line llms.txt
- 🗑️ Context pollution - your conversation drowns in documentation dumps
AI SDK's documentation (ai-sdk.dev/llms.txt) breaks mcpdoc due to size.
The Problem in Action
Here's what happens when you try to get AI SDK documentation for building a chatbot:
mcpdoc: Token Limit Exceeded
Result: 251,431 tokens attempted → Token limit exceeded
Context7: Drowning in Noise
Result: 15,000 tokens of context pollution
llms-txt-mcp
Result: <100 tokens
Why This Exists
Built to solve the problem of large documentation files timing out or consuming excessive tokens.
Solution
Operation | mcpdoc | Context7 | llms-txt-mcp |
---|---|---|---|
AI SDK Chatbot Docs | 251,431 tokens → ERROR | 15,000 tokens | <100 tokens |
Structure Discovery | 5+ seconds | 2-3 seconds | Fast |
Context Usage | Fails completely | 15K tokens | 50 tokens |
Large File Support | Timeouts | Truncates | Streams |
AI SDK llms.txt (30K+ lines) | Fails | Partial | 132 sections |
Token Usage
Quick Start
For Claude Desktop:
Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
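A minimal sketch of such an entry, assuming the server is launched with `uvx` and takes llms.txt URLs as positional arguments (both assumptions about how this server is run; the `mcpServers` shape itself is the standard Claude Desktop config format):

```json
{
  "mcpServers": {
    "llms-txt-mcp": {
      "command": "uvx",
      "args": ["llms-txt-mcp", "https://ai-sdk.dev/llms.txt"]
    }
  }
}
```

Adjust the command and args to match however you actually install and run the server.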
That's it. Claude Code can now access AI SDK docs instantly.
How It Works
Key insight: Search first, fetch later. Never dump entire documentation.
- Parse: Handles both AI SDK's YAML frontmatter and standard markdown
- Index: Embeds sections with `BAAI/bge-small-en-v1.5`
- Search: Semantic search returns top-k results (default: 10)
- Get: Fetch exactly what you need with byte-capped responses
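As a rough illustration of this search-first flow (not the server's actual implementation), the sketch below embeds pre-parsed sections with the same `BAAI/bge-small-en-v1.5` model, answers queries with top-k semantic search, and byte-caps the final fetch. The section contents and helper names are made up for the example.

```python
# Illustrative sketch only: NOT the server's code, just the search-first idea
# (embed sections once, search, then fetch a single byte-capped section).
from sentence_transformers import SentenceTransformer, util

# Hypothetical pre-parsed sections: {canonical_id: section_text}
sections = {
    "https://ai-sdk.dev/llms.txt#rag-agent": "Build a RAG agent with ...",
    "https://ai-sdk.dev/llms.txt#chatbot": "Stream chat completions ...",
}

model = SentenceTransformer("BAAI/bge-small-en-v1.5")  # same model the README names
ids = list(sections)
corpus = model.encode([sections[i] for i in ids], normalize_embeddings=True)

def search(query: str, k: int = 10):
    """Return the top-k section IDs for a query instead of dumping all docs."""
    q = model.encode(query, normalize_embeddings=True)
    hits = util.semantic_search(q, corpus, top_k=k)[0]
    return [(ids[h["corpus_id"]], h["score"]) for h in hits]

def get(section_id: str, max_bytes: int = 75_000) -> str:
    """Fetch exactly one section, byte-capped to protect the context window."""
    return sections[section_id].encode()[:max_bytes].decode(errors="ignore")
```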
Features
🚀 Instant Startup
- Lazy model loading for fast server startup
- Preindexing with stale source detection
- Background indexing - server available immediately
🎯 Surgical Access
- Search first - find relevant sections without dumping everything
- Byte-capped responses - protect your context window (default: 75KB)
- Human-readable IDs - use canonical URLs like `https://ai-sdk.dev/llms.txt#rag-agent`
📦 Zero Config Required
🔄 Smart Caching
- TTL-based refresh (default: 24h)
- ETag/Last-Modified validation
- Persistent storage option for instant subsequent starts
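A minimal sketch of the validation idea, assuming a plain `requests`-based fetch (not the server's actual code): skip the network entirely while the TTL is fresh, and send `If-None-Match` / `If-Modified-Since` so an unchanged source costs only a 304.

```python
# Sketch of TTL + ETag/Last-Modified cache validation (illustrative only).
import time
import requests

TTL_SECONDS = 24 * 60 * 60  # default 24h refresh, per the README

def refresh(url: str, cache: dict) -> dict:
    """cache = {'fetched_at', 'etag', 'last_modified', 'body'} or {} on first run."""
    if cache and time.time() - cache["fetched_at"] < TTL_SECONDS:
        return cache  # still fresh: no network round-trip

    headers = {}
    if cache.get("etag"):
        headers["If-None-Match"] = cache["etag"]
    if cache.get("last_modified"):
        headers["If-Modified-Since"] = cache["last_modified"]

    resp = requests.get(url, headers=headers, timeout=30)
    if resp.status_code == 304:           # unchanged: keep the cached body
        cache["fetched_at"] = time.time()
        return cache

    return {
        "fetched_at": time.time(),
        "etag": resp.headers.get("ETag", ""),
        "last_modified": resp.headers.get("Last-Modified", ""),
        "body": resp.text,
    }
```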
🎨 Claude Code Optimized
- Minimal tool signatures
- Predictable responses
- No timeout surprises
Usage Examples
Search Documentation
Retrieve Specific Sections
List Available Sources
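The calls below are purely illustrative: the tool names, argument names, and return shapes are assumptions, not the server's documented API. They just show the search-then-get pattern described under How It Works.

```
# Hypothetical tool calls (names and fields are assumptions, not the real API)

search(query="streaming chatbot", source="https://ai-sdk.dev/llms.txt")
# -> top-10 matches, e.g. "https://ai-sdk.dev/llms.txt#chatbot" with scores

get(id="https://ai-sdk.dev/llms.txt#chatbot", max_bytes=75000)
# -> just that section, capped at 75 KB

list_sources()
# -> the llms.txt URLs currently indexed, with freshness info
```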
Configuration
Basic (Most Users)
With Options
Advanced Flags
- `--max-get-bytes N` - Byte limit for responses (default: 75000)
- `--embed-model MODEL` - Change embedding model (default: BAAI/bge-small-en-v1.5)
- `--no-preindex` - Disable automatic pre-indexing on startup
- `--no-background-preindex` - Wait for indexing to complete before serving
Note: The default `max-get-bytes` is 75KB. In practice, going 80KB+ can push responses close to a 25,000-token cap in some clients, so 75KB is a safe default.
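As a sketch, these flags would be appended to the args list of the same kind of config entry shown in Quick Start. The flag names come from the list above; the `uvx` command and the positional llms.txt URL are still assumptions about how the server is launched.

```json
{
  "mcpServers": {
    "llms-txt-mcp": {
      "command": "uvx",
      "args": [
        "llms-txt-mcp",
        "--max-get-bytes", "60000",
        "--embed-model", "BAAI/bge-small-en-v1.5",
        "--no-background-preindex",
        "https://ai-sdk.dev/llms.txt"
      ]
    }
  }
}
```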
Performance
Benchmarks on AI SDK llms.txt (30K+ lines, 132 sections):
Metric | Performance |
---|---|
Parse time | Fast (<2s for 30K+ lines) |
Index time (first run) | Fast initial indexing |
Index time (cached) | Instant (0ms) |
Search latency | Fast semantic search |
Memory usage | Lightweight |
Model size | Small embedding model |
When to Use What
Tool | Best For | Avoid When |
---|---|---|
llms-txt-mcp | AI SDK, large docs, Claude Code, search-first access | You need non-llms.txt formats |
mcpdoc | Simple markdown files, small documentation | Large files, AI SDK docs, context matters |
context7 | Broad knowledge base, multiple sources | You need freshness control, deterministic sources |
Development
Setup
Development Workflow
Local Testing with Inspector
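A plausible local workflow, assuming a uv-managed checkout: the repository URL, entry-point name, and Inspector invocation are assumptions, while the three `uv run` commands mirror the Contributing checklist below.

```bash
# Setup (repository URL is illustrative)
git clone https://github.com/your-org/llms-txt-mcp && cd llms-txt-mcp
uv sync

# Development workflow
uv run pytest            # tests
uv run ruff format .     # formatting
uv run mypy src/         # type checks

# Local testing with the MCP Inspector (assumes a llms-txt-mcp CLI entry point)
npx @modelcontextprotocol/inspector uv run llms-txt-mcp https://ai-sdk.dev/llms.txt
```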
Architecture
Key Design Decisions:
- Simple, flat structure following KISS principles
- Streaming parser for large file support
- Lazy loading for instant startup
- Search-first to minimize context usage
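As an illustration of the streaming-parser point (not the project's actual parser), this sketch yields sections one heading at a time, so indexing can begin before the whole 30K+ line file has been read.

```python
# Sketch of a streaming section splitter (illustrative only): yield
# (title, body) pairs as they complete instead of loading the whole file.
from typing import Iterator, TextIO, Tuple

def iter_sections(stream: TextIO) -> Iterator[Tuple[str, str]]:
    title, lines = None, []
    for line in stream:                      # one line at a time
        if line.startswith("#"):             # a new heading closes the previous section
            if title is not None:
                yield title, "".join(lines)
            title, lines = line.strip("# \n"), []
        else:
            lines.append(line)
    if title is not None:                    # flush the final section
        yield title, "".join(lines)

# Usage: sections appear incrementally, so indexing can start immediately.
# with open("llms.txt", encoding="utf-8") as f:
#     for title, body in iter_sections(f):
#         index(title, body)   # hypothetical indexing hook
```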
Contributing
Issues and PRs welcome! Please ensure:
- Tests pass (`uv run pytest`)
- Code is formatted (`uv run ruff format .`)
- Types check (`uv run mypy src/`)
Credits
Built on FastMCP and the Model Context Protocol.
License
MIT - See LICENSE