Provides comprehensive intelligence and analysis for YouTube, enabling semantic search across transcripts, visual content indexing with OCR and scene descriptions, audience sentiment tracking, and the extraction of structured benchmark data and competitive trends.
## What is VidLens?
Stop watching 10 videos to answer one question. VidLens searches YouTube, reads the transcripts, and synthesizes what creators actually said across multiple videos, with timestamps, benchmark charts, and sources.

VidLens is a Model Context Protocol server that gives AI agents deep, reliable access to YouTube. Not just transcripts, but full intelligence: search, analysis, visual search, and auto-generated comparison charts.

No API key required to start. Every tool has a three-tier fallback chain (YouTube API → yt-dlp → page extraction), so nothing breaks when quota runs out or keys aren't configured.
Try it: paste any of these into Claude:
> "I'm thinking about buying the M5 Max MacBook Pro. Search YouTube for top tech reviewers and tell me what they're saying. Is it worth the upgrade from M3/M4?"

VidLens finds 10+ reviews, reads the transcripts, extracts benchmark scores, and presents comparison charts, all from one prompt.

> "I want to understand how AI agents work. Search YouTube for the best videos for a beginner and summarize what I need to know."

Discovers videos across creators, ranks by learning value, and prepares transcripts for follow-up questions.

> "Search YouTube for reviews comparing the iPhone 17 Pro vs Samsung S26 Ultra. What do reviewers agree on? Where do they disagree?"

Searches, reads transcripts from multiple reviewers, and synthesizes consensus vs. disagreements, with sources.
## Core Capabilities
### Explore: One Prompt, Full Pipeline

Ask a question about YouTube and VidLens does the rest: searches, ranks by creator match and freshness, reads transcripts, extracts benchmark data, and presents comparison charts automatically. Works for product research, learning, competitive analysis, or anything else on YouTube.
### Semantic Search Across Playlists

Import entire playlists or video sets, index every transcript with Gemini embeddings, and search across hundreds of hours of content by meaning, not just keywords.
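As a sketch of how meaning-based retrieval over indexed transcripts can work, here is a minimal, hypothetical example (the types and function names are illustrative, not VidLens's actual internals): each transcript chunk is stored with an embedding vector, and queries are ranked by cosine similarity.

```typescript
// Hypothetical sketch of semantic retrieval over embedded transcript chunks.
// Types and names are illustrative; they are not VidLens's real API.
type IndexedChunk = {
  videoId: string;
  startSec: number;    // timestamp where the chunk begins
  text: string;
  embedding: number[]; // e.g. a 768-dim Gemini embedding
};

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Rank indexed chunks against an embedded query and keep the best matches.
function searchChunks(queryEmbedding: number[], index: IndexedChunk[], topK = 3): IndexedChunk[] {
  return [...index]
    .sort((x, y) => cosine(queryEmbedding, y.embedding) - cosine(queryEmbedding, x.embedding))
    .slice(0, topK);
}
```

Because each match carries a `videoId` and `startSec`, results can point back to exact moments in the source videos.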
### Visual Search: See What's in Videos

Extract keyframes, describe them with Gemini Vision, run OCR on slides and whiteboards, and search by what you see, not just what's said.
### Intelligence Layer: Not Just Data

Sentiment analysis, niche trend discovery, content gap detection, hook pattern analysis, upload timing recommendations. The LLM does the thinking; VidLens gives it the right data.
### Zero Config, Always Works
No API key needed to start. Three-tier fallback chain on every tool. Nothing breaks when quota runs out. Keys are optional power-ups.
### Full Media Pipeline
Download videos/audio/thumbnails. Extract keyframes. Index comments for semantic search. Build a local knowledge base from any YouTube content.
## Quick Start
### 1. Install
```shell
npx vidlens-mcp setup
```

This auto-detects your MCP clients (Claude Desktop, Claude Code) and configures both.
### 2. Or configure manually
Claude Desktop: add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "vidlens-mcp": {
      "command": "npx",
      "args": ["-y", "vidlens-mcp", "serve"]
    }
  }
}
```

Claude Code: add to `~/.claude/settings.json`:
```json
{
  "mcpServers": {
    "vidlens-mcp": {
      "command": "npx",
      "args": ["-y", "vidlens-mcp", "serve"]
    }
  }
}
```

### 3. Restart your MCP client
Fully quit and reopen Claude Desktop (⌘Q). Claude Code picks up changes automatically.
### 4. Try it
Start with "Search YouTube" to activate VidLens:
- "Search YouTube for the top M5 Max MacBook Pro reviews and tell me if it's worth upgrading from M4."
- "Search YouTube for the best videos about agentic AI for a beginner."
- "Import this playlist and search across all videos for mentions of machine learning."
- "Search this video's frames for the benchmark comparison chart."
- "What's trending in the AI coding niche right now?"
## Tools (41 across 10 modules)
### Explore: YouTube Discovery & Research

The front door: one prompt, full pipeline.

| Tool | What it does |
| --- | --- |
| | Intent-aware search with multi-query ranking, parallel enrichment, transcript summaries, structured benchmark data, and background indexing. One call replaces 5-8 individual tool calls. |
### Core: Video & Channel Intelligence

Always available, no API key needed.

| Tool | What it does |
| --- | --- |
| | Search YouTube by query with metadata |
| | Deep metadata: tags, engagement, language, category |
| | Channel stats, description, recent uploads |
| | Browse a channel's full video library |
| | Full transcript with timestamps and chapters |
| | Top comments with likes and engagement |
| | List all videos in any playlist |
### Knowledge Base: Semantic Search

Index transcripts and search across them with natural language.

| Tool | What it does |
| --- | --- |
| | Index an entire playlist's transcripts |
| | Index specific videos by URL/ID |
| | Natural language search across indexed content |
| | Browse your indexed collections |
| | Scope searches to one collection |
| | Search across all collections |
| | Delete a collection and its index |
### Sentiment & Analysis

Understand what audiences think and feel.

| Tool | What it does |
| --- | --- |
| | Comment sentiment with themes and risk signals |
| | Compare performance across multiple videos |
| | Playlist-level engagement analytics |
| | Complete single-video deep analysis |
### Creator Intelligence

Insights for content strategy.

| Tool | What it does |
| --- | --- |
| | Analyze what makes video openings work |
| | Tag and title optimization insights |
| | Short-form vs long-form performance |
| | Best times to publish for engagement |
### Discovery & Trends

Find what's working in any niche.

| Tool | What it does |
| --- | --- |
| | Momentum, saturation, content gaps in any topic |
| | Channel landscape and top performers |
### Media Assets

Download and manage video files locally.

| Tool | What it does |
| --- | --- |
| | Download video, audio, or thumbnails |
| | Browse stored media files |
| | Clean up downloaded assets |
| | Extract key frames from videos |
| | Storage usage and diagnostics |
### Visual Search

Three-layer visual intelligence, not transcript reuse.

| Tool | What it does |
| --- | --- |
| | Extract frames, run Apple Vision OCR + feature prints, Gemini frame descriptions, and Gemini semantic embeddings |
| | Search visual frames using semantic embeddings + lexical matching; returns actual image paths + timestamps as evidence |
| | Image-to-image frame similarity using Apple Vision feature prints |
Three layers, all real:

- Apple Vision feature prints: image-to-image similarity (find frames that look alike)
- Gemini 2.5 Flash frame descriptions: natural-language scene understanding per frame
- Gemini semantic embeddings: 768-dimensional embedding retrieval over OCR + description text for true text-to-visual search
What you always get back: frame path on disk, timestamp, source video URL/title, match explanation, OCR text, and visual description.

What is NOT happening: transcript embeddings are never reused for visual search. This is a separate visual index.
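A minimal sketch of how embedding similarity and lexical matching over OCR/description text might be combined. All names are hypothetical and the 0.5 weight is illustrative, not VidLens's actual scoring:

```typescript
// Hypothetical sketch: rank visual frames by semantic-embedding similarity
// plus a lexical boost over OCR text and frame descriptions.
type Frame = {
  path: string;         // frame image on disk
  timestampSec: number;
  ocrText: string;
  description: string;
  embedding: number[];
};

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Fraction of query words that appear in the frame's OCR text or description.
function lexicalBoost(query: string, frame: Frame): number {
  const words = query.toLowerCase().split(/\s+/).filter(Boolean);
  const haystack = `${frame.ocrText} ${frame.description}`.toLowerCase();
  return words.length ? words.filter((w) => haystack.includes(w)).length / words.length : 0;
}

// Combine both signals; the 0.5 weight is an arbitrary illustrative choice.
function rankFrames(queryEmbedding: number[], query: string, frames: Frame[]) {
  return frames
    .map((frame) => ({
      frame,
      score: cosine(queryEmbedding, frame.embedding) + 0.5 * lexicalBoost(query, frame),
    }))
    .sort((a, b) => b.score - a.score);
}
```

Each result keeps the full `Frame`, so a caller can surface the image path and timestamp as evidence alongside the match.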
### Comment Knowledge Base

Index and semantically search YouTube comments.

| Tool | What it does |
| --- | --- |
| | Index a video's comments for search |
| | Natural language search over comment corpus |
| | Browse comment collections |
| | Scope comment searches |
| | Search all comment collections |
| | Delete a comment collection |
### Diagnostics

Health checks and pre-flight validation.

| Tool | What it does |
| --- | --- |
| | Full system diagnostic report |
| | Validate before importing content |
## API Keys (Optional)
VidLens works without any API keys. Add them to unlock more capabilities:
| Key | What it unlocks | Free? | How to get it |
| --- | --- | --- | --- |
| `YOUTUBE_API_KEY` | Better metadata, comment API, search via YouTube API | Yes, free tier (10,000 units/day) | Google Cloud Console → APIs → Enable YouTube Data API v3 → Credentials → Create API Key |
| `GEMINI_API_KEY` | Higher-quality embeddings for semantic search (768d vs 384d) | Yes, free tier | Google AI Studio → Get API Key |

⚠️ These are separate keys from separate Google services. A Gemini key will NOT work for YouTube API calls, and vice versa. Create them independently.
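As an illustration of how such optional keys typically gate capability, here is a hypothetical sketch (not VidLens's actual configuration code) choosing an embedding backend at startup based on whether `GEMINI_API_KEY` is set:

```typescript
// Hypothetical sketch: pick an embedding backend from the environment.
// The 768d vs 384d split mirrors the key table above; provider names are illustrative.
function embeddingConfig(env: Record<string, string | undefined>): { provider: string; dims: number } {
  return env.GEMINI_API_KEY
    ? { provider: "gemini", dims: 768 } // higher-quality hosted embeddings
    : { provider: "local", dims: 384 }; // local fallback model
}
```

The point of this shape is that a missing key degrades quality, never availability: both branches return a working configuration.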
```shell
# Configure via setup wizard
npx vidlens-mcp setup --youtube-api-key YOUR_YOUTUBE_KEY --gemini-api-key YOUR_GEMINI_KEY

# Or via environment variables
export YOUTUBE_API_KEY=your_youtube_key
export GEMINI_API_KEY=your_gemini_key
```

## CLI
```shell
npx vidlens-mcp            # Start MCP server (stdio)
npx vidlens-mcp serve      # Start MCP server (explicit)
npx vidlens-mcp setup      # Auto-configure Claude Desktop + Claude Code
npx vidlens-mcp doctor     # Run diagnostics
npx vidlens-mcp version    # Print version
npx vidlens-mcp help       # Usage guide
```

### Doctor: diagnose issues

```shell
npx vidlens-mcp doctor --no-live
```

Checks: Node.js version, yt-dlp availability, API key validation, data directory health, and MCP client registration (Claude Desktop, Claude Code).
## Works Everywhere: Desktop, Cowork, Phone
VidLens works across the full Claude ecosystem. Set it up once, use it everywhere.
### Claude Desktop: Chat
The classic experience. Ask a question, get charts and analysis inline. Best for interactive research sessions.
### Claude Desktop: Cowork Projects (March 2026)

Create a persistent research project with VidLens connected. Claude remembers context across sessions, so last week's competitive research informs this week's analysis. Set up scheduled tasks that run automatically:
"Every Monday, search YouTube for new AI agent framework videos and compare to last week's findings."
### Claude Dispatch: From Your Phone (March 2026)
Trigger any VidLens research from the Claude mobile app. Ask from your phone, Claude Desktop runs the tools locally, results come back to your pocket:
"Run my competitive research project: what new M5 Max content dropped this weekend?"
### Claude Code: Remote Control

Start a Claude Code session with `claude --remote-control`, then continue from any browser or your phone at claude.ai/code. Full tool access, full context.
Note: Your Mac must be awake with Claude Desktop open for Cowork, Dispatch, and scheduled tasks to execute.
## Architecture
### System Overview
### How the Fallback Chain Works
Every tool that touches YouTube data uses the same resilience pattern:
Every response includes a provenance field telling you exactly which tier served the data and whether anything was partial. No silent degradation: you always know what happened.
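The pattern described above can be sketched as follows. This is a hypothetical illustration of the tier-then-provenance shape, not VidLens's actual source:

```typescript
// Hypothetical sketch of a tiered fallback with provenance reporting.
type Tier = "youtube-api" | "yt-dlp" | "page-extraction";
type Provenance = { tier: Tier; partial: boolean };
type Result<T> = { data: T; provenance: Provenance };

async function withFallback<T>(
  tiers: Array<{ name: Tier; fetch: () => Promise<T> }>,
): Promise<Result<T>> {
  let lastError: unknown;
  for (const { name, fetch } of tiers) {
    try {
      // First tier that succeeds wins; its name is recorded as provenance.
      return { data: await fetch(), provenance: { tier: name, partial: false } };
    } catch (err) {
      lastError = err; // remember the failure and try the next tier
    }
  }
  throw lastError; // every tier failed
}
```

If the YouTube API tier fails on quota, the same call falls through to yt-dlp, and the response's `provenance.tier` tells the agent which source actually answered.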
### Visual Search Pipeline

Visual search is not transcript reuse. It's a dedicated three-layer index:

- Apple Vision feature prints: image-to-image similarity (find frames that look alike)
- Gemini Vision frame descriptions: natural-language scene understanding per frame
- Gemini semantic embeddings: 768-dimensional retrieval over OCR + description text
### Data Storage

Everything lives in a single directory: no external databases, no Docker, no infrastructure. One directory, portable. Back it up by copying it; delete it to start fresh.
## Requirements
| Requirement | Status | Notes |
| --- | --- | --- |
| Node.js ≥ 22 | Required | |
| yt-dlp | Recommended | |
| ffmpeg | Optional | Needed for frame extraction and visual indexing |
| YouTube API key | Optional | Unlocks comments, better metadata |
| Gemini API key | Optional | Upgrades transcript embeddings and frame descriptions for visual search |
| macOS Apple Vision | Automatic on macOS | Powers native OCR and image similarity for visual search |
## Troubleshooting
### "Tool not found" in Claude Desktop
Fully quit Claude Desktop (⌘Q, not just closing the window) and reopen it. MCP servers only load on startup.
### "YOUTUBE_API_KEY not configured" warning
This is informational, not an error. VidLens works without it. Add a key only if you need comments/sentiment features.
### "API_KEY_SERVICE_BLOCKED" error
Your API key has restrictions. Create a new unrestricted key in Google Cloud Console, or remove the API restriction from the existing key.
### Gemini key doesn't work for YouTube API
These are separate services. You need a YouTube API key from Google Cloud Console AND a Gemini key from Google AI Studio. They are not interchangeable.
### Build errors

```shell
npx vidlens-mcp doctor            # Run diagnostics
npx vidlens-mcp doctor --no-live  # Skip network checks
```

## License
MIT