# LangChain Documentation MCP Server

A comprehensive dual-mode server that provides real-time access to LangChain documentation, API references, and code examples. It runs as either a FastAPI web service or a native Model Context Protocol (MCP) server, fetching live data from official LangChain sources.
## Features

- **Dual Server Modes** - Run as a FastAPI web service or a native MCP server
- **Live Documentation Search** - Search official LangChain documentation in real time
- **API Reference Lookup** - Get detailed API references from GitHub source code
- **GitHub Code Examples** - Fetch real code examples from the LangChain repository
- **Tutorial Discovery** - Find and access LangChain tutorials and guides
- **Version Tracking** - Get the latest version information from PyPI
- **Direct API Search** - Search specifically through API reference documentation
- **MCP Protocol Support** - Native Model Context Protocol implementation
## Data Sources

This server fetches live data from:

- [python.langchain.com](https://python.langchain.com) - Official LangChain documentation
- [GitHub LangChain repository](https://github.com/langchain-ai/langchain) - Source code and examples
- [PyPI](https://pypi.org/project/langchain/) - Latest version and release information
## API Endpoints

### Core Endpoints

- `GET /` - API documentation (Swagger UI)
- `GET /health` - Health check and service status

### LangChain Documentation

- `GET /search` - Search general documentation
- `GET /search/api` - Search the API reference specifically
- `GET /api-reference/{class_name}` - Get a detailed API reference for a class
- `GET /examples/github` - Get real code examples from GitHub
- `GET /tutorials` - Get tutorials and guides
- `GET /latest-version` - Get the latest LangChain version info
## Quick Start

### Option 1: Docker Compose (Recommended)

1. Clone the repository:

   ```bash
   git clone https://github.com/LiteObject/langchain-mcp-server.git
   cd langchain-mcp-server
   ```

2. Start the FastAPI server:

   ```bash
   docker-compose up --build
   ```

3. Access the API:
   - API Documentation: http://localhost:8080/docs
   - Health Check: http://localhost:8080/health
### Option 2: Local Development

#### FastAPI Mode

1. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Run the FastAPI server:

   ```bash
   # Using the main entry point
   python run.py

   # Or using the dedicated script
   python scripts/run_fastapi.py

   # Or directly with uvicorn
   uvicorn src.api.fastapi_app:app --host 0.0.0.0 --port 8000
   ```

#### MCP Server Mode

1. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Run the MCP server:

   ```bash
   # Using the main entry point
   python run.py mcp

   # Or using the dedicated script
   python scripts/run_mcp.py
   ```
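The mode selection in the main entry point can be sketched as follows. This is an illustrative sketch only; the function name and return values are assumptions, not the project's actual `run.py` code:

```python
import sys

def select_mode(argv):
    """Pick the server mode from the command line.

    Passing "mcp" as the first argument selects MCP mode;
    anything else falls back to the FastAPI web service.
    """
    if len(argv) > 1 and argv[1] == "mcp":
        return "mcp"
    return "fastapi"

if __name__ == "__main__":
    mode = select_mode(sys.argv)
    print(f"Starting server in {mode} mode")
    # the real entry point would now launch the chosen server
```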
## Usage Examples

### Search Documentation

```bash
# Search for "ChatOpenAI" in documentation
curl "http://localhost:8080/search?query=ChatOpenAI&limit=5"

# Search API reference specifically
curl "http://localhost:8080/search/api?query=embeddings"
```

### Get API Reference

```bash
# Get detailed API reference for ChatOpenAI
curl "http://localhost:8080/api-reference/ChatOpenAI"

# Get API reference for LLMChain
curl "http://localhost:8080/api-reference/LLMChain"
```

### Fetch Code Examples

```bash
# Get real examples from GitHub
curl "http://localhost:8080/examples/github?query=chatbot&limit=3"

# Get general examples
curl "http://localhost:8080/examples/github"
```

### Get Tutorials

```bash
# Fetch all available tutorials
curl "http://localhost:8080/tutorials"
```

### Version Information

```bash
# Get latest version from PyPI
curl "http://localhost:8080/latest-version"
```

## MCP Server Usage
When running in MCP mode, the server provides the following tools:
### Available MCP Tools

- `search_langchain_docs` - Search LangChain documentation
- `search_api_reference` - Search the API reference specifically
- `get_api_reference` - Get a detailed API reference for a class
- `get_github_examples` - Get code examples from GitHub
- `get_tutorials` - Get available tutorials
- `get_latest_version` - Get the latest LangChain version
### MCP Client Integration

Add the server to your MCP client configuration:

```json
{
  "mcpServers": {
    "langchain-docs": {
      "command": "python",
      "args": ["path/to/langchain-mcp-server/run.py", "mcp"],
      "env": {
        "PYTHONPATH": "path/to/langchain-mcp-server"
      }
    }
  }
}
```

## Configuration
### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| | Server host address | `0.0.0.0` |
| | Server port | `8000` |
| | Enable debug mode | `False` |
| | Logging level | `INFO` |
| | Timeout for external API calls | 30 seconds |
| | GitHub API token (optional) | None |
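A settings loader for variables like those above might look like the sketch below. The variable names used here (`HOST`, `PORT`, `DEBUG`, `LOG_LEVEL`, `REQUEST_TIMEOUT`, `GITHUB_TOKEN`) are illustrative assumptions; check `src/config/settings.py` for the names the project actually reads:

```python
import os

def load_settings(env=None):
    """Build a settings dict from environment variables with defaults.

    NOTE: all variable names here are assumptions for illustration,
    not necessarily the ones the project uses.
    """
    env = os.environ if env is None else env
    return {
        "host": env.get("HOST", "0.0.0.0"),
        "port": int(env.get("PORT", "8000")),
        "debug": env.get("DEBUG", "False").lower() == "true",
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "request_timeout": float(env.get("REQUEST_TIMEOUT", "30")),
        "github_token": env.get("GITHUB_TOKEN"),  # optional, raises rate limits
    }
```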
### Docker Configuration

The service runs on port 8080 by default to avoid conflicts. You can modify this in `docker-compose.yml`:

```yaml
ports:
  - "8080:8000"  # Host:Container
```

## Development
### Project Structure

```
├── src/                         # Main source code package
│   ├── main.py                  # Main entry point with dual mode support
│   ├── api/                     # API layer
│   │   ├── fastapi_app.py       # FastAPI application
│   │   └── mcp_server.py        # Native MCP server implementation
│   ├── config/                  # Configuration management
│   │   ├── settings.py          # Application settings
│   │   └── logging.py           # Logging configuration
│   ├── models/                  # Data models and schemas
│   │   └── schemas.py           # Pydantic models
│   ├── services/                # Business logic
│   │   └── langchain_service.py # LangChain documentation service
│   └── utils/                   # Utility modules
│       ├── exceptions.py        # Custom exceptions
│       └── helpers.py           # Helper functions
├── scripts/                     # Convenience scripts
│   ├── run_fastapi.py           # Run FastAPI mode
│   ├── run_mcp.py               # Run MCP mode
│   └── health_check.py          # Health check utility
├── tests/                       # Test suite
│   ├── test_api.py              # API tests
│   ├── test_services.py         # Service tests
│   └── test_integration.py      # Integration tests
├── docs/                        # Documentation
│   └── API.md                   # API documentation
├── logs/                        # Log files
├── run.py                       # Simple entry point
├── requirements.txt             # Python dependencies
├── pyproject.toml               # Project configuration
├── Dockerfile                   # Docker configuration
├── docker-compose.yml           # Docker Compose setup
├── DOCKER.md                    # Docker documentation
└── README.md                    # This file
```

### Key Dependencies
- **FastAPI** - Web framework for REST API mode
- **MCP** - Native Model Context Protocol support
- **FastAPI-MCP** - MCP integration for FastAPI
- **httpx** - Async HTTP client for external API calls
- **BeautifulSoup4** - HTML parsing for documentation scraping
- **Pydantic** - Data validation and settings management
- **uvicorn** - ASGI server for FastAPI
### Adding New Endpoints

1. Define Pydantic models for the request and response
2. Add the endpoint function with proper type hints
3. Include comprehensive docstrings
4. Add error handling with specific exceptions
5. Update the health check endpoint count
## Error Handling

The server includes robust error handling for:

- **Network failures** - Graceful degradation when external APIs are unavailable
- **Rate limiting** - Handles GitHub API rate limits
- **Invalid requests** - Proper HTTP status codes and error messages
- **Timeouts** - Configurable request timeouts
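The mapping from these failure classes to HTTP responses can be sketched as below; the exception names and messages are illustrative assumptions (the project's own exceptions live in `src/utils/exceptions.py`):

```python
class UpstreamUnavailableError(Exception):
    """Raised when an external documentation source cannot be reached."""

class RateLimitedError(Exception):
    """Raised when the GitHub API rate limit is exhausted."""

def to_http_error(exc):
    """Map an internal failure to an (HTTP status, message) pair."""
    if isinstance(exc, RateLimitedError):
        return 429, "GitHub API rate limit exceeded; retry later"
    if isinstance(exc, TimeoutError):
        return 504, "Upstream request timed out"
    if isinstance(exc, UpstreamUnavailableError):
        return 503, "External documentation source unavailable"
    return 500, "Internal server error"  # catch-all for unexpected failures
```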
## Health Monitoring

The `/health` endpoint provides:

- Service status
- Available endpoint count
- Data source URLs
- Current timestamp
- Updated documentation sections
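A payload with those fields could be assembled like this; the exact key names are assumptions, so treat this as a sketch of the response shape rather than the server's actual output:

```python
from datetime import datetime, timezone

def build_health_payload(endpoint_count):
    """Assemble the kind of status document /health is described as returning."""
    return {
        "status": "healthy",
        "endpoints_available": endpoint_count,
        "data_sources": [
            "https://python.langchain.com",
            "https://github.com/langchain-ai/langchain",
            "https://pypi.org/project/langchain/",
        ],
        # ISO-8601 UTC timestamp of this health check
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```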
## Security Considerations

- **Rate Limiting** - Consider implementing rate limiting for production
- **CORS** - Configure CORS headers if needed for web access
- **API Keys** - Add a GitHub token for higher API rate limits
- **Input Validation** - All inputs are validated using Pydantic
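The rate-limiting suggestion above can be sketched as a simple token bucket. This is an illustrative pattern, not something the project ships; in production you would more likely use a library or a reverse proxy:

```python
import time

class TokenBucket:
    """Tiny token-bucket rate limiter: allow() consumes one token if available."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec          # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        # refill proportionally to elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A per-client instance of this (keyed by IP, for example) would gate each incoming request before it reaches the handlers.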
## Production Deployment

For production use, consider:

- **Caching** - Add Redis/Memcached for response caching
- **Rate Limiting** - Implement request rate limiting
- **Monitoring** - Add application monitoring and logging
- **Load Balancing** - Run multiple instances behind a load balancer
- **Database** - Store frequently accessed data
- **CI/CD** - Set up an automated deployment pipeline
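The caching idea can be prototyped in-process before introducing Redis or Memcached; the minimal TTL cache below is an illustrative stand-in, not part of the project:

```python
import time

class TTLCache:
    """Minimal in-process cache with per-entry expiry (a stand-in for Redis)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        # remember the value together with its absolute expiry time
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]          # evict lazily on read
            return default
        return value
```

Wrapping the external documentation fetches with such a cache would cut repeated calls to python.langchain.com, GitHub, and PyPI.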
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Support

If you encounter any issues:

1. Check the health endpoint for service status (FastAPI mode)
2. Review the Docker logs:

   ```bash
   docker-compose logs
   ```

3. Check the application logs in the `logs/` directory
4. Ensure network connectivity to external APIs
5. Verify all dependencies are installed correctly
6. For MCP mode issues, check the MCP client configuration
**Note:** This server requires internet connectivity to fetch live data from LangChain's official sources. API rate limits may apply for GitHub API calls.