LangChain Documentation MCP Server
A comprehensive dual-mode server that provides real-time access to LangChain documentation, API references, and code examples. Supports both FastAPI web service and native Model Context Protocol (MCP) server modes, fetching live data from official LangChain sources.
🚀 Features
🖥️ Dual Server Modes - Run as a FastAPI web service or a native MCP server
📚 Live Documentation Search - Search through official LangChain documentation in real-time
🔍 API Reference Lookup - Get detailed API references from GitHub source code
🐙 GitHub Code Examples - Fetch real code examples from the LangChain repository
📖 Tutorial Discovery - Find and access LangChain tutorials and guides
📦 Version Tracking - Get latest version information from PyPI
🔗 Direct API Search - Search specifically through API reference documentation
🔌 MCP Protocol Support - Native Model Context Protocol implementation
🌐 Data Sources
This server fetches live data from:
python.langchain.com - Official LangChain documentation
GitHub LangChain Repository - Source code and examples
PyPI - Latest version and release information
📋 API Endpoints
Core Endpoints
GET / - API documentation (Swagger UI)
GET /health - Health check and service status
LangChain Documentation
GET /search - Search general documentation
GET /search/api - Search API reference specifically
GET /api-reference/{class_name} - Get detailed API reference for a class
GET /examples/github - Get real code examples from GitHub
GET /tutorials - Get tutorials and guides
GET /latest-version - Get latest LangChain version info
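The search endpoints take `query` and `limit` parameters. A minimal sketch of building request URLs for them, assuming the default Docker Compose port of 8080:

```python
from urllib.parse import urlencode

BASE_URL = "http://localhost:8080"  # default host port in docker-compose.yml

def build_search_url(query: str, limit: int = 5, api_only: bool = False) -> str:
    """Build a request URL for the general or API-specific search endpoint."""
    path = "/search/api" if api_only else "/search"
    return f"{BASE_URL}{path}?{urlencode({'query': query, 'limit': limit})}"

print(build_search_url("ChatOpenAI"))
# http://localhost:8080/search?query=ChatOpenAI&limit=5
```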
🚀 Quick Start
Option 1: Docker Compose (Recommended)
Clone the repository
git clone https://github.com/LiteObject/langchain-mcp-server.git
cd langchain-mcp-server

Start the FastAPI server
docker-compose up --build

Access the API
API Documentation: http://localhost:8080/docs
Health Check: http://localhost:8080/health
Option 2: Local Development
FastAPI Mode
Install dependencies
pip install -r requirements.txt

Run the FastAPI server
# Using the main entry point
python run.py

# Or using the dedicated script
python scripts/run_fastapi.py

# Or directly with uvicorn
uvicorn src.api.fastapi_app:app --host 0.0.0.0 --port 8000
MCP Server Mode
Install dependencies
pip install -r requirements.txt

Run the MCP server
# Using the main entry point
python run.py mcp

# Or using the dedicated script
python scripts/run_mcp.py
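The dual-mode entry point can be pictured as a small argv dispatch: no argument starts FastAPI mode, while `mcp` selects the MCP server. This is an illustrative sketch, not the repository's actual run.py:

```python
import sys

def select_mode(argv: list[str]) -> str:
    """Pick the server mode from command-line arguments.

    `python run.py` starts FastAPI mode; `python run.py mcp` starts MCP mode.
    """
    return "mcp" if len(argv) > 1 and argv[1] == "mcp" else "fastapi"

mode = select_mode(sys.argv)
```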
📚 Usage Examples
Search Documentation
# Search for "ChatOpenAI" in documentation
curl "http://localhost:8080/search?query=ChatOpenAI&limit=5"
# Search API reference specifically
curl "http://localhost:8080/search/api?query=embeddings"

Get API Reference
# Get detailed API reference for ChatOpenAI
curl "http://localhost:8080/api-reference/ChatOpenAI"
# Get API reference for LLMChain
curl "http://localhost:8080/api-reference/LLMChain"

Fetch Code Examples
# Get real examples from GitHub
curl "http://localhost:8080/examples/github?query=chatbot&limit=3"
# Get general examples
curl "http://localhost:8080/examples/github"

Get Tutorials
# Fetch all available tutorials
curl "http://localhost:8080/tutorials"

Version Information
# Get latest version from PyPI
curl "http://localhost:8080/latest-version"

🔌 MCP Server Usage
When running in MCP mode, the server provides the following tools:
Available MCP Tools
search_langchain_docs - Search LangChain documentation
search_api_reference - Search API reference specifically
get_api_reference - Get detailed API reference for a class
get_github_examples - Get code examples from GitHub
get_tutorials - Get available tutorials
get_latest_version - Get latest LangChain version
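An MCP server typically routes incoming tool calls through a dispatch table mapping tool names to handlers. The sketch below illustrates that pattern with stubbed handlers; the server's real implementations live in its service layer:

```python
from typing import Any, Callable

# Stub handlers standing in for the real documentation-fetching logic.
def search_langchain_docs(query: str, limit: int = 5) -> dict:
    return {"tool": "search_langchain_docs", "query": query, "limit": limit}

def get_latest_version() -> dict:
    return {"tool": "get_latest_version"}

# Tool name -> handler, as an MCP call_tool implementation might use.
TOOLS: dict[str, Callable[..., Any]] = {
    "search_langchain_docs": search_langchain_docs,
    "get_latest_version": get_latest_version,
}

def call_tool(name: str, arguments: dict) -> dict:
    """Dispatch a tool call by name, rejecting unknown tools."""
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return TOOLS[name](**arguments)
```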
MCP Client Integration
{
"mcpServers": {
"langchain-docs": {
"command": "python",
"args": ["path/to/langchain-mcp-server/run.py", "mcp"],
"env": {
"PYTHONPATH": "path/to/langchain-mcp-server"
}
}
}
}

🛠️ Configuration
Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| | Server host address | 0.0.0.0 |
| | Server port | 8000 |
| | Enable debug mode | False |
| | Logging level | INFO |
| | Timeout for external API calls | 30 seconds |
| | GitHub API token (optional) | None |
Docker Configuration
The service runs on port 8080 by default to avoid conflicts. You can modify this in docker-compose.yml:
ports:
- "8080:8000" # Host:Container

🔧 Development
Project Structure
├── src/ # Main source code package
│ ├── main.py # Main entry point with dual mode support
│ ├── api/ # API layer
│ │ ├── fastapi_app.py # FastAPI application
│ │ └── mcp_server.py # Native MCP server implementation
│ ├── config/ # Configuration management
│ │ ├── settings.py # Application settings
│ │ └── logging.py # Logging configuration
│ ├── models/ # Data models and schemas
│ │ └── schemas.py # Pydantic models
│ ├── services/ # Business logic
│ │ └── langchain_service.py # LangChain documentation service
│ └── utils/ # Utility modules
│ ├── exceptions.py # Custom exceptions
│ └── helpers.py # Helper functions
├── scripts/ # Convenience scripts
│ ├── run_fastapi.py # Run FastAPI mode
│ ├── run_mcp.py # Run MCP mode
│ └── health_check.py # Health check utility
├── tests/ # Test suite
│ ├── test_api.py # API tests
│ ├── test_services.py # Service tests
│ └── test_integration.py # Integration tests
├── docs/ # Documentation
│ └── API.md # API documentation
├── logs/ # Log files
├── run.py # Simple entry point
├── requirements.txt # Python dependencies
├── pyproject.toml # Project configuration
├── Dockerfile # Docker configuration
├── docker-compose.yml # Docker Compose setup
├── DOCKER.md # Docker documentation
└── README.md # This file

Key Dependencies
FastAPI - Web framework for REST API mode
MCP - Native Model Context Protocol support
FastAPI-MCP - MCP integration for FastAPI
httpx - Async HTTP client for external API calls
BeautifulSoup4 - HTML parsing for documentation scraping
Pydantic - Data validation and settings management
uvicorn - ASGI server for FastAPI
Adding New Endpoints
Define Pydantic models for request/response
Add endpoint function with proper type hints
Include comprehensive docstrings
Add error handling with specific exceptions
Update health check endpoint count
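The checklist above can be illustrated with a stdlib-only sketch of the pattern. The real code would use FastAPI decorators and Pydantic models; the names here (VersionResponse, UpstreamError, the fetch callable) are hypothetical:

```python
from dataclasses import dataclass, asdict

@dataclass
class VersionResponse:
    """Response model a Pydantic schema would normally define."""
    package: str
    version: str

class UpstreamError(Exception):
    """Specific exception raised when an external data source is unreachable."""

def get_latest_version_endpoint(fetch) -> dict:
    """Endpoint-style handler: call the service, map failures to an HTTP status.

    `fetch` stands in for the service-layer call that queries PyPI.
    """
    try:
        version = fetch("langchain")
    except UpstreamError:
        return {"status": 503, "detail": "PyPI unavailable"}
    return {"status": 200, **asdict(VersionResponse("langchain", version))}
```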
🐛 Error Handling
The server includes robust error handling for:
Network failures - Graceful degradation when external APIs are unavailable
Rate limiting - Handles GitHub API rate limits
Invalid requests - Proper HTTP status codes and error messages
Timeouts - Configurable request timeouts
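The graceful-degradation behaviour described above can be sketched as a small retry wrapper (illustrative only; the server's actual handling lives in src/utils/exceptions.py and the service layer):

```python
import time

def fetch_with_fallback(fetch, fallback, retries: int = 2, delay: float = 0.0):
    """Call `fetch`, retrying on failure; return `fallback` if every attempt fails.

    Useful when an external API (docs site, GitHub, PyPI) is flaky or rate-limited.
    """
    for attempt in range(retries + 1):
        try:
            return fetch()
        except Exception:
            if attempt < retries:
                time.sleep(delay)  # back off before the next attempt
    return fallback
```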
📊 Health Monitoring
The /health endpoint provides:
Service status
Available endpoints count
Data source URLs
Current timestamp
Updated documentation sections
🔒 Security Considerations
Rate Limiting - Consider implementing rate limiting for production
CORS - Configure CORS headers if needed for web access
API Keys - Add GitHub token for higher API limits
Input Validation - All inputs are validated using Pydantic
🚀 Production Deployment
For production use, consider:
Caching - Add Redis/Memcached for response caching
Rate Limiting - Implement request rate limiting
Monitoring - Add application monitoring and logging
Load Balancing - Use multiple instances behind a load balancer
Database - Store frequently accessed data
CI/CD - Set up automated deployment pipeline
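Before reaching for Redis or Memcached, the caching suggestion can be prototyped in-process. A minimal TTL cache sketch:

```python
import time

class TTLCache:
    """Tiny in-process cache with per-entry expiry (a stand-in for Redis/Memcached)."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store: dict = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
```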
🤝 Contributing
Fork the repository
Create a feature branch
Make your changes
Add tests if applicable
Submit a pull request
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🔗 Related Links
🆘 Support
If you encounter any issues:
Check the health endpoint for service status (FastAPI mode)
Review Docker logs:
docker-compose logs
Check application logs in the logs/ directory
Ensure network connectivity to external APIs
Verify all dependencies are installed correctly
For MCP mode issues, check the MCP client configuration
Note: This server requires internet connectivity to fetch live data from LangChain's official sources. API rate limits may apply for GitHub API calls.