Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@LangChain Agent MCP Server Plan a 3-day itinerary for Tokyo including food recommendations".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
LangChain Agent MCP Server
A production-ready MCP server exposing LangChain agent capabilities through the Model Context Protocol, deployed on Google Cloud Run.
Overview
This is a standalone backend service that wraps a LangChain agent as a single, high-level MCP Tool. The server is built with FastAPI and deployed on Google Cloud Run, providing a scalable, production-ready solution for exposing AI agent capabilities to any MCP-compliant client.
Live Service: https://langchain-agent-mcp-server-554655392699.us-central1.run.app
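A quick way to confirm the service is reachable is to request its manifest (the /mcp/manifest path is documented under API Endpoints below); the response body is not reproduced here:

```bash
curl https://langchain-agent-mcp-server-554655392699.us-central1.run.app/mcp/manifest
```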
Features
✅ MCP Compliance - Full Model Context Protocol support
✅ LangChain Agent - Multi-step reasoning with ReAct pattern
✅ Playwright Sandbox - Interactive preview of accessibility snapshots (NEW!)
✅ Google Cloud Run - Scalable, serverless deployment
✅ Tool Support - Extensible framework for custom tools
✅ Production Ready - Error handling, logging, and monitoring
✅ Docker Support - Containerized for easy deployment
Architecture
| Component | Technology | Purpose |
|---|---|---|
| Backend Framework | FastAPI | High-performance, asynchronous web server |
| Agent Framework | LangChain | Multi-step reasoning and tool execution |
| Deployment | Google Cloud Run | Serverless, auto-scaling hosting |
| Containerization | Docker | Consistent deployment environment |
| Protocol | Model Context Protocol (MCP) | Standardized tool and context sharing |
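As a rough illustration of how these pieces fit together, here is a minimal FastAPI sketch exposing the two MCP endpoints documented below (/mcp/manifest and /mcp/invoke). It is not the project's actual code: the response shapes, the Pydantic model, and the echo behaviour in invoke are placeholders for the real LangChain wiring.

```python
# Minimal sketch only; the real server dispatches /mcp/invoke to a LangChain agent.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="LangChain Agent MCP Server (sketch)")

class InvokeRequest(BaseModel):
    tool: str
    arguments: dict

@app.get("/mcp/manifest")
def get_manifest():
    # Declares the single high-level tool this server wraps.
    return {"tools": [{"name": "agent_executor",
                       "description": "Run a LangChain ReAct agent on a natural-language query."}]}

@app.post("/mcp/invoke")
def invoke(req: InvokeRequest):
    # Placeholder: echoes the arguments instead of calling the agent.
    return {"tool": req.tool, "received": req.arguments}
```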
Quick Start
Prerequisites
Python 3.11+
OpenAI API key
Google Cloud account (for Cloud Run deployment)
Docker (optional, for local testing)
Local Development
Clone the repository:
```bash
git clone https://github.com/mcpmessenger/LangchainMCP.git
cd LangchainMCP
```

Install dependencies:

```bash
# Windows
py -m pip install -r requirements.txt

# Linux/Mac
pip install -r requirements.txt
```

Set up environment variables: Create a .env file:

```
OPENAI_API_KEY=your-openai-api-key-here
OPENAI_MODEL=gpt-4o-mini
PORT=8000
```

Run the server:

```bash
# Windows
py run_server.py

# Linux/Mac
python run_server.py
```

Test the endpoints:
Health: http://localhost:8000/health
Manifest: http://localhost:8000/mcp/manifest
API Docs: http://localhost:8000/docs
Playwright Sandbox: http://localhost:8080/sandbox (after starting frontend)
Start the frontend (optional):
```bash
# Install frontend dependencies (first time only)
npm install

# Start frontend dev server
npm run dev
```

Then visit http://localhost:8080/sandbox to use the Playwright Sandbox preview feature.
Google Cloud Run Deployment
The server is designed for deployment on Google Cloud Run. See our comprehensive deployment guides:
DEPLOY_CLOUD_RUN_WINDOWS.md - Windows deployment guide
DEPLOY_CLOUD_RUN.md - General deployment guide
QUICK_DEPLOY.md - Quick reference
Quick Deploy
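The exact deploy command is covered in the guides above; as a hedged sketch, a source-based deploy with standard gcloud flags might look like the following (service name inferred from the service URL, project and region from the details below; treat the environment variable handling as illustrative):

```bash
# Illustrative only; prefer Secret Manager for OPENAI_API_KEY in production.
gcloud run deploy langchain-agent-mcp-server \
  --source . \
  --project slashmcp \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars OPENAI_API_KEY=your-openai-api-key-here,OPENAI_MODEL=gpt-4o-mini
```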
Current Deployment
Service URL: https://langchain-agent-mcp-server-554655392699.us-central1.run.app
Project: slashmcp
Region: us-central1
Status: ✅ Live and operational
API Endpoints
MCP Endpoints
Get Manifest
Returns the MCP manifest declaring available tools.
Response:
Invoke Tool
With System Instruction (Optional):
Response:
System Instructions
The agent_executor tool supports an optional system_instruction parameter that allows you to customize the agent's behavior on a per-invocation basis.
Usage:
Basic Query (uses default prompt):
{ "tool": "agent_executor", "arguments": { "query": "What is the weather today?" } }Query with Custom Instruction:
{ "tool": "agent_executor", "arguments": { "query": "Explain quantum computing", "system_instruction": "You are a physics professor. Explain concepts clearly and use examples." } }Personality Customization:
{ "tool": "agent_executor", "arguments": { "query": "Tell me about space", "system_instruction": "You are a pirate explaining complex topics. Use pirate terminology!" } }
Notes:
If system_instruction is omitted, the agent uses its default prompt
Empty or whitespace-only instructions are ignored (the default prompt is used)
Each invocation with a custom instruction creates a new agent instance
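Putting the pieces together, an invocation over HTTP might look like the following; the request body matches the examples above and the /mcp/invoke path appears in the configuration table below, but the POST method and JSON content type here are assumptions:

```bash
curl -X POST https://langchain-agent-mcp-server-554655392699.us-central1.run.app/mcp/invoke \
  -H "Content-Type: application/json" \
  -d '{
        "tool": "agent_executor",
        "arguments": {
          "query": "Explain quantum computing",
          "system_instruction": "You are a physics professor. Explain concepts clearly and use examples."
        }
      }'
```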
Playwright Sandbox Endpoints
Generate Accessibility Snapshot
Response:
Features:
Generates structured accessibility snapshots of any website
Shows how AI "views" websites through structured data
Caching support for popular sites
Token count estimation
Windows-compatible (uses ProactorEventLoop)
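For context, the Windows fix usually comes down to selecting the Proactor event loop policy before any async Playwright code runs, since the default selector loop on Windows cannot spawn the browser subprocess. A general illustration, not necessarily the project's exact code:

```python
import sys
import asyncio

# Playwright launches the browser as a subprocess; on Windows the default
# SelectorEventLoop raises NotImplementedError for subprocess support.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
```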
Test Prompt Against Snapshot
Response:
Playwright Sandbox UI:
Visit http://localhost:8080/sandbox to use the interactive preview feature:
Enter any URL to generate a snapshot
View live website side-by-side with AI accessibility snapshot
Test prompts to find elements in the snapshot
See token savings compared to full HTML/screenshots
Other Endpoints
GET / - Server information
GET /health - Health check
GET /api/tasks - Safe task summaries (optional monitoring)
GET /api/tasks/{task_id} - Safe task summary (optional monitoring)
GET /docs - Interactive API documentation (Swagger UI)
Configuration
Environment Variables
| Variable | Description | Default | Required |
|---|---|---|---|
| OPENAI_API_KEY | OpenAI API key | - | ✅ Yes |
| OPENAI_MODEL | OpenAI model to use | gpt-4o-mini | No |
| PORT | Server port | 8000 | No |
| | Optional API key for authentication | - | No |
| | Maximum agent iterations | | No |
| | Default system prompt (Glazyr) | - | No |
| | Enable verbose logging | | No |
| | Enforce /mcp/invoke policy gates | | No |
| | Max allowed query size | | No |
| | Comma-separated domain allowlist (query URLs) | - | No |
| | Enable Redis state store + task monitoring | - | No |
| | Task summary TTL | | No |
| | Recent task index size | | No |
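For illustration, settings like these are typically read with environment lookups and the defaults shown in the table; the project's actual configuration module may differ, and only the variables documented in the .env example above are shown:

```python
import os

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]            # required
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4o-mini")  # default per the table
PORT = int(os.getenv("PORT", "8000"))                    # default per the table
```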
Documentation
Full Documentation Site - Complete documentation with examples (GitHub Pages)
Quick Links:
Getting Started - Set up and run locally
Examples - Code examples including "Build a RAG agent in 10 lines"
Deployment Guide - Deploy to Google Cloud Run
API Reference - Complete API documentation
Troubleshooting - Common issues and solutions
Build Docs Locally:
Additional Guides:
README_BACKEND.md - Complete technical documentation
PLAYWRIGHT_SANDBOX_SETUP.md - Playwright Sandbox setup and usage
BUG_REPORT_PLAYWRIGHT_NOTIMPLEMENTEDERROR.md - Windows compatibility fix documentation
DEPLOY_CLOUD_RUN_WINDOWS.md - Windows deployment guide
INSTALL_PREREQUISITES.md - Prerequisites installation
SLASHMCP_INTEGRATION.md - SlashMCP integration guide
docs/glazyr-integration.md - Glazyr integration notes (screenshots → MCP invoke)
Testing
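The repository's own test suite is not reproduced here; as a minimal smoke test against a locally running server (port 8000 per the default configuration), something like the following should pass. The agent_executor tool name comes from the API Endpoints section; everything else is illustrative.

```python
import requests

BASE = "http://localhost:8000"

def test_health():
    assert requests.get(f"{BASE}/health").status_code == 200

def test_manifest_declares_agent_executor():
    resp = requests.get(f"{BASE}/mcp/manifest")
    assert resp.status_code == 200
    # agent_executor is the tool documented under API Endpoints.
    assert "agent_executor" in resp.text
```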
Playwright Sandbox Feature
The Playwright Sandbox is an interactive preview feature that demonstrates how AI agents "view" websites through structured accessibility data. This feature is particularly useful for understanding the value of structured snapshots compared to full HTML or screenshots.
Features
Dual-View Interface: See the live website alongside its structured accessibility snapshot
Token Efficiency: Compare token counts - snapshots are typically 90%+ smaller than full HTML
Interactive Testing: Test prompts to find elements in the snapshot
Caching: Popular sites are cached for faster demo results
Windows Compatible: Fixed NotImplementedError on Windows using ProactorEventLoop
Quick Start
Install Playwright:
```bash
py -m pip install playwright
py -m playwright install chromium
```

Start Backend:

```bash
py run_server.py
```

Start Frontend:

```bash
npm install  # First time only
npm run dev
```

Visit Sandbox: Open http://localhost:8080/sandbox and try URLs like:

wikipedia.org
github.com
google.com
How It Works
Enter a URL - The system navigates to the website using Playwright
Generate Snapshot - Extracts structured accessibility information (roles, names, descriptions)
View Comparison - See the live site vs. the AI's structured view
Test Prompts - Try asking the AI to find specific elements
Technical Details
Backend: FastAPI endpoint with Playwright integration
Frontend: React + Vite with TanStack Query
Event Loop: Uses ProactorEventLoop on Windows for subprocess support
Stealth Mode: Anti-bot detection measures for better compatibility
Error Handling: Graceful handling of sites that block automated access
See PLAYWRIGHT_SANDBOX_SETUP.md for detailed setup instructions.
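To make the backend flow above concrete, here is a minimal sketch of generating an accessibility snapshot with Playwright's async API, including a rough token estimate. The function name and the 4-characters-per-token heuristic are assumptions, not the project's implementation; see PLAYWRIGHT_SANDBOX_SETUP.md for the real code.

```python
import asyncio
from playwright.async_api import async_playwright

async def snapshot(url: str) -> dict:
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(url, wait_until="domcontentloaded")
        # Structured accessibility tree: roles, names, descriptions.
        tree = await page.accessibility.snapshot()
        await browser.close()
    # Rough token estimate (~4 characters per token) for the comparison view.
    return {"url": url, "snapshot": tree, "estimated_tokens": len(str(tree)) // 4}

if __name__ == "__main__":
    print(asyncio.run(snapshot("https://wikipedia.org")))
```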
Project Structure
Deployment Options
Google Cloud Run (Recommended)
Scalable - Auto-scales based on traffic
Serverless - Pay only for what you use
Managed - No infrastructure to manage
Fast - Low latency with global CDN
See DEPLOY_CLOUD_RUN_WINDOWS.md for detailed instructions.
Docker (Local/Other Platforms)
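A hedged sketch of local Docker usage; the repository's actual Dockerfile and image name are not shown in this README, and the port mapping follows the PORT default of 8000:

```bash
docker build -t langchain-agent-mcp-server .
docker run -p 8000:8000 --env-file .env langchain-agent-mcp-server
```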
Performance
P95 Latency: < 5 seconds for standard 3-step ReAct chains
Scalability: Horizontal scaling on Cloud Run
Uptime: 99.9% target (Cloud Run SLA)
Throughput: Handles concurrent requests efficiently
Security
API key authentication (optional)
Environment variable management
Secret Manager integration (Cloud Run)
HTTPS by default (Cloud Run)
CORS configuration
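As an example of the Secret Manager integration mentioned above (the secret name and exact wiring are assumptions), the OpenAI key could be attached to the Cloud Run service rather than passed as a plain environment variable:

```bash
gcloud run services update langchain-agent-mcp-server \
  --region us-central1 \
  --set-secrets OPENAI_API_KEY=openai-api-key:latest
```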
Contributing
We welcome contributions! Please see our contributing guidelines.
Fork the repository
Create a feature branch
Make your changes
Submit a pull request
License
This project is licensed under the MIT License.
Links
GitHub Repository: https://github.com/mcpmessenger/LangchainMCP
Live Service: https://langchain-agent-mcp-server-554655392699.us-central1.run.app
API Documentation: https://langchain-agent-mcp-server-554655392699.us-central1.run.app/docs
Model Context Protocol: https://modelcontextprotocol.io/
Acknowledgments
Built with LangChain
Deployed on Google Cloud Run
Uses FastAPI for the web framework
Status: ✅ Production-ready and deployed on Google Cloud Run