Agentic MCP Weather Intelligence System
A comprehensive Agentic Model Context Protocol (MCP) system that provides intelligent weather services through orchestrated multi-agent architecture. Built for scalable agentic applications with full Docker support and Streamlit Web UI for easy deployment and interaction.
Key Features
Web Interface
Streamlit Chat UI: ChatGPT-like interface at http://localhost:8501
Real-time Interactions: Direct communication with weather agents
Visual Dashboard: System health monitoring and agent status
Mobile Responsive: Works on desktop, tablet, and mobile devices
Multi-Agent Coordination
Smart Alert Agent: Proactive weather monitoring and personalized alerts
Weather Intelligence Agent: Multi-source data aggregation and analysis
Travel Agent: Location-based weather planning and recommendations
Agent Coordination Hub: Centralized orchestration of all weather agents
Docker-First Architecture
Complete Containerization: Weather server + Ollama LLM + Streamlit UI + Setup automation
Multi-Service Orchestration: Production-ready microservices architecture
Production Ready: Optimized Dockerfile with security best practices and health checks
One-Command Deployment: Full system startup with ./start-docker.sh
Modular Architecture
Server Registry: Automatic discovery and management of MCP servers
Agent Orchestrator: Intelligent workflow coordination with local LLM
Multi-Agent Support: Extensible framework for specialized weather agents
Health Monitoring: Real-time status tracking with comprehensive health endpoints
API-First Design: RESTful APIs with interactive documentation at /docs
Advanced Agentic Capabilities
Natural Language Processing: Understand complex weather queries through LLM integration
Intelligent Task Routing: Automatically delegate queries to specialized agents
Multi-Location Coordination: Compare and analyze weather across multiple cities simultaneously
Proactive Alert System: Smart monitoring with personalized notifications and thresholds
Local LLM Integration: Ollama-powered reasoning and decision making
Context-Aware Responses: Maintain conversation history and learning
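The intelligent task routing above can be sketched as a keyword-based fast path that runs before (or as a fallback to) the LLM classifier. This is a minimal illustration, not the project's actual API: `AGENT_KEYWORDS` and `route_query` are hypothetical names.

```python
# Hypothetical keyword-based router; the real system delegates
# classification to the Ollama-backed orchestrator.
AGENT_KEYWORDS = {
    "smart_alert_agent": ("alert", "warn", "threshold", "monitor"),
    "travel_agent": ("travel", "trip", "flight", "pack"),
    "weather_intelligence_agent": ("compare", "analysis", "forecast"),
}

def route_query(query: str) -> str:
    """Return the agent whose keywords best match the query."""
    q = query.lower()
    scores = {
        agent: sum(kw in q for kw in kws)
        for agent, kws in AGENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to the general intelligence agent when nothing matches.
    return best if scores[best] > 0 else "weather_intelligence_agent"
```

For example, `route_query("Set up alerts for Boston")` resolves to the alert agent, while an unmatched query falls through to the weather intelligence agent.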
Comprehensive Weather Services
Real-Time Weather: Current conditions for any city worldwide via multiple APIs
Advanced Forecasting: Detailed predictions using National Weather Service API
Smart Alert System: Weather warnings, emergency notifications, and custom thresholds
Multi-Source Intelligence: Data fusion from weather.gov, wttr.in, and additional sources
Travel Planning: Location-based weather analysis for trip planning and recommendations
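As a sketch of one aggregation source: wttr.in exposes current conditions as JSON via its `format=j1` endpoint. The project itself uses httpx; urllib keeps this sketch dependency-free, and the helper names are illustrative.

```python
# Minimal wttr.in client sketch (one of the sources being fused).
import json
import urllib.request

def wttr_url(city: str) -> str:
    # j1 is wttr.in's structured JSON output format.
    return f"https://wttr.in/{city.replace(' ', '+')}?format=j1"

def current_temp_c(payload: dict) -> str:
    # j1 payloads list current conditions under "current_condition".
    return payload["current_condition"][0]["temp_C"]

def fetch_current_temp(city: str) -> str:
    with urllib.request.urlopen(wttr_url(city), timeout=10) as resp:
        return current_temp_c(json.load(resp))
```

A data-fusion layer would merge this with the National Weather Service response for the same location before handing results to an agent.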
System Architecture
┌──────────────────────────────────────────────────────────────────────────┐
│  Docker Network: weather-mcp-network                                     │
│                                                                          │
│  ┌───────────────┐    ┌───────────────┐    ┌───────────────┐  ┌───────┐  │
│  │ Streamlit UI  │    │ Weather MCP   │    │ Ollama LLM    │  │ Setup │  │
│  │ :8501         │    │ Server :8000  │    │ Server :11434 │  │ Agent │  │
│  │  - Chat UI    │◀──▶│  - MCP API    │◀──▶│  Models:      │  │ (Init)│  │
│  │  - Dashboard  │    │  - Health     │    │   - llama3    │  │  Auto │  │
│  │  - Monitoring │    │  - Agent Hub  │    │   - phi3      │  │  Setup│  │
│  └───────┬───────┘    └───────┬───────┘    └───────┬───────┘  └───┬───┘  │
└──────────┼────────────────────┼────────────────────┼──────────────┼──────┘
           │                    │                    │              │
┌──────────┴────────────────────┴────────────────────┴──────────────┴──────┐
│  Host System                                                             │
│   http://localhost:8501        (Streamlit Chat Interface)                │
│   http://localhost:8000        (Weather API + Agent Coordination)        │
│   http://localhost:11434       (Ollama LLM Engine)                       │
│   http://localhost:8000/docs   (Interactive API Documentation)           │
└──────────────────────────────────────────────────────────────────────────┘
Agent Coordination Flow
User Query ──▶ Streamlit UI ──▶ Agent Coordination Hub ──▶ Specialized Agents
     ▲                                    │                 ├─ Smart Alert Agent
     │                                    │                 ├─ Weather Intelligence
     │                                    ▼                 └─ Travel Agent
     │                               Ollama LLM ◀────────── API Results
     │                                    │
     └──────────────── Response ◀─────────┘
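The flow above can be condensed into a minimal async sketch. The agents and the LLM call here are stubs standing in for the real components; only the shape of the pipeline (route, invoke agent, summarize) mirrors the docs.

```python
# Stubbed end-to-end pipeline: route -> agent -> LLM summary.
import asyncio

async def alert_agent(query: str) -> dict:
    return {"agent": "smart_alert", "data": f"alerts for: {query}"}

async def weather_agent(query: str) -> dict:
    return {"agent": "weather_intel", "data": f"conditions for: {query}"}

async def llm_summarize(result: dict) -> str:
    # Placeholder for the Ollama call that phrases the final response.
    return f"[{result['agent']}] {result['data']}"

async def handle(query: str) -> str:
    agent = alert_agent if "alert" in query.lower() else weather_agent
    return await llm_summarize(await agent(query))

print(asyncio.run(handle("Weather in Tokyo")))
```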
TL;DR: Get Started in 3 Commands
git clone <your-repo-url> && cd weather
chmod +x *.sh && ./start-docker.sh
# Chat Interface: http://localhost:8501
# API Server: http://localhost:8000
# System ready with Streamlit UI!
What you get instantly:
Streamlit Chat Interface at http://localhost:8501 - ChatGPT-like weather assistant
Weather API at http://localhost:8000 - Full MCP server with agent coordination
API Docs at http://localhost:8000/docs - Interactive OpenAPI documentation
Ollama LLM at http://localhost:11434 - Local AI models for intelligent responses
Requirements
Docker (20.10 or higher) with Docker Compose
8GB+ RAM (for Ollama LLM models: llama3 + phi3)
15GB+ disk space (for container images + models + logs)
Internet connection (for weather APIs and initial model downloads)
Ports available: 8000 (API), 8501 (Streamlit), 11434 (Ollama)
Quick Start with Docker
Option 1: Complete System with Convenience Scripts (Recommended)
# 1. Clone the repository
git clone <your-repo-url>
cd weather-mcp-agent
# 2. Make scripts executable (Linux/macOS)
chmod +x *.sh
# 3. Validate your environment (optional but recommended)
./validate-docker.sh
# 4. Start the complete system (one command!)
./start-docker.sh
# 5. For verbose output and logs
./start-docker.sh --verbose
# 6. Stop the system when done
./stop-docker.sh
Option 1b: Manual Docker Commands
# 1. Clone the repository
git clone <your-repo-url>
cd weather-mcp-agent
# 2. Start the complete system (Weather Server + Ollama + Models)
docker-compose up -d
# 3. Monitor the startup process
docker-compose logs -f
# 4. Wait for model downloads (first run only, may take 5-10 minutes)
# You'll see: "Models ready!" when everything is set up
# 5. Test the system
curl http://localhost:8000/health
The system will then be available at the same endpoints listed in the TL;DR above.
Option 2: Development Setup with Demo
# Start system and run demo
docker-compose --profile demo up
# Or run demo separately after system is up
docker-compose up -d
docker-compose run weather-demo
Option 3: Local Development (Non-Docker)
# 1. Install Ollama locally
brew install ollama # macOS
# or download from https://ollama.ai/download
# 2. Start Ollama and pull models
ollama serve &
ollama pull llama3
ollama pull phi3
# 3. Install Python dependencies
pip install -r requirements.txt
# 4. Environment configuration is ready!
# The .env file is already set up for local development
# For production, copy .env.production.template to .env.production and customize
# 5. Start the weather server
python main.py server
Docker Management Commands
Convenience Scripts (Recommended)
# Development/testing
./start-docker.sh # Default setup
./start-docker.sh --dev # Development mode (live reload)
./start-docker.sh --demo # Include demo client
./start-docker.sh --verbose # Show detailed logs
# Production deployment
./start-docker.sh --prod # Production configuration
./start-docker.sh --prod --build # Production with fresh build
# Management
./stop-docker.sh # Stop (can restart)
./stop-docker.sh --cleanup # Remove containers
./stop-docker.sh --remove-data # Remove everything including models
Manual Docker Commands
# Default setup
docker-compose up -d
# Development mode (with live reload)
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
# Production mode (optimized settings)
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# View logs (all services)
docker-compose logs -f
# View logs (specific service)
docker-compose logs -f weather-server
docker-compose logs -f ollama
# Stop all services
docker-compose down
# Restart services
docker-compose restart
# Rebuild and restart (after code changes)
docker-compose up -d --build
# Pull latest images
docker-compose pull
Environment Configurations
| Environment | Features | Use Case |
|---|---|---|
| Default | Standard settings, API key disabled | Local development & testing |
| Development | Live reload, debug logging, relaxed security | Active development |
| Production | Optimized performance, security enabled, resource limits | Production deployment |
Maintenance Commands
# Check service status
docker-compose ps
# Access container shell
docker-compose exec weather-server bash
docker-compose exec ollama bash
# View system resources
docker-compose top
# Clean up (removes containers, networks, volumes)
docker-compose down -v --remove-orphans
# Remove all unused Docker resources
docker system prune -a
Development Commands
# Run with demo profile
docker-compose --profile demo up
# Override environment variables
ENVIRONMENT=development docker-compose up -d
# Run single command in container
docker-compose run weather-server python --version
docker-compose run weather-server python demo.py
# Mount local code for development
# (uncomment volume in docker-compose.yml: - .:/app)
Usage Examples
Primary: Streamlit Chat Interface (Recommended)
Open http://localhost:8501 in your browser and try queries like:
"What's the weather like in San Francisco right now?"
"Set up weather alerts for New York with temperature thresholds"
"Compare weather conditions in London, Paris, and Tokyo"
"Plan my outdoor activities for this weekend in Seattle"
"Any severe weather alerts for California today?"
"What's the best time to travel to Miami this week?"
Features:
Natural Language Processing: Just type like you're chatting with ChatGPT
Visual Dashboard: Real-time agent status and system health monitoring
Conversation History: Maintains context across multiple queries
Mobile Responsive: Works perfectly on phones and tablets
API Testing (Advanced Users)
# System Health Check
curl http://localhost:8000/health
# Agent Coordination Status
curl http://localhost:8000/info
# Direct Weather Query
curl -X POST http://localhost:8000/tools/get_weather \
-H "Content-Type: application/json" \
-d '{"city": "San Francisco"}'
# Smart Alert Setup via API
curl -X POST http://localhost:8000/tools/setup_smart_alerts \
-H "Content-Type: application/json" \
-d '{
"locations": ["New York", "Boston"],
"alert_types": ["severe_weather", "temperature_extreme"],
"thresholds": {"temperature_high": 85, "temperature_low": 32}
}'
# Multi-Location Weather Intelligence
curl -X POST http://localhost:8000/tools/get_weather_intelligence \
-H "Content-Type: application/json" \
-d '{"locations": ["London", "Paris", "Rome"], "analysis_type": "comparison"}'
Python Integration
# Direct agent usage (await requires an async context)
import asyncio

from agent_coordination_hub import AgentCoordinationHub
from smart_alert_agent import AlertAgent

async def main():
    # Initialize coordination system
    hub = AgentCoordinationHub()
    result = await hub.process_request("Weather in Tokyo with travel recommendations")

    # Smart alerts with custom thresholds
    alert_agent = AlertAgent()
    config = {
        "locations": ["San Francisco", "Seattle"],
        "alert_types": ["severe_weather", "temperature_extreme"],
        "thresholds": {"temperature_high": 80, "temperature_low": 40}
    }
    alerts = await alert_agent.setup_smart_alerts(config)

asyncio.run(main())
Docker Troubleshooting
Common Issues and Solutions
Issue: Ollama container fails to start
# Check if port 11434 is already in use
lsof -i :11434
# If occupied, stop the local Ollama service
brew services stop ollama
# Check container logs
docker-compose logs ollama
Issue: Model download fails or times out
# Manually pull models with more verbose output
docker-compose exec ollama ollama pull llama3
docker-compose exec ollama ollama pull phi3
# Check available disk space (models are 4GB+ each)
docker system df
Issue: Weather server can't connect to Ollama
# Check network connectivity
docker-compose exec weather-server curl http://ollama:11434/api/version
# Verify Ollama health
curl http://localhost:11434/api/version
# Check container network
docker network ls
docker network inspect weather-mcp-network
Issue: Out of memory errors
# Check Docker memory limits
docker stats
# Increase Docker Desktop memory limit to 8GB+
# Docker Desktop > Settings > Resources > Memory
# Monitor container memory usage
docker-compose exec weather-server free -h
Issue: Port conflicts
# Check what's using port 8000
lsof -i :8000
# Use different ports
SERVER_PORT=8080 docker-compose up -d
# Or modify docker-compose.yml ports section
Performance Optimization
# Pre-pull all images
docker-compose pull
# Build with cache optimization
DOCKER_BUILDKIT=1 docker-compose build
# Limit container resources
# Add to docker-compose.yml under services:
# weather-server:
# deploy:
# resources:
# limits:
# memory: 2g
# reservations:
# memory: 1g
Logs and Debugging
# Detailed logging
LOG_LEVEL=DEBUG docker-compose up -d
# Follow all logs with timestamps
docker-compose logs -f -t
# Export logs for analysis
docker-compose logs > system-logs.txt
# Access container filesystems
docker-compose exec weather-server ls -la /app/logs/
Environment Configuration
Environment Files Overview
This project includes a committed .env file optimized for local development:
| File | Purpose | Committed to Git |
|---|---|---|
| .env | Local development defaults | Yes |
| .env.example | Template with all options | Yes |
| .env.production.template | Production template | Yes |
| .env.production | Your production config | No (create locally) |
| .env.local | Personal overrides | No (ignored) |
Local Development
The .env file is ready to use with safe defaults:
# Clone and run immediately - no .env setup needed!
git clone <your-repo>
cd weather-mcp-agent
./start-docker.sh
Local Development Features:
ENVIRONMENT=development
Debug logging enabled
CORS allows localhost origins
API key requirement disabled
High rate limits for testing
Raw data and execution logs included
Customization
Create .env.local for personal overrides (ignored by git):
# .env.local - personal overrides
LOG_LEVEL=DEBUG
OLLAMA_MODEL=phi3
SERVER_PORT=8001
Environment Variables Priority
Environment variables (highest priority)
.env.local (personal overrides)
.env (committed defaults)
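That three-layer precedence can be sketched with a hand-rolled resolver. This is illustrative only; the project may implement it with python-dotenv instead, and `parse_env`/`resolve` are hypothetical names.

```python
# Sketch: process environment beats .env.local, which beats .env.
import os

def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and comments."""
    pairs = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            pairs[key.strip()] = value.strip()
    return pairs

def resolve(name: str, dotenv: dict, dotenv_local: dict, default=None):
    # Highest priority first: real environment, then .env.local, then .env.
    return (os.environ.get(name)
            or dotenv_local.get(name)
            or dotenv.get(name)
            or default)
```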
Production Deployment with Docker
Step 1: Production Environment Configuration
# 1. Clone to production server
git clone <your-repo-url>
cd weather-mcp-agent
# 2. Create production environment file
cp .env.production.template .env.production
# 3. Edit production settings (REQUIRED - update all sensitive values)
nano .env.production
Key Production Settings:
ENVIRONMENT=production
API_KEY_REQUIRED=true
API_KEY=your-secure-production-key
ALLOWED_ORIGINS=https://yourdomain.com
RATE_LIMIT_PER_MINUTE=60
LOG_LEVEL=INFO
Step 2: SSL/TLS Configuration (Optional)
# Add SSL certificates to docker-compose.yml
mkdir -p ./ssl
# Copy your cert.pem and key.pem to ./ssl/
# Update .env.production
SSL_CERT_PATH=/app/ssl/cert.pem
SSL_KEY_PATH=/app/ssl/key.pem
Step 3: Production Deployment
# Method 1: Using convenience script (recommended)
./start-docker.sh --build
# Method 2: Manual deployment
docker-compose -f docker-compose.yml --env-file .env.production up -d
# Method 3: With custom configuration
docker-compose up -d --build
Step 4: Production Verification
# Check all services are running
docker-compose ps
# Verify health endpoints
curl https://yourdomain.com:8000/health
curl https://yourdomain.com:8000/info
# Check logs for any issues
docker-compose logs -f --tail=100
# Method 4: Docker with external Ollama
docker build -t weather-mcp .
docker run -p 8000:8000 --env-file .env --add-host=host.docker.internal:host-gateway weather-mcp
Production Endpoints:
GET /health - Comprehensive health check with service validation
GET /health/quick - Fast health check without external calls
GET /info - Server capabilities and metadata
POST /tools/get_weather - Current weather (rate limited)
POST /tools/get_forecast - Weather forecast (validated coordinates)
POST /tools/get_alerts - Weather alerts (US states only)
Management Commands:
# Production server management
python main.py start # Start production server
python main.py status # System health and status
python main.py config # View current configuration
python main.py validate # Validate configuration
# Development/testing (disabled in production)
python main.py interactive # Interactive client mode
python main.py demo # System demonstration
python main.py servers # Server registry info
Project Structure
weather/
├── Web Interface
│   └── streamlit_app.py              # Streamlit Chat UI (Primary Interface)
├── Agent Coordination System
│   ├── agent_coordination_hub.py     # Central agent coordinator
│   ├── smart_alert_agent.py          # Proactive weather monitoring agent
│   ├── weather_intelligence_agent.py # Multi-source data analysis agent
│   └── travel_agent.py               # Location-based travel planning agent
├── Core MCP Server
│   ├── main.py                       # Production entry point & CLI management
│   ├── weather.py                    # Weather MCP server implementation
│   ├── config.py                     # Configuration management
│   ├── server_registry.py            # Server discovery & health monitoring
│   └── health_server.py              # Health check endpoints
├── Orchestration & Workflows
│   ├── simple_orchestrator.py        # Basic agentic workflow orchestrator
│   └── agent_orchestrator.py         # Advanced LangGraph orchestrator
├── Development & Testing
│   ├── mcp_client.py                 # Interactive client for testing
│   ├── demo.py                       # System demonstration scripts
│   └── run_server.py                 # Alternative server startup
├── Docker & Deployment
│   ├── Dockerfile                    # Production container image
│   ├── docker-compose.yml            # Multi-service orchestration
│   ├── start-docker.sh               # Comprehensive startup script
│   ├── stop-docker.sh                # Clean shutdown script
│   └── setup-ollama.sh               # Ollama model setup automation
├── Configuration & Dependencies
│   ├── requirements.txt              # Python dependencies
│   ├── pyproject.toml                # Project metadata (v0.2.0)
│   ├── .env                          # Environment variables (Docker-ready)
│   └── .env.example                  # Configuration template
└── Documentation
    ├── README.md                     # This comprehensive guide
    ├── SETUP.md                      # Quick setup instructions
    ├── DOCKER.md                     # Docker-specific documentation
    ├── DEPLOYMENT.md                 # Production deployment guide
    ├── WORKING_SYSTEM_SUMMARY.md     # System status & test cases
    ├── AGENT_COORDINATION_GUIDE.md   # Agent development guide
    └── CONTRIBUTING.md               # Contribution guidelines
Interactive Usage Examples
The primary way to interact with your weather intelligence system:
Smart Weather Queries:
You: "What's the weather like in London right now?"
Assistant: "London Weather Update:
  Temperature: 15°C (feels like 13°C)
  Conditions: Light drizzle
  Wind: 12 mph from the west
  Humidity: 78%"
You: "Set up weather alerts for my commute route"
Assistant: "I'll set up smart alerts for your locations. What cities should I monitor?"
You: "Compare weather in New York, London, and Tokyo"
Assistant: "Multi-City Weather Comparison:
  New York: 22°C, Sunny, Perfect for outdoor activities
  London: 15°C, Overcast, Light jacket recommended
  Tokyo: 28°C, Humid, Stay hydrated!"
Advanced Features:
Conversation Memory: Maintains context across questions
Visual Dashboard: Real-time system health and agent status
Mobile Responsive: Perfect interface for phones and tablets
Multi-Agent Coordination: Automatic routing to specialized weather agents
API Command Examples (Advanced Users)
# System Status & Health
curl http://localhost:8000/health
curl http://localhost:8000/info
# Agent Coordination
curl -X POST http://localhost:8000/agent/coordinate \
-H "Content-Type: application/json" \
-d '{"query": "Weather alerts for California with travel advice"}'
API Integration Examples
Direct Server API Calls
# Get current weather
curl -X POST http://localhost:8000/tools/get_weather \
-H "Content-Type: application/json" \
-d '{"city": "London"}'
# Get forecast (requires coordinates)
curl -X POST http://localhost:8000/tools/get_forecast \
-H "Content-Type: application/json" \
-d '{"latitude": 51.5074, "longitude": -0.1278}'
# Get weather alerts (US states only)
curl -X POST http://localhost:8000/tools/get_alerts \
-H "Content-Type: application/json" \
-d '{"state": "CA"}'
Server Discovery & Health
# Check server health
curl http://localhost:8000/health
# Get server capabilities
curl http://localhost:8000/info
Extending the System
Adding New MCP Servers
# Register a new server in server_registry.py
from server_registry import registry, MCPServer
# Define new server
finance_server = MCPServer(
name="finance-server",
host="localhost",
port=8001,
description="Financial data and stock information",
tools=["get_stock_price", "get_market_news", "analyze_portfolio"],
tags=["finance", "stocks", "market"]
)
# Register it
registry.register_server(finance_server)
Custom Task Types
# Extend TaskType enum in simple_orchestrator.py
class TaskType(Enum):
WEATHER_QUERY = "weather_query"
FORECAST_ANALYSIS = "forecast_analysis"
ALERT_MONITORING = "alert_monitoring"
MULTI_LOCATION = "multi_location"
FINANCIAL_ANALYSIS = "financial_analysis" # New!
GENERAL_INQUIRY = "general_inquiry"
Agentic Design Principles
1. Modularity
Each component has a single, clear responsibility
Easy to extend with new servers and capabilities
Loose coupling between orchestrator and servers
2. Intelligent Routing
Task classification determines workflow path
Location extraction enables multi-location queries
Error handling with graceful fallbacks
3. Scalability
Server registry supports dynamic server addition
Health monitoring for automatic failover
Async operations for concurrent processing
4. Observability
Execution logs trace every step of each query
Health endpoints (/health, /info) expose real-time status
Advanced Features (Optional)
LangGraph Integration
For more sophisticated agentic workflows, enable the advanced orchestrator:
# Install additional dependencies
uv add langgraph langchain-ollama
# Use agent_orchestrator.py instead of simple_orchestrator.py
from agent_orchestrator import WeatherOrchestrator
# Requires Ollama running locally
ollama serve
Multi-Agent Coordination
The system is designed to support multi-agent scenarios:
# Example: Weather + Travel planning agent
travel_query = "What's the weather like in my travel destinations this week?"
# โ Orchestrator coordinates:
# 1. Extract travel destinations
# 2. Get weather for each location
# 3. Get forecasts for travel dates
# 4. Provide travel recommendations
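The four steps above can be sketched with stubbed lookups and concurrent fan-out via `asyncio.gather`. All helper names are illustrative; the real orchestrator calls the weather APIs instead of returning canned strings.

```python
# Steps 2-3 fan out concurrently; step 4 is where the LLM would weigh in.
import asyncio

async def get_weather(city: str) -> str:
    return f"{city}: 18C, clear"  # stub for the real API call

async def plan_trip(query: str, destinations: list) -> list:
    # Fetch every destination concurrently rather than one at a time.
    reports = await asyncio.gather(*(get_weather(c) for c in destinations))
    # A real system would hand these reports to the LLM for recommendations.
    return list(reports)

print(asyncio.run(plan_trip("travel weather", ["Paris", "Rome"])))
```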
Monitoring & Debugging
Health Checks
# Check all servers
You: servers
Found 1 registered server:
  Online: 1
  Offline: 0
  Error: 0
  Unknown: 0
Execution Logs
Every query provides detailed execution tracing:
Execution Log:
1. Classified task as: weather_query
2. Extracted locations: ['London']
3. Gathered weather data for 1 location
4. Generated response
Production Security
Environment Variables
Required for production deployment:
# Server Configuration
SERVER_HOST=0.0.0.0
SERVER_PORT=8000
ENVIRONMENT=production
# Security
API_KEY_REQUIRED=true
API_KEY=your-secure-api-key-here
RATE_LIMIT_PER_MINUTE=100
ALLOWED_ORIGINS=https://yourdomain.com
# Logging
LOG_LEVEL=INFO
LOG_FILE_PATH=/var/log/weather-mcp/server.log
Security Features
Input validation with Pydantic models
Rate limiting per endpoint (configurable)
API key authentication (optional)
CORS protection with configurable origins
Request size limits to prevent DoS
Comprehensive logging for audit trails
Error sanitization to prevent information leakage
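Two of these checks can be sketched in isolation. The real server presumably wires equivalents into its request pipeline; the names and the fixed-window strategy here are illustrative assumptions.

```python
# Illustrative API-key check and fixed-window rate limiter.
import time
import hmac

EXPECTED_KEY = "your-secure-api-key-here"  # would come from API_KEY in the env

def check_api_key(provided: str) -> bool:
    # compare_digest avoids leaking key content via timing differences.
    return hmac.compare_digest(provided, EXPECTED_KEY)

class RateLimiter:
    def __init__(self, per_minute: int):
        self.per_minute = per_minute
        self.window = {}  # client -> (minute, request count)

    def allow(self, client: str, now=None) -> bool:
        minute = int((now if now is not None else time.time()) // 60)
        start, count = self.window.get(client, (minute, 0))
        if start != minute:           # new minute: reset the window
            start, count = minute, 0
        self.window[client] = (start, count + 1)
        return count < self.per_minute
```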
Production Monitoring
Health Checks
# Quick health check
curl http://localhost:8000/health/quick
# Comprehensive health check (includes external APIs)
curl http://localhost:8000/health
# Server info and capabilities
curl http://localhost:8000/info
Logging
Production logs are structured and include:
# View logs (if using file logging)
tail -f /var/log/weather-mcp/server.log
# Check log level
python main.py config | grep -i log
Performance Optimization
Async request handling with httpx
Connection pooling for external APIs
Request timeout controls
Exponential backoff for API retries
Response caching (configurable)
Resource limits and rate limiting
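The timeout-plus-backoff policy can be sketched as a small wrapper. `fetch` stands in for any httpx call; the retry count and delays are illustrative defaults, not the project's actual settings.

```python
# Retry a flaky callable with doubling delays between attempts.
import time

def with_backoff(fetch, retries: int = 3, base_delay: float = 0.5, sleep=time.sleep):
    """Call fetch(), retrying failures with exponential backoff."""
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the last error
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

Injecting `sleep` keeps the helper testable without real waits.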
Troubleshooting
Common Production Issues:
Server won't start: Check port availability and environment variables
# Check port usage
lsof -ti:8000
# Validate configuration
python main.py validate
# Check environment
python main.py config
Ollama connection issues: Ensure Ollama is running and accessible
# Check if Ollama is running
ollama list
# If not running, start Ollama
ollama serve
# Test connection to Ollama
curl http://localhost:11434/api/version
# Verify model is available
ollama run llama3 "Hello, test message"
High memory usage: Adjust worker count and connection limits
# Reduce workers in production
uvicorn weather:app --workers 2 --max-requests 1000
API timeouts: External weather services may be slow
# Check API status
curl -w "%{time_total}\n" http://localhost:8000/health
Rate limiting issues: Adjust limits in environment variables
# In .env file
RATE_LIMIT_PER_MINUTE=200
SERVER_TIMEOUT=45.0
Log Analysis
# Find errors in logs
grep -i error /var/log/weather-mcp/server.log
# Check API performance
grep "response_time" /var/log/weather-mcp/server.log
# Monitor rate limiting
grep "rate.*limit" /var/log/weather-mcp/server.log
Contributing
We welcome contributions! Here's how to get started:
Fork the repository
Create a feature branch: git checkout -b feature/amazing-feature
Make your changes
Add tests for new functionality
Run the test suite: uv run main.py test
Commit your changes: git commit -m 'Add amazing feature'
Push to the branch: git push origin feature/amazing-feature
Open a Pull Request
Areas for Contribution:
New MCP Servers: Add weather-adjacent services (traffic, events, etc.)
Enhanced NLP: Improve location extraction and query understanding
Advanced Orchestration: Implement complex multi-step workflows
Data Sources: Integrate additional weather APIs and services
Documentation: Improve guides and examples
Production Features: Add monitoring, caching, and performance improvements
Security Enhancements: Additional authentication methods and security hardening
Docker Deployment Checklist
Pre-deployment
Docker Engine 20.10+ installed
Docker Compose v2.0+ or docker-compose v1.29+ installed
System has 8GB+ RAM available
10GB+ disk space for Ollama models
Internet connectivity for API access and model downloads
Environment Setup
Repository cloned and scripts made executable (chmod +x *.sh)
Environment validated (./validate-docker.sh)
Production environment configured (.env.production)
SSL certificates configured (if using HTTPS)
Deployment Verification
All containers started successfully (docker-compose ps)
Health checks passing (curl localhost:8000/health)
Ollama models downloaded (docker-compose logs ollama-setup)
Weather API endpoints responding (curl localhost:8000/tools/get_weather)
Logs show no errors (docker-compose logs)
Production Readiness
API keys configured and secure
Rate limiting configured appropriately
CORS settings configured for your domain
Monitoring and alerting configured
Backup strategy for configuration and data
Resource limits set for containers
License
This project is licensed under the MIT License - see the LICENSE file for details.
Demo Scenarios
Open http://localhost:8501 and try these scenarios:
Personal Planning
"Plan my outdoor workout routine for San Francisco this week"
"Should I bring an umbrella to my meeting in Seattle tomorrow?"
"What's the best day for a picnic in Central Park this weekend?"
"When should I schedule my outdoor photography session in London?"
Travel Intelligence
"I'm flying from New York to Los Angeles tomorrow - any weather concerns?"
"Compare weather conditions for my business trip: Boston, Chicago, Denver"
"Best time to visit Tokyo this month based on weather patterns?"
"Should I pack winter clothes for my trip to Montreal next week?"
Smart Monitoring
"Set up weather alerts for my daily commute from Brooklyn to Manhattan"
"Monitor severe weather for my company's offices in California and Texas"
"Alert me if temperature drops below freezing in Chicago this week"
"Watch for storm systems affecting my weekend camping trip in Yosemite"
Business Applications
"Weather impact analysis for our retail stores in Florida, Georgia, and South Carolina"
"Construction weather forecast for our project sites in Denver and Phoenix"
"Event planning weather assessment for outdoor venues this month"
Each query demonstrates:
Multi-Agent Coordination: Automatic routing to specialized agents
Context Awareness: Understanding complex, multi-part requests
Intelligent Analysis: Data fusion from multiple weather sources
Proactive Recommendations: Actionable insights beyond raw data
Dependencies
See requirements.txt for the complete list of dependencies. Key packages:
FastAPI: REST API framework
LangChain: LLM integration (optional for advanced features)
LangGraph: Advanced agentic orchestration
MCP: Model Context Protocol implementation
Requests/HTTPX: HTTP client libraries
Pydantic: Data validation
Acknowledgments
Current System Status (Updated: October 16, 2025)
Live Services
Active Agents
Smart Alert Agent: Proactive weather monitoring with custom thresholds
Weather Intelligence Agent: Multi-source data analysis and forecasting
Travel Agent: Location-based planning and recommendations
Agent Coordination Hub: Central orchestration and routing system
Try It Now
Open: http://localhost:8501 (Streamlit Chat)
Ask: "Set up weather alerts for San Francisco with temperature thresholds"
Watch: Multi-agent coordination in action!
System Health
{
"status": "healthy",
"services": {
"nws_api": "available",
"wttr_in": "available",
"ollama": "healthy"
},
"performance": {
"response_time_ms": 2135.61,
"memory_usage": "available"
},
"environment": "production"
}
Built with ❤️ for the agentic AI community | Extensible • Modular • Production-Ready
Version 0.2.0 | Multi-Agent Coordination | Docker-Native | Streamlit Interface