
Agentic MCP Weather System šŸŒ¤ļøšŸ¤–

A comprehensive Agentic Model Context Protocol (MCP) system that provides intelligent weather services through orchestrated multi-server architecture. Built for scalable agentic applications with full Docker support for easy deployment.

🌟 Key Features

🐳 Docker-First Architecture

  • Complete Containerization: Everything runs in Docker containers

  • Multi-Service Orchestration: Weather server + Ollama LLM + Setup automation

  • Production Ready: Optimized Dockerfile with security best practices

  • One-Command Deployment: Full system startup with docker-compose up

šŸ”§ Modular Architecture

  • Server Registry: Automatic discovery and management of MCP servers

  • Agentic Orchestrator: Intelligent workflow coordination with local LLM

  • Multi-Server Support: Extensible framework for adding new MCP services

  • Health Monitoring: Real-time status tracking of all registered servers
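As an illustration of how the registry's health monitoring could work against the documented /health/quick endpoint, here is a minimal polling sketch (the server list, status strings, and helper names are illustrative assumptions, not the project's actual registry API):

# Hypothetical health poller; the endpoint path follows this README, other names are illustrative.
import asyncio
import httpx

SERVERS = {"weather-server": "http://localhost:8000"}  # assumed registry contents

async def check_health(name: str, base_url: str) -> str:
    """Return a coarse status string for one registered server."""
    try:
        async with httpx.AsyncClient(timeout=5.0) as client:
            resp = await client.get(f"{base_url}/health/quick")
            return "online" if resp.status_code == 200 else "error"
    except httpx.HTTPError:
        return "offline"

async def main() -> None:
    statuses = await asyncio.gather(*(check_health(n, u) for n, u in SERVERS.items()))
    for (name, _), status in zip(SERVERS.items(), statuses):
        print(f"{name}: {status}")

asyncio.run(main())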

šŸ¤– Agentic Capabilities

  • Natural Language Processing: Understand complex weather queries

  • Task Classification: Automatically route queries to appropriate handlers

  • Multi-Location Support: Compare weather across multiple cities

  • Local LLM Integration: Ollama-powered intelligent coordination
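To make the Ollama-powered coordination concrete, here is a hedged sketch of asking the local Ollama REST API to classify a query. The endpoint and payload follow Ollama's public /api/generate interface; the model choice and prompt wording are assumptions, not the orchestrator's actual prompting:

# Ask a local Ollama instance (assumed at localhost:11434) to classify a weather query.
import httpx

def classify_query(query: str) -> str:
    prompt = (
        "Classify this query as one of: weather_query, forecast_analysis, "
        f"alert_monitoring, multi_location, general_inquiry.\nQuery: {query}"
    )
    resp = httpx.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=60.0,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()  # non-streaming replies carry a "response" field

print(classify_query("Compare weather in New York and Paris"))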

🌐 Weather Services

  • Current Weather: Real-time conditions for any city worldwide

  • Forecasting: Detailed predictions via the National Weather Service (NWS) API

  • Alert Monitoring: Weather warnings and emergency notifications

  • Multi-Source Data: Integration with weather.gov and wttr.in APIs
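For a feel of the multi-source integration, here is a small sketch of pulling current conditions from wttr.in's public JSON endpoint (field names follow wttr.in's j1 format; the project's own fetching and parsing may differ):

# Fetch current conditions from wttr.in (format=j1 returns JSON).
import httpx

def current_weather(city: str) -> str:
    resp = httpx.get(f"https://wttr.in/{city}", params={"format": "j1"}, timeout=10.0)
    resp.raise_for_status()
    cond = resp.json()["current_condition"][0]
    return f"{city}: {cond['temp_C']}°C, {cond['weatherDesc'][0]['value']}"

print(current_weather("London"))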

šŸ—ļø Docker Architecture

Docker network: weather-mcp-network
ā”œā”€ā”€ ollama:11434       (Ollama LLM server)
│   └── ollama-setup   (model downloader: llama3, phi3)
└── weather-mcp:8000   (Weather MCP server)

Exposed on the host:
  http://localhost:8000    (Weather API)
  http://localhost:11434   (Ollama API)

⚔ TL;DR - Get Started in 4 Commands

git clone <your-repo-url> && cd weather-mcp-agent
chmod +x *.sh
./validate-docker.sh          # Check system requirements
./start-docker.sh --verbose   # Start system with full logging
# āœ… System ready at http://localhost:8000

šŸ“‹ Requirements

  • Docker (20.10 or higher)

  • Docker Compose (v2.0 or higher)

  • 8GB+ RAM (for Ollama LLM models)

  • Internet connection (for weather APIs and model downloads)

šŸš€ Quick Start with Docker

Option 1: Complete System with Convenience Scripts (Recommended)

# 1. Clone the repository
git clone <your-repo-url>
cd weather-mcp-agent

# 2. Make scripts executable (Linux/macOS)
chmod +x *.sh

# 3. Validate your environment (optional but recommended)
./validate-docker.sh

# 4. Start the complete system (one command!)
./start-docker.sh

# 5. For verbose output and logs
./start-docker.sh --verbose

# 6. Stop the system when done
./stop-docker.sh

Option 1b: Manual Docker Commands

# 1. Clone the repository
git clone <your-repo-url>
cd weather-mcp-agent

# 2. Start the complete system (Weather Server + Ollama + Models)
docker-compose up -d

# 3. Monitor the startup process
docker-compose logs -f

# 4. Wait for model downloads (first run only, may take 5-10 minutes)
#    You'll see "Models ready!" when everything is set up

# 5. Test the system
curl http://localhost:8000/health

The system will be available at:

  • http://localhost:8000 (Weather API)

  • http://localhost:11434 (Ollama API)

Option 2: Development Setup with Demo

# Start system and run demo
docker-compose --profile demo up

# Or run demo separately after system is up
docker-compose up -d
docker-compose run weather-demo

Option 3: Local Development (Non-Docker)

# 1. Install Ollama locally
brew install ollama   # macOS
# or download from https://ollama.ai/download

# 2. Start Ollama and pull models
ollama serve &
ollama pull llama3
ollama pull phi3

# 3. Install Python dependencies
pip install -r requirements.txt

# 4. Environment configuration is ready!
#    The .env file is already set up for local development.
#    For production, copy .env.production.template to .env.production and customize.

# 5. Start the weather server
python main.py server

🐳 Docker Management Commands

Convenience Scripts (Recommended)

# Development/testing
./start-docker.sh             # Default setup
./start-docker.sh --dev      # Development mode (live reload)
./start-docker.sh --demo     # Include demo client
./start-docker.sh --verbose  # Show detailed logs

# Production deployment
./start-docker.sh --prod          # Production configuration
./start-docker.sh --prod --build  # Production with fresh build

# Management
./stop-docker.sh                # Stop (can restart)
./stop-docker.sh --cleanup      # Remove containers
./stop-docker.sh --remove-data  # Remove everything including models

Manual Docker Commands

# Default setup
docker-compose up -d

# Development mode (with live reload)
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d

# Production mode (optimized settings)
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

# View logs (all services)
docker-compose logs -f

# View logs (specific service)
docker-compose logs -f weather-server
docker-compose logs -f ollama

# Stop all services
docker-compose down

# Restart services
docker-compose restart

# Rebuild and restart (after code changes)
docker-compose up -d --build

# Pull latest images
docker-compose pull

Environment Configurations

Environment    Features                                                    Use Case
Default        Standard settings, API key disabled                         Local development & testing
Development    Live reload, debug logging, relaxed security                Active development
Production     Optimized performance, security enabled, resource limits    Production deployment

Maintenance Commands

# Check service status
docker-compose ps

# Access container shell
docker-compose exec weather-server bash
docker-compose exec ollama bash

# View system resources
docker-compose top

# Clean up (removes containers, networks, volumes)
docker-compose down -v --remove-orphans

# Remove all unused Docker resources
docker system prune -a

Development Commands

# Run with demo profile
docker-compose --profile demo up

# Override environment variables
ENVIRONMENT=development docker-compose up -d

# Run single command in container
docker-compose run weather-server python --version
docker-compose run weather-server python demo.py

# Mount local code for development
# (uncomment volume in docker-compose.yml: - .:/app)

šŸ“š Usage Examples

Testing the Weather API

# Health check
curl http://localhost:8000/health

# Quick health check
curl http://localhost:8000/health/quick

# Server information
curl http://localhost:8000/info

# Get current weather
curl -X POST http://localhost:8000/tools/get_weather \
  -H "Content-Type: application/json" \
  -d '{"city": "San Francisco"}'

# Get weather forecast
curl -X POST http://localhost:8000/tools/get_forecast \
  -H "Content-Type: application/json" \
  -d '{"latitude": 37.7749, "longitude": -122.4194}'

# Get weather alerts
curl -X POST http://localhost:8000/tools/get_alerts \
  -H "Content-Type: application/json" \
  -d '{"state": "CA"}'

Using the Python Client

# Run interactive demo
docker-compose run weather-demo

# Or if running locally
python demo.py

# Run orchestrator demo
python agent_orchestrator.py
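If you would rather call the REST endpoints from your own Python code than use the bundled client, here is a minimal sketch using requests against the documented tool endpoints (payload shapes mirror the curl examples above):

# Call the documented tool endpoints directly; payloads mirror the curl examples.
import requests

BASE = "http://localhost:8000"

weather = requests.post(f"{BASE}/tools/get_weather", json={"city": "San Francisco"}, timeout=30)
print(weather.json())

forecast = requests.post(
    f"{BASE}/tools/get_forecast",
    json={"latitude": 37.7749, "longitude": -122.4194},
    timeout=30,
)
print(forecast.json())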

šŸ› ļø Docker Troubleshooting

Common Issues and Solutions

Issue: Ollama container fails to start

# Check if port 11434 is already in use
lsof -i :11434

# If occupied, stop the local Ollama service
brew services stop ollama

# Check container logs
docker-compose logs ollama

Issue: Model download fails or times out

# Manually pull models with more verbose output
docker-compose exec ollama ollama pull llama3
docker-compose exec ollama ollama pull phi3

# Check available disk space (models are 4GB+ each)
docker system df

Issue: Weather server can't connect to Ollama

# Check network connectivity
docker-compose exec weather-server curl http://ollama:11434/api/version

# Verify Ollama health
curl http://localhost:11434/api/version

# Check container network
docker network ls
docker network inspect weather-mcp-network

Issue: Out of memory errors

# Check Docker memory limits
docker stats

# Increase Docker Desktop memory limit to 8GB+
# (Docker Desktop > Settings > Resources > Memory)

# Monitor container memory usage
docker-compose exec weather-server free -h

Issue: Port conflicts

# Check what's using port 8000
lsof -i :8000

# Use different ports
SERVER_PORT=8080 docker-compose up -d

# Or modify docker-compose.yml ports section

Performance Optimization

# Pre-pull all images
docker-compose pull

# Build with cache optimization
DOCKER_BUILDKIT=1 docker-compose build

# Limit container resources (add to docker-compose.yml under services):
# weather-server:
#   deploy:
#     resources:
#       limits:
#         memory: 2g
#       reservations:
#         memory: 1g

Logs and Debugging

# Detailed logging
LOG_LEVEL=DEBUG docker-compose up -d

# Follow all logs with timestamps
docker-compose logs -f -t

# Export logs for analysis
docker-compose logs > system-logs.txt

# Access container filesystems
docker-compose exec weather-server ls -la /app/logs/

āš™ļø Environment Configuration

Environment Files Overview

This project includes a committed .env file optimized for local development:

File                       Purpose                       Committed to Git
.env                       Local development defaults    āœ… Yes
.env.example               Template with all options     āœ… Yes
.env.production.template   Production template           āœ… Yes
.env.production            Your production config        āŒ No (create locally)
.env.local                 Personal overrides            āŒ No (ignored)

Local Development

The .env file is ready to use with safe defaults:

# Clone and run immediately - no .env setup needed!
git clone <your-repo>
cd weather-mcp-agent
./start-docker.sh

Local Development Features:

  • āœ… ENVIRONMENT=development

  • āœ… Debug logging enabled

  • āœ… CORS allows localhost origins

  • āœ… API key requirement disabled

  • āœ… High rate limits for testing

  • āœ… Raw data and execution logs included

Customization

Create .env.local for personal overrides (ignored by git):

# .env.local - personal overrides
LOG_LEVEL=DEBUG
OLLAMA_MODEL=phi3
SERVER_PORT=8001

Environment Variables Priority

  1. Environment variables (highest priority)

  2. .env.local (personal overrides)

  3. .env (committed defaults)
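One way this ordering can be implemented with python-dotenv (a sketch; the project's config.py may do it differently): load_dotenv with override=False never clobbers values that are already set, so loading .env.local before .env reproduces exactly the priority above.

# Layered env loading: real environment variables win, then .env.local, then .env.
from dotenv import load_dotenv

load_dotenv(".env.local", override=False)  # personal overrides, if the file exists
load_dotenv(".env", override=False)        # committed defaults fill any remaining gaps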

šŸš€ Production Deployment with Docker

Step 1: Production Environment Configuration

# 1. Clone to production server
git clone <your-repo-url>
cd weather-mcp-agent

# 2. Create production environment file
cp .env.production.template .env.production

# 3. Edit production settings (REQUIRED - update all sensitive values)
nano .env.production

Key Production Settings:

ENVIRONMENT=production
API_KEY_REQUIRED=true
API_KEY=your-secure-production-key
ALLOWED_ORIGINS=https://yourdomain.com
RATE_LIMIT_PER_MINUTE=60
LOG_LEVEL=INFO

Step 2: SSL/TLS Configuration (Optional)

# Add SSL certificates to docker-compose.yml
mkdir -p ./ssl
# Copy your cert.pem and key.pem to ./ssl/

# Update .env.production
SSL_CERT_PATH=/app/ssl/cert.pem
SSL_KEY_PATH=/app/ssl/key.pem

Step 3: Production Deployment

# Method 1: Using convenience script (recommended)
./start-docker.sh --prod --build

# Method 2: Manual deployment
docker-compose -f docker-compose.yml --env-file .env.production up -d

# Method 3: With custom configuration
docker-compose up -d --build

# Method 4: Docker with external Ollama
docker build -t weather-mcp .
docker run -p 8000:8000 --env-file .env --add-host=host.docker.internal:host-gateway weather-mcp

Step 4: Production Verification

# Check all services are running
docker-compose ps

# Verify health endpoints
curl https://yourdomain.com:8000/health
curl https://yourdomain.com:8000/info

# Check logs for any issues
docker-compose logs -f --tail=100

Production Endpoints:

  • šŸ„ GET /health - Comprehensive health check with service validation

  • ⚔ GET /health/quick - Fast health check without external calls

  • šŸ“Š GET /info - Server capabilities and metadata

  • šŸŒ¤ļø POST /tools/get_weather - Current weather (rate limited)

  • šŸ“… POST /tools/get_forecast - Weather forecast (validated coordinates)

  • 🚨 POST /tools/get_alerts - Weather alerts (US states only)

Management Commands:

# Production server management
python main.py start      # Start production server
python main.py status     # System health and status
python main.py config     # View current configuration
python main.py validate   # Validate configuration

# Development/testing (disabled in production)
python main.py interactive   # Interactive client mode
python main.py demo          # System demonstration
python main.py servers       # Server registry info

šŸ“ Project Structure

weather-mcp-agent/
ā”œā”€ā”€ main.py                  # šŸš€ Production entry point & CLI management
ā”œā”€ā”€ weather.py               # šŸŒ¤ļø Production weather MCP server
ā”œā”€ā”€ config.py                # āš™ļø Production configuration management
ā”œā”€ā”€ server_registry.py       # šŸ” Server discovery & management
ā”œā”€ā”€ simple_orchestrator.py   # šŸ¤– Agentic workflow orchestrator
ā”œā”€ā”€ agent_orchestrator.py    # 🧠 Advanced LangGraph orchestrator (optional)
ā”œā”€ā”€ mcp_client.py            # šŸ’¬ Interactive agentic client (dev only)
ā”œā”€ā”€ demo.py                  # šŸŽ® System demonstration script (dev only)
ā”œā”€ā”€ run_server.py            # ā–¶ļø Legacy server startup script
ā”œā”€ā”€ requirements.txt         # šŸ“¦ Production Python dependencies
ā”œā”€ā”€ Dockerfile               # 🐳 Production container configuration
ā”œā”€ā”€ docker-compose.yml       # šŸ™ Multi-container setup with Ollama
ā”œā”€ā”€ setup-ollama.sh          # šŸ¦™ Ollama installation and setup script
ā”œā”€ā”€ .env.example             # šŸ”§ Environment configuration template
ā”œā”€ā”€ pyproject.toml           # šŸ“ Project configuration
ā”œā”€ā”€ LICENSE                  # šŸ“„ MIT License
ā”œā”€ā”€ CONTRIBUTING.md          # šŸ¤ Contribution guidelines
ā”œā”€ā”€ SETUP.md                 # ⚔ Quick setup guide
└── README.md                # šŸ“š This comprehensive guide

šŸ’¬ Interactive Usage Examples

Basic Commands

šŸ’¬ You: servers                 # List all MCP servers
šŸ’¬ You: status                  # Show system status
šŸ’¬ You: server weather-server   # Server details
šŸ’¬ You: help                    # Show all commands

Natural Language Queries

šŸ’¬ You: What's the weather in London?
šŸ¤– Agent: šŸŒ¤ļø Current weather in London:
          šŸŒ”ļø Temperature: 15°C
          šŸ“ Conditions: Partly cloudy

šŸ’¬ You: Compare weather in New York and Paris
šŸ¤– Agent: šŸ—ŗļø Weather comparison:
          šŸŒ¤ļø New York: 22°C, Clear skies
          šŸŒ¤ļø Paris: 18°C, Light rain

šŸ’¬ You: Any weather alerts in California?
šŸ¤– Agent: āœ… No active weather alerts for California

šŸ’¬ You: Show me the forecast for Tokyo tomorrow
šŸ¤– Agent: šŸ“… Forecast for Tokyo:
          [Detailed forecast information...]

šŸ› ļø API Integration Examples

Direct Server API Calls

# Get current weather
curl -X POST http://localhost:8000/tools/get_weather \
  -H "Content-Type: application/json" \
  -d '{"city": "London"}'

# Get forecast (requires coordinates)
curl -X POST http://localhost:8000/tools/get_forecast \
  -H "Content-Type: application/json" \
  -d '{"latitude": 51.5074, "longitude": -0.1278}'

# Get weather alerts (US states only)
curl -X POST http://localhost:8000/tools/get_alerts \
  -H "Content-Type: application/json" \
  -d '{"state": "CA"}'

Server Discovery & Health

# Check server health
curl http://localhost:8000/health

# Get server capabilities
curl http://localhost:8000/info

šŸ”§ Extending the System

Adding New MCP Servers

# Register a new server in server_registry.py
from server_registry import registry, MCPServer

# Define new server
finance_server = MCPServer(
    name="finance-server",
    host="localhost",
    port=8001,
    description="Financial data and stock information",
    tools=["get_stock_price", "get_market_news", "analyze_portfolio"],
    tags=["finance", "stocks", "market"],
)

# Register it
registry.register_server(finance_server)

Custom Task Types

# Extend TaskType enum in simple_orchestrator.py
from enum import Enum

class TaskType(Enum):
    WEATHER_QUERY = "weather_query"
    FORECAST_ANALYSIS = "forecast_analysis"
    ALERT_MONITORING = "alert_monitoring"
    MULTI_LOCATION = "multi_location"
    FINANCIAL_ANALYSIS = "financial_analysis"  # New!
    GENERAL_INQUIRY = "general_inquiry"

šŸŽÆ Agentic Design Principles

1. Modularity

  • Each component has a single, clear responsibility

  • Easy to extend with new servers and capabilities

  • Loose coupling between orchestrator and servers

2. Intelligent Routing

  • Task classification determines workflow path (a rule-based sketch follows this list)

  • Location extraction enables multi-location queries

  • Error handling with graceful fallbacks
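As a rule-based complement to the LLM classifier, routing can fall back to simple keyword matching. The sketch below is illustrative (keywords and their ordering are assumptions) and reuses the TaskType values shown under "Custom Task Types" above:

# Illustrative keyword router over the TaskType values documented above.
def classify(query: str) -> str:
    q = query.lower()
    if "alert" in q or "warning" in q:
        return "alert_monitoring"
    if "forecast" in q or "tomorrow" in q:
        return "forecast_analysis"
    if "compare" in q or " and " in q:
        return "multi_location"
    if "weather" in q or "temperature" in q:
        return "weather_query"
    return "general_inquiry"

print(classify("Any weather alerts in California?"))  # -> alert_monitoring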

3. Scalability

  • Server registry supports dynamic server addition

  • Health monitoring for automatic failover

  • Async operations for concurrent processing

4. Observability

  • Detailed execution logs for debugging

  • Performance metrics and timing

  • Health status monitoring

šŸ”® Advanced Features (Optional)

LangGraph Integration

For more sophisticated agentic workflows, enable the advanced orchestrator:

# Install additional dependencies
uv add langgraph langchain-ollama

# Use agent_orchestrator.py instead of simple_orchestrator.py
from agent_orchestrator import WeatherOrchestrator

# Requires Ollama running locally
ollama serve

Multi-Agent Coordination

The system is designed to support multi-agent scenarios:

# Example: Weather + Travel planning agent
travel_query = "What's the weather like in my travel destinations this week?"

# → Orchestrator coordinates:
# 1. Extract travel destinations
# 2. Get weather for each location
# 3. Get forecasts for travel dates
# 4. Provide travel recommendations

šŸ“Š Monitoring & Debugging

Health Checks

# Check all servers
šŸ’¬ You: servers

šŸ“Š Found 1 registered servers:
   āœ… Online: 1
   āŒ Offline: 0
   āš ļø Error: 0
   ā“ Unknown: 0

Execution Logs

Every query provides detailed execution tracing:

šŸ” Execution Log: 1. Classified task as: weather_query 2. Extracted locations: ['London'] 3. Gathered weather data for 1 locations 4. Generated response

šŸ”’ Production Security

Environment Variables

Required for production deployment:

# Server Configuration
SERVER_HOST=0.0.0.0
SERVER_PORT=8000
ENVIRONMENT=production

# Security
API_KEY_REQUIRED=true
API_KEY=your-secure-api-key-here
RATE_LIMIT_PER_MINUTE=100
ALLOWED_ORIGINS=https://yourdomain.com

# Logging
LOG_LEVEL=INFO
LOG_FILE_PATH=/var/log/weather-mcp/server.log

Security Features

  • Input validation with Pydantic models (see the sketch after this list)

  • Rate limiting per endpoint (configurable)

  • API key authentication (optional)

  • CORS protection with configurable origins

  • Request size limits to prevent DoS

  • Comprehensive logging for audit trails

  • Error sanitization to prevent information leakage
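To illustrate the first item, here is a hedged sketch of Pydantic-based validation for the get_alerts payload, written in Pydantic v2 style (the model and its constraints are illustrative, not the project's actual schema):

# Illustrative request model for /tools/get_alerts; the real schema may differ.
from pydantic import BaseModel, Field, field_validator

class AlertsRequest(BaseModel):
    state: str = Field(..., min_length=2, max_length=2, description="Two-letter US state code")

    @field_validator("state")
    @classmethod
    def normalize_state(cls, v: str) -> str:
        if not v.isalpha():
            raise ValueError("state must be alphabetic, e.g. 'CA'")
        return v.upper()

print(AlertsRequest(state="ca").state)  # 'CA'
# AlertsRequest(state="C4")             # would raise a ValidationError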

šŸ“ˆ Production Monitoring

Health Checks

# Quick health check
curl http://localhost:8000/health/quick

# Comprehensive health check (includes external APIs)
curl http://localhost:8000/health

# Server info and capabilities
curl http://localhost:8000/info

Logging

Production logs are structured and include:

  • Request/response logging

  • Error tracking with stack traces

  • Performance metrics

  • Security events (rate limiting, auth failures)

# View logs (if using file logging)
tail -f /var/log/weather-mcp/server.log

# Check log level
python main.py config | grep -i log

Performance Optimization

  • Async request handling with httpx (see the retry sketch after this list)

  • Connection pooling for external APIs

  • Request timeout controls

  • Exponential backoff for API retries

  • Response caching (configurable)

  • Resource limits and rate limiting
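To illustrate the async handling and backoff items, here is a small retry helper around an async httpx call (the timeout and retry counts are assumptions, not the project's actual settings):

# Async GET with exponential backoff; constants are illustrative.
import asyncio
import httpx

async def get_with_backoff(url: str, retries: int = 3) -> httpx.Response:
    async with httpx.AsyncClient(timeout=10.0) as client:
        for attempt in range(retries + 1):
            try:
                resp = await client.get(url)
                resp.raise_for_status()
                return resp
            except httpx.HTTPError:
                if attempt == retries:
                    raise
                await asyncio.sleep(2 ** attempt)  # waits 1s, 2s, 4s, ...
    raise RuntimeError("unreachable")

resp = asyncio.run(get_with_backoff("http://localhost:8000/health/quick"))
print(resp.status_code)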

🚨 Troubleshooting

Common Production Issues:

  1. Server won't start: Check port availability and environment variables

    # Check port usage
    lsof -ti:8000

    # Validate configuration
    python main.py validate

    # Check environment
    python main.py config
  2. Ollama connection issues: Ensure Ollama is running and accessible

    # Check if Ollama is running
    ollama list

    # If not running, start Ollama
    ollama serve

    # Test connection to Ollama
    curl http://localhost:11434/api/version

    # Verify model is available
    ollama run llama3 "Hello, test message"
  3. High memory usage: Adjust worker count and connection limits

    # Reduce workers in production
    uvicorn weather:app --workers 2 --limit-max-requests 1000
  4. API timeouts: External weather services may be slow

    # Check API status
    curl -w "%{time_total}\n" http://localhost:8000/health
  5. Rate limiting issues: Adjust limits in environment variables

    # In .env file
    RATE_LIMIT_PER_MINUTE=200
    SERVER_TIMEOUT=45.0

Log Analysis

# Find errors in logs
grep -i error /var/log/weather-mcp/server.log

# Check API performance
grep "response_time" /var/log/weather-mcp/server.log

# Monitor rate limiting
grep "rate.*limit" /var/log/weather-mcp/server.log

šŸ¤ Contributing

We welcome contributions! Here's how to get started:

  1. Fork the repository

  2. Create a feature branch: git checkout -b feature/amazing-feature

  3. Make your changes

  4. Add tests for new functionality

  5. Run the test suite: uv run main.py test

  6. Commit your changes: git commit -m 'Add amazing feature'

  7. Push to the branch: git push origin feature/amazing-feature

  8. Open a Pull Request

Areas for Contribution:

  • New MCP Servers: Add weather-adjacent services (traffic, events, etc.)

  • Enhanced NLP: Improve location extraction and query understanding

  • Advanced Orchestration: Implement complex multi-step workflows

  • Data Sources: Integrate additional weather APIs and services

  • Documentation: Improve guides and examples

  • Production Features: Add monitoring, caching, and performance improvements

  • Security Enhancements: Additional authentication methods and security hardening

āœ… Docker Deployment Checklist

Pre-deployment

  • Docker Engine 20.10+ installed

  • Docker Compose v2.0+ or docker-compose v1.29+ installed

  • System has 8GB+ RAM available

  • 10GB+ disk space for Ollama models

  • Internet connectivity for API access and model downloads

Environment Setup

  • Repository cloned and scripts made executable (chmod +x *.sh)

  • Environment validated (./validate-docker.sh)

  • Production environment configured (.env.production)

  • SSL certificates configured (if using HTTPS)

Deployment Verification

  • All containers started successfully (docker-compose ps)

  • Health checks passing (curl localhost:8000/health)

  • Ollama models downloaded (docker-compose logs ollama-setup)

  • Weather API endpoints responding (curl localhost:8000/tools/get_weather)

  • Logs show no errors (docker-compose logs)

Production Readiness

  • API keys configured and secure

  • Rate limiting configured appropriately

  • CORS settings configured for your domain

  • Monitoring and alerting configured

  • Backup strategy for configuration and data

  • Resource limits set for containers

šŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

šŸŽ¬ Demo Scenarios

Try these example workflows to see the agentic capabilities:

šŸ’¬ "Plan my outdoor activities based on weather in San Francisco this weekend" šŸ’¬ "Should I cancel my flight due to weather alerts in my departure city?" šŸ’¬ "Compare weather conditions across my company's office locations" šŸ’¬ "What's the best city for a picnic this Saturday based on weather?"

šŸ“š Dependencies

See requirements.txt for the complete list of dependencies. Key packages:

  • FastAPI: REST API framework

  • LangChain: LLM integration (optional for advanced features)

  • LangGraph: Advanced agentic orchestration

  • MCP: Model Context Protocol implementation

  • Requests/HTTPX: HTTP client libraries

  • Pydantic: Data validation

šŸ™ Acknowledgments


Built with ā¤ļø for the agentic AI community | Extensible • Modular • Production-Ready
