The VayuChat MCP server enables natural language analysis of air quality and other datasets through predefined functions without requiring code for common tasks.
- **Data Management:** Load CSV files into named DataFrames, list/unload datasets, inspect structure and details, query with pandas syntax (see the example after this list), and sample rows.
- **Statistical Analysis:** Generate statistical summaries, compare groups (e.g., cities), analyze weekday vs weekend patterns, examine hourly and temporal trends, perform correlation analysis, and identify top/bottom records.
- **Visualization:** Create comparison charts (bar, box plots), time series plots, distribution histograms, hourly pattern plots, and weekday/weekend comparisons, all returned as base64-encoded PNG images.
- **Advanced Customization:** Execute arbitrary Python code for complex analyses, with access to pandas, numpy, matplotlib, and all loaded DataFrames.
- **Pre-loaded Datasets:** Air quality data (PM2.5, PM10, NO2, SO2, CO, O3), government funding data, and city metadata for immediate analysis.
- **Hybrid Architecture:** Combines natural language understanding with reliable predefined functions for consistent, user-friendly data exploration.
- Provides numerical computing and array manipulation (via numpy) within the code execution environment.
- Enables loading CSV files into DataFrames, performing statistical summaries, querying datasets, and managing data structures through natural language.
- Allows execution of Python code for complex data analysis and visualization using pandas, numpy, and matplotlib.
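The query capability accepts standard pandas `query` syntax. A minimal sketch of what that looks like (the table and column names here are illustrative, not the pre-loaded datasets' actual schema):

```python
import pandas as pd

# Illustrative table; the pre-loaded datasets' real column names may differ.
df = pd.DataFrame({
    "city": ["Delhi", "Bangalore", "Delhi"],
    "pm25": [180.0, 95.0, 210.0],
})

# Standard pandas query syntax, as accepted by the query tool
print(df.query("city == 'Delhi' and pm25 > 100"))
```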
1. Click on "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., `@VayuChat MCP Load sales.csv and plot a bar chart of total revenue by region.`
4. That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
# VayuChat MCP
Natural language analysis of air quality data using MCP (Model Context Protocol).
## Features

### Pre-loaded Datasets

- `air_quality`: Hourly PM2.5, PM10, NO2, SO2, CO, O3 readings for Delhi & Bangalore
- `funding`: Government air quality funding by city/year (2020-2024)
- `city_info`: City metadata (population, vehicles, industries, green cover)

### Analysis Tools (No Code Required!)
| Function | Description |
|----------|-------------|
| | Show available tables |
| | Display table data |
| | Detailed statistics |
| | Filter with pandas query |
| | Weekday vs weekend analysis |
| | Compare metrics across cities |
| | Correlation analysis |
| | Funding breakdown |
| | Comprehensive city profile |
### Visualization Tools

| Function | Description |
|----------|-------------|
| | Bar/box charts |
| | Time series charts |
| | Weekday vs weekend bars |
| | Funding over years |
| | Hourly patterns |
## Installation
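A minimal local setup, assuming dependencies are listed in the repository's `requirements.txt` (referenced in the deployment steps below):

```bash
# Install Python dependencies (pandas, numpy, matplotlib, gradio, etc.)
pip install -r requirements.txt
```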
## Usage

### As MCP Server (with Claude Code)
Add to your Claude Code MCP configuration:
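A sketch of the usual MCP config shape, assuming the server is launched via `app.py` (the file listed for deployment below); the `vayuchat` key and the launch command are assumptions:

```json
{
  "mcpServers": {
    "vayuchat": {
      "command": "python",
      "args": ["app.py"]
    }
  }
}
```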
### As Gradio App (HF Spaces)
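Assuming `app.py` is the Gradio entry point, a local run looks like:

```bash
python app.py
```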
Then open http://localhost:7860
### Deploy to Hugging Face Spaces

1. Create a new Space on HF (Gradio SDK)
2. Upload these files: `app.py`, `requirements.txt`, the `src/` folder, and the `data/` folder

Or connect your GitHub repo directly to HF Spaces.
## Example Queries
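Illustrative prompts, based on the tools above (phrasing is flexible):

- "Compare PM2.5 on weekdays vs weekends for Delhi and Bangalore"
- "Show hourly NO2 patterns for Delhi"
- "Plot PM10 over time for Bangalore"
- "Which city received the most air quality funding in 2023?"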
## Architecture

### Why Predefined Functions vs LLM-Generated Code?
This project uses predefined MCP functions instead of letting the LLM generate arbitrary pandas/matplotlib code. Here's why:
#### Comparison Table

| Aspect | Predefined Functions (This Approach) | LLM-Generated Code | Function-Calling LLM |
|--------|--------------------------------------|--------------------|----------------------|
| Reliability | ✅ Deterministic, always works | ❌ May hallucinate syntax | ⚠️ Better, but can miss params |
| Speed | ✅ Instant (no code generation) | ❌ Slow (generate → parse → execute) | ⚠️ Moderate |
| Cost | ✅ Minimal tokens | ❌ Long prompts with schema | ⚠️ Moderate |
| Security | ✅ No arbitrary code execution | ❌ Code injection risk | ✅ Safe |
| Consistency | ✅ Same visualization style | ❌ Random styling each time | ✅ Consistent |
| Model Size | ✅ Works with small/cheap models | ❌ Needs capable coder model | ⚠️ Needs fine-tuned model |
| Flexibility | ❌ Limited to predefined queries | ✅ Infinite flexibility | ⚠️ Limited to defined functions |
| Error Handling | ✅ Graceful, predictable | ❌ May crash, retry loops | ✅ Structured errors |
#### When to Use Each Approach

**Use Predefined Functions (this approach) when:**

- You have a known, bounded set of analysis patterns
- Users are non-technical (need consistent UX)
- Cost/latency matters (production deployment)
- You want guaranteed correct outputs
- You are using smaller/cheaper models (Haiku, GPT-3.5)

**Use LLM-Generated Code when:**

- Exploratory data analysis with unknown patterns
- Power users who can debug code
- One-off analyses
- Prototype/research phase

**Use a Function-Calling LLM when:**

- You have predefined functions but need better intent parsing
- You are using OpenAI/Claude with native function calling
- Queries are ambiguous and need sophisticated NLU
#### The Hybrid Approach (Best of Both)
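The pattern is to let a language layer map the user's sentence onto one of the predefined functions, then execute that function deterministically. A minimal sketch of the routing loop; all names here are hypothetical, not this repo's actual API:

```python
# A minimal hybrid router: natural language in, predefined function out.
# compare_weekday_weekend is a hypothetical stand-in for a real tool.

def compare_weekday_weekend(metric: str, cities: list[str]) -> str:
    # Stand-in for a deterministic, pre-tested analysis function.
    return f"weekday/weekend {metric} comparison for {', '.join(cities)}"

def route(query: str) -> str:
    # Step 1: intent parsing. A keyword router is shown for brevity;
    # an LLM with native function calling can fill the same role.
    q = query.lower()
    if "weekday" in q or "weekend" in q:
        cities = [c for c in ("Delhi", "Bangalore") if c.lower() in q]
        return compare_weekday_weekend(metric="PM2.5", cities=cities)
    raise ValueError("no matching predefined function")

# Step 2: deterministic execution; no LLM-generated code runs at this step.
print(route("Compare PM2.5 on weekdays vs weekends for Delhi and Bangalore"))
```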
This gives you:

- LLM's NLU capabilities for parsing complex queries
- Predefined functions' reliability for execution
- No code hallucination risk
- Consistent outputs every time
#### Example: Same Query, Different Approaches
**Query:** "Compare PM2.5 on weekdays vs weekends for Delhi and Bangalore"
**LLM-Generated Code (risky):**
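Representative of the kind of code an LLM might emit on the fly; the file path and column names here are assumptions, and details like these are exactly what generated code tends to get wrong:

```python
import pandas as pd

# Generated ad hoc per query: the schema ("timestamp", "city", "PM2.5")
# is guessed, and a wrong guess fails only at runtime.
df = pd.read_csv("data/air_quality.csv", parse_dates=["timestamp"])
df["is_weekend"] = df["timestamp"].dt.dayofweek >= 5
subset = df[df["city"].isin(["Delhi", "Bangalore"])]
print(subset.groupby(["city", "is_weekend"])["PM2.5"].mean())
```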
**Predefined Function (reliable):**
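Using the hypothetical `compare_weekday_weekend` from the sketch above (the repo's real tool name and parameters may differ):

```python
# One deterministic call: pre-tested logic, consistent output every time.
compare_weekday_weekend(metric="PM2.5", cities=["Delhi", "Bangalore"])
```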
#### Cost Comparison (Approximate)

| Approach | Tokens per Query | Cost (GPT-4) | Latency |
|----------|------------------|--------------|---------|
| Predefined + Keyword Router | ~100 | $0.001 | <100ms |
| Predefined + LLM Router | ~500 | $0.005 | ~500ms |
| LLM-Generated Code | ~2000+ | $0.02+ | 2-5s |
For 1000 queries/day:

- Predefined: ~$1-5/day
- LLM Code Gen: ~$20+/day
## Data Sources

- **Air quality data:** Simulated based on real patterns from Indian cities
- **Funding data:** Mock data representing typical government allocations
- **City info:** Approximations of real statistics
## License
MIT