EasyPomodoro Project Consultant
AI-powered system for consulting on the EasyPomodoro Android project using MCP (Model Context Protocol) servers.
Project Overview
This system provides:
Telegram Bot - Interactive chat for project questions (text + voice)
REST API - Backend with MCP integration for AI-powered responses
PR Code Review - Automated pull request reviews via API
Voice Input - Send voice messages via Telegram (Russian, gpt-audio-mini)
User Personalization - Customizable user profiles for tailored responses
Local Development - Run server locally for debugging
Browse and analyze project code via GitHub Copilot MCP
Search project documentation using RAG (Retrieval Augmented Generation)
Architecture
┌─────────────────┐ ┌─────────────────┐
│  Telegram User  │ │ GitHub Actions  │
└────────┬────────┘ └────────┬────────┘
         │                   │
         ↓                   ↓
┌─────────────────────────────────────────────────────────────┐
│                     Telegram Bot Client                     │
│                          (client/)                          │
│  - Handles /start command                                   │
│  - Forwards messages to backend                             │
│  - Shows "Думаю..." ("Thinking...") indicator               │
└─────────────────────────┬───────────────────────────────────┘
                          │
                          ↓
┌─────────────────────────────────────────────────────────────┐
│                  Backend Server (server/)                   │
│                        FastAPI + MCP                        │
│                                                             │
│  Endpoints:                                                 │
│  ├─ POST /api/chat        - General chat with AI            │
│  ├─ POST /api/chat-voice  - Voice input (gpt-audio-mini)    │
│  ├─ POST /api/review-pr   - AI code review for PRs          │
│  ├─ GET /api/profile/:id  - Get user profile                │
│  └─ GET /health           - Health check                    │
│                                                             │
│  Components:                                                │
│  ├─ chat_service.py       - Message processing + tool loops │
│  ├─ audio_service.py      - Voice message processing        │
│  ├─ mcp_manager.py        - MCP server connections          │
│  ├─ openrouter_client.py  - LLM API + audio models          │
│  ├─ profile_manager.py    - User personalization            │
│  └─ prompts.py            - System prompts                  │
└─────────────────────────┬───────────────────────────────────┘
                          │
           ┌──────────────┴──────────────┐
           ↓                             ↓
┌──────────────────────┐      ┌──────────────────────┐
│ GitHub Copilot MCP   │      │    RAG Specs MCP     │
│   (HTTP Transport)   │      │    (Python/stdio)    │
│                      │      │                      │
│ URL:                 │      │ Tools:               │
│  api.githubcopilot.  │      │ - rag_query          │
│  com/mcp/            │      │ - list_specs         │
│                      │      │ - get_spec_content   │
│ Tools:               │      │ - rebuild_index      │
│ - get_file_contents  │      │ - get_project_       │
│ - list_commits       │      │   structure          │
│ - get_commit         │      │                      │
│ - list_issues        │      │ Uses:                │
│ - issue_read         │      │ - GitHub API         │
│ - list_pull_requests │      │ - OpenRouter         │
│ - pull_request_read  │      │   Embeddings         │
└──────────────────────┘      └──────────────────────┘

API Endpoints
POST /api/chat
General chat endpoint for project questions.
Request:
{
  "user_id": "string",
  "message": "string"
}

Response:
{
  "response": "string",
  "tool_calls_count": 0,
  "mcp_used": false
}

POST /api/review-pr
AI-powered code review for pull requests.
Request:
{
  "pr_number": 123
}

Response:
{
  "review": "## Summary\n...",
  "tool_calls_count": 5
}

Review includes:
Documentation compliance check (via RAG)
Architecture and design patterns review
Kotlin/Android best practices
Security analysis
Performance considerations
File-by-file findings with line numbers
Verdict: APPROVE / REQUEST_CHANGES / COMMENT
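For scripted use outside of CI, the endpoint can be called from Python as well. Below is a minimal sketch using only the standard library; the helper name `build_review_request` is illustrative, not part of the codebase, and the base URL and API key are placeholders:

```python
import json
import urllib.request

def build_review_request(base_url: str, api_key: str, pr_number: int) -> urllib.request.Request:
    """Assemble the POST request for /api/review-pr (hypothetical helper)."""
    body = json.dumps({"pr_number": pr_number}).encode()
    return urllib.request.Request(
        f"{base_url}/api/review-pr",
        data=body,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_review_request("http://localhost:8000", "YOUR_API_KEY", 1)
print(req.full_url)  # http://localhost:8000/api/review-pr
# urllib.request.urlopen(req) would return the JSON review shown above
```

Reviews can take 30-60 seconds (see Troubleshooting), so set a generous timeout when actually opening the request.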
POST /api/chat-voice
Process voice messages with gpt-audio-mini.
Request:
POST /api/chat-voice
Content-Type: multipart/form-data
user_id: string
audio: file (.oga, .mp3, .wav)

Response:
{
  "transcription": null,
  "response": "AI model response",
  "latency_ms": 1653,
  "audio_tokens": 153,
  "cost_usd": 0.000092
}

Features:
Model: openai/gpt-audio-mini via OpenRouter
Language: Russian (configurable)
Max duration: 60 seconds
Max file size: 10 MB
Audio conversion via ffmpeg
No separate transcription (model directly processes audio)
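A client for this endpoint has to encode the `user_id` and `audio` fields as multipart/form-data. A standard-library-only sketch of building that request (the helper name is illustrative; the field names follow the request format above, and the `X-API-Key` header is assumed to match the other endpoints):

```python
import io
import urllib.request
import uuid

def build_voice_request(base_url: str, api_key: str, user_id: str,
                        filename: str, audio_bytes: bytes) -> urllib.request.Request:
    """Build the multipart/form-data POST for /api/chat-voice (hypothetical helper)."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    # Plain text field: user_id
    buf.write(f'--{boundary}\r\nContent-Disposition: form-data; '
              f'name="user_id"\r\n\r\n{user_id}\r\n'.encode())
    # File field: audio (.oga/.mp3/.wav, max 10 MB per the limits above)
    buf.write((f'--{boundary}\r\nContent-Disposition: form-data; name="audio"; '
               f'filename="{filename}"\r\nContent-Type: audio/ogg\r\n\r\n').encode())
    buf.write(audio_bytes)
    buf.write(f'\r\n--{boundary}--\r\n'.encode())
    return urllib.request.Request(
        f"{base_url}/api/chat-voice",
        data=buf.getvalue(),
        headers={"X-API-Key": api_key,
                 "Content-Type": f"multipart/form-data; boundary={boundary}"},
        method="POST",
    )
```

In practice an HTTP client with built-in multipart support (httpx is already in the stack) does the same thing with less ceremony.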
GET /health
Health check endpoint.
Response:
{
  "status": "healthy",
  "mcp_connected": true,
  "tools_count": 11
}

GitHub Actions Integration
Use the PR review endpoint in your CI/CD pipeline:
name: AI Code Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - name: Request AI Review
        id: review
        run: |
          RESPONSE=$(curl -s -X POST "${{ secrets.MCP_SERVER_URL }}/api/review-pr" \
            -H "X-API-Key: ${{ secrets.MCP_API_KEY }}" \
            -H "Content-Type: application/json" \
            -d '{"pr_number": ${{ github.event.pull_request.number }}}')
          echo "$RESPONSE" | jq -r '.review' > review.md

      - name: Post Review Comment
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const review = fs.readFileSync('review.md', 'utf8');
            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: '## AI Code Review\n\n' + review
            });

Required secrets:
MCP_SERVER_URL - Backend server URL (e.g., https://your-server.railway.app)
MCP_API_KEY - API key for authentication
System Components
1. Backend Server (server/)
Files:
main.py - FastAPI application entry point
app.py - API routes and endpoints
chat_service.py - Message processing with MCP tool integration
audio_service.py - Voice message processing
mcp_manager.py - MCP server connection management
mcp_http_transport.py - HTTP transport for GitHub Copilot MCP
openrouter_client.py - OpenRouter LLM + audio API integration
prompts.py - System prompts for different tasks
schemas.py - Pydantic models for API
conversation.py - Per-user conversation history
profile_manager.py - User profile management
profile_storage.py - JSON storage for profiles
auth.py - API key authentication
config.py - Configuration and environment variables
logger.py - Logging configuration
Dockerfile - Docker configuration with ffmpeg
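The "tool loops" handled by chat_service.py follow the usual chained tool-calling pattern: the model either answers directly or requests MCP tool calls, whose results are fed back until it produces a final answer. A rough sketch of that loop (all names and the reply shape are hypothetical, not the actual implementation):

```python
def run_tool_loop(llm, tools, messages, max_iterations=5):
    """Let the model call MCP tools repeatedly until it returns a plain answer.

    llm(messages) is assumed to return a dict like
    {"content": str | None, "tool_calls": [{"name": ..., "arguments": {...}}]}.
    tools maps tool names to callables (dispatched via the MCP manager).
    """
    tool_calls_count = 0
    for _ in range(max_iterations):
        reply = llm(messages)
        calls = reply.get("tool_calls") or []
        if not calls:
            # No more tool requests: the model produced the final answer
            return reply["content"], tool_calls_count
        for call in calls:
            # Dispatch to the MCP tool and feed the result back to the model
            result = tools[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
            tool_calls_count += 1
    return "Reached max_iterations without a final answer", tool_calls_count
```

The `tool_calls_count` returned here is the same kind of counter surfaced in the API responses above.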
2. Telegram Bot Client (client/)
Files:
main.py - Application entry point
bot.py - Telegram bot handlers
backend_client.py - HTTP client for backend API
config.py - Bot configuration
logger.py - Logging configuration
3. RAG MCP Server (server/mcp_rag/)
Files:
server.py - MCP server with RAG tools
github_fetcher.py - GitHub API client for /specs folder
rag_engine.py - Vector search with OpenRouter embeddings
4. MCP Servers
GitHub Copilot MCP (HTTP)
URL: https://api.githubcopilot.com/mcp/
Transport: HTTP (Streamable HTTP transport, MCP spec 2025-03-26)
Essential Tools:
get_file_contents - Read file contents from repository
list_commits / get_commit - View commit history
list_issues / issue_read - Work with issues
list_pull_requests / pull_request_read - Work with PRs
Authentication: GitHub Personal Access Token (PAT)
RAG Specs MCP (Python/stdio)
Tools:
rag_query - Search documentation with semantic similarity
list_specs - List available specification files
get_spec_content - Get full content of a spec file
rebuild_index - Rebuild the RAG index
get_project_structure - Get directory tree
Target Repository: LebedAlIv2601/EasyPomodoro
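At its core, rag_query ranks spec chunks by embedding similarity to the query. A toy sketch of that idea (this is not the actual rag_engine.py code; the index shape and file names are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rag_query(query_embedding, index, top_k=3):
    """Return the top_k document ids ranked by similarity to the query.

    index is a list of (doc_id, embedding) pairs, e.g. built from the
    repository's /specs folder via OpenRouter embeddings.
    """
    ranked = sorted(index, key=lambda item: cosine(query_embedding, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

index = [("timer_spec.md", [1.0, 0.0]), ("ui_spec.md", [0.0, 1.0])]
print(rag_query([0.9, 0.1], index, top_k=1))  # ['timer_spec.md']
```

The real engine additionally chunks documents and embeds the query with the same model used for the index; only the ranking step is shown here.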
Installation
Prerequisites
Python 3.12+ (recommended 3.14)
OpenRouter API key
GitHub Personal Access Token
Telegram bot token (for client)
ffmpeg (for voice message processing)
Server Setup
Clone repository:
git clone <repo-url>
cd McpSystem

Create virtual environment:
python3.14 -m venv venv
source venv/bin/activate

Install dependencies:
pip install -r requirements.txt

Configure environment:
cd server
cp .env.example .env
# Edit .env:
# BACKEND_API_KEY=your_secure_api_key
# OPENROUTER_API_KEY=your_openrouter_key
# GITHUB_TOKEN=your_github_pat

Run server:
python main.py

Client Setup
Configure environment:
cd client
cp .env.example .env
# Edit .env:
# TELEGRAM_BOT_TOKEN=your_bot_token
# BACKEND_URL=http://localhost:8000
# BACKEND_API_KEY=same_as_server

Run client:
python main.py

GitHub PAT Scopes
Create a Classic PAT with these scopes:
repo - Full repository access
read:org - Read organization data (optional)
read:user - Read user data
Configuration
Server Environment Variables
| Variable | Description |
| --- | --- |
| BACKEND_API_KEY | API key for authentication |
| OPENROUTER_API_KEY | OpenRouter API key |
| GITHUB_TOKEN | GitHub Personal Access Token |
| | LLM model (default: see config.py) |
| | Server port (default: see config.py) |
| | Server host (default: see config.py) |
Client Environment Variables
Variable | Description |
| Telegram bot token |
| Backend server URL |
| API key for backend |
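A fail-fast way to read these variables at startup (illustrative sketch; the real config.py may be organized differently):

```python
import os

# The three settings the client requires, per the table above
REQUIRED = ("TELEGRAM_BOT_TOKEN", "BACKEND_URL", "BACKEND_API_KEY")

def load_config(env=None):
    """Read the client's required settings, failing fast when one is missing."""
    env = os.environ if env is None else env
    missing = [key for key in REQUIRED if not env.get(key)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {key: env[key] for key in REQUIRED}
```

Failing at startup with an explicit list of missing names is easier to debug than a bot that silently cannot reach the backend.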
Technology Stack
Python 3.14 - Main language
FastAPI - Backend API framework
python-telegram-bot - Telegram integration
MCP SDK - Model Context Protocol (HTTP + stdio transports)
httpx - Async HTTP client
OpenRouter - LLM API access
Pydantic - Data validation
Deployment
Local Development (Recommended for Testing)
For detailed instructions, see LOCAL_SETUP.md
Quick start:
# Terminal 1: Backend Server
cd server
source ../venv/bin/activate
python main.py
# Terminal 2: Telegram Bot
cd client
source ../venv/bin/activate
python main.py

Prerequisites for local run:
ffmpeg installed: brew install ffmpeg (macOS)
Environment variables configured in server/.env
Client configured for local server in client/.env: BACKEND_URL=http://localhost:8000
Advantages:
Instant feedback on code changes
Full access to logs and debugging
No cloud deployment delays
Works offline (except API calls)
Railway Deployment
The server is designed for Railway deployment:
Connect repository to Railway
Set environment variables in Railway dashboard
Set Root Directory to server
Deploy automatically on push
Required Railway variables:
BACKEND_API_KEY
OPENROUTER_API_KEY
GITHUB_TOKEN
Note: Dockerfile includes ffmpeg for voice processing.
Manual Testing
# Local server
curl http://localhost:8000/health
# Railway server
curl https://your-server.railway.app/health
# Chat
curl -X POST "http://localhost:8000/api/chat" \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"user_id": "test", "message": "What is the project structure?"}'
# PR Review
curl -X POST "http://localhost:8000/api/review-pr" \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"pr_number": 1}'

Documentation
LOCAL_SETUP.md - Detailed local development guide
CLAUDE.md - Complete technical documentation
DEPLOY.md - Railway deployment instructions
PERSONALIZATION.md - User profile customization
Troubleshooting
GitHub Copilot MCP connection errors
Verify PAT has correct scopes (repo, read:org)
Check token is not expired
Check network connectivity to api.githubcopilot.com
Empty responses from PR review
Check logs for tool call errors
Verify tool_choice: required is set for the first iteration
The model may not support function calling well; try a different model
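A sketch of how that flag might be toggled when building the OpenAI-style request payload (names hypothetical; OpenRouter accepts the OpenAI-style tool_choice field):

```python
def chat_payload(model, messages, tool_schemas, first_iteration):
    """Build a chat completion payload for OpenRouter (illustrative helper).

    tool_choice="required" forces a tool call on the first loop iteration;
    later iterations fall back to "auto" so the model can answer directly.
    """
    return {
        "model": model,
        "messages": messages,
        "tools": tool_schemas,
        "tool_choice": "required" if first_iteration else "auto",
    }
```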
High latency
PR review may take 30-60 seconds due to multiple tool calls
Check OpenRouter rate limits
Voice input errors
ffmpeg not found: Install ffmpeg (brew install ffmpeg on macOS)
Audio conversion failed: Check ffmpeg installation and logs
Invalid API key: Sync BACKEND_API_KEY in server/.env and client/.env
OpenRouter 500 error: Check audio format and model availability
Port already in use (local)
# Kill process on port 8000
lsof -ti:8000 | xargs kill -9

License
This project demonstrates MCP integration for AI-powered project consultation.