# Fast MCP Server with Google Gemini Integration

A comprehensive Model Context Protocol (MCP) server built with FastAPI that integrates with the Google Gemini CLI for AI-powered task management and data processing.
## 🚀 Features

- **FastAPI-based MCP Server**: High-performance async server running on port 5000
- **Google Gemini Integration**: Seamless integration with the Gemini CLI for AI responses
- **Task Management Tools**: Complete CRUD operations for task management
- **Data Analytics**: Task statistics and CSV export capabilities
- **Comprehensive Logging**: JSON-based operation logging for tracking and debugging
- **CLI Interface**: Easy-to-use command-line tool for server interaction
- **RESTful API**: Full API documentation with Swagger UI
- **Production Ready**: Modular, clean, and well-documented code
## 📋 Prerequisites

- Python 3.8 or higher
- Google Gemini CLI installed and configured
- curl (for CLI operations)
- Git (for cloning the repository)
## 🛠️ Installation
### 1. Clone the Repository
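The repository URL is not given in this README, so it appears below as a placeholder:

```shell
# Clone the project and enter its directory (replace the placeholder URL)
git clone <repository-url>
cd <repository-directory>
```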
### 2. Create Virtual Environment
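A standard Python virtual environment setup:

```shell
# Create and activate an isolated Python environment
python3 -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
```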
### 3. Install Dependencies
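Assuming the project ships a `requirements.txt` (the file name is an assumption, not confirmed by this README):

```shell
# Install the project's Python dependencies into the active environment
pip install -r requirements.txt
```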
### 4. Install Google Gemini CLI
Follow the official Gemini CLI installation guide, then make sure the CLI is configured with your API key.
## 🚀 Quick Start

1. Start the server (it will listen on http://localhost:5000)
2. Check server status
3. List available tools
4. Use MCP tools
5. Interact with Gemini
6. View logs
7. Stop the server
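The steps above map to the bundled `mcp_cli.sh` wrapper. Most of these subcommands appear elsewhere in this README; the `tools` and `stop` subcommand names are assumptions inferred from the wrapper's other commands:

```shell
./mcp_cli.sh start                            # start the server on port 5000
./mcp_cli.sh health                           # check server status
./mcp_cli.sh tools                            # list available tools (name assumed)
./mcp_cli.sh run list_tasks                   # call an MCP tool
./mcp_cli.sh gemini "Summarize my open tasks" # interact with Gemini
./mcp_cli.sh logs                             # view operation logs
./mcp_cli.sh stop                             # stop the server (name assumed)
```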
## 📚 API Documentation

Once the server is running, visit:

- **Swagger UI**: http://localhost:5000/docs
- **ReDoc**: http://localhost:5000/redoc
## 🔧 Available MCP Tools
### 1. list_tasks

**Description**: List all tasks with optional status filtering.

**Parameters**:

- `status_filter` (optional): Filter by status (`pending`, `in_progress`, `completed`)

**Example**:

```shell
./mcp_cli.sh run list_tasks pending
```
### 2. create_task

**Description**: Create a new task.

**Parameters**:

- `title` (required): Task title
- `description` (required): Task description
- `priority` (optional): Priority level (`low`, `medium`, `high`)
- `assigned_to` (optional): User ID to assign the task

**Example**:

```shell
./mcp_cli.sh run create_task "New Feature" "Implement user dashboard" "high" "dev1"
```
### 3. update_task_status

**Description**: Update the status of an existing task.

**Parameters**:

- `task_id` (required): ID of the task to update
- `new_status` (required): New status (`pending`, `in_progress`, `completed`)

**Example**:

```shell
./mcp_cli.sh run update_task_status 1 "completed"
```
### 4. get_task_statistics

**Description**: Get comprehensive task statistics and analytics.

**Parameters**: None

**Example**:

```shell
./mcp_cli.sh run get_task_statistics
```
### 5. search_tasks

**Description**: Search tasks by title or description.

**Parameters**:

- `query` (required): Search query string

**Example**:

```shell
./mcp_cli.sh run search_tasks "authentication"
```
### 6. export_tasks_to_csv

**Description**: Export all tasks to a CSV file.

**Parameters**: None

**Example**:

```shell
./mcp_cli.sh run export_tasks_to_csv
```
## 🤖 Gemini Integration

The server integrates with the Google Gemini CLI to provide AI-powered responses.

### Features

- **Multi-turn Conversations**: Maintains context across interactions
- **Error Handling**: Graceful handling of CLI failures and timeouts
- **Logging**: All Gemini interactions are logged for tracking
- **Timeout Protection**: 30-second timeout to prevent hanging requests

### Usage Examples
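A few illustrative prompts through the CLI wrapper (the prompts themselves are just examples, not canned commands):

```shell
./mcp_cli.sh gemini "Review this error log and suggest a fix"
./mcp_cli.sh gemini "Suggest a priority order for my pending tasks"
./mcp_cli.sh gemini "Draft release notes from the completed tasks"
```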
## 📁 Project Structure
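A layout inferred from the files referenced elsewhere in this README (`requirements.txt` is an assumption; the runtime files are created by the server):

```
.
├── server.py          # FastAPI MCP server
├── mcp_cli.sh         # CLI wrapper for server interaction
├── sample_data.json   # Sample tasks, users, and projects
├── mcp_logs.json      # JSON operation log (created at runtime)
├── server.log         # Server log (created at runtime)
├── requirements.txt   # Python dependencies (assumed)
└── LICENSE            # MIT license
```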
## 📊 Logging and Monitoring

### Operation Logs

All MCP tool calls and Gemini interactions are logged to `mcp_logs.json`:
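A sketch of what a log entry might look like; the exact field names are assumptions, so check your `mcp_logs.json` for the real schema:

```json
{
  "timestamp": "2024-01-15T10:30:00Z",
  "operation": "tool_call",
  "tool": "create_task",
  "parameters": {"title": "New Feature", "priority": "high"},
  "status": "success"
}
```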
### Server Logs

Server logs are written to `server.log` and include:

- Server startup/shutdown events
- Request/response details
- Error messages and stack traces
- Performance metrics
### Health Monitoring

Check server health at any time:
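Either through the CLI wrapper or with curl directly (the `/health` endpoint path is an assumption; verify it against the Swagger UI):

```shell
./mcp_cli.sh health
# or, assuming the server exposes a /health endpoint:
curl http://localhost:5000/health
```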
## 🛡️ Error Handling

The server includes comprehensive error handling:

- **Tool Validation**: Validates tool names and parameters
- **Data Persistence**: Handles file I/O errors gracefully
- **Gemini CLI Integration**: Manages CLI failures and timeouts
- **HTTP Errors**: Proper HTTP status codes and error messages
- **Logging**: All errors are logged for debugging
## 🔧 Configuration

### Server Configuration

- **Port**: 5000 (configurable in `server.py`)
- **Host**: 0.0.0.0 (all interfaces)
- **Data File**: `sample_data.json`
- **Log File**: `mcp_logs.json`
### Gemini CLI Configuration

Ensure the Gemini CLI is properly configured:
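These are the same commands this README's Troubleshooting section gives for a missing Gemini CLI:

```shell
pip install google-generativeai
gemini config set api_key YOUR_KEY
```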
## 🚀 Deployment

### Development
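For local development, start the server with the wrapper, or run uvicorn directly with auto-reload (the `server:app` module path is an assumption based on the `server.py` file name):

```shell
./mcp_cli.sh start
# or, assuming the FastAPI app object lives at server:app
uvicorn server:app --host 0.0.0.0 --port 5000 --reload
```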
### Production

For production deployment, consider:

- Using a process manager like PM2 or systemd
- Setting up a reverse proxy with Nginx
- Implementing proper authentication
- Adding rate limiting
- Setting up monitoring and alerting
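One possible production launch command, again assuming the `server:app` module path; gunicorn with uvicorn workers is a common FastAPI pattern, not something this project mandates:

```shell
# Run 4 async worker processes behind a single bind address
gunicorn server:app \
  --workers 4 \
  --worker-class uvicorn.workers.UvicornWorker \
  --bind 0.0.0.0:5000
```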
## 🧪 Testing

### Manual Testing

1. Start the server: `./mcp_cli.sh start`
2. Test each tool: `./mcp_cli.sh run <tool_name>`
3. Test Gemini integration: `./mcp_cli.sh gemini "test prompt"`
4. Check the logs: `./mcp_cli.sh logs`
5. Verify health: `./mcp_cli.sh health`
### API Testing

Use the Swagger UI at http://localhost:5000/docs to test API endpoints directly.
## 📊 Sample Data

The project includes `sample_data.json` with:

- 5 sample tasks with different statuses and priorities
- 4 user profiles with roles and departments
- 2 project definitions with team assignments
## 🤝 Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Make your changes
4. Test thoroughly
5. Commit your changes: `git commit -m "Add feature"`
6. Push to the branch: `git push origin feature-name`
7. Submit a pull request
## 📄 License

This project is licensed under the MIT License; see the LICENSE file for details.
## 🐛 Troubleshooting

### Common Issues

**Server won't start**

- Check whether port 5000 is available
- Verify the virtual environment is activated
- Check `server.log` for error details

**Gemini CLI not found**

- Install the Gemini CLI: `pip install google-generativeai`
- Configure the API key: `gemini config set api_key YOUR_KEY`

**Permission denied on mcp_cli.sh**

- Make it executable: `chmod +x mcp_cli.sh`

**Data file errors**

- Check file permissions
- Verify the JSON format in `sample_data.json`
### Getting Help

1. Check the logs: `./mcp_cli.sh logs`
2. Review the server logs: `tail -f server.log`
3. Test server health: `./mcp_cli.sh health`
4. Check the API documentation: http://localhost:5000/docs
## 🎯 Roadmap

- Add user authentication and authorization
- Implement real-time notifications
- Add more MCP tools (file processing, email, etc.)
- Create a web dashboard
- Add unit and integration tests
- Implement caching for better performance
- Add Docker support
- Create a CI/CD pipeline
## 📞 Support

For support and questions:

- Create an issue in the repository
- Check the troubleshooting section
- Review the API documentation

---

Built with ❤️ using FastAPI and Google Gemini