# Portable MCP Toolkit

Reusable MCP server toolkit that works with ANY project.
## What This Does
Provides AI-powered code intelligence for any codebase you open in Cursor:
- 🔍 **Semantic search** - Find code by meaning (Qdrant)
- 🧠 **Pattern analysis** - Extract patterns with local LLM (Ollama)
- ⚡ **Code generation** - Generate code matching project style (phi4)
- 📊 **Context optimization** - 90% token savings vs reading full files
## Quick Start

### 1. Install Dependencies
- **Node.js** (for community MCP servers)
- **Python dependencies**
- **Ollama** (for local LLMs) - download from https://ollama.ai
- **Qdrant** (for vector search) - run via Docker, or install locally from https://qdrant.tech
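A typical setup sequence for the dependencies above. The model names (qwen2.5, phi4, mxbai-embed-large) are the ones this toolkit uses; the `requirements.txt` filename is an assumption about this repo's layout:

```shell
# Node.js MCP servers are fetched on demand by npx, so only Node itself is needed.
node --version

# Python dependencies for the intelligence server (requirements file name assumed).
pip install -r requirements.txt

# Pull the local models used by this toolkit.
ollama pull qwen2.5
ollama pull phi4
ollama pull mxbai-embed-large

# Run Qdrant via Docker (official image), exposing its default port.
docker run -d -p 6333:6333 qdrant/qdrant
```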
### 2. Configure Cursor

1. Copy the MCP settings file into Cursor's configuration
2. Edit the file and add your GitHub token (if not already added)
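A minimal sketch of what `mcp_settings.json` can look like. The exact server set is illustrative; the key point is the `${workspaceFolder}` placeholder, which is what makes the setup project-agnostic:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "${workspaceFolder}"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```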
### 3. Use with ANY Project

- Open any project in Cursor
- Cursor automatically connects the MCP servers to THIS project via `${workspaceFolder}`
## Tools Available

### Community Tools (Standard)

- `filesystem` - Read/write files in the current project
- `git` - Git operations (log, diff, status)
- `github` - GitHub API (PRs, issues, repos)
### Intelligence Tools (AI-Powered)

- `semantic_search(query)` - Find code by meaning
- `analyze_patterns(type)` - Extract code patterns
- `get_context(file, task)` - Optimized context for generation
- `generate_code(file, task)` - AI code generation
- `index_workspace()` - Index the project for search (run once)
## Usage Examples

### First Time with New Project
In Cursor chat:
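Any phrasing along these lines should work (the exact wording is illustrative):

```
Use index_workspace to index this project
```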
This indexes the project for semantic search (takes 1-2 min).
### Semantic Search
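An illustrative chat prompt (the query is a made-up example):

```
Use semantic_search to find where error handling is implemented
```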
Returns relevant code snippets from current project.
### Pattern Analysis
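An illustrative chat prompt:

```
Use analyze_patterns to summarize the async patterns in this codebase
```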
Analyzes async patterns in current project with local LLM.
### Code Generation
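An illustrative chat prompt (the task itself is a made-up example):

```
Use generate_code to add a new API endpoint that follows this project's conventions
```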
Generates code matching your project's style.
## How It Works

### Architecture
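One way to picture how the pieces described in this README fit together (a sketch, not an authoritative diagram):

```
Cursor (chat / agent)
   │  MCP protocol, scoped to ${workspaceFolder}
   ▼
MCP servers
 ├─ filesystem / git / github    (community tools)
 └─ intelligence server
      ├─ Qdrant  ← semantic search over the indexed workspace
      └─ Ollama  ← pattern analysis, context compression, generation
```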
### Why This Saves Money

**Without the intelligence layer:**

- Cursor reads 5,000 tokens per file
- 10 files = 50,000 tokens
- Cost: $0.15 per request

**With the intelligence layer:**

- Local search finds the relevant code (FREE)
- Local LLM compresses it to 500 tokens (FREE)
- Cursor only sees 500 tokens
- Cost: $0.015 per request

**90% cost reduction, same quality.**
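The 90% figure follows directly from the per-request dollar costs above (a quick arithmetic check, not part of the toolkit):

```shell
# 1 - 0.015/0.15 = 0.90, i.e. a 90% per-request cost reduction.
awk 'BEGIN { printf "%.0f%%\n", (1 - 0.015 / 0.15) * 100 }'
```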
## Portability
This toolkit is completely portable:
- ✅ Works with ANY programming language
- ✅ Works with ANY project structure
- ✅ Works with ANY codebase size
- ✅ Single setup, use everywhere
Just open a project in Cursor and all tools adapt automatically via `${workspaceFolder}`.
## Troubleshooting

### MCP servers not appearing
- Restart Cursor completely
- Check that `%APPDATA%\Cursor\User\mcp_settings.json` exists
- View Cursor logs: Help > Toggle Developer Tools > Console
### Ollama not working

- Check that Ollama is running
- Verify the models are installed
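Standard checks (11434 is Ollama's default port):

```shell
# Is the Ollama server responding?
curl http://localhost:11434/api/tags

# Which models are installed locally?
ollama list
```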
### Qdrant not working

- Check that Qdrant is running
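Qdrant's REST API listens on port 6333 by default:

```shell
# Lists collections if Qdrant is up; connection refused otherwise.
curl http://localhost:6333/collections
```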
### Semantic search returns "not indexed"

Run `index_workspace()` in Cursor chat. This only needs to run once per project.
## Cost Comparison

| Approach | Monthly Cost | Speed | Privacy |
|----------|--------------|-------|---------|
| Cursor only | $100-150 | Fast | Cloud |
| This toolkit | $20 (Cursor) | Faster | Local |
**Savings: $80-130/month**
## License
MIT
## Contributing

This is a personal toolkit, but contributions are welcome!