# AIStack-MCP

**Enterprise-Grade MCP Orchestration for Modern Development**

Dual-mode MCP orchestration that solves the isolation vs. coordination dilemma: local-first, production-ready, and 90% cheaper than cloud-only approaches.
## Current Status (v1.2.0)

**Latest Release:** v1.2.0, MCP Registry Integration & Template System

### What's New in v1.2.0
| Feature | Status | Description |
| --- | --- | --- |
| MCP Registry | New | Browse and install 500+ community MCP servers |
| Template System | New | Pre-built configs (minimal, standard, full) |
| Server Installer | New | One-command installation for npm/PyPI/Docker servers |
| Registry Search | New | Search by keywords, category, runtime |
| Code Intelligence | Stable | Semantic search, pattern analysis, code generation |
| Dual-Mode | Stable | Single-repo isolation and multi-repo orchestration |
| 88 Tests Passing | Stable | Comprehensive test coverage |
### Quick Stats

- **Community servers:** 500+ available via the registry
- **Templates:** 3 pre-built configurations
- **Test coverage:** 88 unit tests passing
- **Documentation:** 15+ guides and troubleshooting docs
- **Production ready:** CI/CD validated, enterprise-tested
## Why This Matters

**The Problem:** MCP servers require careful isolation for security, but modern development often spans multiple repositories. You're forced to choose between safe isolation (one repo at a time) and productivity (cross-repo intelligence).

**The Solution:** AIStack-MCP provides dual-mode orchestration: switch between isolated single-repo mode and coordinated multi-repo mode with a single command, and get the best of both worlds.
### Key Differentiators

| What Makes This Different | Why It Matters |
| --- | --- |
| One-command mode switching | Switch context in seconds, not minutes |
| Proven 2025 patterns | Git multi-repo support, MCP coordination |
| Production-ready security | Workspace isolation, explicit permissions |
| 90% cost reduction | Local LLM + vector search = free intelligence |
| Enterprise validation | CI-ready scripts, health checks, monitoring |
## Features

### Core Capabilities

| Feature | Description |
| --- | --- |
| Single-Repo Isolation | Portable configs, maximum security, per-project permissions |
| Multi-Repo Orchestration | Cross-repo semantic search, unified context, CORE workspace coordination |
| One-Command Switching | `switch_to_single_repo.ps1` / `switch_to_multi_repo.ps1` with automatic validation |
| Health Monitoring | Real-time service checks, dependency validation, configuration verification |
| Local-First AI | Ollama (LLM inference) + Qdrant (vector search) = 100% local, 100% private |
| 90% Cost Reduction | Pre-process with local AI; send only compressed context to Claude |
| Universal Compatibility | Works with any language and framework: Python, TypeScript, Rust, Go, Java, and more |
### Developer Experience

| Feature | Description |
| --- | --- |
| Interactive Setup Wizard | Guides new users through the complete setup |
| CI-Ready Validation | `validate_mcp_config.py` with `--strict` mode for zero-warning builds |
| Dev Environment Dashboard | Shows service status, models, and collections at a glance |
| Comprehensive Documentation | Troubleshooting guides, best practices, real-world examples |
| Production-Tested Patterns | Battle-tested configurations from enterprise deployments |
## Architecture

### Data Flow & Cost Savings

1. **You ask a question:** Cursor receives your prompt
2. **Local search first:** Qdrant finds relevant code chunks (free)
3. **Local compression:** Ollama summarizes the context (free)
4. **Minimal transmission:** only 500-1000 tokens are sent to Claude
5. **Final generation:** Claude generates with full understanding

**Result:** 90% fewer tokens, same quality, 100% privacy for local processing.
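The flow above can be sketched in miniature. Everything in this sketch is illustrative: the helper names are hypothetical, and keyword overlap stands in for real vector search, but the shape (retrieve locally, compress locally, send only a small prompt) matches the pipeline.

```python
def retrieve_chunks(question, index):
    """Rank indexed snippets by naive keyword overlap: a stand-in for
    Qdrant's vector similarity search."""
    words = set(question.lower().split())
    scored = [(len(words & set(text.lower().split())), text) for text in index]
    return [text for score, text in sorted(scored, reverse=True) if score > 0][:3]

def compress_context(chunks, budget_tokens=1000):
    """Stand-in for Ollama summarization: enforce a token budget
    (very roughly 4 characters per token)."""
    return "\n".join(chunks)[: budget_tokens * 4]

def build_prompt(question, index):
    """Only the compressed context plus the question would go to Claude."""
    context = compress_context(retrieve_chunks(question, index))
    return f"Context:\n{context}\n\nQuestion: {question}"

index = [
    "connect to the db using psycopg and a pooled connection",
    "render the html page from a jinja template",
]
prompt = build_prompt("how do we connect to the db", index)
```

The real servers expose these steps as MCP tools rather than plain functions; the point is that the expensive model only ever sees the small, pre-filtered prompt.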
## Quick Start

### Path 1: New Users (Recommended)

The wizard automatically:

- Checks all dependencies
- Guides mode selection
- Configures services
- Validates the setup

### Path 2: Experienced Users

### Path 3: CI/CD Integration
## Community Tools (v1.2.0)

### Browse 500+ MCP Servers

Search for tools by keyword, or list popular servers.

### Install Community Tools

One command installs a community server, for example the PostgreSQL server or the Slack integration.

### Apply Templates

Three pre-built templates are available:

- **Minimal:** search only
- **Standard:** recommended
- **Full:** all features

See the Registry Documentation for the full guide.
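As a rough illustration of registry search, filtering by keyword, category, and runtime (the criteria the feature table names) could look like the sketch below. The entries and schema are invented for the example and are not the real registry format.

```python
# Hypothetical registry entries; the real registry schema is not shown
# in this README.
servers = [
    {"name": "postgres", "category": "database", "runtime": "npm",
     "keywords": ["sql", "postgres", "database"]},
    {"name": "slack", "category": "chat", "runtime": "npm",
     "keywords": ["slack", "messaging"]},
    {"name": "qdrant", "category": "search", "runtime": "docker",
     "keywords": ["vector", "semantic"]},
]

def search_registry(servers, keyword=None, category=None, runtime=None):
    """Filter registry entries on any combination of criteria."""
    results = servers
    if keyword:
        results = [s for s in results if keyword in s["keywords"]]
    if category:
        results = [s for s in results if s["category"] == category]
    if runtime:
        results = [s for s in results if s["runtime"] == runtime]
    return results
```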
## Installation

### System Requirements

| Requirement | Minimum | Recommended |
| --- | --- | --- |
| OS | Windows 10 | Windows 11 |
| Python | 3.8 | 3.11+ |
| Node.js | 18.x | 20.x LTS |
| RAM | 8 GB | 16 GB |
| Disk | 10 GB | 20 GB (for models) |
| Docker | Optional | Recommended |
### Step 1: Prerequisites

### Step 2: Python Dependencies

### Step 3: Local AI Services

**Ollama:**

1. Download from ollama.ai
2. Install and start the service
3. Pull the required models, e.g. `ollama pull qwen2.5`, `ollama pull phi4`, and `ollama pull mxbai-embed-large` (for embeddings)
4. Verify with `ollama list`

**Qdrant:**

- Option A: Docker (recommended), e.g. `docker run -p 6333:6333 qdrant/qdrant`
- Option B: Native installation; download from qdrant.tech
- Verify at http://localhost:6333/dashboard
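Before moving on, it helps to confirm both services answer locally. A minimal Python check in the spirit of the project's validation scripts, using the services' standard local endpoints (Ollama's `/api/tags` on port 11434, Qdrant's `/collections` on port 6333):

```python
# Minimal local health check for the two AI services.
import json
import urllib.error
import urllib.request

def service_up(url, timeout=2):
    """True if an HTTP GET to `url` returns a 2xx response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

status = {
    "ollama": service_up("http://localhost:11434/api/tags"),
    "qdrant": service_up("http://localhost:6333/collections"),
}
print(json.dumps(status, indent=2))
```

Both values should be `true` once the services are running; `false` means the service is down or listening on a different port.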
### Step 4: Configuration

> **Tip:** If Cursor hangs on startup, ensure you're using the `cmd /c` wrapper pattern. See the Windows MCP Fix.
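For reference, a Windows-safe `.cursor/mcp.json` entry using that wrapper generally has the shape below. The server name and script path are illustrative; the switch scripts generate the real file.

```json
{
  "mcpServers": {
    "aistack-intelligence": {
      "command": "cmd",
      "args": ["/c", "python", "mcp_intelligence_server.py"]
    }
  }
}
```

Launching Python through `cmd /c` avoids the Windows STDIO transport issue described in the Troubleshooting section.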
## Operating Modes

### Mode Comparison

| Feature | Single-Repo Mode | Multi-Repo Mode |
| --- | --- | --- |
| Isolation | Maximum (per-repo) | Shared (CORE access) |
| Portability | Portable per-repo configs | Relative paths |
| Security | Explicit permissions | CORE has all access |
| Cross-repo search | One repo only | All linked repos |
| Setup complexity | Simple | Requires linking |
| Best for | Focused work, security | Multi-package, microservices |
### Switching Modes

Run the switch script for the target mode, then restart Cursor so the new configuration loads.
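What a switch script does can be sketched as two writes: the mode's MCP config goes to `.cursor/mcp.json`, and the active mode is recorded in `.cursor/ACTIVE_MODE.txt` as an audit trail. A minimal Python sketch with placeholder config contents (the real work is done by `scripts\switch_to_single_repo.ps1` / `scripts\switch_to_multi_repo.ps1`):

```python
# Illustrative mode switch: write the config and record the active mode.
import json
import tempfile
from pathlib import Path

def switch_mode(workspace, mode, config):
    """Write .cursor/mcp.json and .cursor/ACTIVE_MODE.txt for `mode`."""
    if mode not in ("single-repo", "multi-repo"):
        raise ValueError(f"unknown mode: {mode}")
    cursor_dir = Path(workspace) / ".cursor"
    cursor_dir.mkdir(parents=True, exist_ok=True)
    (cursor_dir / "mcp.json").write_text(json.dumps(config, indent=2))
    (cursor_dir / "ACTIVE_MODE.txt").write_text(mode)  # audit trail
    return cursor_dir

ws = tempfile.mkdtemp()
cursor_dir = switch_mode(ws, "single-repo", {"mcpServers": {}})
```

Because Cursor caches its MCP configuration, a full restart is still needed after the files change.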
### Multi-Repo Setup
## Usage Guide

### Scenario 1: First-Time Setup

Expected output:

### Scenario 2: Daily Development

### Scenario 3: Multi-Repo Development

### Scenario 4: Team Onboarding

Share these commands with new team members:

Reference: docs/BEST_PRACTICES.md
## Project Structure

## Tools Reference
### Available MCP Tools

| Tool | Description | Cost |
| --- | --- | --- |
| `semantic_search` | Find code by meaning using vector similarity | Free |
| Pattern analysis | Extract patterns using the local LLM | Free |
| Context builder | Get optimized context for a task | Free |
| Code generation | Generate code matching the project style | Free |
| Indexer | Build the vector index (run once) | Free |
| Diagnostics | Health check and diagnostics | Free |
### When to Use Each Tool

| Task | Recommended Tool | Why |
| --- | --- | --- |
| "Where is X implemented?" | `semantic_search` | Finds by meaning, not exact text |
| "What patterns exist for Y?" | Pattern analysis | The local LLM extracts and summarizes |
| "I need to modify file Z" | Context builder | Provides optimized context |
| "Add feature to file W" | Code generation | Matches the existing style |
| "Is my setup correct?" | Diagnostics | Comprehensive diagnostics |
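The idea behind `semantic_search` is that code chunks and the query are compared as embedding vectors, so matches are ranked by meaning rather than exact text. The sketch below uses tiny made-up 3-d vectors purely to show the cosine-similarity ranking; real embeddings come from the local embedding model served by Ollama.

```python
# Ranking by meaning: score chunk vectors against the query vector.
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

chunks = {
    "def open_db_connection(url): ...": [0.9, 0.1, 0.0],
    "def render_template(name): ...": [0.1, 0.8, 0.2],
}
# Pretend embedding of: "how do we connect to the database?"
query_vec = [0.85, 0.15, 0.05]

ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_vec), reverse=True)
```

The database helper ranks first even though the query shares no literal tokens with it, which is exactly why semantic search beats plain text search for "where is X implemented?" questions.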
## Performance & Cost

### Real-World Metrics

| Metric | Without AIStack | With AIStack | Improvement |
| --- | --- | --- | --- |
| Tokens per request | 50,000 | 5,000 | 90% reduction |
| Monthly API cost | $100-150 | $20 | $80-130 saved |
| Search latency | N/A | <100 ms | Near-instant results |
| Context accuracy | Variable | Optimized | Better responses |
| Data privacy | Cloud-processed | Local-first | 100% private |
### Cost Breakdown
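A worked version of the breakdown, consistent with the token figures in the metrics table. The per-token price and monthly request count below are assumptions for illustration only, not quoted API prices.

```python
# Worked cost arithmetic: 90% of tokens are handled locally for free,
# so only the compressed remainder is billed.
PRICE_PER_1K_TOKENS = 0.01   # assumed blended USD price per 1K tokens
REQUESTS_PER_MONTH = 250     # assumed usage

tokens_without = 50_000      # raw context per request (from the table)
tokens_with = 5_000          # locally compressed context per request

cost_without = tokens_without / 1000 * PRICE_PER_1K_TOKENS * REQUESTS_PER_MONTH
cost_with = tokens_with / 1000 * PRICE_PER_1K_TOKENS * REQUESTS_PER_MONTH
reduction = 1 - tokens_with / tokens_without  # 90% fewer billed tokens

print(f"${cost_without:.2f}/mo -> ${cost_with:.2f}/mo")
```

Whatever the actual price per token, the saving scales with the token reduction: shrinking the context 10x shrinks the bill 10x.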
### Memory Footprint

| Component | Memory Usage |
| --- | --- |
| Ollama (idle) | ~500 MB |
| Ollama (inference) | 4-8 GB |
| Qdrant | ~200 MB |
| MCP servers | ~100 MB total |
## Troubleshooting

### Issue: Cursor Crashes or Hangs on Startup (Windows)

**Symptoms:** Cursor freezes when MCP servers start, or crashes immediately.

**Cause:** Windows STDIO transport incompatibility with Python.

**Solution:**

**Verification:** `.\scripts\switch_to_single_repo.ps1` generates the correct config.
### Issue: MCP Servers Not Appearing

**Symptoms:** No MCP tools are available in Cursor chat.

**Cause:** Cursor didn't load the configuration.

**Solution:**

1. Restart Cursor completely (close all windows)
2. Check that `.cursor/mcp.json` exists
3. View the logs: Help > Toggle Developer Tools > Console

**Verification:**
### Issue: Semantic Search Returns Empty

**Symptoms:** `semantic_search` returns no results.

**Cause:** The workspace is not indexed.

**Solution:**

**Verification:** Check the Qdrant collections at http://localhost:6333/dashboard
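Indexing the workspace means splitting source files into chunks that can each be embedded and stored in a Qdrant collection. A simple line-window chunker is one possible strategy; the real indexer's strategy isn't specified here.

```python
# One possible chunking strategy for workspace indexing: fixed line windows.
import os
import tempfile
from pathlib import Path

def chunk_file(path, lines_per_chunk=40):
    """Split a source file into windows of `lines_per_chunk` lines."""
    lines = Path(path).read_text(errors="ignore").splitlines()
    return ["\n".join(lines[i:i + lines_per_chunk])
            for i in range(0, len(lines), lines_per_chunk)]

# Demo on a throwaway 100-line file: expect 3 chunks (40 + 40 + 20 lines).
tmp = tempfile.NamedTemporaryFile("w", suffix=".py", delete=False)
tmp.write("\n".join(f"line {i}" for i in range(100)))
tmp.close()
chunks = chunk_file(tmp.name)
os.unlink(tmp.name)
```

If the collection is empty, search has nothing to rank, which is exactly the "returns no results" symptom above.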
### Issue: Ollama Connection Failed

**Symptoms:** "Cannot connect to Ollama" errors.

**Cause:** The Ollama service is not running.

**Solution:** Start the Ollama service and verify it responds with `ollama list`.
### Issue: Mode Switch Not Taking Effect

**Symptoms:** Config changes don't apply.

**Cause:** Cursor caches the MCP configuration.

**Solution:**

1. Run `.\scripts\switch_to_*.ps1`
2. Completely restart Cursor (not just reload)
3. Check `.cursor/ACTIVE_MODE.txt`
## FAQ

**How is this different from GitHub Copilot?**

Copilot provides inline completions. AIStack-MCP provides:

- Semantic search across your entire codebase
- Pattern analysis using local LLMs
- Cross-repo intelligence in multi-repo mode
- 90% cost reduction through local preprocessing
- 100% privacy for local processing

They complement each other; use both!
**Why use local AI instead of cloud-only?**

- **Cost:** local LLM inference is free
- **Privacy:** code never leaves your machine for search and analysis
- **Speed:** vector search takes under 100 ms, with no network latency
- **Availability:** works offline once indexed
**Does it work with editors other than Cursor?**

Currently optimized for Cursor IDE. VS Code support is on the roadmap (v1.3.0).
**Which programming languages are supported?**

All of them. The system works with any text-based code:

- Python, JavaScript, TypeScript
- Rust, Go, Java, C#, C++
- Ruby, PHP, Swift, Kotlin
- And more
**Is it production-ready?**

Yes. AIStack-MCP includes:

- CI-ready validation scripts
- Comprehensive error handling
- Health monitoring
- Production-tested configurations
- Enterprise security patterns
**How is security handled?**

- **Single-repo mode:** maximum isolation, per-project permissions
- **Multi-repo mode:** explicit linking required, CORE workspace controlled
- **Local processing:** sensitive code never leaves your machine
- **Audit trail:** `.cursor/ACTIVE_MODE.txt` tracks mode changes

See docs/BEST_PRACTICES.md for security guidelines.
**Can my whole team use it?**

Absolutely. Share the repository and have team members run the setup wizard. See docs/BEST_PRACTICES.md for team workflows.
## Advanced Topics

### 1. Multi-Repo Orchestration Patterns

**When to use multi-repo mode:**

- Python multi-package projects
- Microservices architectures
- Monorepo-style development with separate repos

**Linking strategies:**

- **Symlinks:** best for local development (requires Admin)
- **Clones:** no Admin required, independent copies
- **Submodules:** version-controlled links
### 2. CI/CD Integration

### 3. Custom Tool Development

Extend `mcp_intelligence_server.py`:
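A hedged sketch of the extension pattern: register a handler under a tool name so the server can expose it. The decorator and registry below are invented for this sketch; the actual wiring in `mcp_intelligence_server.py` may differ (for instance, it may use the MCP SDK's own registration), and `count_todos` is a hypothetical example tool.

```python
# Illustrative tool registry: name + description + handler.
TOOLS = {}

def tool(name, description):
    """Decorator that registers a function as a named tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "handler": fn}
        return fn
    return register

@tool("count_todos", "Count TODO markers in a source file")
def count_todos(text: str) -> int:
    """Hypothetical example: count lines that start with a TODO marker."""
    return sum(line.strip().startswith(("# TODO", "// TODO"))
               for line in text.splitlines())

result = TOOLS["count_todos"]["handler"]("# TODO: fix\nx = 1\n# TODO: test")
```

Keeping tools as plain registered functions makes them easy to unit-test outside the MCP transport, which fits the project's CI-first validation style.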
### 4. Team Workflows

**Decision tree for mode selection:**
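The decision tree boils down to the mode-comparison criteria: strict isolation wins over cross-repo convenience. As a sketch (parameter names are illustrative):

```python
# Mode selection: isolation requirements trump cross-repo convenience.
def choose_mode(needs_cross_repo_search, strict_isolation_required):
    if strict_isolation_required:
        return "single-repo"   # maximum isolation, explicit permissions
    if needs_cross_repo_search:
        return "multi-repo"    # unified context across linked repos
    return "single-repo"       # simplest, safest default

mode = choose_mode(needs_cross_repo_search=True, strict_isolation_required=False)
```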
### 5. Production Deployment
## Roadmap

### v1.2.0 (Current Release)

- ✅ MCP Registry integration (browse 500+ community servers)
- ✅ Template system (minimal, standard, full)
- ✅ Server installer (npm, PyPI, Docker)
- ✅ Community tools management scripts
- ✅ Dual-mode orchestration (single/multi-repo)
- ✅ Complete validation suite
- ✅ Interactive setup wizard
- ✅ Production-ready patterns
- ✅ Comprehensive documentation
### v1.3.0 (Planned)

- VS Code extension support
- Additional LLM backends (Claude local, GPT4All)
- Enhanced caching layer
- Performance dashboard

### v2.0.0 (Future)

- Optional cloud sync
- Team collaboration features
- Admin dashboard
- Usage analytics
## Contributing

We welcome contributions! Here's how to get started.

### Reporting Bugs

Open an issue with:

- A clear description of the problem
- Steps to reproduce
- Expected vs. actual behavior
- System information (OS, Python version, etc.)

### Feature Requests

Open a discussion to propose new features.

### Development Setup
### Pull Request Process

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Run validation (`python scripts\validate_mcp_config.py --strict`)
5. Commit (`git commit -m 'feat: Add amazing feature'`)
6. Push (`git push origin feature/amazing-feature`)
7. Open a Pull Request
### Coding Standards

- **PowerShell:** follow PSScriptAnalyzer rules
- **Commits:** use Conventional Commits
## Acknowledgments

This project stands on the shoulders of giants:

- **Model Context Protocol:** the foundation for AI-IDE integration
- **MCP Community Servers:** Filesystem, Git, and GitHub implementations
- **Ollama:** local LLM inference made simple
- **Qdrant:** high-performance vector search
- **Cursor:** the AI-first IDE
## Related Projects

- **MCP Specification:** protocol documentation
- **MCP Servers:** official server implementations
- **Ollama:** run LLMs locally
- **Qdrant:** vector similarity search
## License

This project is licensed under the MIT License; see the LICENSE file for details.

⭐ Star this repo if it helped you!

Made with ❤️ for the MCP community