
🚀 AIStack-MCP

Enterprise-Grade MCP Orchestration for Modern Development

Dual-mode MCP orchestration that solves the isolation vs. coordination dilemma—local-first, production-ready, and 90% cheaper than cloud-only approaches.



📊 Current Status (v1.2.0)

Latest Release: v1.2.0 — MCP Registry Integration & Template System

What's New in v1.2.0

| Feature | Status | Description |
|---------|--------|-------------|
| 🗂️ MCP Registry | ✅ NEW | Browse and install 500+ community MCP servers |
| 📋 Template System | ✅ NEW | Pre-built configs (minimal, standard, full) |
| 🔧 Server Installer | ✅ NEW | One-command installation for npm/PyPI/Docker servers |
| 🔍 Registry Search | ✅ NEW | Search by keywords, category, runtime |
| 🧠 Code Intelligence | ✅ Stable | Semantic search, pattern analysis, code generation |
| 🔄 Dual-Mode | ✅ Stable | Single-repo isolation & multi-repo orchestration |
| ✅ 88 Tests Passing | ✅ Stable | Comprehensive test coverage |

Quick Stats

  • Community Servers: 500+ available via registry

  • Templates: 3 pre-built configurations

  • Test Coverage: 88 unit tests passing

  • Documentation: 15+ guides and troubleshooting docs

  • Production Ready: CI/CD validated, enterprise-tested


💡 Why This Matters

The Problem: MCP servers require careful isolation for security, but modern development often spans multiple repositories. You're forced to choose between safe isolation (one repo at a time) or productivity (cross-repo intelligence).

The Solution: AIStack-MCP provides dual-mode orchestration—switch between isolated single-repo mode and coordinated multi-repo mode with a single command. Get the best of both worlds.

Key Differentiators

| What Makes This Different | Why It Matters |
|---------------------------|----------------|
| 🔄 One-command mode switching | Switch context in seconds, not minutes |
| 🏗️ 2025 proven patterns | Git multi-repo support, MCP coordination |
| 🔒 Production-ready security | Workspace isolation, explicit permissions |
| 💰 90% cost reduction | Local LLM + vector search = FREE intelligence |
| ✅ Enterprise validation | CI-ready scripts, health checks, monitoring |


📑 Table of Contents

  • Features · Architecture · Quick Start · Community Tools · Installation
  • Operating Modes · Usage Guide · Project Structure · Tools Reference
  • Performance & Cost · Troubleshooting · FAQ · Advanced Topics · Roadmap · Contributing · License


✨ Features

Core Capabilities

| Feature | Description |
|---------|-------------|
| 🔒 Single-Repo Isolation | Portable `${workspaceFolder}` configs, maximum security, per-project permissions |
| 🌐 Multi-Repo Orchestration | Cross-repo semantic search, unified context, CORE workspace coordination |
| ⚡ One-Command Switching | `switch_to_single_repo.ps1` / `switch_to_multi_repo.ps1` with automatic validation |
| 🩺 Health Monitoring | Real-time service checks, dependency validation, configuration verification |
| 🧠 Local-First AI | Ollama (LLM inference) + Qdrant (vector search) = 100% local, 100% private |
| 💰 90% Cost Reduction | Pre-process with local AI, send only compressed context to Claude |
| 🌍 Universal Compatibility | Works with Python, TypeScript, Rust, Go, Java, and more: any language, any framework |

Developer Experience

| Feature | Description |
|---------|-------------|
| 🧙 Interactive Setup Wizard | `quickstart.ps1` guides new users through complete setup |
| 🔍 CI-Ready Validation | `validate_mcp_config.py` with `--strict` mode for zero-warning builds |
| 📊 Dev Environment Dashboard | `dev_all.ps1` shows service status, models, and collections at a glance |
| 📚 Comprehensive Documentation | Troubleshooting guides, best practices, real-world examples |
| 🏭 Production-Tested Patterns | Battle-tested configurations from enterprise deployments |


πŸ—οΈ Architecture

```
YOUR CODEBASE
(Any Language • Any Framework • Any Size)
        │
        ▼
AISTACK-MCP ORCHESTRATION LAYER
  • Filesystem MCP         (Read/Write)
  • Git MCP                (History/Diff)
  • Code Intelligence MCP  (Search/Analyze)
  • Mode Orchestrator:     Single-Repo ←→ Multi-Repo Switching
        │
        ▼
LOCAL AI STACK (FREE)
  • OLLAMA:  LLM Inference • Pattern Analysis • Code Generation
  • QDRANT:  Vector Search • Semantic Indexing • 90% Token Compression
        │
        ▼
CURSOR + CLAUDE
(Final Generation Only • Minimal Token Usage)
```

Data Flow & Cost Savings

  1. You ask a question → Cursor receives your prompt

  2. Local search first → Qdrant finds relevant code chunks (FREE)

  3. Local compression → Ollama summarizes context (FREE)

  4. Minimal transmission → Only 500-1000 tokens sent to Claude

  5. Final generation → Claude generates with full understanding

Result: 90% fewer tokens, same quality, 100% privacy for local processing.
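For intuition, steps 2-4 reduce to "embed locally, retrieve locally, summarize locally". Below is a minimal sketch of that flow, assuming the default Ollama/Qdrant ports and a hypothetical `workspace` collection; the production logic lives in `mcp_intelligence_server.py` and may differ in detail.

```python
"""Local preprocessing sketch: retrieve with Qdrant, compress with Ollama (illustrative)."""
import requests

OLLAMA = "http://localhost:11434"
QDRANT = "http://localhost:6333"
COLLECTION = "workspace"  # hypothetical collection name


def compress_context(question, limit=5):
    # Step 2: local vector search (free) -- embed the question, then query Qdrant
    embedding = requests.post(
        f"{OLLAMA}/api/embeddings",
        json={"model": "mxbai-embed-large", "prompt": question},
        timeout=60,
    ).json()["embedding"]

    hits = requests.post(
        f"{QDRANT}/collections/{COLLECTION}/points/search",
        json={"vector": embedding, "limit": limit, "with_payload": True},
        timeout=30,
    ).json()["result"]
    chunks = "\n\n".join(hit["payload"].get("text", "") for hit in hits)

    # Step 3: local compression (free) -- summarize the retrieved chunks
    summary = requests.post(
        f"{OLLAMA}/api/generate",
        json={
            "model": "qwen2.5:7b",
            "prompt": f"Summarize the code context relevant to: {question}\n\n{chunks}",
            "stream": False,
        },
        timeout=300,
    ).json()["response"]

    # Step 4: only this short summary (~500-1000 tokens) moves on to Claude
    return summary
```

Steps 1 and 5 stay in Cursor; everything in between runs on your machine and costs nothing.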


🚀 Quick Start

Path 1: Guided Setup (Recommended)

```powershell
# Clone and run the interactive wizard
git clone https://github.com/mjdevaccount/AIStack-MCP.git
cd AIStack-MCP
.\scripts\quickstart.ps1
```

The wizard automatically:

  • ✅ Checks all dependencies

  • ✅ Guides mode selection

  • ✅ Configures services

  • ✅ Validates setup

Path 2: Experienced Users

```powershell
# 1. Clone repository
git clone https://github.com/mjdevaccount/AIStack-MCP.git
cd AIStack-MCP

# 2. Install Python dependencies
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt

# 3. Start services
ollama serve                                # Terminal 1
docker run -d -p 6333:6333 qdrant/qdrant    # Terminal 2

# 4. Pull required models
ollama pull mxbai-embed-large
ollama pull qwen2.5:7b

# 5. Configure mode
.\scripts\switch_to_single_repo.ps1

# 6. Open in Cursor
cursor .
```

Path 3: CI/CD Integration

```yaml
# .github/workflows/validate.yml
- name: Validate MCP Configuration
  run: |
    python scripts/validate_mcp_config.py --test-generation --strict
```

🌐 Community Tools (v1.2.0)

Browse 500+ MCP Servers

```powershell
# Search for tools
.\scripts\list_registry_tools.ps1 -Search "database"

# Popular servers
.\scripts\list_registry_tools.ps1 -Popular
```

Install Community Tools

```powershell
# Install PostgreSQL server
.\scripts\install_community_tool.ps1 -ServerId "io.modelcontextprotocol/server-postgres"

# Install Slack integration
.\scripts\install_community_tool.ps1 -ServerId "io.modelcontextprotocol/server-slack"
```

Apply Templates

```powershell
# Minimal (search only)
.\scripts\apply_template.ps1 -Template minimal

# Standard (recommended)
.\scripts\apply_template.ps1 -Template standard

# Full (all features)
.\scripts\apply_template.ps1 -Template full
```

See Registry Documentation for full guide.
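Conceptually, registry search is keyword filtering over server metadata. Here is a toy sketch with a hypothetical in-memory index and schema; the scripts above are the supported interface, and the real registry format may differ.

```python
"""Toy registry search (illustrative; the entry fields are a hypothetical schema)."""
from dataclasses import dataclass, field
from typing import List


@dataclass
class ServerEntry:
    server_id: str
    description: str
    keywords: List[str] = field(default_factory=list)
    runtime: str = "npm"  # e.g. "npm", "pypi", "docker"


INDEX = [
    ServerEntry("io.modelcontextprotocol/server-postgres",
                "Query PostgreSQL databases", ["database", "sql"]),
    ServerEntry("io.modelcontextprotocol/server-slack",
                "Send and read Slack messages", ["chat", "slack"]),
]


def search(term, runtime=None):
    # Match the term against descriptions and keywords, optionally filter by runtime
    term = term.lower()
    return [
        entry for entry in INDEX
        if (term in entry.description.lower() or term in entry.keywords)
        and (runtime is None or entry.runtime == runtime)
    ]


print([entry.server_id for entry in search("database")])
```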


📦 Installation

System Requirements

| Requirement | Minimum | Recommended |
|-------------|---------|-------------|
| OS | Windows 10 | Windows 11 |
| Python | 3.8 | 3.11+ |
| Node.js | 18.x | 20.x LTS |
| RAM | 8 GB | 16 GB |
| Disk | 10 GB | 20 GB (for models) |
| Docker | Optional | Recommended |

Step 1: Prerequisites

```powershell
# Install Node.js (for MCP community servers)
winget install OpenJS.NodeJS

# Install Python (if not present)
winget install Python.Python.3.11

# Verify installations
node --version     # Should show v18+
python --version   # Should show 3.8+
```

Step 2: Python Dependencies

```powershell
cd C:\AIStack-MCP

# Create virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1

# Install dependencies
pip install -r requirements.txt
```

Step 3: Local AI Services

Ollama (LLM inference):

  1. Download from ollama.ai

  2. Install and start the service

  3. Pull the required models:

```powershell
ollama pull mxbai-embed-large   # Required: embeddings
ollama pull qwen2.5:7b          # Recommended: analysis
ollama pull phi4:14b            # Optional: code generation
```

  4. Verify:

```powershell
ollama list
```

Qdrant (vector search):

Option A: Docker (Recommended)

```powershell
docker run -d -p 6333:6333 -v qdrant_storage:/qdrant/storage qdrant/qdrant
```

Option B: Native Installation

Install Qdrant directly on the host (see the Qdrant documentation), then verify either option:

```powershell
curl http://localhost:6333/collections
```
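To script the same checks outside of Cursor, a minimal health check might look like the sketch below (illustrative only, not the shipped `validate_workspace.py`; it assumes the default ports used above):

```python
"""Quick health check for the local AI stack (illustrative sketch)."""
import requests


def check_ollama(base="http://localhost:11434"):
    # /api/tags lists the locally installed models
    models = [m["name"] for m in requests.get(f"{base}/api/tags", timeout=5).json().get("models", [])]
    print(f"Ollama OK, models: {models}")
    return any("mxbai-embed-large" in name for name in models)


def check_qdrant(base="http://localhost:6333"):
    # /collections lists the existing vector collections
    names = [c["name"] for c in requests.get(f"{base}/collections", timeout=5).json()["result"]["collections"]]
    print(f"Qdrant OK, collections: {names}")
    return True


if __name__ == "__main__":
    ok = True
    for check in (check_ollama, check_qdrant):
        try:
            ok &= bool(check())
        except requests.RequestException as exc:
            print(f"{check.__name__} failed: {exc}")
            ok = False
    raise SystemExit(0 if ok else 1)
```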

Step 4: Configuration

```powershell
# Run the quickstart wizard (recommended)
.\scripts\quickstart.ps1

# Or manually configure single-repo mode
.\scripts\switch_to_single_repo.ps1
```

💡 Tip: If Cursor hangs on startup, ensure you're using the cmd /c wrapper pattern. See Windows MCP Fix.
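The switch scripts emit this wrapper for you; purely for illustration, a tiny generator in the spirit of `mcp_config_builder.py` might look like the sketch below (the `mcpServers` layout and server name are assumptions, not the project's exact output):

```python
"""Write a Windows-safe MCP server entry into .cursor/mcp.json (illustrative sketch)."""
import json
from pathlib import Path


def windows_server_entry(script):
    # The `cmd /c` wrapper avoids the Windows STDIO hang described in the tip above.
    return {"command": "cmd", "args": ["/c", "python", script]}


config = {
    "mcpServers": {
        # Server name and any fields beyond command/args are illustrative assumptions.
        "code-intelligence": windows_server_entry("mcp_intelligence_server.py"),
    }
}

Path(".cursor").mkdir(exist_ok=True)
Path(".cursor/mcp.json").write_text(json.dumps(config, indent=2))
print("Wrote .cursor/mcp.json")
```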


🔄 Operating Modes

Mode Comparison

| Feature | Single-Repo Mode | Multi-Repo Mode |
|---------|------------------|-----------------|
| Isolation | ✅ Maximum (per-repo) | ⚠️ Shared (CORE access) |
| Portability | ✅ `${workspaceFolder}` | ✅ Relative paths |
| Security | ✅ Explicit permissions | ⚠️ CORE has all access |
| Cross-repo search | ❌ One repo only | ✅ All linked repos |
| Setup complexity | ⭐ Simple | ⭐⭐ Requires linking |
| Best for | Focused work, security | Multi-package, microservices |

Switching Modes

```powershell
# Switch to single-repo (isolated, portable)
.\scripts\switch_to_single_repo.ps1

# Switch to multi-repo (orchestrated)
.\scripts\switch_to_multi_repo.ps1

# Check current mode
Get-Content .cursor\ACTIVE_MODE.txt
```

Multi-Repo Setup

```powershell
# 1. Link repositories (requires Admin for symlinks)
.\scripts\link_repo.ps1 -TargetPath "C:\Projects\backend-api"
.\scripts\link_repo.ps1 -TargetPath "C:\Projects\frontend-app"

# 2. Or clone directly (no Admin required)
.\scripts\link_repo.ps1 -TargetPath "https://github.com/org/repo" -Clone

# 3. Activate multi-repo mode
.\scripts\switch_to_multi_repo.ps1

# 4. Restart Cursor
```
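Under the hood, linking simply places a symlink (or a clone) inside `workspaces/`. A minimal sketch of the symlink strategy in Python, illustrative only and not the shipped `link_repo.ps1`:

```python
"""Link an external repo into workspaces/ via a directory symlink (illustrative sketch)."""
import os
from pathlib import Path


def link_repo(target, workspaces="workspaces"):
    target_path = Path(target).resolve()
    link = Path(workspaces) / target_path.name
    link.parent.mkdir(exist_ok=True)
    # Directory symlinks on Windows need an elevated shell or Developer Mode,
    # which matches the "requires Admin" note above.
    os.symlink(target_path, link, target_is_directory=True)
    return link


if __name__ == "__main__":
    print(link_repo(r"C:\Projects\backend-api"))
```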

📖 Usage Guide

Scenario 1: First-Time Setup

```
# 1. Run quickstart wizard
.\scripts\quickstart.ps1

# 2. Open project in Cursor
cursor C:\AIStack-MCP

# 3. In Cursor chat, index your workspace
Use code-intelligence to index_workspace

# 4. Verify setup
Use code-intelligence to validate_workspace_config
```

Expected Output:

```
✅ Workspace: C:\AIStack-MCP (accessible)
✅ Ollama: Connected (3 models available)
✅ Qdrant: Connected (1 collection indexed)
✅ Configuration: Valid
```

Scenario 2: Daily Development

```
# Semantic search (find code by meaning)
Use code-intelligence to semantic_search for "error handling patterns"

# Pattern analysis (extract patterns with LLM)
Use code-intelligence to analyze_patterns for "async"

# Get optimized context for a file
Use code-intelligence to get_context for src/utils.py with task "add retry logic"

# Generate code matching project style
Use code-intelligence to generate_code for src/api.py with task "add pagination"
```

Scenario 3: Multi-Repo Development

```
# Morning: Link all related repos
.\scripts\link_repo.ps1 -TargetPath "C:\Projects\shared-libs"
.\scripts\link_repo.ps1 -TargetPath "C:\Projects\backend"
.\scripts\link_repo.ps1 -TargetPath "C:\Projects\frontend"

# Activate multi-repo mode
.\scripts\switch_to_multi_repo.ps1

# Now in Cursor: search across ALL linked repos
Use code-intelligence to semantic_search for "authentication flow"
```

Scenario 4: Team Onboarding

Share these commands with new team members:

```powershell
# Complete setup in one command
git clone https://github.com/your-org/AIStack-MCP.git
cd AIStack-MCP
.\scripts\quickstart.ps1
```

Reference: docs/BEST_PRACTICES.md


πŸ“ Project Structure

```
AIStack-MCP/
├── .cursor/
│   ├── mcp.json                    # 🎯 Active MCP configuration
│   └── ACTIVE_MODE.txt             # 📍 Current mode indicator
│
├── docs/
│   ├── WORKSPACE_PATTERN.md        # 📐 Isolation best practices
│   ├── BEST_PRACTICES.md           # 👥 Team usage guidelines
│   ├── SETUP.md                    # 📋 Detailed setup guide
│   └── troubleshooting/            # 🔧 Platform-specific fixes
│       ├── WINDOWS_MCP_FIX.md
│       └── MCP_TROUBLESHOOTING.md
│
├── scripts/
│   ├── quickstart.ps1              # 🌟 Interactive setup wizard
│   ├── switch_to_single_repo.ps1   # 🔒 Activate isolated mode
│   ├── switch_to_multi_repo.ps1    # 🌐 Activate orchestration mode
│   ├── link_repo.ps1               # 🔗 Repository linking helper
│   ├── validate_mcp_config.py      # ✅ CI-ready validation
│   ├── validate_workspace.py       # 🩺 Workspace diagnostics
│   ├── dev_all.ps1                 # 📊 Dev environment status
│   └── mcp_config_builder.py       # 🏗️ Config generator
│
├── workspaces/                     # 📂 Multi-repo links (gitignored)
│   └── README.md
│
├── python_agent/                   # 🤖 Agent implementation
│   ├── agents/
│   ├── tools/
│   └── mcp_production_server.py
│
├── mcp_intelligence_server.py      # 🧠 Main MCP server
├── requirements.txt                # 📦 Python dependencies
├── docker-compose.yml              # 🐳 Service orchestration
└── README.md                       # 📖 You are here
```

πŸ› οΈ Tools Reference

Available MCP Tools

| Tool | Description | Example | Cost |
|------|-------------|---------|------|
| `semantic_search` | Find code by meaning using vector similarity | `semantic_search for "retry logic"` | FREE |
| `analyze_patterns` | Extract patterns using local LLM | `analyze_patterns for "error handling"` | FREE |
| `get_context` | Get optimized context for a task | `get_context for utils.py` | FREE |
| `generate_code` | Generate code matching project style | `generate_code for api.py` | FREE |
| `index_workspace` | Build vector index (run once) | `index_workspace` | FREE |
| `validate_workspace_config` | Health check and diagnostics | `validate_workspace_config` | FREE |
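For intuition on what `index_workspace` does before the other tools can return anything, here is a simplified sketch; the collection name, chunk size, and payload fields are illustrative assumptions, not the server's exact behavior.

```python
"""Simplified index_workspace: chunk files, embed locally, upsert into Qdrant (illustrative)."""
from pathlib import Path
import requests

OLLAMA = "http://localhost:11434"
QDRANT = "http://localhost:6333"
COLLECTION = "workspace"  # hypothetical collection name
DIM = 1024                # embedding size of mxbai-embed-large


def embed(text):
    response = requests.post(f"{OLLAMA}/api/embeddings",
                             json={"model": "mxbai-embed-large", "prompt": text},
                             timeout=120)
    return response.json()["embedding"]


def index_workspace(root=".", suffixes=(".py", ".ts", ".md")):
    # Create the collection with cosine distance (good enough for a sketch)
    requests.put(f"{QDRANT}/collections/{COLLECTION}",
                 json={"vectors": {"size": DIM, "distance": "Cosine"}}, timeout=30)

    points, point_id = [], 0
    for path in Path(root).rglob("*"):
        if path.suffix not in suffixes or ".venv" in path.parts:
            continue
        text = path.read_text(errors="ignore")
        # Naive fixed-size chunking; the real server can be smarter about boundaries
        for start in range(0, len(text), 2000):
            chunk = text[start:start + 2000]
            points.append({"id": point_id, "vector": embed(chunk),
                           "payload": {"file": str(path), "text": chunk}})
            point_id += 1

    requests.put(f"{QDRANT}/collections/{COLLECTION}/points",
                 json={"points": points}, timeout=300)
    print(f"Indexed {point_id} chunks under {root}")
```

Once this index exists, `semantic_search` and the other tools simply query that collection.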

When to Use Each Tool

| Task | Recommended Tool | Why |
|------|------------------|-----|
| "Where is X implemented?" | `semantic_search` | Finds by meaning, not exact text |
| "What patterns exist for Y?" | `analyze_patterns` | LLM extracts and summarizes |
| "I need to modify file Z" | `get_context` | Provides optimized context |
| "Add feature to file W" | `generate_code` | Matches existing style |
| "Is my setup correct?" | `validate_workspace_config` | Comprehensive diagnostics |


⚡ Performance & Cost

Real-World Metrics

| Metric | Without AIStack | With AIStack | Improvement |
|--------|-----------------|--------------|-------------|
| Tokens per request | 50,000 | 5,000 | 90% reduction |
| Monthly API cost | $100-150 | $20 | $80-130 saved |
| Search latency | N/A | <100 ms | Instant results |
| Context accuracy | Variable | Optimized | Better responses |
| Data privacy | Cloud-processed | Local-first | 100% private |

Cost Breakdown

```
WITHOUT AISTACK-MCP:
├── Cursor reads 5,000 tokens/file
├── 10 files per request = 50,000 tokens
├── ~100 requests/day = 5M tokens
└── Monthly cost: $100-150

WITH AISTACK-MCP:
├── Local search finds relevant code (FREE)
├── Local LLM compresses to 500 tokens (FREE)
├── Only compressed context sent to Claude
└── Monthly cost: ~$20 (Cursor subscription only)

SAVINGS: $80-130/month per developer
```
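A quick back-of-the-envelope check of those numbers (the per-token price is an assumption for illustration; actual Claude/Cursor pricing varies by plan and model):

```python
# Rough sanity check of the savings above (price per token is an assumption).
TOKENS_PER_FILE = 5_000
FILES_PER_REQUEST = 10
REQUESTS_PER_DAY = 100
WORK_DAYS_PER_MONTH = 22
ASSUMED_PRICE_PER_MILLION_TOKENS = 1.00  # blended $/1M input tokens (assumption)

raw = TOKENS_PER_FILE * FILES_PER_REQUEST * REQUESTS_PER_DAY * WORK_DAYS_PER_MONTH
compressed = int(raw * 0.10)  # ~90% reduction from local search + compression

for label, tokens in (("Without AIStack", raw), ("With AIStack", compressed)):
    cost = tokens / 1_000_000 * ASSUMED_PRICE_PER_MILLION_TOKENS
    print(f"{label:>15}: {tokens / 1_000_000:5.1f}M tokens/month (~${cost:.0f} at the assumed rate)")
```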

Memory Footprint

| Component | Memory Usage |
|-----------|--------------|
| Ollama (idle) | ~500 MB |
| Ollama (inference) | 4-8 GB |
| Qdrant | ~200 MB |
| MCP servers | ~100 MB total |


🔧 Troubleshooting

Issue: Cursor Crashes or Hangs on Startup (Windows)

Symptoms: Cursor freezes when MCP servers start, or crashes immediately.

Cause: Windows STDIO transport incompatibility with Python.

Solution:

Use the `cmd /c` wrapper in `.cursor/mcp.json`:

```json
{
  "command": "cmd",
  "args": ["/c", "python", "..."]
}
```

Verification: .\scripts\switch_to_single_repo.ps1 generates correct config.

📖 Full Guide


Issue: MCP Servers Not Appearing

Symptoms: No MCP tools available in Cursor chat.

Cause: Cursor didn't load the configuration.

Solution:

  1. Restart Cursor completely (close all windows)

  2. Check .cursor/mcp.json exists

  3. View logs: Help β†’ Toggle Developer Tools β†’ Console

Verification:

python scripts\validate_mcp_config.py

Issue: Semantic Search Returns Empty

Symptoms: semantic_search returns no results.

Cause: Workspace not indexed.

Solution:

Use code-intelligence to index_workspace

Verification: Check Qdrant collections at http://localhost:6333/dashboard


Issue: Ollama Connection Failed

Symptoms: "Cannot connect to Ollama" errors.

Cause: Ollama service not running.

Solution:

```powershell
# Start Ollama
ollama serve

# Verify
ollama list
```

Issue: Mode Switch Not Taking Effect

Symptoms: Config changes don't apply.

Cause: Cursor caches MCP configuration.

Solution:

  1. Run .\scripts\switch_to_*.ps1

  2. Completely restart Cursor (not just reload)

  3. Check .cursor/ACTIVE_MODE.txt

📖 More Troubleshooting


❓ FAQ

How is this different from GitHub Copilot?

Copilot provides inline completions. AIStack-MCP provides:

  • Semantic search across your entire codebase

  • Pattern analysis using local LLMs

  • Cross-repo intelligence in multi-repo mode

  • 90% cost reduction through local preprocessing

  • 100% privacy for local processing

They complement each other—use both!

Why run the AI stack locally?

  • Cost: Local LLM inference is FREE

  • Privacy: Code never leaves your machine for search/analysis

  • Speed: Vector search is <100ms vs. network latency

  • Availability: Works offline once indexed

Does it work with editors other than Cursor?

Currently optimized for Cursor IDE. VS Code support is on the roadmap (v1.3.0).

Which languages are supported?

All of them! The system works with any text-based code:

  • Python, JavaScript, TypeScript

  • Rust, Go, Java, C#, C++

  • Ruby, PHP, Swift, Kotlin

  • And more...

Is it production-ready?

Yes. AIStack-MCP includes:

  • CI-ready validation scripts

  • Comprehensive error handling

  • Health monitoring

  • Production-tested configurations

  • Enterprise security patterns

How is security handled?

  • Single-repo mode: Maximum isolation, per-project permissions

  • Multi-repo mode: Explicit linking required, CORE workspace controlled

  • Local processing: Sensitive code never leaves your machine

  • Audit trail: .cursor/ACTIVE_MODE.txt tracks mode changes

See docs/BEST_PRACTICES.md for security guidelines.

Can my team use this?

Absolutely! Share the repository and have team members run:

.\scripts\quickstart.ps1

See docs/BEST_PRACTICES.md for team workflows.

How do I update to a new release?

```powershell
git pull origin main
pip install -r requirements.txt --upgrade
.\scripts\switch_to_single_repo.ps1   # Regenerate config
```

🎓 Advanced Topics

1. Multi-Repo Orchestration Patterns

When to use multi-repo mode:

  • Python multi-package projects

  • Microservices architecture

  • Monorepo-style development with separate repos

Linking strategies:

  • Symlinks: Best for local development (requires Admin)

  • Clones: No Admin required, independent copies

  • Submodules: Version-controlled links

📖 Full Guide

2. CI/CD Integration

```yaml
# .github/workflows/validate.yml
name: Validate MCP Config

on: [push, pull_request]

jobs:
  validate:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install -r requirements.txt
      - run: python scripts/validate_mcp_config.py --test-generation --strict
```

3. Custom Tool Development

Extend mcp_intelligence_server.py:

```python
@mcp.tool()
async def my_custom_tool(query: str) -> str:
    """Your custom tool description."""
    # Implement your logic here, then return a string result
    result = f"Processed: {query}"
    return result
```

4. Team Workflows

Decision tree for mode selection:

```
Working on ONE repo?           → Single-repo mode
Working on 2-5 related repos?  → Multi-repo mode
Working on 6+ repos?           → Split into focused workspaces
```

📖 Full Guide

5. Production Deployment

```yaml
# docker-compose.yml (included)
services:
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
    volumes:
      - qdrant_storage:/qdrant/storage
```

πŸ—ΊοΈ Roadmap

v1.2.0 — Current Release ✅

  • ✅ MCP Registry integration (browse 500+ community servers)

  • ✅ Template system (minimal, standard, full)

  • ✅ Server installer (npm, PyPI, Docker)

  • ✅ Community tools management scripts

  • ✅ Dual-mode orchestration (single/multi-repo)

  • ✅ Complete validation suite

  • ✅ Interactive setup wizard

  • ✅ Production-ready patterns

  • ✅ Comprehensive documentation

v1.3.0 — Planned

  • 🔲 VS Code extension support

  • 🔲 Additional LLM backends (Claude local, GPT4All)

  • 🔲 Enhanced caching layer

  • 🔲 Performance dashboard

v2.0.0 — Future

  • 🔲 Optional cloud sync

  • 🔲 Team collaboration features

  • 🔲 Admin dashboard

  • 🔲 Usage analytics


🀝 Contributing

We welcome contributions! Here's how to get started:

Reporting Bugs

Open an issue with:

  • Clear description of the problem

  • Steps to reproduce

  • Expected vs. actual behavior

  • System information (OS, Python version, etc.)

Feature Requests

Open a discussion to propose new features.

Development Setup

```powershell
# Fork and clone
git clone https://github.com/YOUR_USERNAME/AIStack-MCP.git
cd AIStack-MCP

# Install dependencies
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt

# Run validation
python scripts\validate_mcp_config.py --test-generation
```

Pull Request Process

  1. Fork the repository

  2. Create a feature branch (git checkout -b feature/amazing-feature)

  3. Make your changes

  4. Run validation (python scripts\validate_mcp_config.py --strict)

  5. Commit (git commit -m 'feat: Add amazing feature')

  6. Push (git push origin feature/amazing-feature)

  7. Open a Pull Request

Coding Standards


πŸ™ Acknowledgments

This project stands on the shoulders of giants:



📄 License

This project is licensed under the MIT License — see the LICENSE file for details.

MIT License Copyright (c) 2025 AIStack-MCP Contributors Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software...

⭐ Star this repo if it helped you!


Made with ❤️ for the MCP community

Report Bug · Request Feature · Documentation
