Vibe Coder MCP is a server that supercharges AI assistants with tools for software development, research, planning, and project management.
Research & Planning: Perform deep research on technical topics, generate PRDs, user stories, task lists, and development rules.
Code Generation & Refactoring: Generate code stubs and refactor existing code snippets.
Project Scaffolding: Create full-stack starter kits with custom tech stacks.
Git Integration: Summarize Git changes.
Dependency Analysis: Analyze project dependency files.
Codebase Mapping: Map and analyze codebase structure with diagrams.
Workflow Execution: Run predefined sequences of tool calls for complex tasks.
Request Processing: Route natural language requests to appropriate tools using semantic matching.
Provides Git integration through the git-summary tool to display current Git changes and status, helping with code commit preparation
Supports creation of Mermaid diagrams in documentation, used throughout planning tools for visualizing workflows and architecture
Offers Node.js project analysis including dependency examination and project scaffolding capabilities
Integrates with Perplexity Sonar for deep technical research capabilities through the research-manager tool
Vibe Coder MCP Server
Vibe Coder is an MCP (Model Context Protocol) server designed to supercharge your AI assistant (like Cursor, Cline AI, or Claude Desktop) with powerful tools for software development. It helps with research, planning, generating requirements, creating starter projects, and more!
Overview & Features
Vibe Coder MCP integrates with MCP-compatible clients to provide the following capabilities:
Semantic Request Routing: Intelligently routes requests using embedding-based semantic matching with sequential thinking fallbacks.
Tool Registry Architecture: Centralized tool management with self-registering tools.
Direct LLM Calls: Generator tools now use direct LLM calls for improved reliability and structured output control.
Workflow Execution: Runs predefined sequences of tool calls defined in workflows.json.
Research & Planning: Performs deep research (research-manager) and generates planning documents like PRDs (generate-prd), user stories (generate-user-stories), task lists (generate-task-list), and development rules (generate-rules).
Project Scaffolding: Generates full-stack starter kits (generate-fullstack-starter-kit).
Code Map Generator: Recursively scans a codebase, extracts semantic information, and generates either a token-efficient, context-dense Markdown index with Mermaid diagrams or a structured JSON representation with absolute file paths for imports and enhanced class property information (map-codebase).
Asynchronous Execution: Many long-running tools (generators, research, workflows) now run asynchronously. They return a Job ID immediately, and the final result is retrieved using the get-job-result tool (a polling sketch follows this feature list).
Session State Management: Maintains basic state across requests within a session (in-memory).
Standardized Error Handling: Consistent error patterns across all tools.
(See "Detailed Tool Documentation" and "Feature Details" sections below for more)
Setup Guide
Follow these micro-steps to get the Vibe Coder MCP server running and connected to your AI assistant.
Step 1: Prerequisites
Check Node.js Version:
Open a terminal or command prompt.
Run
node -v
Ensure the output shows v18.0.0 or higher (required).
If not installed or outdated: Download from nodejs.org.
Check Git Installation:
Open a terminal or command prompt.
Run
git --version
If not installed: Download from git-scm.com.
Get OpenRouter API Key:
Visit openrouter.ai
Create an account if you don't have one.
Navigate to API Keys section.
Create a new API key and copy it.
Keep this key handy for Step 4.
Step 2: Get the Code
Create a Project Directory (optional):
Open a terminal or command prompt.
Navigate to where you want to store the project:
cd ~/Documents # Example: Change to your preferred location
Clone the Repository:
Run:
git clone https://github.com/freshtechbro/vibe-coder-mcp.git
(Or use your fork's URL if applicable)
Navigate to Project Directory:
Run:
cd vibe-coder-mcp
Step 3: Run the Setup Script
Choose the appropriate script for your operating system:
For Windows:
In your terminal (still in the vibe-coder-mcp directory), run:
setup.bat
Wait for the script to complete (it will install dependencies, build the project, and create necessary directories).
If you see any error messages, refer to the Troubleshooting section below.
For macOS or Linux:
Make the script executable:
chmod +x setup.sh
Run the script:
./setup.sh
Wait for the script to complete.
If you see any error messages, refer to the Troubleshooting section below.
The script performs these actions:
Checks Node.js version (v18+)
Installs all dependencies via npm
Creates necessary VibeCoderOutput/ subdirectories (as defined in the script).
Builds the TypeScript project.
Copies .env.example to .env if one does not already exist. You will need to edit this file.
Sets executable permissions (on Unix systems).
Step 4: Configure Environment Variables (.env)
The setup script (from Step 3) automatically creates a .env file in the project's root directory by copying the .env.example template, but only if a .env file does not already exist.
Locate and Open the File: Find the .env file in the main vibe-coder-mcp directory and open it with a text editor.
Add Your OpenRouter API Key (Required):
The file contains a template based on .env.example:
```
# OpenRouter Configuration
## Specifies your unique API key for accessing OpenRouter services.
## Replace "Your OPENROUTER_API_KEY here" with your actual key obtained from OpenRouter.ai.
OPENROUTER_API_KEY="Your OPENROUTER_API_KEY here"
## Defines the base URL for the OpenRouter API endpoints.
## The default value is usually correct and should not need changing unless instructed otherwise.
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
## Sets the specific Gemini model to be used via OpenRouter for certain AI tasks.
## ':free' indicates potential usage of a free tier model if available and supported by your key.
GEMINI_MODEL=google/gemini-2.0-flash-thinking-exp:free
```
Crucially, replace "Your OPENROUTER_API_KEY here" with your actual API key. Remove the quotes if your key doesn't require them.
Configure Output Directory (Optional):
To change where generated files are saved (default is VibeCoderOutput/ inside the project), add this line to your .env file:
VIBE_CODER_OUTPUT_DIR=/path/to/your/desired/output/directory
Replace the path with your preferred absolute path. Use forward slashes (/) for paths. If this variable is not set, the default directory (VibeCoderOutput/) will be used.
Configure Code-Map Generator Directory (Optional):
To specify which directory the code-map-generator tool is allowed to scan, add this line to your .env file:
CODE_MAP_ALLOWED_DIR=/path/to/your/source/code/directory
Replace the path with the absolute path to the directory containing the source code you want to analyze. This is a security boundary - the tool will not access files outside this directory.
Note that CODE_MAP_ALLOWED_DIR (for reading source code) and VIBE_CODER_OUTPUT_DIR (for writing output files) are separate for security reasons. The code-map-generator tool uses separate validation for read and write operations.
Review Other Settings (Optional):
You can add other environment variables supported by the server, such as LOG_LEVEL (e.g., LOG_LEVEL=debug) or NODE_ENV (e.g., NODE_ENV=development).
Save the File: Save your changes to the .env file.
Step 5: Integrate with Your AI Assistant (MCP Settings)
This crucial step connects Vibe Coder to your AI assistant by adding its configuration to the client's MCP settings file.
5.1: Locate Your Client's MCP Settings File
The location varies depending on your AI assistant:
Cursor AI / Windsurf / RooCode (VS Code based):
Open the application.
Open the Command Palette (Ctrl+Shift+P or Cmd+Shift+P).
Type and select Preferences: Open User Settings (JSON).
This opens your settings.json file where the mcpServers object should reside.
Cline AI (VS Code Extension):
Windows:
%APPDATA%\Cursor\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json
macOS:
~/Library/Application Support/Cursor/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
Linux:
~/.config/Cursor/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
(Note: If using standard VS Code instead of Cursor, replace Cursor with Code in the paths above.)
Claude Desktop:
Windows:
%APPDATA%\Claude\claude_desktop_config.json
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
Linux:
~/.config/Claude/claude_desktop_config.json
5.2: Add the Vibe Coder Configuration
Open the settings file identified above in a text editor.
Find the "mcpServers": { ... } JSON object. If it doesn't exist, you may need to create it (ensure the overall file remains valid JSON). For example, an empty file might become {"mcpServers": {}}.
Add the following configuration block inside the curly braces {} of the mcpServers object. If other servers are already listed, add a comma , after the previous server's closing brace } before pasting this block.
```
// This is the unique identifier for this MCP server instance within your client's settings
"vibe-coder-mcp": {
  // Specifies the command used to execute the server. Should be 'node' if Node.js is in your system's PATH
  "command": "node",
  // Provides the arguments to the 'command'. The primary argument is the absolute path to the compiled server entry point
  // !! IMPORTANT: Replace with the actual absolute path on YOUR system. Use forward slashes (/) even on Windows !!
  "args": ["/Users/username/Documents/Dev Projects/Vibe-Coder-MCP/build/index.js"],
  // Sets the current working directory for the server process when it runs
  // !! IMPORTANT: Replace with the actual absolute path on YOUR system. Use forward slashes (/) even on Windows !!
  "cwd": "/Users/username/Documents/Dev Projects/Vibe-Coder-MCP",
  // Defines the communication transport protocol between the client and server
  "transport": "stdio",
  // Environment variables to be passed specifically to the Vibe Coder server process when it starts
  // API Keys should be in the .env file, NOT here
  "env": {
    // Absolute path to the LLM configuration file used by Vibe Coder
    // !! IMPORTANT: Replace with the actual absolute path on YOUR system !!
    "LLM_CONFIG_PATH": "/Users/username/Documents/Dev Projects/Vibe-Coder-MCP/llm_config.json",
    // Sets the logging level for the server
    "LOG_LEVEL": "debug",
    // Specifies the runtime environment
    "NODE_ENV": "production",
    // Directory where Vibe Coder tools will save their output files
    // !! IMPORTANT: Replace with the actual absolute path on YOUR system !!
    "VIBE_CODER_OUTPUT_DIR": "/Users/username/Documents/Dev Projects/Vibe-Coder-MCP/VibeCoderOutput",
    // Directory that the code-map-generator tool is allowed to scan
    // This is a security boundary - the tool will not access files outside this directory
    "CODE_MAP_ALLOWED_DIR": "/Users/username/Documents/Dev Projects/Vibe-Coder-MCP/src"
  },
  // A boolean flag to enable (false) or disable (true) this server configuration
  "disabled": false,
  // A list of tool names that the MCP client is allowed to execute automatically
  "autoApprove": [
    "research",
    "generate-rules",
    "generate-user-stories",
    "generate-task-list",
    "generate-prd",
    "generate-fullstack-starter-kit",
    "refactor-code",
    "git-summary",
    "run-workflow",
    "map-codebase"
  ]
}
```
CRUCIAL: Replace all placeholder paths (like /path/to/your/vibe-coder-mcp/...) with the correct absolute paths on your system where you cloned the repository. Use forward slashes (/) for paths, even on Windows (e.g., C:/Users/YourName/Projects/vibe-coder-mcp/build/index.js). Incorrect paths are the most common reason the server fails to connect.
Save the settings file.
Completely close and restart your AI assistant application (Cursor, VS Code, Claude Desktop, etc.) for the changes to take effect.
Step 6: Test Your Configuration
Start Your AI Assistant:
Completely restart your AI assistant application.
Test a Simple Command:
Type a test command like:
Research modern JavaScript frameworks
Check for Proper Response:
If working correctly, you should receive a research response.
If not, check the Troubleshooting section below.
Project Architecture
The Vibe Coder MCP server follows a modular architecture centered around a tool registry pattern:
Directory Structure
Tool Registry Pattern
The Tool Registry is a central component for managing tool definitions and execution:
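The registry's exact API lives in the source tree; as a rough illustration under assumed names (ToolDefinition, registerTool, and the echo example are hypothetical, not taken from the repository), a self-registering tool might look like this:

```typescript
// Hypothetical sketch of a self-registering tool; the real registry's types and
// function names in this repository may differ.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>; // JSON Schema describing the tool's parameters
  execute: (
    params: Record<string, unknown>
  ) => Promise<{ content: { type: "text"; text: string }[] }>;
}

const toolRegistry = new Map<string, ToolDefinition>();

// Called by each tool module at import time, so loading the tools directory
// registers every tool with the central registry.
export function registerTool(tool: ToolDefinition): void {
  toolRegistry.set(tool.name, tool);
}

// Example: a trivial hypothetical tool registering itself.
registerTool({
  name: "echo",
  description: "Returns the provided text unchanged.",
  inputSchema: { type: "object", properties: { text: { type: "string" } } },
  execute: async (params) => ({
    content: [{ type: "text", text: String(params.text ?? "") }],
  }),
});
```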
Sequential Thinking Process
The Sequential Thinking mechanism provides LLM-based fallback routing:
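Conceptually, the fallback tries embedding-based matching first and defers to an LLM when confidence is low. The sketch below is only an approximation under assumed names and an assumed 0.75 threshold; none of these details come from the repository.

```typescript
// Conceptual sketch of semantic routing with a sequential-thinking fallback.
// The helper names and the 0.75 confidence threshold are assumptions.
async function routeRequest(
  request: string,
  toolNames: string[],
  embedScore: (request: string, toolName: string) => Promise<number>,
  askLlmToPickTool: (request: string, toolNames: string[]) => Promise<string>
): Promise<string> {
  // 1. Embedding-based semantic matching: score every tool against the request.
  let best = { tool: toolNames[0], score: -Infinity };
  for (const name of toolNames) {
    const score = await embedScore(request, name);
    if (score > best.score) best = { tool: name, score };
  }
  if (best.score >= 0.75) return best.tool;

  // 2. Low confidence: fall back to an LLM that reasons step by step
  //    (sequential thinking) about which tool best fits the request.
  return askLlmToPickTool(request, toolNames);
}
```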
Session State Management
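The README only states that session state is held in memory, so the following is a minimal sketch of what such a store could look like; the field names and getSession helper are hypothetical.

```typescript
// Hypothetical in-memory session store keyed by a session ID.
interface SessionState {
  lastToolName?: string;   // assumed field: the last tool invoked in this session
  lastOutputPath?: string; // assumed field: where that tool wrote its output
}

const sessions = new Map<string, SessionState>();

export function getSession(sessionId: string): SessionState {
  let state = sessions.get(sessionId);
  if (!state) {
    state = {};
    sessions.set(sessionId, state);
  }
  return state;
}
// Because this lives in process memory, all session state is lost on server restart.
```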
Workflow Execution Engine
The Workflow system enables multi-step sequences:
Workflow Configuration
Workflows are defined in the workflows.json file located in the root directory of the project. This file contains predefined sequences of tool calls that can be executed with a single command.
File Location and Structure
The workflows.json file must be placed in the project root directory (same level as package.json).
The file follows this structure:
```json
{
  "workflows": {
    "workflowName1": {
      "description": "Description of what this workflow does",
      "inputSchema": {
        "param1": "string",
        "param2": "string"
      },
      "steps": [
        {
          "id": "step1_id",
          "toolName": "tool-name",
          "params": {
            "param1": "{workflow.input.param1}"
          }
        },
        {
          "id": "step2_id",
          "toolName": "another-tool",
          "params": {
            "paramA": "{workflow.input.param2}",
            "paramB": "{steps.step1_id.output.content[0].text}"
          }
        }
      ],
      "output": {
        "summary": "Workflow completed message",
        "details": ["Output line 1", "Output line 2"]
      }
    }
  }
}
```
Parameter Templates
Workflow step parameters support template strings that can reference:
Workflow inputs: {workflow.input.paramName}
Previous step outputs: {steps.stepId.output.content[0].text}
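To make the placeholder syntax concrete, here is a small resolver sketch. It is not the server's actual implementation; the function name and context shape are assumptions based only on the template examples above.

```typescript
// Sketch of how "{workflow.input.x}" and "{steps.id.output.content[0].text}"
// placeholders could be resolved; the server's real resolver may differ.
function resolveTemplate(
  template: string,
  context: { workflow: { input: Record<string, unknown> }; steps: Record<string, unknown> }
): string {
  return template.replace(/\{([^}]+)\}/g, (_match, expr: string) => {
    // Split "steps.step1_id.output.content[0].text" into path segments,
    // treating bracketed indices like [0] as their own segments.
    const segments = expr.match(/[^.[\]]+/g) ?? [];
    let value: unknown = context;
    for (const segment of segments) {
      if (value == null) return "";
      value = (value as Record<string, unknown>)[segment];
    }
    return value == null ? "" : String(value);
  });
}

// Example: fills paramB for step2_id from step1_id's text output.
const resolved = resolveTemplate("{steps.step1_id.output.content[0].text}", {
  workflow: { input: { param1: "hello" } },
  steps: { step1_id: { output: { content: [{ text: "step one result" }] } } },
});
console.log(resolved); // "step one result"
```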
Triggering Workflows
Use the run-workflow tool with the name of the workflow to execute and the input parameters it requires (see the Usage Examples section below). A sketch of a programmatic call follows.
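For a programmatic flavour of the same call, the sketch below assumes a generic callTool helper and guesses the parameter names (workflowName, workflowInput); check the workflow-runner README for the actual schema.

```typescript
// Hypothetical invocation of run-workflow: `callTool` and the parameter names
// "workflowName"/"workflowInput" are assumptions, not the documented schema.
async function runNewProjectSetup(
  callTool: (name: string, args: Record<string, unknown>) => Promise<unknown>
) {
  return callTool("run-workflow", {
    workflowName: "newProjectSetup",
    workflowInput: {
      projectName: "my-new-app",
      description: "A simple task manager",
    },
  });
}
```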
Detailed Tool Documentation
Each tool in the src/tools/ directory includes comprehensive documentation in its own README.md file. These files cover:
Tool overview and purpose
Input/output specifications
Workflow diagrams (Mermaid)
Usage examples
System prompts used
Error handling details
Refer to these individual READMEs for in-depth information:
src/tools/fullstack-starter-kit-generator/README.md
src/tools/prd-generator/README.md
src/tools/research-manager/README.md
src/tools/rules-generator/README.md
src/tools/task-list-generator/README.md
src/tools/user-stories-generator/README.md
src/tools/workflow-runner/README.md
src/tools/code-map-generator/README.md
Tool Categories
Analysis & Information Tools
Code Map Generator (map-codebase): Scans a codebase to extract semantic information (classes, functions, comments) and generates either a human-readable Markdown map with Mermaid diagrams or a structured JSON representation with absolute file paths for imports and enhanced class property information.
Research Manager (research-manager): Performs deep research on technical topics using Perplexity Sonar, providing summaries and sources.
Planning & Documentation Tools
Rules Generator (generate-rules): Creates project-specific development rules and guidelines.
PRD Generator (generate-prd): Generates comprehensive product requirements documents.
User Stories Generator (generate-user-stories): Creates detailed user stories with acceptance criteria.
Task List Generator (generate-task-list): Builds structured development task lists with dependencies.
Project Scaffolding Tool
Fullstack Starter Kit Generator (generate-fullstack-starter-kit): Creates customized project starter kits with specified frontend/backend technologies, including basic setup scripts and configuration.
Workflow & Orchestration
Workflow Runner (run-workflow): Executes predefined sequences of tool calls for common development tasks.
Generated File Storage
By default, outputs from the generator tools are stored for historical reference in the VibeCoderOutput/ directory within the project. This location can be overridden by setting the VIBE_CODER_OUTPUT_DIR environment variable in your .env file or AI assistant configuration.
Security Boundaries for Read and Write Operations
For security reasons, the Vibe Coder MCP tools maintain separate security boundaries for read and write operations:
Read Operations: Tools like the code-map-generator only read from directories explicitly authorized through the CODE_MAP_ALLOWED_DIR environment variable. This creates a clear security boundary and prevents unauthorized access to files outside the allowed directory.
Write Operations: All output files are written to the VIBE_CODER_OUTPUT_DIR directory (or its subdirectories). This separation ensures that tools can only write to designated output locations, protecting your source code from accidental modifications.
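The kind of check this implies can be sketched as follows; this is illustrative only and not the tool's actual validation code.

```typescript
import path from "node:path";

// Illustrative boundary check, not the code-map-generator's actual validation logic.
function isInsideBoundary(candidate: string, boundaryDir: string): boolean {
  const resolvedBoundary = path.resolve(boundaryDir);
  const resolvedCandidate = path.resolve(candidate);
  // Allowed only if the candidate resolves to the boundary itself or somewhere below it.
  return (
    resolvedCandidate === resolvedBoundary ||
    resolvedCandidate.startsWith(resolvedBoundary + path.sep)
  );
}

// Reads must stay inside CODE_MAP_ALLOWED_DIR; writes inside VIBE_CODER_OUTPUT_DIR.
const canRead = isInsideBoundary("/repo/src/index.ts", process.env.CODE_MAP_ALLOWED_DIR ?? "");
const canWrite = isInsideBoundary("/out/map.md", process.env.VIBE_CODER_OUTPUT_DIR ?? "");
console.log({ canRead, canWrite });
```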
Example structure (default location):
Usage Examples
Interact with the tools via your connected AI assistant:
Research:
Research modern JavaScript frameworks
Generate Rules:
Create development rules for a mobile banking application
Generate PRD:
Generate a PRD for a task management application
Generate User Stories:
Generate user stories for an e-commerce website
Generate Task List:
Create a task list for a weather app based on [user stories]
Sequential Thinking:
Think through the architecture for a microservices-based e-commerce platform
Fullstack Starter Kit:
Create a starter kit for a React/Node.js blog application with user authentication
Run Workflow:
Run workflow newProjectSetup with input { "projectName": "my-new-app", "description": "A simple task manager" }
Map Codebase:
Generate a code map for the current project, map-codebase path="./src", or Generate a JSON representation of the codebase structure with output_format="json"
Running Locally (Optional)
While the primary use is integration with an AI assistant (using stdio), you can run the server directly for testing:
Running Modes
Production Mode (Stdio):
npm start
Logs go to stderr (mimics AI assistant launch)
Use NODE_ENV=production
Development Mode (Stdio, Pretty Logs):
npm run dev
Logs go to stdout with pretty formatting
Requires nodemon and pino-pretty
Use NODE_ENV=development
SSE Mode (HTTP Interface):
# Production mode over HTTP
npm run start:sse
# Development mode over HTTP
npm run dev:sse
Uses HTTP instead of stdio
Configured via PORT in .env (default: 3000)
Access at http://localhost:3000
Detailed Troubleshooting
Connection Issues
MCP Server Not Detected in AI Assistant
Check Configuration Path:
Verify the absolute path in the args array is correct
Ensure all slashes are forward slashes / even on Windows
Run node <path-to-build/index.js> directly to test if Node can find it
Check Configuration Format:
Make sure JSON is valid without syntax errors
Check that commas between properties are correct
Verify that the mcpServers object contains your server
Restart the Assistant:
Completely close (not just minimize) the application
Reopen and try again
Server Starts But Tools Don't Work
Check Disabled Flag:
Ensure "disabled": false is set
Remove any // comments as JSON doesn't support them
Verify autoApprove Array:
Check that tool names in the autoApprove array match exactly
Try adding "process-request" to the array if using hybrid routing
API Key Issues
OpenRouter Key Problems:
Double-check that the key is correctly copied
Verify the key is active in your OpenRouter dashboard
Check if you have sufficient credits
Environment Variable Issues:
Verify the key is correct in both:
The .env file (for local runs)
Your AI assistant's configuration env block
Path & Permission Issues
Build Directory Not Found:
Run npm run build to ensure the build directory exists
Check if build output is going to a different directory (check tsconfig.json)
File Permission Errors:
Ensure your user has write access to the output directory (VibeCoderOutput/ by default)
On Unix systems, check if build/index.js has execute permission
Log Debugging
For Local Runs:
Check the console output for error messages
Try running with LOG_LEVEL=debug in your .env file
For AI Assistant Runs:
Set "NODE_ENV": "production" in the env configuration
Check if the assistant has a logging console or output window
Tool-Specific Issues
Semantic Routing Not Working:
First run may download embedding model - check for download messages
Try a more explicit request that mentions the tool name