Damn Vulnerable Model Context Protocol (DVMCP)
A deliberately vulnerable implementation of the Model Context Protocol (MCP) for educational purposes.
Overview
The Damn Vulnerable Model Context Protocol (DVMCP) is an educational project designed to demonstrate security vulnerabilities in MCP implementations. It contains 10 challenges of increasing difficulty that showcase different types of vulnerabilities and attack vectors.
This project is intended for security researchers, developers, and AI safety professionals to learn about potential security issues in MCP implementations and how to mitigate them.
What is MCP?
The Model Context Protocol (MCP) is a standardized protocol that allows applications to provide context for Large Language Models (LLMs) in a structured way. It separates the concerns of providing context from the actual LLM interaction, enabling applications to expose resources, tools, and prompts to LLMs.
Recommended MCP Clients
CLINE - VSCode Extension refer this https://docs.cline.bot/mcp-servers/connecting-to-a-remote-server for connecting CLine with MCP server
getting started
disclaimer
its not stable in windows environment if you don't want to docker please use linux environment I recommend Docker to run the LAB and I am 100% percent sure it works well in docker environment
Security Risks
While MCP provides many benefits, it also introduces new security considerations. This project demonstrates various vulnerabilities that can occur in MCP implementations, including:
- Prompt Injection: Manipulating LLM behavior through malicious inputs
- Tool Poisoning: Hiding malicious instructions in tool descriptions
- Excessive Permissions: Exploiting overly permissive tool access
- Rug Pull Attacks: Exploiting tool definition mutations
- Tool Shadowing: Overriding legitimate tools with malicious ones
- Indirect Prompt Injection: Injecting instructions through data sources
- Token Theft: Exploiting insecure token storage
- Malicious Code Execution: Executing arbitrary code through vulnerable tools
- Remote Access Control: Gaining unauthorized system access
- Multi-Vector Attacks: Combining multiple vulnerabilities
Project Structure
Getting Started
See the Setup Guide for detailed instructions on how to install and run the challenges.
Challenges
The project includes 10 challenges across three difficulty levels:
Easy Challenges
- Basic Prompt Injection: Exploit unsanitized user input to manipulate LLM behavior
- Tool Poisoning: Exploit hidden instructions in tool descriptions
- Excessive Permission Scope: Exploit overly permissive tools to access unauthorized resources
Medium Challenges
- Rug Pull Attack: Exploit tools that change their behavior after installation
- Tool Shadowing: Exploit tool name conflicts to override legitimate tools
- Indirect Prompt Injection: Inject malicious instructions through data sources
- Token Theft: Extract authentication tokens from insecure storage
Hard Challenges
- Malicious Code Execution: Execute arbitrary code through vulnerable tools
- Remote Access Control: Gain remote access to the system through command injection
- Multi-Vector Attack: Chain multiple vulnerabilities for a sophisticated attack
See the Challenges Guide for detailed descriptions of each challenge.
Solutions
Solution guides are provided for educational purposes. It's recommended to attempt the challenges on your own before consulting the solutions.
See the Solutions Guide for detailed solutions to each challenge.
Disclaimer
This project is for educational purposes only. The vulnerabilities demonstrated in this project should never be implemented in production systems. Always follow security best practices when implementing MCP servers.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Author
This project is created by Harish Santhanalakshmi Ganesan using cursor IDE and Manus AI.
This server cannot be installed
An educational project that deliberately implements vulnerable MCP servers to demonstrate various security risks like prompt injection, tool poisoning, and code execution for training security researchers and AI safety professionals.