wassden
Spec-Driven Development MCP server that transforms any LLM into a systematic development agent
🚀 Generate requirements → design → tasks → code with 100% traceability and validation
⚡ Quick Start (MCP)
Related MCP server: Serena MCP Server
🎯 What It Does
✅ Requirements → Design → Tasks → Code - Complete SDD workflow with structured prompts
🔒 100% READ-ONLY - No file modification risk, only analysis and prompt generation
🌍 Auto Language Detection - Seamless Japanese/English support
⚡ Ultra-fast Performance - <0.01ms response time with 405+ tests passing
📊 100% Traceability - Complete REQ↔DESIGN↔TASK mapping enforcement
📦 Installation
Prerequisites
uv package manager (pip install uv)
MCP-compatible client (Claude Code, Cursor, GitHub Copilot, etc.)
Claude Code
Cursor, GitHub Copilot, Other MCP Clients
Edit your MCP settings file:
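A typical entry looks like the sketch below. The launch command shown (uvx wassden) is an assumption, and the exact command and settings file location depend on your client and installation, so check the getting-started documentation before copying it.

```json
{
  "mcpServers": {
    "wassden": {
      "command": "uvx",
      "args": ["wassden"]
    }
  }
}
```

Most clients need a restart or reload before they pick up the new server entry.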
📚 For detailed setup and development installation, see docs/development.md
🚀 Basic Usage
1. Analyze & Generate Requirements
2. Validate Generated Specs
3. Get Traceability Matrix
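In day-to-day use these steps are driven from your MCP client's chat, but they can also be scripted. The sketch below walks the three steps with the MCP Python SDK; the uvx wassden launch command and every argument name (userInput, requirementsDocument, specDocuments) are assumptions made for illustration, so consult the tool reference for the actual schemas.

```python
# Illustrative sketch only: the launch command and tool argument names below are
# assumptions, not taken from the wassden documentation.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumption: the server can be started with "uvx wassden".
server = StdioServerParameters(command="uvx", args=["wassden"])


async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Analyze the project description and generate a requirements prompt.
            requirements = await session.call_tool(
                "prompt-requirements",
                arguments={"userInput": "A CLI tool that syncs bookmarks"},  # hypothetical argument
            )

            # 2. Validate the requirements document written from that prompt.
            validation = await session.call_tool(
                "validate-requirements",
                arguments={"requirementsDocument": "requirements.md"},  # hypothetical argument
            )

            # 3. Build the REQ↔DESIGN↔TASK traceability matrix.
            traceability = await session.call_tool(
                "get-traceability",
                arguments={"specDocuments": ["requirements.md", "design.md", "tasks.md"]},  # hypothetical
            )

            for result in (requirements, validation, traceability):
                print(result.content)


asyncio.run(main())
```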
🛠️ Core Tools
Requirements Generation
prompt-requirements - Analyzes input for completeness and generates an EARS requirements prompt
Default: Checks completeness, asks questions if incomplete, generates the prompt if complete
--force: Skip completeness verification and generate requirements prompt
Design & Planning
prompt-design - Generates architectural design prompt from requirements
prompt-tasks - Creates WBS task breakdown prompt from design
Validation Suite
validate-requirements - Ensures EARS format and completeness
validate-design - Checks architectural consistency
validate-tasks - Verifies task dependencies and coverage
Traceability & Analysis
get-traceability - Complete REQ↔DESIGN↔TASK dependency mapping
analyze-changes - Impact analysis for specification changes
generate-review-prompt - Task-specific implementation review
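Continuing the client sketch from the Basic Usage section, the design, planning, and impact-analysis tools might be chained like this. Again, the argument names are hypothetical placeholders, not the documented schemas.

```python
from mcp import ClientSession


async def design_and_plan(session: ClientSession) -> None:
    """Chain the design, task, and impact-analysis tools (argument names are assumptions)."""
    design_prompt = await session.call_tool(
        "prompt-design",
        arguments={"requirementsDocument": "requirements.md"},  # hypothetical argument
    )
    tasks_prompt = await session.call_tool(
        "prompt-tasks",
        arguments={"designDocument": "design.md"},  # hypothetical argument
    )
    impact = await session.call_tool(
        "analyze-changes",
        arguments={"changedRequirement": "REQ-03"},  # hypothetical argument
    )
    for result in (design_prompt, tasks_prompt, impact):
        print(result.content)
```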
📚 See full tool documentation for detailed usage and examples
📊 Performance
Response Time: <0.01ms average (ultra-fast analysis)
Test Coverage: 405+ tests with 100% passing
Memory Efficient: <50MB growth per 1000 operations
Concurrent Support: Handles 20+ parallel tool calls efficiently
🧪 Run benchmarks:
python benchmarks/run_all.py
🧪 Development Mode Features (Experimental)
⚠️ NOTE: Experimental features for internal validation and benchmarking. Requires dev installation.
The experiment framework provides validation and benchmarking capabilities for the wassden toolkit itself. These features are only available in development mode:
Key Capabilities
Statistical Analysis: Mean, variance, confidence intervals (REQ-04; sketched below)
Experiment Management: Save/load/compare configurations (REQ-05, REQ-06)
Comparative Analysis: Statistical significance testing (REQ-07)
Resource Constraints: 10-minute timeout, 100MB memory limit (NFR-01, NFR-02)
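For a rough sense of what the REQ-04 statistics mean, the snippet below computes a mean, sample variance, and a normal-approximation 95% confidence interval over a set of timing samples. It illustrates the arithmetic only and is not the framework's actual API.

```python
import math
import statistics

# Made-up timing samples in milliseconds, for illustration only.
samples = [0.009, 0.011, 0.010, 0.012, 0.008, 0.010]

mean = statistics.mean(samples)
variance = statistics.variance(samples)  # sample variance
stderr = statistics.stdev(samples) / math.sqrt(len(samples))

# 95% confidence interval using the normal approximation (z = 1.96).
ci_low, ci_high = mean - 1.96 * stderr, mean + 1.96 * stderr

print(f"mean={mean:.4f} ms  variance={variance:.6f}  95% CI=({ci_low:.4f}, {ci_high:.4f})")
```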
📚 See Experiment Framework Documentation for detailed usage
🎯 Use Cases
Development Teams: Systematic requirements gathering and project planning
AI Agents: Structured prompts for complex development workflows
Technical Writers: Automated documentation validation and traceability
Quality Assurance: Built-in validation with actionable feedback
📚 Documentation
Getting Started - Installation and first steps
Tool Reference - Complete tool documentation
Spec Format Guide - Requirements, design, and tasks format
Validation Rules - EARS format and traceability requirements
Development - Development setup and contributing
Experiment Framework - Experimental validation features (dev only)
Examples - Sample specifications (Japanese/English)
🤝 Contributing
Contributions are welcome! See docs/development.md for development setup.
📄 License
MIT License - see LICENSE file for details.
Built with ❤️ for the AI-driven development community