Structured Workflow MCP Server
An MCP server that enforces disciplined programming practices by requiring AI assistants to audit their work and produce verified outputs at each phase of development.
Why I Built This
TL;DR: I found that prompting with two words, inventory and audit, made the AI think systematically and follow structured phases during development, but I got tired of repeating them across every platform and prompt, so I built this MCP server to enforce that discipline automatically.
The Details: Like many of you, over the last year of learning to use AI in development, I got frustrated with AI not thinking through problems the way I do.
When I approach a problem, I ask: How are these components connected? How do they relate to other systems? What side effects will this change produce? What steps ensure success? What already exists in my codebase?
AI skips this analysis. It jumps into code changes without understanding the system it's building into. It creates new classes and folder structures when they already exist. It adds code without understanding component relationships or potential side effects. The result was usually duplicated classes, functions, unused helpers, and other code that didn't fit the system I was working on.
Planning modes helped, but didn't always force the AI to break down the problem properly, especially in larger existing codebases. Eventually I discovered two key words: inventory and audit. Forcing AI to INVENTORY and AUDIT before acting was the key to getting the model to be thorough and disciplined in understanding the system it was working on. But I had to keep repeating these instructions across multiple prompts and different AI platforms.
I looked for existing MCP tools but didn't find anything quite like what I needed. The Sequential Thinking MCP server was inspiring (and I still use this a lot), but I needed something that went further - forcing AI to follow structured phases and produce verifiable output before proceeding.
So I built this for myself. I need this kind of disciplined workflow. If others find it useful, great. If not, no worries - I'll keep using it because it solves my problem.
I'm sharing this in case others have similar frustrations. Contributions, improvements, and discussion are welcome.
Features
Enforced Workflow Phases - AI must complete specific phases in order (audit, analysis, planning, implementation, testing, etc.)
Mandatory Output Artifacts - Each phase requires structured documentation or verified outputs before proceeding
Multiple Workflow Types:
- Refactor workflows for code improvement
- Feature development with integrated testing
- Test-focused workflows for coverage improvement
- Test-driven development (TDD) cycles
- Custom workflows for specialized needs
Output Verification - The server validates that outputs contain meaningful content and proper structure
Session State Management - Tracks progress and prevents skipping phases
Installation
Quick Start (Recommended) - Zero Installation
Add to your AI assistant config - Uses npx automatically:
VS Code / Cursor / Windsurf - Add to your MCP settings:
Claude Desktop - Add to your claude_desktop_config.json:
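A minimal example entry, assuming the package is published under the placeholder name structured-workflow-mcp (substitute the project's actual npm package name):

```json
{
  "mcpServers": {
    "structured-workflow": {
      "command": "npx",
      "args": ["-y", "structured-workflow-mcp"]
    }
  }
}
```

The same mcpServers block works for Claude Desktop's claude_desktop_config.json and for editor MCP settings that follow this format.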
Global Installation (Optional)
You can also install the package globally on your machine first:
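For example, using the same placeholder package name as above:

```bash
npm install -g structured-workflow-mcp
```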
Then use in your AI assistant config:
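With a global install, the config can point directly at the installed command instead of npx (placeholder name again):

```json
{
  "mcpServers": {
    "structured-workflow": {
      "command": "structured-workflow-mcp"
    }
  }
}
```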
Auto-Install via Smithery
For Claude Desktop users (Smithery also offers other options, such as adding directly to Cursor):
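A hedged example using the Smithery CLI; the Smithery package identifier below is a placeholder, not confirmed for this project:

```bash
npx -y @smithery/cli install structured-workflow-mcp --client claude
```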
Manual Installation
For developers:
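A typical local setup, assuming standard npm scripts and a placeholder repository URL:

```bash
git clone https://github.com/your-username/structured-workflow-mcp.git
cd structured-workflow-mcp
npm install
npm run build
```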
Usage
Once configured in your AI assistant, start with these workflow tools:
- mcp__structured-workflow__build_custom_workflow - Create custom workflows
- mcp__structured-workflow__refactor_workflow - Structured refactoring
- mcp__structured-workflow__create_feature_workflow - Feature development
- mcp__structured-workflow__test_workflow - Test coverage workflows
Example Output Artifacts
The server enforces that AI produces structured outputs like these:
AUDIT_INVENTORY Phase Output:
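The artifact below is an illustrative sketch only; the field names and example files are assumptions, not the server's exact schema:

```json
{
  "phase": "AUDIT_INVENTORY",
  "filesExamined": ["src/auth/login.ts", "src/auth/session.ts"],
  "existingComponents": ["SessionStore", "validateCredentials()"],
  "plannedChanges": [
    "Extract token refresh logic into a shared helper",
    "Remove duplicated credential validation in login.ts"
  ],
  "risks": ["Session invalidation side effects in the logout flow"]
}
```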
COMPARE_ANALYZE Phase Output:
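Again a hedged sketch; the structure and names are illustrative only:

```json
{
  "phase": "COMPARE_ANALYZE",
  "approaches": [
    {
      "name": "Extract a shared helper module",
      "pros": ["Removes duplication", "Single place to test"],
      "cons": ["Touches two call sites"]
    },
    {
      "name": "Fix inline in each file",
      "pros": ["Smallest diff"],
      "cons": ["Duplication remains"]
    }
  ],
  "recommendation": "Extract a shared helper module"
}
```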
Each phase requires documented analysis and planning before the AI can proceed to implementation.
Tools
Workflow Entry Points
refactor_workflow - Start a structured refactoring process with required analysis and planning phases
create_feature_workflow - Develop new features with integrated testing and documentation requirements
test_workflow - Add test coverage with mandatory analysis of what needs testing
tdd_workflow - Implement Test-Driven Development with enforced Red-Green-Refactor cycles
build_custom_workflow - Create workflows with custom phases and validation requirements
Phase Guidance Tools
- audit_inventory_guidance - Forces thorough code analysis and change cataloging
- compare_analyze_guidance - Requires evaluation of multiple approaches with pros/cons
- question_determine_guidance - Mandates clarification and finalized planning
- phase_output - Validates and records structured outputs from each phase
- workflow_status - Check current progress and validation state
Usage
The server enforces structured workflows through mandatory phases. Each workflow type has different phase requirements:
- Refactor Workflow: AUDIT_INVENTORY → COMPARE_ANALYZE → QUESTION_DETERMINE → WRITE_OR_REFACTOR → LINT → ITERATE → PRESENT
- Feature Workflow: PLANNING → QUESTION_DETERMINE → WRITE_OR_REFACTOR → TEST → LINT → ITERATE → PRESENT
- Test Workflow: AUDIT_INVENTORY → QUESTION_DETERMINE → WRITE_OR_REFACTOR → TEST → ITERATE → PRESENT
- TDD Workflow: PLANNING → WRITE_OR_REFACTOR → TEST → (Red-Green-Refactor cycles) → LINT → PRESENT
Input Validation
The server requires:
- task (string): Description of what you want to accomplish
- outputArtifacts (array): Structured documentation for each completed phase
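For illustration, an input reporting a completed phase might look roughly like this (a sketch under assumed field shapes; consult the tool schemas for the exact format):

```json
{
  "task": "Refactor the payment module to remove duplicated validation logic",
  "outputArtifacts": [
    {
      "phase": "AUDIT_INVENTORY",
      "content": "Catalogued duplicated validators in checkout.ts and invoice.ts; no existing shared helper found."
    }
  ]
}
```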
Output Validation
Each phase completion is validated for:
- Meaningful content length (minimum 10 characters)
- Valid JSON format for structured outputs
- Phase-specific content requirements
- Proper documentation of decisions and analysis
Safety Rule
Files must be read before modification. This prevents accidental data loss and ensures informed changes.
Development
How It Works
- AI starts a workflow using one of the entry point tools
- Server creates a session and tracks phase progression
- Each phase requires specific outputs before proceeding
- The phase_output tool validates that artifacts have meaningful content
- AI cannot skip phases or claim completion without verified outputs
- Session state prevents circumventing the structured approach
Testing the MCP Server
You can quickly try out the Structured Workflow MCP server using the test prompts and helper scripts included in this repository.
- Build the server (if you haven't already); typical build and start commands are sketched after this list.
- Start the server.
- Open the test prompt docs/test_prompt/mcp_server_test_prompt.md in your preferred MCP-compatible AI client and paste the contents.
- Alternatively, open the sample project located in refactor-test/ for an end-to-end refactor workflow demo. Follow the steps in its README.md to run and observe the structured workflow in action.
- Watch the AI progress through each phase and verify the structured outputs produced.
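A plausible set of commands, assuming the project uses standard npm scripts (check package.json for the actual script names and build output path):

```bash
npm install
npm run build
npm start        # or: node dist/index.js, depending on the build output
```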
Sample Prompts
The docs/sample_prompts directory contains several ready-to-use prompts illustrating typical workflows:
- feature_workflow_prompt.md
- refactor_workflow_prompt.md
- test_workflow_prompt.md
- tdd_workflow_prompt.md
- custom_workflow_prompt.md
Use these as a starting point and adapt them to your projects.
Building
The server uses TypeScript with the @modelcontextprotocol/sdk and runs locally via stdio transport.
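For orientation, here is a minimal sketch of how a stdio MCP server with one workflow entry-point tool can be wired up with the SDK. It is illustrative only and not this project's actual implementation; the tool registration, schema, and response text are assumptions:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Server instance that MCP clients (Claude Desktop, Cursor, etc.) connect to.
const server = new McpServer({ name: "structured-workflow", version: "0.1.0" });

// Register a workflow entry point; the real server would also create a
// session here and start tracking phase progression.
server.tool(
  "refactor_workflow",
  { task: z.string().describe("Description of what you want to accomplish") },
  async ({ task }) => ({
    content: [
      { type: "text", text: `Refactor workflow started for task: ${task}` },
    ],
  })
);

// Run over stdio so the client can spawn the server as a local process.
const transport = new StdioServerTransport();
await server.connect(transport);
```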
Pull Requests Welcome
We welcome and encourage pull requests! Whether you're fixing bugs, adding features, or improving documentation, your contributions are valuable.
Please follow these steps:
- Fork the repository on GitHub.
- Create a new branch: git checkout -b feature/your-feature.
- Make your changes and commit with clear, descriptive messages.
- Write tests for any new functionality and ensure all existing tests pass.
- Push to your branch: git push origin feature/your-feature.
- Open a pull request and describe your changes clearly.
If the repository includes a CONTRIBUTING.md, see it for more details.
Thank you for contributing!
License
This MCP server is licensed under the MIT License: you are free to use, modify, and distribute the software, subject to the license's terms and conditions.