# Enhanced Architecture MCP

Enhanced Model Context Protocol (MCP) servers with professional accuracy, tool safety, user preferences, and intelligent context monitoring.

## Overview

This repository contains a collection of MCP servers that provide advanced architecture capabilities for AI assistants, including:

- **Professional Accuracy Enforcement** - Prevents marketing language and ensures factual descriptions
- **Tool Safety Protocols** - Blocks prohibited operations and validates parameters
- **User Preference Management** - Stores and applies communication and aesthetic preferences
- **Intelligent Context Monitoring** - Automatic token estimation and threshold warnings
- **Multi-MCP Orchestration** - Coordinated workflows across multiple servers

## Active Servers
### Enhanced Architecture Server (`enhanced_architecture_server_context.js`)

Primary server with the complete feature set:

- Professional accuracy verification
- Tool safety enforcement
- User preference storage/retrieval
- Context token tracking
- Pattern storage and learning
- Violation logging and metrics
### Chain of Thought Server (`cot_server.js`)

Reasoning strand management (see the example after this list):

- Create and manage reasoning strands
- Branch reasoning paths
- Complete strands with conclusions
- Cross-reference reasoning history
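
A hypothetical call sequence for this lifecycle, using the official MCP SDK client. The tool names (`create_strand`, `branch_strand`, `complete_strand`), argument shapes, and the strand ID are illustrative assumptions, not confirmed from `cot_server.js`:

```javascript
// Illustrative client-side sequence; tool names and payloads are assumptions.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "cot-demo", version: "0.0.1" });
await client.connect(
  new StdioClientTransport({ command: "node", args: ["cot_server.js"] })
);

// Create a strand, branch it, then close it with a conclusion.
await client.callTool({
  name: "create_strand",
  arguments: { topic: "cache eviction policy" },
});
await client.callTool({
  name: "branch_strand",
  arguments: { strandId: "strand-1", hypothesis: "LRU fits the access pattern" },
});
await client.callTool({
  name: "complete_strand",
  arguments: { strandId: "strand-1", conclusion: "Adopt LRU with a TTL fallback" },
});
```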
### Local AI Server (`local-ai-server.js`)

Local model integration via Ollama (see the sketch after this list):

- Delegate heavy reasoning tasks
- Token-efficient processing
- Hybrid local+cloud analysis
- Model capability queries
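
A minimal sketch of what delegating a task to a local model can look like. The HTTP endpoint shown (`/api/generate`) is Ollama's real API; the wrapper function and its name are assumptions, not code taken from `local-ai-server.js`:

```javascript
// Sketch of a delegation call to a locally running Ollama instance.
// The wrapper function is an assumption about how local-ai-server.js
// might use Ollama's /api/generate endpoint.
async function delegateToLocalModel(prompt, model = "llama3.1:8b") {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return data.response; // the generated text
}
```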
## Installation

**Prerequisites:**

```bash
npm install
```

**Configuration:** Update your Claude Desktop configuration to include the servers:

```json
{
  "mcpServers": {
    "enhanced-architecture": {
      "command": "node",
      "args": ["D:\\arch_mcp\\enhanced_architecture_server_context.js"],
      "env": {}
    },
    "cot-server": {
      "command": "node",
      "args": ["D:\\arch_mcp\\cot_server.js"],
      "env": {}
    },
    "local-ai-server": {
      "command": "node",
      "args": ["D:\\arch_mcp\\local-ai-server.js"],
      "env": {}
    }
  }
}
```

**Local AI Setup (Optional):** Install Ollama and pull models:

```bash
ollama pull llama3.1:8b
```
## Usage

### Professional Accuracy

Automatically prevents (a detection sketch follows the list):

- Marketing language ("revolutionary", "cutting-edge")
- Competitor references
- Technical specification enhancement
- Promotional tone
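
A minimal sketch of how such a check can be implemented. The pattern list and function name here are illustrative assumptions; the server's actual rule set is not shown in this README:

```javascript
// Illustrative marketing-language check; the real rules in
// enhanced_architecture_server_context.js may differ.
const PROHIBITED_PATTERNS = [
  /\brevolutionary\b/i,
  /\bcutting[- ]edge\b/i,
  /\bbest[- ]in[- ]class\b/i,
];

function checkProfessionalAccuracy(text) {
  const violations = PROHIBITED_PATTERNS.filter((p) => p.test(text));
  return {
    passed: violations.length === 0,
    violations: violations.map(String),
  };
}
```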
### Context Monitoring

Tracks conversation tokens across:

- Document attachments
- Artifacts and code
- Tool calls and responses
- System overhead

Warnings are issued at the 80% and 90% capacity thresholds.
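
A sketch of the threshold logic under a common heuristic of roughly four characters per token; both the estimator and the token budget below are assumptions, not the server's actual values:

```javascript
// Threshold-based context warnings, assuming a ~4 chars/token heuristic.
const CONTEXT_LIMIT = 200000; // illustrative token budget, not from the source

function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function contextWarning(usedTokens) {
  const ratio = usedTokens / CONTEXT_LIMIT;
  if (ratio >= 0.9) return "critical: 90% of context capacity used";
  if (ratio >= 0.8) return "warning: 80% of context capacity used";
  return null; // below both thresholds
}
```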
### User Preferences

Stores preferences for (an example record follows the list):

- Communication style (brief professional)
- Aesthetic approach (minimal)
- Message format requirements
- Tool usage patterns
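
A hypothetical preference record; the field names are assumptions derived from the categories above, not the server's actual schema:

```javascript
// Illustrative shape of a stored preference record.
const preferences = {
  communicationStyle: "brief professional",
  aestheticApproach: "minimal",
  messageFormat: { headers: false, bulletLists: true },
  toolUsage: { preferLocalModels: true },
};
```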
### Multi-MCP Workflows

Coordinates complex tasks, for example (sketched below):

1. Create a CoT reasoning strand
2. Delegate analysis to the local AI
3. Store insights in memory
4. Update architecture patterns
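
A sketch of that workflow, assuming each server is reachable through a connected MCP client; every tool name here is a hypothetical placeholder, since the README does not list the servers' actual tools:

```javascript
// Coordinated workflow across the three servers; tool names are assumptions.
async function runAnalysisWorkflow(clients, topic) {
  // 1. Open a reasoning strand on the CoT server.
  const strand = await clients.cot.callTool({
    name: "create_strand",
    arguments: { topic },
  });

  // 2. Hand the heavy analysis to the local AI server.
  const analysis = await clients.localAi.callTool({
    name: "delegate_task",
    arguments: { prompt: `Analyze trade-offs for: ${topic}` },
  });

  // 3-4. Persist the insight and update stored patterns.
  await clients.architecture.callTool({
    name: "store_insight",
    arguments: { topic, insight: analysis.content?.[0]?.text },
  });
  await clients.architecture.callTool({
    name: "update_pattern",
    arguments: { topic },
  });
  return strand;
}
```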
## Key Features

- **Version-Free Operation** - No version dependencies, capability-based reporting
- **Empirical Validation** - 60+ validation gates for decision-making
- **Token Efficiency** - Intelligent context management and compression
- **Professional Standards** - Enterprise-grade accuracy and compliance
- **Cross-Session Learning** - Persistent pattern storage and preference evolution
## File Structure
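
This README does not spell out the full tree; at minimum the repository contains the three server entry points described above, plus the `package.json` implied by the `npm install` step (other files may exist):

```
arch_mcp/
├── enhanced_architecture_server_context.js
├── cot_server.js
├── local-ai-server.js
└── package.json
```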
## Development

### Architecture Principles

- **Dual-System Enforcement** - MCP tools + text document protocols
- **Empirical Grounding** - Measurable validation over assumptions
- **User-Centric Design** - Preference-driven behavior adaptation
- **Professional Standards** - Enterprise accuracy and safety requirements
### Adding New Features

1. Update server tool definitions (steps 1-2 are sketched after this list)
2. Implement handler functions
3. Add empirical validation gates
4. Update user preference options
5. Test cross-MCP coordination
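
A sketch of steps 1 and 2 using the MCP SDK's low-level `Server` API; the example tool (`estimate_tokens`) is hypothetical, not one of this repository's actual tools:

```javascript
// Minimal tool definition + handler pair; the tool itself is illustrative.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "enhanced-architecture", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Step 1: declare the tool definition.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "estimate_tokens",
    description: "Estimate token usage for a block of text",
    inputSchema: {
      type: "object",
      properties: { text: { type: "string" } },
      required: ["text"],
    },
  }],
}));

// Step 2: implement the handler.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "estimate_tokens") {
    const tokens = Math.ceil(request.params.arguments.text.length / 4);
    return { content: [{ type: "text", text: String(tokens) }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});
```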
## Troubleshooting

**Server Connection Issues:**

- Check Node.js version compatibility
- Verify file paths in the configuration
- Review server logs for syntax errors

**Context Tracking:**

- Monitor token estimation accuracy
- Adjust limits for conversation length
- Use the reset tools to start fresh sessions

**Performance:**

- Local AI features require an Ollama installation
- Context monitoring adds ~50 ms of overhead
- Pattern storage is optimized for sub-2 ms responses
## License

MIT License - see individual files for specific licensing terms.
## Contributing

Architecture improvements are welcome. Focus areas:

- Enhanced token estimation accuracy
- Additional validation gates
- Cross-domain pattern recognition
- Performance optimization