# OpenManager Vibe V4

OpenManager Vibe V4 is a natural-language server analysis and monitoring system. When an administrator asks about server status in plain language, the system automatically analyzes the situation and returns the results.
## Deployment Links

- Frontend: https://openvibe3.netlify.app
- Backend API: https://openmanager-vibe-v4.onrender.com
## Project Structure
## MCP Integration Flow

The frontend and the MCP Lite server are integrated as follows:

- Frontend: The user enters a question about server status in natural language.
- API call: The frontend sends the question and context information to the `/query` endpoint of the MCP server.
- Backend processing: The MCP server matches the question against the context files and generates an appropriate response.
- Display results: The response is returned to the frontend and displayed visually to the user.
In the demonstration environment:

- Most of the logic runs on the frontend, using simulated data.
- The MCP server plays a secondary role, providing only simple natural-language processing.
- In a production environment, the MCP server can be extended to connect real monitoring data with a full LLM.
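The frontend-to-MCP call described above can be sketched as follows. Only the `/query` endpoint is documented in this README; the request body shape (`{ query }`) and the response shape are assumptions for illustration.

```javascript
// Sketch of the frontend's call to the MCP server's /query endpoint.
// The request body shape ({ query }) is an assumption; only the endpoint
// itself is documented in this README.
const MCP_BASE = "https://openmanager-vibe-v4.onrender.com";

function buildQueryRequest(question) {
  return {
    url: `${MCP_BASE}/query`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query: question }),
    },
  };
}

async function askMcp(question) {
  const { url, options } = buildQueryRequest(question);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`MCP server error: ${res.status}`);
  return res.json(); // the frontend renders the returned answer visually
}
```
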
## Key Features

- Natural-language queries: Administrators can ask about server status and issues in everyday language.
- Automatic analysis: The system analyzes the question and returns a list of related servers and the likely cause of the problem.
- Intuitive dashboard: Server status and performance metrics are displayed visually.
- Detailed reports: Analysis reports are generated, including the cause of the problem and a suggested solution.
- Data filtering: Results can be filtered by time, server type, and location.
## Technology Stack

### Frontend

- HTML5/CSS3/JavaScript (vanilla)
- Bootstrap UI framework
- Chart.js (data visualization)
- Font Awesome / Bootstrap Icons (icons)

### Backend

- Node.js
- Express.js
- File-based context management system
## 🏗 System Architecture: Large-Scale AI Agent vs. This Project (MCP-Based)

### 🧠 Full LLM-based AI agent architecture

- Data flow: real-time monitoring logs → collector (Kafka, Fluentd, etc.) → analysis engine
- Natural language processing:
  - Integration with LLM APIs (OpenAI, Claude, etc.)
  - Python/Java-based backend
  - Advanced query interpretation and contextual understanding
- Analysis engine:
  - Integration with time-series/search engines such as InfluxDB and Elasticsearch
  - Event-based pattern analysis and alert triggering
- UI integration:
  - Fully integrated dashboard
  - Conversational interface with usage-history learning
### ⚙️ This project (lightweight MCP-based demo system)

This project layers several pieces of self-developed "AI agent" logic to provide AI-like functionality without a large language model (LLM). Each component plays a specific role, and together they give users an intelligent server analysis and monitoring experience.

- Frontend: Built in pure HTML/JS and deployed to Netlify. Most of the complex AI agent logic lives here.
- MCP server (backend): Node.js-based, deployed on Render. Mainly responsible for simple Q&A and statistical analysis.
### Multilayer "LLM-Free AI Agent" Components

- Simple MCP server (`/query` endpoint in `mcp-lite-server/server.js`):
  - Role: Provides basic keyword-matching question answering based on the contents of the text files in the `context/` folder.
  - How it works: Checks whether a word in the user's query appears in a given line of a context file and responds with the matching content.
  - Limitations: Simple string matching offers poor contextual understanding and may return irrelevant information.
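The line-level keyword matching described above can be sketched roughly as follows. This is a simplified illustration, not the actual `server.js` code.

```javascript
// Simplified illustration of keyword matching over context-document lines,
// as described above. This is NOT the actual mcp-lite-server/server.js code.
function matchContext(query, contextLines) {
  // Split the query into words; very short words are dropped to reduce noise.
  const words = query.toLowerCase().split(/\s+/).filter((w) => w.length > 2);
  // Return every context line containing at least one query word.
  return contextLines.filter((line) => {
    const lower = line.toLowerCase();
    return words.some((w) => lower.includes(w));
  });
}
```

Because this is plain substring matching, an unrelated line that happens to contain a query word is still returned, which is exactly the limitation noted above.
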
- Backend AI agent (`mcp-lite-server/ai_agent.js` and the `/api/ai/query` endpoint):
  - Role: Detects anomalies through statistical analysis (e.g., Z-score) of server metric data and generates simple, pattern-based natural-language answers for certain question types.
  - How it works: Analyzes numerical data to identify statistical outliers and fills in predefined response templates.
  - Limitations: Responds only to a limited set of scenarios and question types.
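The Z-score approach mentioned above can be sketched like this (a simplified illustration, not the actual `ai_agent.js` code):

```javascript
// Sketch of Z-score based anomaly detection over a series of metric values,
// as described above. This is NOT the actual ai_agent.js code.
function detectAnomalies(values, threshold = 2) {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
  const std = Math.sqrt(variance);
  if (std === 0) return []; // all values identical: no outliers
  return values
    .map((v, i) => ({ index: i, value: v, z: (v - mean) / std }))
    .filter((p) => Math.abs(p.z) > threshold);
}
```

A point whose Z-score exceeds the threshold (commonly 2 or 3) is flagged, and a templated sentence such as "CPU usage on this server is unusually high" can then be generated from the flagged point.
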
- Frontend AI processor (`frontend/ai_processor.js`):
  - Role: Implements the most sophisticated LLM-free agent logic in the current system. It defines detailed rule-based problem patterns, analyzes the user's natural-language query (simple NLU), analyzes causes, suggests solutions, and generates dynamic report content.
  - How it works: Analyzes server data against the rules and conditions defined in `problemPatterns`, identifies the intent of the user's question through `analyzeQuery`, and provides tailored information through the various `generate...Response` functions.
  - Features: Most of the intelligent logic is implemented in front-end JavaScript code.
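The shape of this rule-based matching might look like the following sketch. The field names (`id`, `condition`, `cause`, `solution`) are hypothetical; the real `problemPatterns` in `frontend/ai_processor.js` is more detailed.

```javascript
// Illustrative sketch of rule-based problem detection in the style of
// problemPatterns. All field names and thresholds here are hypothetical.
const problemPatterns = [
  {
    id: "HIGH_CPU",
    condition: (m) => m.cpu > 90,
    cause: "A CPU-intensive process or insufficient capacity",
    solution: "Identify the top process or scale the server out",
  },
  {
    id: "DISK_FULL",
    condition: (m) => m.diskUsed > 85,
    cause: "Log growth or unrotated files filling the disk",
    solution: "Rotate logs and clean up old files",
  },
];

// Return every pattern whose condition matches the given server metrics;
// each match carries its own cause/solution text for the generated report.
function findProblems(metrics) {
  return problemPatterns.filter((p) => p.condition(metrics));
}
```
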
### Advantages and limitations of this approach

Advantages:

- AI-like behavior without an LLM (low cost, high efficiency)
- Responses can be extended simply by adding documents (for the simple MCP server)
- Low adoption and maintenance costs (no external LLM API dependency or fees)
- Rules can be tuned for a specific domain (server monitoring)

Limitations:

- Limited question interpretation (no deep context understanding; shallow natural-language understanding)
- Not suitable for large-scale real-time analysis
- Performance depends on how sophisticated the rules and patterns are, and it is hard to respond flexibly to new problem types or questions (especially in the frontend AI processor)
- Because the core logic is concentrated in the frontend, scalability and maintainability may be limited
## 🤖 Development Method (Vibe Coding)

This project was developed by feeding prompts to GPT-based tools and using Cursor to guide AI-assisted coding.

### Development phases

Step 1: Initial planning and feature definition (GPT)

- Proposed the project structure
- Defined the MCP server's role and the context-based response method
- Set the technology stack and the basic UI direction

Step 2: Feature implementation and frontend integration (Cursor)

- Built the frontend JavaScript code
- Implemented MCP request fetch handling
- Rendered Markdown responses

Step 3: Refinement and document-pattern responses (Cursor + GPT collaboration)

- Extended the multi-document response structure in `context/`
- Designed automatic report templates
- Designed response documents by failure type, with branching logic
## 📐 Development Guidelines

### ✅ UI & Design

- `index.html` and the UI styles should remain as they are (preserving 90% or more is recommended).
- Changes are allowed only to the extent that they do not break the user-experience flow.

### ✅ MCP Backend

- Server functionality can be freely improved by expanding and adding context structures.
- Context documents follow a text-based `.txt` or `.md` format.
- A RESTful structure is recommended when extending the API.
## Development Guidelines (Detailed)

Please follow these guidelines when working on the project:

UI and design (based on commit ad03d5f):

- Index file and user interface: The current UI/UX design must be preserved. `index.html` and externally visible UI components should retain at least 90% of their current styles.
- Do not modify the frontend design unless absolutely necessary.

Backend development:

- Server-side feature improvements and extensions may proceed freely:
  - Improving data-processing logic
  - Adding and optimizing API endpoints
  - Performance and scalability work
## Install and Run

### Frontend

### Backend Server
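The original install commands were not preserved in this copy of the README. The following is a plausible sketch only: the repository URL is a placeholder, and `npx serve` is just one way to serve static files. The `mcp-lite-server/` and `frontend/` directory names are taken from the file paths mentioned above.

```shell
# Clone the repository (URL is a placeholder; use the actual repo)
git clone https://github.com/your-org/openmanager-vibe-v4.git
cd openmanager-vibe-v4

# Backend: install dependencies and start the MCP Lite server
cd mcp-lite-server
npm install
node server.js        # serves the /query endpoint

# Frontend: the dashboard is static HTML/JS; serve the frontend/ folder
# with any static file server, for example:
cd ../frontend
npx serve .
```
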
## Deployment Environment

- Frontend: Netlify (https://openvibe3.netlify.app)
- Backend: Render.com (https://openmanager-vibe-v4.onrender.com)
## Future Development Plans

- AI integration: connect a real natural-language-processing LLM
- Real-time data: integrate with real server monitoring systems (Prometheus, Zabbix, etc.)
- Expanded visualization: diversify the data-analysis graphs and charts
- Notification system: automatic alerts and report delivery on failure
## Developer Information

This project was developed using Vibe Coding with various AI models, including Claude, GPT, and Gemini.
## License

This project was created for internal development purposes, and no license has been specified.
## Recent Improvements (Release v4.1.0)

### 1. Lightweight NLU structure and consistency

- Implemented a lightweight NLU (natural language understanding) architecture on both the frontend and the backend.
- User queries are consistently decomposed into intents and entities:
  - Intents: CPU_STATUS, MEMORY_STATUS, DISK_STATUS, INCIDENT_QUERY, SERVER_STATUS, etc.
  - Entities: server_type, threshold, time_range, severity, etc.
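A minimal sketch of this intent/entity decomposition follows, using the intent and entity names listed above. The matching rules (keyword checks, regexes, and the assumed server types) are illustrative, not the actual NLU code.

```javascript
// Sketch of splitting a query into an intent plus entities, using the
// intent/entity names listed above. The matching rules are illustrative.
function parseQuery(query) {
  const q = query.toLowerCase();
  let intent = "SERVER_STATUS";
  if (q.includes("cpu")) intent = "CPU_STATUS";
  else if (q.includes("memory")) intent = "MEMORY_STATUS";
  else if (q.includes("disk")) intent = "DISK_STATUS";
  else if (q.includes("incident") || q.includes("failure")) intent = "INCIDENT_QUERY";

  const entities = {};
  const threshold = q.match(/(\d+)\s*%/); // e.g. "above 90%"
  if (threshold) entities.threshold = Number(threshold[1]);
  const serverType = q.match(/\b(web|db|api|cache)\b/); // assumed server types
  if (serverType) entities.server_type = serverType[1];

  return { intent, entities };
}
```
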
### 2. Improved frontend/backend query processing

- Added an NLU analysis function to the frontend (`process_query.js`) that is compatible with the backend.
- Improved the backend API response structure to explicitly return intent and entity information to the frontend.
- Improved error handling so that all API endpoints handle errors consistently.
### 3. Improved context handling

- Conversation context is used to give more accurate, contextual responses to follow-up questions.
- Example: after a question about a specific server, a follow-up such as "Explain why?" keeps that server as the active context.
- Metric types (CPU, memory, etc.) mentioned in earlier turns are remembered and reused in follow-up questions.
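The follow-up behavior described above can be sketched as a small piece of remembered state. The structure and the server-name pattern are illustrative, not the actual implementation.

```javascript
// Sketch of conversation-context tracking: the last server and metric
// mentioned are remembered and reused when a follow-up question (such as
// "Explain why?") omits them. The structure here is illustrative.
const conversation = { lastServer: null, lastMetric: null };

function resolveQuery(query) {
  const serverMatch = query.match(/\b([\w-]+-\d+)\b/); // e.g. "web-01"
  const metricMatch = query.match(/\b(cpu|memory|disk)\b/i);
  if (serverMatch) conversation.lastServer = serverMatch[1];
  if (metricMatch) conversation.lastMetric = metricMatch[1].toLowerCase();
  // Fall back to the remembered context when the follow-up omits details.
  return {
    server: serverMatch ? serverMatch[1] : conversation.lastServer,
    metric: metricMatch ? metricMatch[1].toLowerCase() : conversation.lastMetric,
  };
}
```
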
### 4. Improved developer experience

- A consistent API response structure makes frontend development easier.
- Error-handling logic was added to all APIs to improve stability.
- The backend code structure was reorganized for maintainability.
## Future Improvement Plans

- Structured context files: convert the current text-based context to JSON/YAML.
- Enhanced NLU: recognize more intents and entities.
- Frontend UI improvements: surface the context-aware features visually.
- Backend performance optimization: improve processing of large-scale metric data.