

OpenManager Vibe V4

OpenManager Vibe V4 is a natural-language-based server analysis and monitoring system. When an administrator asks a question about server status in plain language, the system automatically analyzes the situation and returns the results.

Project Structure

.
├── frontend/                     # Frontend code
│   ├── css/                      # Stylesheets
│   ├── public/                   # Static assets
│   ├── index.html                # Main landing page
│   ├── server_dashboard.html     # Server monitoring dashboard
│   ├── server_detail.html        # Server detail page
│   ├── agent.js                  # Agent script
│   ├── ai_processor.js           # Natural language processing engine
│   ├── data_processor.js         # Data processing logic
│   ├── dummy_data_generator.js   # Dummy data generator
│   ├── fixed_dummy_data.js       # Fixed dummy data
│   ├── summary.js                # Data summarization and report generation
│   └── config.js                 # Configuration file
└── mcp-lite-server/              # Backend server
    ├── context/                  # Context documents
    ├── server.js                 # Backend server code
    └── package.json              # Backend dependency definitions

MCP linkage flow

The frontend and MCP Lite server are integrated as follows:

  1. Frontend: the user enters a question about server status in natural language.
  2. API call: the frontend sends the question and context information to the MCP server's /query endpoint (see the sketch below).
  3. Backend processing: the MCP server matches the question against the context files and generates an appropriate response.
  4. Display results: the response is returned to the frontend and displayed visually to the user.
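
As an illustration of step 2, here is a minimal sketch of the frontend call, assuming the /query endpoint accepts a JSON POST; the server URL and field names are assumptions, not taken from the source:

const MCP_SERVER_URL = 'https://example-mcp-server.example.com'; // hypothetical URL

async function queryMcpServer(question, context) {
  // Field names in the body are assumed; the real server may expect a different shape.
  const response = await fetch(`${MCP_SERVER_URL}/query`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: question, context }),
  });
  if (!response.ok) throw new Error(`MCP server returned ${response.status}`);
  return response.json(); // the answer is then rendered by the UI (step 4)
}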

In a demonstration environment:

  • Most of the logic runs on the frontend and uses simulated data.
  • The MCP server plays a secondary role, providing only simple natural language processing.
  • In a production environment, the MCP server can be extended to link real monitoring data with an advanced LLM.

Key Features

  • Natural language queries: administrators can ask about server status and issues in everyday language
  • Automatic analysis: the system analyzes the question and returns the list of affected servers and the cause of the problem
  • Intuitive dashboard: visually displays server status and performance metrics
  • Detailed reports: generates analysis reports covering the cause of a problem and its solution
  • Data filtering: filter by time, server type, and location

Technology Stack

Frontend

  • HTML5/CSS3/JavaScript (Vanilla)
  • Bootstrap UI Framework
  • Chart.js (data visualization)
  • Font Awesome / Bootstrap Icons

Backend

  • Node.js
  • Express.js
  • File-based context management system

🏗 System architecture: Large-scale AI agent vs. this project (MCP-based)

🧠 Real LLM-based AI agent architecture

  • Data flow: real-time monitoring logs → collector (Kafka, Fluentd, etc.) → analysis engine
  • Natural language processing:
    • Integration with LLM APIs (OpenAI, Claude, etc.)
    • Python/Java-based backend
    • Advanced query interpretation and contextual understanding
  • Analysis engine:
    • Integration with time-series/search engines such as InfluxDB and Elasticsearch
    • Event-based pattern analysis and alert triggering
  • UI integration:
    • Fully integrated dashboard
    • Conversational interface plus learning from usage history

⚙️ This project (lightweight MCP-based demo system)

This project layers several pieces of self-developed "AI agent" logic to provide AI-like functionality without an LLM (Large Language Model). Each component performs a specific role, and together they give users an intelligent server analysis and monitoring experience.

  • Frontend: built in pure HTML/JS and deployed to Netlify. Most of the complex AI agent logic lives here.
  • MCP server (backend): Node.js-based, deployed on Render. Mainly responsible for simple Q&A and statistical analysis.
Multilayer "LLM-Free AI Agent" Components
  1. Simple MCP server (the /query endpoint in mcp-lite-server/server.js):
    • Role: provides basic keyword-matching question answering based on the contents of the text files in the context/ folder.
    • How it works: responds by checking whether a word from the user's query appears in a specific line of a context file (see the first sketch after this list).
    • Limitations: simple string matching offers poor contextual understanding and may return irrelevant information.
  2. Backend AI agent (mcp-lite-server/ai_agent.js and the /api/ai/query endpoint):
    • Role: detects anomalies through statistical analysis (e.g., Z-score) of server metrics data and generates simple, pattern-based natural language answers for certain question types.
    • How it works: analyzes numerical data to identify statistical outliers and fills predefined response templates (see the second sketch after this list).
    • Limitations: only responds to a limited set of scenarios and question types.
  3. Frontend AI processor (frontend/ai_processor.js):
    • Role: performs the most sophisticated LLM-free agent logic in the current system. Responsible for defining detailed rule-based problem patterns, analyzing user natural language queries (simple NLU), analyzing causes and suggesting solutions, and generating dynamic report content.
    • How it works: analyzes server data against the rules and conditions defined in problemPatterns, identifies the intent of the user's question through analyzeQuery, and provides tailored information through the various generate...Response functions.
    • Features: most of the intelligent logic is implemented in frontend JavaScript.
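
To make layer 1 concrete, here is a minimal sketch of keyword matching against context files; the function name and file handling are assumptions, since the source only states that query words are matched against lines in the context documents:

const fs = require('fs');
const path = require('path');

// Sketch of layer 1: return the first context line containing any query word.
function answerFromContext(query, contextDir) {
  const words = query.toLowerCase().split(/\s+/).filter(w => w.length > 1);
  for (const file of fs.readdirSync(contextDir)) {
    const lines = fs.readFileSync(path.join(contextDir, file), 'utf8').split('\n');
    const hit = lines.find(line =>
      words.some(word => line.toLowerCase().includes(word))
    );
    if (hit) return hit;
  }
  return 'No matching context found.';
}

As noted above, this kind of matching is cheap to extend (drop a new .txt into context/) but has no real understanding of the question.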
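
Layer 2's statistical check can be sketched as a simple Z-score pass over a metric series; the threshold value and data shape are assumptions:

// Sketch of layer 2: flag points whose Z-score exceeds a threshold.
function detectAnomalies(values, threshold = 3) {
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  const variance = values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
  const stdDev = Math.sqrt(variance) || 1; // guard against zero deviation
  return values
    .map((v, i) => ({ index: i, value: v, zScore: (v - mean) / stdDev }))
    .filter(p => Math.abs(p.zScore) > threshold);
}

An outlier found here would then be slotted into one of the predefined response templates.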
Advantages and limitations of this approach
  • Advantages:
    • AI-like behavior without an LLM (low cost, high efficiency)
    • Responses can be extended simply by adding documents (for the simple MCP server)
    • Low adoption and maintenance costs (no external LLM API dependency or fees)
    • Rules can be tuned for a specific domain (server monitoring)
  • Limitations:
    • Limited question interpretation (no deep context understanding; shallow natural language understanding)
    • Not suitable for large-scale real-time analysis
    • Performance depends on the sophistication of the rules and patterns, making it hard to respond flexibly to new problem types or questions (especially in the frontend AI processor)
    • Because the core logic is concentrated in the frontend, scalability and maintainability may be limited

🤖 Development method (based on Vibe Coding)

This project was developed by prompting GPT-based tools and using Cursor for AI-assisted coding.

Development phase flow

Step 1: Initial planning and feature definition (using GPT)
  • Proposing the project structure
  • Defining the MCP server's role and the context-based response method
  • Setting the technology stack and the basic UI direction
Step 2: Implementing features and frontend integration (using Cursor)
  • Structuring the frontend JS code
  • Handling MCP requests via fetch
  • Rendering Markdown responses
Step 3: Refinement and document-pattern responses (Cursor + GPT collaboration)
  • Extending the multi-document response structure in context/
  • Designing automatic report templates
  • Designing response documents by failure type and branching between them

📐 Development Guidelines

✅ UI & Design

  • index.html and the UI styles should remain as they are (preserving at least 90% is recommended)
  • Changes are allowed only to the extent that they do not break the user experience flow

✅ MCP Backend

  • Server functionality can be freely improved by extending and adding to the context structure.
  • Context documents are text-based .txt or .md files.
  • A RESTful structure is recommended when extending the API.

Development Guidelines

Please follow these guidelines when working on this project:

  • Index file and user interface: the current UI/UX design must be thoroughly preserved.
    • index.html and externally visible user interface components should retain at least 90% of their current styles.
    • Do not modify the frontend design unless absolutely necessary.

Backend Development

  • Backend feature improvements: server-side feature improvements and extensions may proceed freely.
    • Improving data processing logic
    • Adding and optimizing API endpoints
    • Improving performance and enhancing scalability

Install and run

Frontend

cd frontend
# Run with a static server (e.g., VS Code's Live Server or another static file server)

Backend Server

cd mcp-lite-server
npm install
node server.js

Deployment Environment

  • Frontend: deployed to Netlify
  • Backend (MCP server): deployed on Render

Future development plans

  1. AI integration: linkage with a real natural-language-processing LLM
  2. Real-time data: integration with real server monitoring systems (Prometheus, Zabbix, etc.)
  3. Expanded visualization: more diverse data analysis graphs and charts
  4. Notification system: automatic notifications and report delivery on failure

Developer Information

This project was developed with Vibe Coding, drawing on various AI models such as Claude, GPT, and Gemini.

License

This project was created for internal development purposes and no separate license is specified for it.

Recent Improvements (Release v4.1.0)

1. Improved and consistent lightweight NLU structure

  • Implemented a lightweight NLU (Natural Language Understanding) architecture on both the frontend and the backend.
  • User queries are consistently decomposed into intents and entities (see the sketch below).
  • Intents: CPU_STATUS, MEMORY_STATUS, DISK_STATUS, INCIDENT_QUERY, SERVER_STATUS, etc.
  • Entities: server_type, threshold, time_range, severity, etc.
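
As an illustration, a rule-based extractor for the intents and entities above might look like the following sketch; the regex patterns are assumptions, not the project's actual rules:

// Illustrative rule-based NLU: map a query to an intent plus entities.
function analyzeQuery(query) {
  const q = query.toLowerCase();
  const rules = [
    { intent: 'CPU_STATUS', pattern: /\bcpu\b/ },
    { intent: 'MEMORY_STATUS', pattern: /memory|ram/ },
    { intent: 'DISK_STATUS', pattern: /disk|storage/ },
    { intent: 'INCIDENT_QUERY', pattern: /incident|failure|outage/ },
  ];
  const matched = rules.find(r => r.pattern.test(q));
  const entities = {};
  const threshold = q.match(/(\d+)\s*%/);                        // e.g. "above 90%"
  if (threshold) entities.threshold = Number(threshold[1]);
  const timeRange = q.match(/last\s+\d+\s*(?:minute|hour|day)s?/);
  if (timeRange) entities.time_range = timeRange[0];
  return { intent: matched ? matched.intent : 'SERVER_STATUS', entities };
}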

2. Improved frontend-backend query processing logic

  • Added an NLU analysis function to the frontend's process_query.js that is compatible with the backend.
  • Improved the backend API response structure to explicitly provide intent and entity information to the frontend (a hypothetical example follows below).
  • Improved error handling so that all API endpoints handle errors consistently.
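
A response carrying that information might be shaped as below; only the presence of intent/entity data and a consistent error field is stated in the text, so every field name here is a hypothetical example:

// Hypothetical backend response shape (field names assumed).
const exampleResponse = {
  answer: 'web-03 has averaged 92% CPU over the last hour.',
  nlu: {
    intent: 'CPU_STATUS',
    entities: { server_type: 'web', threshold: 90, time_range: 'last 1 hour' },
  },
  error: null, // consistent error field across all endpoints
};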

3. Improved context handling

  • Leverages conversation context to give more accurate, contextual responses to follow-up questions (see the sketch below).
  • Example: after a question about a specific server, a follow-up like "Explain why?" keeps the context.
  • Remembers the metric types (CPU, memory, etc.) mentioned earlier in the conversation and applies them to follow-up questions.
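
A minimal sketch of this retention, assuming the frontend keeps the last resolved intent and entities between turns (the store shape and follow-up heuristic are assumptions):

// Reuse the previous turn's intent/entities for vague follow-ups.
let conversationContext = { intent: null, entities: {} };

function resolveQuery(nluResult) {
  const isVagueFollowUp = nluResult.intent === 'SERVER_STATUS'
    && Object.keys(nluResult.entities).length === 0;
  if (isVagueFollowUp && conversationContext.intent) {
    // e.g. "Explain why?" inherits the prior question's intent and entities.
    return { ...conversationContext, followUp: true };
  }
  conversationContext = nluResult; // remember for the next turn
  return nluResult;
}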

4. Improved developer experience

  • A consistent API response structure makes frontend development easier.
  • Error handling logic added to all APIs improves stability.
  • A cleaner backend code structure improves maintainability.

Future Improvement Plans

  1. Structured context files - convert the current text-file-based context to JSON/YAML format
  2. Enhanced NLU capabilities - recognize more intents and entities
  3. Frontend UI improvements - visually surface the context-aware features
  4. Backend performance optimization - improve performance when processing large-scale metrics data

