# MCP Code Analysis & Quality Server - User Instructions
## Overview
The MCP Code Analysis & Quality Server is a comprehensive suite of Model Context Protocol (MCP) servers designed to provide advanced code analysis, quality metrics, and development insights. This monorepo contains multiple specialized servers that work together to deliver:
- **Static Analysis**: Multi-language code quality checks (ESLint, Pylint, etc.)
- **Complexity Analysis**: Code complexity metrics (cyclomatic, cognitive, Halstead)
- **Dependency Analysis**: Dependency management and vulnerability scanning
- **Combined Reporting**: Unified analysis results across all servers
## Getting Started
### Prerequisites
- Node.js 18+
- npm or yarn
- MCP-compatible client (Cline, VS Code with MCP extension, etc.)
### Installation
```bash
# Clone the repository
git clone <repository-url>
cd mcp-code-analyzer
# Install dependencies
npm install
# Build all packages
npm run build
```
### Running the Servers
```bash
# Run all servers in development mode
npm run dev
# Or run individual servers
cd packages/static-analysis-server && npm run dev
cd packages/complexity-analyzer-server && npm run dev
cd packages/dependency-analysis-server && npm run dev
```
## Using the MCP Tools
### Connecting to MCP Clients
#### Cline
1. Ensure Cline is configured to connect to MCP servers
2. Add the server configuration to your MCP settings (adjust the paths to match your installation):
```json
{
  "mcpServers": {
    "code-analysis": {
      "command": "node",
      "args": ["c:/mcpservers/mcp-code-analyzer/packages/static-analysis-server/dist/index.js"],
      "env": {}
    },
    "complexity-analyzer": {
      "command": "node",
      "args": ["c:/mcpservers/mcp-code-analyzer/packages/complexity-analyzer-server/dist/index.js"],
      "env": {}
    },
    "dependency-analysis": {
      "command": "node",
      "args": ["c:/mcpservers/mcp-code-analyzer/packages/dependency-analysis-server/dist/index.js"],
      "env": {}
    }
  }
}
```
#### VS Code with MCP Extension
1. Install the MCP extension for VS Code
2. Configure the server paths in the extension settings
3. The tools will be available in the command palette
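The exact settings schema depends on your VS Code and extension versions; recent VS Code releases read MCP server definitions from a `.vscode/mcp.json` file in the workspace. A minimal sketch (the path is the same example path used in the Cline configuration above):

```json
{
  "servers": {
    "code-analysis": {
      "command": "node",
      "args": ["c:/mcpservers/mcp-code-analyzer/packages/static-analysis-server/dist/index.js"]
    }
  }
}
```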
### Available Tools
#### Static Analysis Server Tools
- `analyze_file`: Analyze a single file for code quality issues
- `analyze_project`: Analyze an entire project for static analysis issues
- `batch_analyze`: Analyze multiple files in batch
- `get_rules`: Get available rules for a specific language
- `configure_analyzer`: Configure the analyzer with custom settings
#### Complexity Analyzer Server Tools
- `analyze_complexity`: Analyze code complexity with comprehensive metrics
- `calculate_metrics`: Calculate custom complexity metrics
- `identify_hotspots`: Identify complexity hotspots in codebase
- `suggest_refactorings`: Suggest refactoring opportunities
- `track_trends`: Track complexity trends over time
#### Dependency Analysis Server Tools
- `analyze_dependencies`: Analyze project dependencies
- `check_vulnerabilities`: Check for security vulnerabilities
- `update_dependencies`: Update outdated dependencies
- `generate_report`: Generate dependency analysis report
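Under the hood, an MCP client invokes these tools with JSON-RPC 2.0 `tools/call` requests; you normally never write these by hand, but seeing the shape helps when debugging. A request for `analyze_file` (argument names taken from the examples in this document) would look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_file",
    "arguments": {
      "filePath": "/path/to/your/file.js",
      "language": "javascript"
    }
  }
}
```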
## Scanning Files
### Scanning a Single File
#### Using Static Analysis
```bash
# Via MCP tool
analyze_file --filePath /path/to/your/file.js --language javascript
```
#### Using Complexity Analysis
```bash
# Via MCP tool
analyze_complexity --filePath /path/to/your/file.js --language javascript
```
#### Via MCP Client (Cline Example)
1. Open the file in your editor
2. Use the MCP tool: `Analyze file: /path/to/your/file.js`
3. Results will be displayed in the chat or output panel
### Scanning All Files in a Project
#### Using Static Analysis
```bash
# Via MCP tool
analyze_project --projectPath /path/to/your/project --excludePatterns '["node_modules/**", "dist/**"]'
```
#### Using Complexity Analysis
```bash
# Via MCP tool
identify_hotspots --projectPath /path/to/your/project --hotspotThreshold 15
```
#### Via MCP Client (Cline Example)
1. Navigate to your project root
2. Use the MCP tool: `Analyze project: /path/to/your/project`
3. Configure analysis options as needed
4. View comprehensive results across all files
### Batch Scanning
```bash
# Analyze multiple specific files
batch_analyze --filePaths '["/path/to/file1.js", "/path/to/file2.py", "/path/to/file3.ts"]'
```
## Creating AI Rules for Cline or Copilot
### Understanding AI Rules
AI rules help guide AI assistants (like Cline or GitHub Copilot) in following your project's coding standards, best practices, and conventions. The MCP servers can generate these rules based on your codebase analysis.
### Generating Rules from Analysis
#### Step 1: Run Comprehensive Analysis
```bash
# Analyze your entire project
analyze_project --projectPath /path/to/your/project --includeMetrics '["all"]'
```
#### Step 2: Extract Patterns and Rules
Use the complexity analyzer to identify common patterns:
```bash
# Identify complexity hotspots and patterns
identify_hotspots --projectPath /path/to/your/project --maxResults 20
# Suggest refactorings based on analysis
suggest_refactorings --filePath /path/to/your/file.js --complexityThreshold 10
```
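The `complexityThreshold` of 10 above refers to cyclomatic complexity. As a rough illustration of what that metric counts, here is a naive keyword-counting sketch; the servers themselves parse an AST rather than matching tokens, so treat this only as intuition for the numbers in the reports:

```typescript
// Naive sketch: cyclomatic complexity ~= 1 + number of decision points.
// Real analyzers walk the AST; this regex version is for illustration only.
function approximateCyclomaticComplexity(source: string): number {
  const decisionPattern = /\b(if|for|while|case|catch)\b|&&|\|\||\?/g;
  const matches = source.match(decisionPattern);
  return 1 + (matches ? matches.length : 0);
}

const snippet = `
function classify(n) {
  if (n < 0) return "negative";
  for (let i = 0; i < n; i++) {
    if (i % 2 === 0 && i > 2) {
      console.log(i);
    }
  }
  return "done";
}`;

// Two ifs, one for, one && -> 4 decision points, complexity 5.
console.log(approximateCyclomaticComplexity(snippet)); // 5
```

A function scoring above the threshold (10 here) is flagged as a refactoring candidate.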
#### Step 3: Create Custom Rules File
Based on the analysis results, create a rules file:
```markdown
# Project-Specific AI Rules
## Code Quality Rules
- Maximum cyclomatic complexity: 10
- Maximum cognitive complexity: 15
- Use TypeScript strict mode
- Follow ESLint recommended rules
- Maintain test coverage above 80%
## Naming Conventions
- Use camelCase for variables and functions
- Use PascalCase for classes and interfaces
- Use UPPER_CASE for constants
- Prefix private methods with underscore
## Architecture Patterns
- Use dependency injection for services
- Implement proper error handling with custom error types
- Follow the established service architecture pattern
- Use shared types for consistency across packages
## Import Rules
- Use workspace path mapping for cross-package imports
- Group imports: standard library, third-party, local
- Avoid relative imports across packages
```
#### Step 4: Configure in Your AI Assistant
##### For Cline
Add the rules to your Cline configuration:
```json
{
  "rules": {
    "projectRules": "/path/to/your/project/.clinerules",
    "customRules": [
      "Maximum complexity should not exceed 10",
      "Use TypeScript interfaces for all data structures",
      "Follow established error handling patterns"
    ]
  }
}
```
##### For GitHub Copilot
Create a `.github/copilot-instructions.md` file in your repository (this is the file Copilot reads for custom instructions):
```
# Project Rules for GitHub Copilot
## General Guidelines
- Follow TypeScript best practices
- Use async/await for asynchronous operations
- Implement proper error handling
- Write descriptive commit messages
## Code Style
- Use 2 spaces for indentation
- Maximum line length: 100 characters
- Use single quotes for strings
- Add JSDoc comments for public APIs
## Architecture
- Use the established service pattern
- Implement dependency injection
- Follow the shared types conventions
- Use the cross-server communication service for inter-server calls
```
### Automated Rule Generation
The MCP servers can help generate rules automatically:
```bash
# Get analysis rules for a language
get_rules --language typescript

# Configure analyzer with custom rules (config passed as a JSON string)
configure_analyzer --language javascript --config '{"rules": {"max-len": ["error", 100], "no-console": "warn"}}'
```
## Easiest Way to Use MCP Server Tools
### Quick Start Method
1. **Run All Servers**: Use `npm run dev` to start all servers simultaneously
2. **Connect MCP Client**: Configure your MCP client (Cline, VS Code) to connect to the servers
3. **Use Tools Directly**: Access tools through your client's interface
### Integrated Workflow
```bash
# 1. Start the servers
npm run dev
# 2. In your MCP client, use tools like:
# - "Analyze current file"
# - "Check complexity"
# - "Scan dependencies"
# - "Generate quality report"
```
### Docker Deployment (Alternative)
```bash
# Use Docker Compose for easy deployment
docker-compose up -d
# Servers will be available on configured ports
```
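The shape of such a compose file, assuming one service per package, might look like the following sketch; service names and build paths here are illustrative, so prefer the `docker-compose.yml` shipped with the repository:

```yaml
# Hypothetical docker-compose.yml shape -- names are illustrative.
services:
  static-analysis:
    build: ./packages/static-analysis-server
  complexity-analyzer:
    build: ./packages/complexity-analyzer-server
  dependency-analysis:
    build: ./packages/dependency-analysis-server
```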
### VS Code Integration
1. Install MCP extension for VS Code
2. Configure server endpoints
3. Use command palette to access tools
4. View results in VS Code panels
### Cline Integration
1. Configure MCP server connections in Cline settings
2. Use natural language commands:
- "Analyze this file for issues"
- "Check code complexity"
- "Find dependency vulnerabilities"
- "Generate quality metrics"
## Best Practices
### Performance Optimization
- Use caching: Results are cached to avoid redundant analysis
- Batch operations: Analyze multiple files together when possible
- Incremental analysis: Only analyze changed files
### Error Handling
- Check server logs for issues
- Use structured error messages with error codes
- Follow the established error handling patterns
### Customization
- Configure analysis rules per project
- Set custom thresholds for complexity metrics
- Define project-specific quality gates
## Troubleshooting
### Common Issues
- **Server not starting**: Check Node.js version and dependencies
- **Tools not available**: Verify MCP client configuration
- **Analysis fails**: Check file paths and permissions
- **Performance issues**: Use batch analysis and caching
### Getting Help
- Check server logs in `packages/*/logs/`
- Review the project documentation
- Use the combined reporting service for unified insights
## Advanced Usage
### Custom Tool Development
Extend the servers by adding custom tools in the `tools/` directories of each server package.
### Integration with CI/CD
Integrate analysis into your build pipeline using the Docker containers or npm scripts.
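A minimal sketch of such a pipeline step using GitHub Actions, relying only on the install and build scripts documented above (workflow name and triggers are assumptions):

```yaml
# Sketch of a CI job that builds the analysis servers on each pull request.
name: code-quality
on: [pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm install
      - run: npm run build
```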
### Monitoring and Metrics
Use the shared logger and cache services to monitor server performance and analysis metrics.
---
For more detailed information, refer to the individual server documentation in each package's `docs/` directory.