---
name: log-analysis
description: Log file analysis including reading, searching, filtering, and pattern matching. Use when investigating issues in logs, searching for errors, or analyzing application behavior.
allowed-tools: Bash, Read, Grep, Glob
mcp_tools:
- "log_read"
- "log_search"
- "log_tail"
- "log_stats"
- "log_errors"
- "log_filter"
- "log_aggregate"
---
# Log Analysis Skill
**Version**: 1.0.0
**Purpose**: Log file analysis and pattern discovery
---
## Triggers
| Trigger | Examples |
|---------|----------|
| Read | "read logs", "view log", "ログ表示" |
| Search | "search logs", "find error", "エラー検索" |
| Errors | "show errors", "エラー一覧" |
| Analyze | "analyze logs", "ログ分析" |
---
## Integrated MCP Tools
| Tool | Purpose |
|------|---------|
| `log_read` | Read log file contents |
| `log_search` | Search for patterns |
| `log_tail` | Read the last N lines of a log |
| `log_stats` | Log statistics |
| `log_errors` | Extract error entries |
| `log_filter` | Filter by level/time |
| `log_aggregate` | Aggregate log entries |
---
## Workflow: Error Investigation
### Phase 1: Initial Scan
#### Step 1.1: Check Recent Logs
```
Use log_tail with:
- file: Log file path
- lines: 100
```
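Where the MCP tools are unavailable, a plain shell equivalent covers this first pass (a sketch; `app.log` is a placeholder path):
```bash
# Show the most recent 100 lines (app.log is a placeholder path)
tail -n 100 app.log
# Or follow the file live while reproducing the issue
tail -f app.log
```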
#### Step 1.2: Get Error Summary
```
Use log_errors with:
- file: Log file path
- levels: ["ERROR", "FATAL", "CRITICAL"]
```
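A hedged shell equivalent, assuming plain-text logs where the level name appears literally on each line (`app.log` is a placeholder):
```bash
# Total count of error-level entries
grep -cE 'ERROR|FATAL|CRITICAL' app.log
# The 20 most recent error-level entries, with line numbers
grep -nE 'ERROR|FATAL|CRITICAL' app.log | tail -n 20
```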
### Phase 2: Pattern Search
#### Step 2.1: Search for Specific Error
```
Use log_search with:
- file: Log file path
- pattern: "Exception|Error|Failed"
- context: 3 (lines before/after)
```
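With plain grep this step looks roughly like the following (assumes extended-regex support; `app.log` is a placeholder):
```bash
# Matches with 3 lines of context before and after, plus line numbers
grep -nE -C 3 'Exception|Error|Failed' app.log
```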
#### Step 2.2: Filter by Time
```
Use log_filter with:
- file: Log file path
- start_time: "2024-01-01 10:00"
- end_time: "2024-01-01 11:00"
```
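Without the MCP tool, a rough time filter is possible with awk, assuming each line starts with an ISO-like timestamp (`YYYY-MM-DD HH:MM:SS ...`) so plain string comparison orders entries correctly (`app.log` is a placeholder):
```bash
# Keep only entries between 10:00 and 11:00 on 2024-01-01
awk '$0 >= "2024-01-01 10:00" && $0 < "2024-01-01 11:00"' app.log
```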
### Phase 3: Analysis
#### Step 3.1: Statistics
```
Use log_stats to get:
- Total entries
- Entries by level
- Error frequency
- Time distribution
```
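A quick approximation of per-level statistics with standard tools (assumes level names appear literally in each entry; `app.log` is a placeholder):
```bash
# Count entries per level, highest first
grep -ohE 'DEBUG|INFO|WARN|ERROR|FATAL' app.log | sort | uniq -c | sort -rn
# Total number of entries
wc -l < app.log
```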
#### Step 3.2: Aggregate Patterns
```
Use log_aggregate to find:
- Repeated errors
- Common patterns
- Trending issues
```
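An aggregation sketch with shell tools: strip the timestamp prefix so identical messages group together, then rank by count (assumes a fixed-width `YYYY-MM-DD HH:MM:SS ` prefix; adjust the cut offset for your format; `app.log` is a placeholder):
```bash
# Top 20 most frequent error messages, ignoring the leading timestamp
grep -E 'ERROR' app.log | cut -c21- | sort | uniq -c | sort -rn | head -n 20
```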
---
## Common Log Formats
### Apache/Nginx
```
IP - - [timestamp] "METHOD /path HTTP/1.1" status size
```
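In this format the status code is the 9th whitespace-separated field, so server errors can be summarized like this (a sketch; `access.log` is a placeholder):
```bash
# Count 5xx responses by status code
awk '$9 ~ /^5[0-9][0-9]$/ {print $9}' access.log | sort | uniq -c | sort -rn
```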
### Application (JSON)
```json
{"timestamp":"...","level":"ERROR","message":"..."}
```
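For newline-delimited JSON logs, `jq` (if installed) can filter by level; the field names below mirror the example above and may differ in your application (`app.json.log` is a placeholder):
```bash
# Print timestamp and message for ERROR entries
jq -r 'select(.level == "ERROR") | "\(.timestamp) \(.message)"' app.json.log
```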
### Syslog
```
MMM DD HH:MM:SS hostname process[pid]: message
```
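Syslog-style entries are easiest to narrow by process name; the process and path here are placeholders:
```bash
# Entries emitted by a given process (sshd is a placeholder)
grep -E 'sshd\[[0-9]+\]:' /var/log/syslog
```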
---
## Search Patterns
| Pattern | Matches |
|---------|---------|
| `ERROR\|WARN` | Error or warning |
| `Exception.*` | Exception with message |
| `\d{3}` | Any three-digit number (e.g. HTTP status codes) |
| `timeout\|timed out` | Timeout issues |
| `connection refused` | Connection failures |
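These are extended-regex (`grep -E`) style patterns; applied directly they might look like this (a sketch with a placeholder path):
```bash
# Recent timeout or connection failures, case-insensitive
grep -inE 'timeout|timed out|connection refused' app.log | tail -n 50
```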
---
## Best Practices
✅ GOOD:
- Start with recent logs
- Filter by time first
- Look for patterns, not just single errors
- Check error frequency
❌ BAD:
- Read entire large log files
- Search without time bounds
- Focus on a single error instance
- Ignore warning patterns
---
## Checklist
- [ ] Log file accessible
- [ ] Time range identified
- [ ] Error pattern found
- [ ] Context lines reviewed
- [ ] Frequency analyzed
- [ ] Root cause identified