Security Audit Skill
A portable security auditing skill that teaches AI coding assistants to become expert security auditors. Works with Claude Code, Codex CLI, Cursor, and any AI that can learn from prompts.
What is This?
This is not a tool you install and run. It's a skill - a set of instructions and patterns that teach an AI assistant how to perform comprehensive security audits on any codebase.
Think of it like giving your AI assistant a security certification.
Why a Skill Instead of a Tool?
| Traditional Tool | AI Skill |
|------------------|----------|
| Runs regex patterns mechanically | Understands context and intent |
| Fixed rules, many false positives | Can reason about actual risk |
| Only finds exact pattern matches | Can identify novel vulnerabilities |
| Reports everything, you filter | Prioritizes what matters |
| Just flags issues | Explains why and how to fix |
An AI with security knowledge can:
- Follow data flow across functions and files
- Understand your specific architecture
- Explain attack scenarios in context
- Suggest fixes that match your codebase style
- Answer follow-up questions
Quick Start
Claude Code
Copy the skill file to your project or global config:
```bash
# Per-project (recommended)
mkdir -p .claude/skills
cp skills/security-audit.md .claude/skills/

# Or install globally
mkdir -p ~/.claude/skills
cp skills/security-audit.md ~/.claude/skills/
```
Then ask Claude Code to audit your code:
```
"Audit this codebase for security vulnerabilities"
"Check src/api for injection risks"
"Is this authentication implementation secure?"
```
Codex CLI
Add the skill to your Codex instructions:
```bash
# Add to your AGENTS.md or instructions file
cat skills/security-audit.md >> AGENTS.md
```
Or reference it in your Codex config:
```json
{
  "instructions": "When asked about security, follow the methodology in skills/security-audit.md"
}
```
Cursor / Other AI Editors
Add to your rules or system prompt:
- Open settings/rules
- Add the contents of `skills/security-audit.md`

Or reference it: "For security audits, use the methodology in skills/security-audit.md"
Any AI Chat
Simply paste the skill content before your question:
```
[paste skills/security-audit.md]

Now audit this code for security issues:

[paste your code]
```
What Gets Detected
Critical (Fix Immediately)
- SQL Injection - Concatenated queries, f-strings in `execute()`
- Command Injection - `exec()` with user input, `shell=True`

Medium Priority
- XSS - `innerHTML`, `dangerouslySetInnerHTML`, `document.write`
- SSRF - User-controlled URLs in fetch/axios/requests
- Path Traversal - User input in file operations
- NoSQL Injection - Variable objects in MongoDB queries

Low Priority
- Template Injection - Variable templates
- Debug Code - `alert()`, `console.log` with sensitive data
Beyond Pattern Matching
The AI can also review:
- Authentication & session handling
- Authorization & access controls
- Cryptography usage
- Dependency vulnerabilities
- Security headers & configuration
- Secrets management
- Error handling
Files
```
skills/
├── security-audit.md           # Main skill - comprehensive methodology
└── vulnerability-patterns.md   # Quick reference - patterns & regex
```
security-audit.md
The complete skill teaching:
- Systematic audit methodology
- Taint analysis (tracking untrusted data)
- Vulnerability patterns with examples
- Language-specific guidance (JS/TS, Python, React)
- Fix recommendations
- Output format

vulnerability-patterns.md
Quick reference containing:
- All detection patterns in table format
- Regex patterns for grep/ripgrep
- Severity classifications
- Taint source patterns
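
A sketch of the mechanical side of this file, added here as an illustration and not taken from the project: it parses pattern rows in the `ID | Language | Pattern | Description` layout that the README uses for custom patterns and runs each regex over source lines. Only the CUSTOM001 row appears on this page; the CUSTOM002 row, the sample source, and the function names are assumptions.

```python
import re

# Constructed pattern file. CUSTOM001 is the row shown in this README;
# CUSTOM002 is a hypothetical row whose regex contains "|" inside the backticks.
PATTERNS_MD = r"""
| ID | Language | Pattern | Description |
|----|----------|---------|-------------|
| CUSTOM001 | Python | `pickle\.loads?\s*\(` | Pickle deserialization |
| CUSTOM002 | Python | `subprocess\.(run|call|Popen)\(.*shell\s*=\s*True` | shell=True command execution (hypothetical row) |
"""

# Constructed source to scan (not from the project).
SAMPLE_SOURCE = '''\
import pickle, subprocess
def restore(blob):
    return pickle.loads(blob)
def ping(host):
    return subprocess.run("ping -c1 " + host, shell=True)
def safe_ping(host):
    return subprocess.run(["ping", "-c1", host])
# Old code used pickle.load(f) here; removed.
'''

# The pattern cell is everything between the backticks, so an alternation "|"
# in a regex does not split the row. Header and separator rows do not match.
ROW = re.compile(
    r"^\|\s*(?P<id>[A-Z]+\d+)\s*\|\s*(?P<lang>[^|]+?)\s*\|\s*"
    r"`(?P<regex>.+)`\s*\|\s*(?P<desc>[^|]*?)\s*\|$"
)

def load_patterns(markdown):
    """Return (id, language, compiled regex, description) for each data row."""
    matches = (ROW.match(line.strip()) for line in markdown.splitlines())
    return [(m["id"], m["lang"], re.compile(m["regex"]), m["desc"]) for m in matches if m]

def scan(source, rules):
    """Return (line number, rule id, stripped line) for every regex hit."""
    return [(n, rule_id, text.strip())
            for n, text in enumerate(source.splitlines(), 1)
            for rule_id, _lang, regex, _desc in rules if regex.search(text)]

if __name__ == "__main__":
    for n, rule_id, text in scan(SAMPLE_SOURCE, load_patterns(PATTERNS_MD)):
        print(f"{rule_id} line {n}: {text}")
```

The hit on line 8 of the sample is a comment, the kind of false positive the comparison table attributes to mechanical regex matching and leaves to the AI's reasoning about context.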
Example Session
You: Audit this Express route for security issues
AI: I'll analyze this route using systematic security methodology...
## Security Audit: user-routes.js
### Critical Issues (1)
#### SQL Injection at line 24
**Code:**
```javascript
db.query(`SELECT * FROM users WHERE id = ${req.params.id}`)
```
**Risk:** Attacker can inject SQL via the id parameter to extract or modify database contents.
**Fix:**
```javascript
db.query('SELECT * FROM users WHERE id = ?', [req.params.id])
```
### Medium Issues (2)
#### Path Traversal at line 31
...
### Recommendations
- Use parameterized queries throughout
- Add input validation middleware
- Consider using an ORM like Prisma
Extending the Skill
Add Custom Patterns
Edit `skills/vulnerability-patterns.md` to add patterns specific to your stack:
```markdown
### Custom: Unsafe Deserialization
| ID | Language | Pattern | Description |
|----|----------|---------|-------------|
| CUSTOM001 | Python | `pickle\.loads?\s*\(` | Pickle deserialization |
```
Add Framework-Specific Rules
Extend `skills/security-audit.md` with your framework's security considerations:
```markdown
## Next.js Specific
### Server Actions
- Validate all inputs in server actions
- Don't expose sensitive data in client components
- Use `headers()` and `cookies()` safely
```
Philosophy
- Zero network calls - Everything runs locally in your AI's context
- Teach, don't just flag - The AI explains vulnerabilities, not just lists them
- Context-aware - The AI understands your code, not just pattern matches
- Portable - Works with any AI that can read markdown
- Extensible - Add your own patterns and rules
Contributing
Add patterns, improve explanations, support more languages:
- Fork this repo
- Edit files in `skills/`
- Submit a PR
License
MIT - Use freely, contribute back if you can.