# MCP Security: Hooks & Sandboxing Quick Reference
## What Are Runtime Hooks?
Runtime hooks let you intercept and validate AI agent actions before they execute. Think of them as middleware for AI tool calls.
### Example use cases:
- Inspect tool calls before execution (e.g., "Is this trying to delete files?")
- Validate MCP server responses before they reach the LLM
- Filter user prompts to remove sensitive data
- Log all AI actions for audit trails
Security benefit: You can programmatically block malicious actions before they happen, not just review logs after the fact.
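As a concrete illustration, a pre-execution hook can pattern-match shell commands and refuse destructive ones. The sketch below is a minimal example assuming a hook interface like Claude Code's PreToolUse hooks, where the pending tool call arrives as JSON on stdin and a non-zero exit (code 2 in Claude Code) blocks the call; field names and the blocking convention differ between clients, so treat this as a template rather than a drop-in script.

```python
#!/usr/bin/env python3
"""PreToolUse hook: block shell commands that look destructive.

Minimal sketch assuming the tool call is delivered as JSON on stdin
with `tool_name` and `tool_input` fields (as in Claude Code's hooks);
adjust field names and exit codes for your client.
"""
import json
import re
import sys

BLOCKLIST = [
    r"\brm\s+-rf\b",          # recursive deletes
    r"\bcurl\b.*\|\s*sh\b",   # pipe-to-shell installs
    r"\b(chmod|chown)\s+-R\b",
]

event = json.load(sys.stdin)
tool = event.get("tool_name", "")
command = str(event.get("tool_input", {}).get("command", ""))

if tool == "Bash" and any(re.search(p, command) for p in BLOCKLIST):
    # stderr is surfaced back to the agent as the reason for the block
    print(f"Blocked potentially destructive command: {command}", file=sys.stderr)
    sys.exit(2)  # non-zero exit blocks the tool call

sys.exit(0)  # allow everything else
```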
## What Is Sandboxing?
Sandboxing isolates AI agents from your host system—limiting what files, network resources, and system APIs they can access.
### Sandboxing options:
- Containers (Docker, Podman) - Run MCP servers or entire AI clients in isolated environments (see the launch sketch below)
- OS-level tools - macOS Seatbelt (`sandbox-exec`), Linux namespaces and seccomp, and similar per-platform mechanisms
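The sketch below shows one way to launch an MCP server inside a locked-down Docker container from a client process. The image name and flag set are illustrative assumptions, not a complete policy: no network, a read-only root filesystem, dropped Linux capabilities, an unprivileged user, and basic resource limits. The container exchanges MCP JSON-RPC messages with the client over stdio via the piped streams.

```python
"""Launch an MCP server in a restricted Docker container (illustrative sketch)."""
import subprocess

DOCKER_CMD = [
    "docker", "run", "--rm", "-i",      # -i keeps stdin open for the stdio transport
    "--network", "none",                 # no network access
    "--read-only",                       # immutable root filesystem
    "--cap-drop", "ALL",                 # drop all Linux capabilities
    "--user", "1000:1000",               # run as an unprivileged user
    "--memory", "512m",                  # basic resource limits
    "--pids-limit", "128",
    "example/mcp-server:latest",         # placeholder image name
]

proc = subprocess.Popen(
    DOCKER_CMD,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
# proc.stdin / proc.stdout now carry the MCP JSON-RPC stream;
# hand them to your MCP client library of choice.
```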
## Vendor Documentation Links
### Claude
- [MCP Overview](https://docs.claude.com/en/docs/claude-code/mcp)
- [Runtime Hooks](https://docs.claude.com/en/docs/claude-code/hooks-guide)
- [Security Best Practices](https://docs.claude.com/en/docs/claude-code/security)
### Cursor
- [Hooks](https://cursor.com/docs/agent/hooks)
- [Security Tips](https://cursor.com/docs/agent/security)
- [Sandboxing](https://cursor.com/docs/agent/terminal#sandbox)
### Gemini CLI
- [Security Tips](https://geminicli.com/docs/cli/enterprise/)
- [Sandboxing](https://geminicli.com/docs/cli/sandbox/)
## Additional Resources
- [MCP Specification](https://spec.modelcontextprotocol.io/)
- [OWASP LLM Security](https://owasp.org/www-project-top-10-for-large-language-model-applications/)
- [Guardrails AI](https://www.guardrailsai.com/)