# RLM MCP Threat Model
## Trust Boundaries
1. **Client (e.g., Claude Desktop, Cursor)**: Trusted.
2. **RLM MCP Server**: Trusted to manage tools and ingestion.
3. **LLM Provider (e.g., OpenAI, OpenRouter)**: Semi-trusted. Receives snippets of code/text.
4. **Code Execution Sandbox (Docker)**: Untrusted. This is where code generated by RLM during its search/probe process is executed.
## CRITICAL: Remote Exposure Warning
**DO NOT expose the RLM MCP server as a public HTTP endpoint.**
- The default transport is **stdio/local**, which is reachable only by processes on your own machine and is inherently more secure.
- If you must run over HTTP/SSE, you **MUST** implement strong authentication (e.g., Bearer tokens), network isolation, and strict rate limiting. A minimal token check is sketched after this list.
- MCP servers exposed without authentication are a major security risk and are actively targeted by internet scanners.
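To make the Bearer-token requirement concrete, here is a minimal sketch of an auth middleware for an HTTP/SSE deployment. It assumes a Starlette-based server and a hypothetical `RLM_MCP_TOKEN` environment variable; this is an illustration of the pattern, not RLM MCP's actual implementation.

```python
import hmac
import os

from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import JSONResponse

# Hypothetical env var holding the shared secret; never hard-code it.
EXPECTED_TOKEN = os.environ["RLM_MCP_TOKEN"]

class BearerAuthMiddleware(BaseHTTPMiddleware):
    """Reject any request that lacks the expected Bearer token."""

    async def dispatch(self, request, call_next):
        auth = request.headers.get("authorization", "")
        token = auth.removeprefix("Bearer ").strip()
        # Constant-time comparison avoids timing side channels.
        if not hmac.compare_digest(token.encode(), EXPECTED_TOKEN.encode()):
            return JSONResponse({"error": "unauthorized"}, status_code=401)
        return await call_next(request)

# Wire it up on your Starlette/ASGI app:
# app.add_middleware(BearerAuthMiddleware)
```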
## Security Posture
- **File Ingestion**: RLM MCP enforces boundaries: it only reads files within the repository root or specified globs, and it rejects path traversal (e.g., `../../etc/passwd`). A boundary check of this kind is sketched after this list.
- **Secrets**: RLM MCP never accepts raw API keys in tool arguments; it only accepts the *name* of an environment variable (`api_key_env`). See the lookup sketch after this list.
- **Best Practice**: Use least-privilege API keys with usage limits and rotate them regularly.
- **Code Execution**:
  - **Local Mode**: Executes code directly on your machine with no isolation. **USE WITH EXTREME CAUTION** on untrusted queries.
  - **Docker Mode (Default)**: Executes code inside a minimal container with restricted filesystem and network access. An example invocation is sketched after this list.
- **Modal/Prime**: Remote execution in isolated containers.
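A path-boundary check like the one **File Ingestion** describes can be sketched as follows. The helper name `resolve_within_root` is illustrative, not RLM MCP's actual API; note that `.resolve()` normalizes `..` segments and follows symlinks, so both plain traversal and symlink escapes are rejected.

```python
from pathlib import Path

def resolve_within_root(root: str, user_path: str) -> Path:
    """Resolve a user-supplied path and refuse anything outside `root`."""
    root_resolved = Path(root).resolve()
    candidate = (root_resolved / user_path).resolve()
    if not candidate.is_relative_to(root_resolved):  # Python 3.9+
        raise PermissionError(f"path escapes repository root: {user_path}")
    return candidate

# resolve_within_root("/repo", "src/main.py")      -> /repo/src/main.py
# resolve_within_root("/repo", "../../etc/passwd") -> PermissionError
```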
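For the **Secrets** rule, a sketch of how an `api_key_env` argument might be resolved server-side (hypothetical helper; the point is that the key value never transits tool arguments, so it cannot leak through transcripts or request logs):

```python
import os

def resolve_api_key(api_key_env: str) -> str:
    """Look up an API key by environment-variable *name*, not value."""
    key = os.environ.get(api_key_env)
    if not key:
        raise ValueError(f"environment variable {api_key_env!r} is not set")
    return key
```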
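And for **Docker Mode**, a sketch of the kind of locked-down `docker run` invocation such a sandbox might use. The image name and resource limits are illustrative assumptions, not RLM MCP's shipped configuration.

```python
import subprocess

def run_sandboxed(code: str, timeout: float = 30.0) -> subprocess.CompletedProcess:
    """Execute untrusted code in a throwaway, locked-down container."""
    return subprocess.run(
        [
            "docker", "run", "--rm",
            "--network=none",      # no network: blocks exfiltration from the sandbox
            "--read-only",         # immutable root filesystem
            "--cap-drop=ALL",      # drop all Linux capabilities
            "--memory=256m",       # cap memory
            "--cpus=0.5",          # cap CPU
            "--pids-limit=64",     # defeat fork bombs
            "python:3.12-slim",    # illustrative base image
            "python", "-c", code,
        ],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
```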
## Data Exfiltration Caveats
While RLM MCP attempts to sandbox execution, the LLM provider itself still receives your code snippets. Do not use it with highly sensitive or proprietary code unless you route requests through a local or private LLM provider.