# RLM Code Mode Example (TS)
This example demonstrates how to generate a typed TypeScript wrapper from the RLM MCP schema and use it for batch processing.
## Why Code Mode?
When you have N related tasks, having an LLM agent issue N separate tool calls is slow and expensive. Code mode lets you instead:
1. Define the batch in code.
2. Execute all calls programmatically.
3. Consolidate results efficiently.
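The steps above can be sketched in TypeScript. This is a hypothetical illustration: the generated wrapper's actual shape comes from `schema_to_ts.py`, so the `rlm.solve` method and its signature here are assumptions, with a stand-in implementation so the sketch is self-contained.

```typescript
// Assumption: the generated wrapper exposes a typed async method per tool.
type SolveFn = (task: string) => Promise<string>;

// Stand-in for the generated RLM client; the real one is produced by schema_to_ts.py.
const rlm: { solve: SolveFn } = {
  solve: async (task) => `result:${task}`,
};

// 1. Define the batch in code.
const tasks = ["summarize doc A", "summarize doc B", "summarize doc C"];

// 2. Execute all calls programmatically (concurrently here).
async function runBatch(batch: string[]): Promise<string[]> {
  return Promise.all(batch.map((t) => rlm.solve(t)));
}

// 3. Consolidate results into one output.
runBatch(tasks).then((results) => {
  console.log(results.join("\n"));
});
```

Running the batch through `Promise.all` keeps the N calls concurrent rather than sequential, which is where the speed gain comes from.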
## Security
Batch execution of RLM should happen in a sandboxed environment (such as the provided Docker sandbox) so that code generated by RLM during its recursive steps cannot affect your host.
## Usage
1. Generate the wrapper:
```bash
python schema_to_ts.py
```
2. Run the batch job (requires Node.js):
```bash
npx ts-node batch_solve.ts
```