🧠 MCP-RLM: Recursive Language Model Agent

Infinite Context Reasoning for Large Language Models

📖 What is MCP-RLM?

MCP-RLM is an open-source implementation of the Recursive Language Models (RLMs) architecture introduced by researchers at MIT CSAIL (Zhang et al., 2025).

LLMs typically have a hard context-window limit. If you force a document containing millions of words into it, the model will suffer from context rot (forgetting the middle part) or become extremely slow and expensive.

MCP-RLM changes how LLMs process data: instead of "reading" the entire document at once, MCP-RLM treats the document as an External Environment (like a database or file) that can be accessed programmatically. The agent writes Python code to break the document down, scan it, and issue recursive sub-queries to itself, answering complex questions over massive data.
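
To make that concrete, here is a minimal conceptual sketch of the pattern. This is not the project's actual code: sub_query is a hypothetical stand-in for a call to a worker LLM.

```python
import re

def sub_query(question: str, snippet: str) -> str:
    """Stand-in for a worker-LLM call that sees only this snippet."""
    return ""  # the real engine would return the small model's answer here

def answer(question: str, document: str, keyword: str) -> list[str]:
    # Scan programmatically: locate regions worth reading without ever
    # feeding the whole document to a model.
    hits = [m.start() for m in re.finditer(keyword, document, re.IGNORECASE)]
    # Recurse into each region with an isolated sub-query.
    return [sub_query(question, document[max(0, i - 500):i + 500])
            for i in hits]
```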


✨ Key Features

  • ♾️ Infinite Context Scaling: Capable of processing documents far larger than the model's token limit (theoretically up to 10 Million+ tokens).

  • 📉 Cost-Effective: Uses a small model (the Worker) for heavy scanning and a large model (the Planner) only for orchestration. This is cheaper than loading the entire context into a large model; a rough cost illustration follows this list.

  • 🎯 High Accuracy on Reasoning: Reduces hallucinations on complex needle-in-a-haystack tasks because each section is examined in isolation.

  • 🔌 Provider Agnostic: Flexible configuration! Use Claude as the brain (Root) and Ollama/Local LLM as the worker (Sub) for privacy and cost savings.
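
To put rough numbers on the cost claim (illustrative prices, not current rates): at about $3 per million input tokens for a planner-class model and $0.15 per million for a worker-class model, scanning a 10-million-token document with the worker costs roughly $1.50, versus roughly $30 if the planner read everything itself. The planner only ever sees short notes.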


βš™οΈ How It Works & Architecture

This implementation uses the MCP (Model Context Protocol) to connect your IDE/chatbot (such as Cursor or Claude Desktop) with the "RLM Engine" behind the scenes.

Core Concept: Root vs. Sub Agent

The system divides tasks between two AI model roles for cost efficiency and accuracy (a code sketch of this split follows the list):

  1. 🧠 Root Agent (The Planner)

  • Role: Project Manager.

  • Task: Does not read the document directly. It views metadata (file length), plans strategies, and writes Python code to execute those strategies.

  • Model: Smart model (e.g., Claude-3.5-Sonnet, GPT-4o).

  2. 👷 Sub Agent (The Worker)

  • Role: Field Worker.

  • Task: Called hundreds of times by the Python code to read small data chunks and extract specific information.

  • Model: Fast & cheap model (e.g., GPT-4o-mini, Llama-3, Haiku).
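
A rough illustration of this division of labor, using the openai client library (assumed usage; the model names follow the examples above, and the real Root writes and executes code rather than simply summarizing):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sub_worker(question: str, chunk: str) -> str:
    """Worker: a cheap model reads one small chunk in isolation."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"{question}\n\n---\n{chunk}"}],
    )
    return resp.choices[0].message.content or ""

def root_plan(question: str, notes: list[str]) -> str:
    """Planner: a strong model synthesizes an answer from worker notes."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": question + "\n\nWorker notes:\n" + "\n".join(notes)}],
    )
    return resp.choices[0].message.content or ""
```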


🚀 Installation & Usage

Prerequisites

  • Python 3.10+

  • pip

Installation Steps

  1. Clone Repository

```bash
git clone https://github.com/username/MCP-RLM.git
cd MCP-RLM
```

  2. Create Virtual Environment

```bash
python -m venv venv
source venv/bin/activate   # For Linux/Mac
# venv\Scripts\activate    # For Windows
```

  3. Install Dependencies

```bash
pip install -r requirements.txt
```

What is being installed?

  • mcp: The core SDK for the MCP protocol.

  • openai & anthropic: Client libraries to connect to LLM providers.

  • python-dotenv: To load API Keys from the .env file.

  • tiktoken: To count tokens so that chunks fit model limits (a quick usage illustration follows this list).
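
As a quick illustration of the tiktoken dependency (generic usage, not code from this repository):

```python
import tiktoken

# cl100k_base is the encoding used by many recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")
chunk = "some slice of a very large document"
print(len(enc.encode(chunk)))  # token count, compared against the model limit
```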

  4. Environment Configuration: Copy .env.EXAMPLE to .env and fill in your API keys.

```bash
cp .env.EXAMPLE .env
```
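
The resulting .env typically looks like this (variable names assumed from the standard OpenAI and Anthropic SDKs; check .env.EXAMPLE for the names this project actually expects):

```bash
# .env (illustrative contents only)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```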

Model Configuration

You can control the agent's behavior via config.yaml.

```yaml
# config.yaml
agents:
  root:
    provider: "anthropic"
    model: "claude-3-5-sonnet"  # Excellent at coding
  sub:
    provider: "openai"          # Or use "ollama" for local
    model: "gpt-4o-mini"        # Fast & cheap for hundreds of loops
```
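
For a fully local worker, the same fields can point at Ollama (a sketch reusing the keys above; the model tag depends on what you have pulled locally):

```yaml
agents:
  root:
    provider: "anthropic"
    model: "claude-3-5-sonnet"
  sub:
    provider: "ollama"
    model: "llama3"   # any locally pulled Ollama model
```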

Running the Server

Run the MCP server:

```bash
python server.py
```

The server will run and be ready to connect with MCP clients (like Claude Desktop or Cursor).
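
If you installed the official MCP Python SDK with its CLI extra (pip install "mcp[cli]") and server.py exposes a FastMCP instance, you can also try the server interactively with the MCP Inspector before wiring up a client:

```bash
mcp dev server.py
```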


Client Configuration

To use it, you need to connect this MCP server to applications like Claude Desktop or Cursor.

1. Claude Desktop

Open the Claude Desktop configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Add the following configuration:

{ "mcpServers": { "rlm-researcher": { "command": "/path/to/MCP-RLM/venv/bin/python", "args": ["/path/to/MCP-RLM/server.py"] } } }

Note: Replace /path/to/MCP-RLM/ with the absolute path to your project folder.

2. Cursor IDE

  1. Open Cursor Settings > Features > MCP.

  2. Click + Add New MCP Server.

  3. Fill in the following form:

  • Name: RLM-Researcher (or any other name)

  • Type: stdio

  • Command: /path/to/MCP-RLM/venv/bin/python /path/to/MCP-RLM/server.py

  4. Click Save.

If successful, the status indicator will turn green.

3. Antigravity IDE

You can use the UI or edit the configuration file manually.

Method 1: Via UI

  1. Click the ... menu in the agent panel.

  2. Select Manage MCP Servers.

  3. Add a new server with the same configuration as above.

Method 2: Manual Config

Edit the file ~/.gemini/antigravity/mcp_config.json:

{ "mcpServers": { "rlm-researcher": { "command": "/path/to/MCP-RLM/venv/bin/python", "args": ["/path/to/MCP-RLM/server.py"], "enabled": true } } }

📚 References & Credits

This project is an experimental implementation based on the following research paper:

Recursive Language Models. Alex L. Zhang, Tim Kraska, Omar Khattab (MIT CSAIL), 2025.

This paper proposes RLM as a general inference strategy that treats long prompts as an external environment, enabling programmatic problem decomposition.


📄 License

This project is licensed under the MIT License. See the LICENSE file for more details.

