RedM MCP
Server Details
RedM (Red Dead Redemption 2 multiplayer) / RDR3 modding. Hosted HTTP endpoint: native lookups (hash ↔ name), semantic search over framework docs (VORP, RSGCore, oxmysql), and grep over rdr3_discoveries community data tables (peds, weapons, animations, AI flags, props). No install, no auth.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 8 of 8 tools scored.
Each tool has a clearly distinct purpose: browse lists paths, get_document fetches content, grep_docs does exact token search, semantic_search handles concept queries, lookup_native resolves hashes/names, get_invoke_guide provides calling conventions, list_namespaces gives orientation, and share_finding contributes to the corpus. No overlap in functionality.
All tool names follow a consistent verb_noun pattern (e.g., get_document, grep_docs, list_namespaces) using lowercase with underscores. The naming is predictable and easy to remember.
With 8 tools, the server covers all necessary operations for a documentation system without being bloated. Each tool earns its place, covering navigation, search, retrieval, native lookup, and community contribution.
The tool set provides a complete lifecycle for documentation: orientation (list_namespaces), discovery (browse), retrieval (get_document), multiple search methods (grep_docs, semantic_search), native resolution (lookup_native), a guide for invoking natives (get_invoke_guide), and even a mechanism for users to contribute findings (share_finding). No obvious gaps.
Available Tools
8 tools

browse: Browse RedM doc paths
Enumerate doc paths in a category/namespace. Use to discover what exists before calling get_document or a targeted grep_docs. NOT a content search — use semantic_search for behavior/concept lookups or grep_docs for token lookups. Returns {path, title, chunks}[].
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | ||
| namespace | No |
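As an illustrative sketch (the exact request envelope depends on your MCP client, and the category value here is one the server lists under list_namespaces), a filtered browse call looks like:

```json
{
  "name": "browse",
  "arguments": { "category": "vorp" }
}
```

Each entry in the returned `{path, title, chunks}[]` array can be passed verbatim as the `path` argument to get_document.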
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must carry the full burden. It discloses that the tool returns an array of objects with path, title, and chunks, implying a read-only listing operation. While not exhaustive (e.g., no mention of permissions or limitations), it is sufficient for an enumeration tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loading the purpose and providing actionable guidance without any redundant or extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity of the tool (enumeration with only two parameters), no annotations, and no output schema, the description adequately covers when, how, and what the tool returns. It also references sibling tools appropriately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain the individual parameters (category, namespace). The category enum provides some context, but namespace is left undefined. The description fails to add meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Enumerate doc paths in a category/namespace', providing a specific verb and resource. It distinguishes itself from siblings by explicitly stating when to use browse versus get_document, grep_docs, or semantic_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool ('Use to discover what exists before calling get_document or a targeted grep_docs') and when not to ('NOT a content search — use semantic_search for behavior/concept lookups or grep_docs for token lookups').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_document: Get full RedM doc
Fetch full markdown of a doc by path (as returned by browse, semantic_search, or grep_docs). Use to retrieve full content after a search snippet looks promising. Pass heading (full breadcrumb like Character Management > Inventory Management, or just the leaf — case-insensitive, fuzzy) to fetch only that section. Deep-heading matches auto-prepend the H2 parent's intro for context. For individual script natives prefer lookup_native. For code symbols (addItem) or content inside the largest rdr3_discoveries lua data tables (preview-only here) use grep_docs. Community findings use learning:N paths, not learnings/<slug>.md. On 404 returns available headings + cross-file hints.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Doc path. Two valid shapes: (a) `<category>/<file>.md` for docs, e.g. `vorp/vorp_core_docs.md`; (b) `learning:<id>` for community findings, e.g. `learning:11`. Use the path returned by `browse`/`semantic_search`/`grep_docs` verbatim — do not invent `learnings/<slug>.md`. | |
| heading | No | Optional prose heading from the doc, e.g. `Add Item to User` or `Character Management > Inventory Management`. Case-insensitive, fuzzy match on the leaf (text after the final `>`). NOT for code symbols — `addItem`, `getPlayerPed` etc. won't match; use `grep_docs` for those. | |
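A hedged sketch of a sectioned fetch, reusing the path and heading examples the schema itself provides:

```json
{
  "name": "get_document",
  "arguments": {
    "path": "vorp/vorp_core_docs.md",
    "heading": "Character Management > Inventory Management"
  }
}
```

On a 404 the tool returns the document's available headings plus cross-file hints, so a mistyped heading is recoverable in one extra round-trip.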
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description discloses 404 behavior (returns headings + hints) and deep-heading auto-prepend logic. Missing details on authorization or rate limits, but overall good coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is fairly concise but packs multiple pieces of information into a single paragraph. It could be slightly more structured, yet remains efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but the description mentions return format (markdown) and error handling (404 hints). Covers key behaviors for a document retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions. The description adds value by explaining heading matching (case-insensitive, fuzzy, full breadcrumb) and deep-heading behavior beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches full markdown of a doc by path, and distinguishes from siblings like lookup_native and grep_docs by specifying use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance on when to use (after search snippet), when not to use (prefer lookup_native for natives, grep_docs for lua data tables), and details on heading usage with fallback behavior.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_invoke_guide: Get native invocation guide for a language
Load the calling-convention reference for RedM/RDR3 natives in js or lua. Call ONCE per session before writing native-calling code — every native doc page only shows Lua examples, so JS/TS authors need this to translate correctly. Covers result modifiers (Citizen.resultAsInteger/Float/String/Vector), Citizen.invokeNative vs invokeNativeByHash, type mapping, pointer-arg gotchas, worked examples. Cheap, no embedding.
| Name | Required | Description | Default |
|---|---|---|---|
| language | Yes | Target language: 'js' or 'lua' | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the tool is cheap, has no embedding, and should be called once. This gives reasonable behavioral insight, though it could mention idempotency or caching.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: five sentences covering purpose, usage, rationale, content, and cost. No redundancy, front-loaded with key action, efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter tool with no output schema, the description fully covers when to use, what it includes, and why. Complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (enum for language). The description adds motivation (JS/TS authors need this) but does not add parameter semantics beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool loads the calling-convention reference for RedM/RDR3 natives in js or lua, distinguishing it from sibling tools like get_document or lookup_native which operate on different content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises to call once per session before writing code and explains why it's needed (only Lua examples elsewhere), providing clear usage context. It does not enumerate alternatives but the guidance is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
grep_docs: Literal/regex grep over raw doc files
Find an EXACT literal token in raw doc files (markdown + lua). Use for specific weapon/ped/animation/prop/interior/zone names (weapon_pistol_volcanic, a_c_bear_01, p_campfire01x), known hashes (0x020D13FF), walkstyles/clipsets (MP_Style_Casual, mech_loco_m@), or any string you'd grep for. NOT for behavior/concept queries (use semantic_search) or script-native hash/name lookup (use lookup_native). REQUIRED for tokens inside the largest rdr3_discoveries data tables (audio_banks, ingameanims_list, cloth_drawable, cloth_hash_names, object_list, megadictanims, entity_extensions, imaps_with_coords, propsets_list, vehicle_bones) — only preview-indexed for embeddings, so semantic_search will NOT find tokens in them. Returns matched lines with path + line number. Lines >400 chars are truncated — fetch full context via get_document({path}).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| pattern | Yes | JS regex pattern. Case-insensitive by default. | |
| category | No | Limit to a doc category (e.g. discoveries, natives). | |
| pathSubstring | No | Substring filter on relative doc path, e.g. 'weapons' or 'clothes/cloth_hash_names'. | |
| caseInsensitive | No | Set `false` for case-sensitive match. | true |
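Because `pattern` is interpreted as a JS regex rather than a literal string, tokens containing metacharacters should be escaped client-side before calling grep_docs. A minimal sketch (`escapeRegex` is an illustrative helper, not part of the server):

```javascript
// Escape regex metacharacters so a literal token (e.g. a clipset name
// containing brackets) can be passed safely as the `pattern` argument.
function escapeRegex(token) {
  return token.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Plain identifiers pass through unchanged...
console.log(escapeRegex("weapon_pistol_volcanic")); // weapon_pistol_volcanic
// ...while metacharacters are neutralized.
console.log(escapeRegex("mech_loco_m@[walk]"));     // mech_loco_m@\[walk\]
```

The same escaping logic applies to hashes: `0x020D13FF` contains no metacharacters and can be passed as-is.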
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It explains return format (matched lines with path+line number) and truncation behavior with workaround. However, there is a slight inconsistency: description says 'EXACT literal token' but input schema says 'JS regex pattern', which could confuse agents expecting literal matching.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured: begins with core purpose, then usage guidance (do/don't), required cases, return format, and a limitation. Every sentence adds value, no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 params, no output schema), the description sufficiently explains purpose, usage, return values, and a practical limitation (truncation with workaround). It covers when the tool is necessary (certain data tables) and what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 80%, so baseline is 3. Description does not add significant new parameter information beyond what schema provides; the case-insensitivity mention is already in schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds exact literal tokens in raw doc files, provides concrete examples (weapon names, hashes, flags), and explicitly distinguishes from siblings semantic_search and lookup_native.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use (specific known tokens), when not to (behavior/concept queries, script-native lookups), names alternatives, and highlights required usage for certain data tables that are not covered by semantic_search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_namespaces: List RedM doc namespaces
Orient yourself: list available doc categories and their namespaces. Use once at session start (or when unsure) before applying a category= / namespace= filter to browse / semantic_search. NOT a content search. Categories: natives (PLAYER, ENTITY, VEHICLE, …), vorp, rsgcore, oxmysql, discoveries (AI, weapons, peds, animations, clothes, objects, …), jo_libs (menu, notification, callback, framework-bridge, …, dev_resources, redm_scripts), guides, learnings.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description sufficiently discloses the tool's scope (orienting, non-search, category listing). Minor gap: no mention of output structure or side effects, but for a read-only listing it is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with essential information, but could slightly benefit from bullet points for readability. No redundant sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description hints at return structure ('list available...') and enumerates categories, which is sufficient for a simple listing tool. Slightly lacking in detail about what the response looks like.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the description adds rich meaning to the single optional enum parameter by listing categories and providing examples of sub-namespaces, fully compensating for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('list') and resources ('doc categories and their namespaces'), explicitly states it is NOT a content search, and implicitly distinguishes it from sibling tools like browse and semantic_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use once at session start (or when unsure) before applying a category=/namespace= filter to browse / semantic_search', providing clear context and excluding search use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_native: Lookup RedM native by hash or name
Resolve a RedM/RDR3 SCRIPT native by hash or name — O(1), exact. Use whenever you see Citizen.InvokeNative(0x...), Citizen.invokeNative('0x...'), GetHashKey('NAME'), or a SCREAMING_SNAKE_CASE native name (e.g. SET_ENTITY_COORDS, GetPedHealth) in Lua/JS/TS. NOT for game-data hashes (weapon/ped/animation names) — use grep_docs. Pass hash (0x… optional, case-insensitive) or name (exact first, ILIKE substring fallback). Returns name, hash, namespace, return type, params, description, full content, plus findings[] — community gotchas linked to that native. Inspect findings[].id and call get_document({path: 'learning:<id>'}) for full body.
| Name | Required | Description | Default |
|---|---|---|---|
| hash | No | Native hash, e.g. 0x09C28F828EE674FA (case-insensitive, 0x optional) | |
| name | No | Native name, e.g. CAN_PLAYER_START_MISSION. Substring match if no exact hit. | |
| limit | No | ||
| namespace | No | Restrict to a namespace, e.g. PLAYER, ENTITY. Only used with `name`. | |
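A sketch of a hash lookup, reusing the example hash from the schema (the request envelope depends on your MCP client):

```json
{
  "name": "lookup_native",
  "arguments": { "hash": "0x09C28F828EE674FA" }
}
```

If the result's `findings[]` is non-empty, follow up with `get_document({path: "learning:<id>"})` for the full body of each community finding, as the description instructs.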
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses O(1) exact match, return fields including findings, and how to use findings. Does not mention potential errors or rate limits, but as a read-only lookup, this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense but well-structured. Front-loaded with core purpose. Slightly verbose for a single tool description, but every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description fully explains the return structure (name, hash, namespace, return type, params, description, full content, findings[]) and how to use findings with `get_document`. This is comprehensive for a lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description adds significant context beyond the input schema: explains usage contexts for hash and name, case-insensitivity, exact first with ILIKE fallback, and namespace restriction only with name. The limit parameter is covered in schema, so no need to repeat.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool resolves RedM/RDR3 SCRIPT natives by hash or name, with O(1) exact match. Distinguishes itself from sibling `grep_docs` by specifying what it is NOT for (game-data hashes).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly describes when to use (when seeing Citizen.InvokeNative, GetHashKey, etc.) and when not to use (use grep_docs for game-data hashes). Also explains input options and fallback behavior.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
semantic_search: Hybrid search RedM docs (semantic + lexical)
Search RedM/RDR3 docs by behavior, concept, OR exact token. Use when you don't have a specific native hash/name (use lookup_native) and the term isn't a known asset name in a large data table (use grep_docs). Hybrid mode (default) handles 'how do I X' queries ('teleport player', 'spawn vehicle', 'inventory add item') AND tokens ('addItem', 'weapon_pistol_volcanic', 'CPED_CONFIG_FLAG_') — fused via RRF over vector + BM25. Returns ranked snippets (path, breadcrumb, heading, snippet, score). Call get_document({path, heading}) for full chunk content. mode=semantic for pure vector; mode=lexical for pure BM25. Filter via category=vorp|rsgcore|oxmysql|natives|discoveries|jo_libs|learnings or namespace. Community findings merged by default; category=learnings returns only findings.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Retrieval mode. Default hybrid (recommended). | |
| limit | No | How many ranked snippets to return. Default 20 (Anthropic contextual-retrieval research: top-20 outperforms top-5/10 before reranking). | |
| query | Yes | Natural language or token query | |
| category | No | Limit to one doc category | |
| namespace | No | Limit to a native namespace, e.g. PLAYER, ENTITY | |
| responseFormat | No | `concise` (default): 400-char snippet per hit — cheap, browse-style. `detailed`: full chunk content — use when you need an answer in one round-trip and want to skip the `get_document` follow-up. | concise |
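The "fused via RRF over vector + BM25" claim can be made concrete with a minimal Reciprocal Rank Fusion sketch. The constant k=60 is the value commonly used in the RRF literature; the server's actual constant and any per-retriever weighting are not documented here.

```javascript
// Reciprocal Rank Fusion: each ranked list contributes 1/(k + rank)
// per document; summed scores produce the fused ordering.
function rrfFuse(rankedLists, k = 60) {
  const scores = new Map();
  for (const list of rankedLists) {
    list.forEach((doc, i) => {
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}

// A doc ranked well by both retrievers outranks one that only a
// single retriever placed first.
const vector = ["doc_a", "doc_b", "doc_c"];
const bm25 = ["doc_c", "doc_b", "doc_d"];
console.log(rrfFuse([vector, bm25])); // ["doc_c", "doc_b", "doc_a", "doc_d"]
```

This also explains the hybrid default: a token that BM25 matches exactly and a paraphrase that the vector index matches both surface in the fused ranking.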
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It details hybrid search with RRF fusion, returns ranked snippets, and mentions community findings. Lacks explicit safety statement but is otherwise comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured and informative, though a bit lengthy. Each sentence adds value, and the main purpose is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and six parameters, the description explains the hybrid mechanism, return format, filtering, and the follow-up call. It does not document result counts or no-result behavior, but it is sufficient for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and description adds significant meaning: explains query types, hybrid default rationale, limit default reasoning, and response format details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches RedM/RDR3 docs by behavior, concept, or exact token, and distinguishes it from siblings like lookup_native and grep_docs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use this tool vs alternatives (e.g., 'Use when you don't have a specific native hash/name'), and provides guidance on modes and categories.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.