redm-mcp
Server Details
RedM / RDR3 docs MCP server: native lookups, semantic search, VORP, RSGCore, oxmysql.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Cmoen11/redm-mcp-public
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.6/5 across 8 of 8 tools scored.
Each tool has a clearly distinct purpose: browse for discovering paths, get_document for fetching content, grep_docs for exact token lookups, semantic_search for concept searches, lookup_native for specific natives, etc. No two tools overlap in function, and descriptions clarify boundaries.
All tool names follow a consistent lowercase_underscore pattern, using verb_noun or noun_phrase conventions (e.g., list_namespaces, get_document, semantic_search). No mixing of styles like camelCase or vague verbs.
With 8 tools, the server is well-scoped for its purpose of querying RedM/RDR3 documentation. Each tool serves a necessary role without redundancy or bloat, fitting the typical 3-15 range comfortably.
The tool surface covers the full lifecycle of documentation interaction: orientation (list_namespaces), discovery (browse), retrieval (get_document), search (semantic_search, grep_docs), specific native lookup (lookup_native), guidance (get_invoke_guide), and contribution (share_finding). No obvious gaps.
Available Tools
8 tools

browse: Browse RedM doc paths
Enumerate doc paths in a category/namespace. Use to discover what exists before calling get_document or a targeted grep_docs. NOT a content search — use semantic_search for behavior/concept lookups or grep_docs for token lookups. Returns {path, title, chunks}[].
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | | |
| namespace | No | | |
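Since the server speaks MCP over Streamable HTTP, a call to browse is a standard JSON-RPC tools/call request. The sketch below builds that payload shape (envelope per the MCP spec; the argument names come from the table above — transport details and the example category/namespace values are illustrative):

```python
import json

def browse_request(category=None, namespace=None, request_id=1):
    """Build an MCP tools/call JSON-RPC payload for the browse tool.
    Omits optional arguments that are not supplied."""
    args = {
        k: v
        for k, v in {"category": category, "namespace": namespace}.items()
        if v is not None
    }
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "browse", "arguments": args},
    }

payload = browse_request(category="natives", namespace="PLAYER")
print(json.dumps(payload))
```

The response's {path, title, chunks}[] entries can then be fed verbatim into get_document.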
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. Discloses return format as {path, title, chunks}[]. Lacks details on pagination or ordering, but for a browse tool this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. Front-loads purpose and usage guidelines immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explicitly states return format. Covers use case and alternatives. Complete for a discovery tool given sibling context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%. Description mentions 'category/namespace' linking to parameters but does not explain each parameter's meaning or default behavior. Provides basic context that they are used together.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb (enumerate) and resource (doc paths). Distinguishes from siblings by specifying that it's for discovery before get_document or grep_docs, not for content search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides when to use (discover before fetching) and when not (for content or token lookups, redirecting to semantic_search and grep_docs).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_document: Get full RedM doc
Fetch full markdown of a doc by path (as returned by browse, semantic_search, or grep_docs). Use to retrieve full content after a search snippet looks promising. Pass heading (full breadcrumb like Character Management > Inventory Management, or just the leaf — case-insensitive, fuzzy) to fetch only that section. Deep-heading matches auto-prepend the H2 parent's intro for context. For individual script natives prefer lookup_native. For code symbols (addItem) or content inside the largest rdr3_discoveries lua data tables (preview-only here) use grep_docs. Community findings use learning:N paths, not learnings/<slug>.md. On 404 returns available headings + cross-file hints.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Doc path. Two valid shapes: (a) `<category>/<file>.md` for docs, e.g. `vorp/vorp_core_docs.md`; (b) `learning:<id>` for community findings, e.g. `learning:11`. Use the path returned by `browse`/`semantic_search`/`grep_docs` verbatim — do not invent `learnings/<slug>.md`. | |
| heading | No | Optional prose heading from the doc, e.g. `Add Item to User` or `Character Management > Inventory Management`. Case-insensitive, fuzzy match on the leaf (text after the final `>`). NOT for code symbols — `addItem`, `getPlayerPed` etc. won't match; use `grep_docs` for those. |
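The path parameter accepts exactly two shapes, and invented learnings/&lt;slug&gt;.md paths 404. A client-side sanity check might look like this (the regexes are an approximation of the documented shapes, not the server's actual validation):

```python
import re

# Two valid path shapes documented for get_document:
#   (a) <category>/<file>.md   e.g. vorp/vorp_core_docs.md
#   (b) learning:<id>          e.g. learning:11
DOC_PATH = re.compile(r"^[\w-]+/[\w.-]+\.md$")
LEARNING_PATH = re.compile(r"^learning:\d+$")

def is_valid_doc_path(path: str) -> bool:
    """Sanity-check a path before passing it to get_document."""
    if path.startswith("learnings/"):
        # Community findings use learning:<id>, never learnings/<slug>.md.
        return False
    return bool(DOC_PATH.match(path) or LEARNING_PATH.match(path))

print(is_valid_doc_path("vorp/vorp_core_docs.md"))  # True
print(is_valid_doc_path("learning:11"))             # True
```

In practice you should not need the check at all if you pass paths returned by browse, semantic_search, or grep_docs verbatim.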
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description explains behavior: optional heading with fuzzy match, deep-heading prepends H2 intro, returns headings on 404. Lacks explicit read-only statement, but implied.
Is the description appropriately sized, front-loaded, and free of redundancy?
Every sentence is informative and earns its place. Front-loaded with main purpose, then details. No redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description explains return values (full markdown, headings on 404). Also covers edge cases like deep-heading and lua data table fallback.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers both parameters (100% coverage). Description adds meaning: heading behavior details (fuzzy match, breadcrumb, deep-heading context).
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches full markdown of a doc by path, and distinguishes from siblings like browse, semantic_search, grep_docs, and lookup_native.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (after a search snippet looks promising) and when to use alternatives (lookup_native for natives, grep_docs for lua data). Also notes that path comes from specific sibling tools.
get_invoke_guide: Get native invocation guide for a language
Load the calling-convention reference for RedM/RDR3 natives in js or lua. Call ONCE per session before writing native-calling code — every native doc page only shows Lua examples, so JS/TS authors need this to translate correctly. Covers result modifiers (Citizen.resultAsInteger/Float/String/Vector), Citizen.invokeNative vs invokeNativeByHash, type mapping, pointer-arg gotchas, worked examples. Cheap, no embedding.
| Name | Required | Description | Default |
|---|---|---|---|
| language | Yes | Target language: 'js' or 'lua' | |
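The description says to call the guide once per session, which suggests a client-side memo. A minimal sketch, with fetch standing in for your actual MCP tools/call round-trip (hypothetical helper, not part of the server):

```python
_guide_cache: dict = {}

def get_invoke_guide_once(language: str, fetch):
    """Fetch the invocation guide at most once per language per session.
    `fetch(language)` is a stand-in for the real MCP call."""
    if language not in ("js", "lua"):
        raise ValueError("language must be 'js' or 'lua'")
    if language not in _guide_cache:
        _guide_cache[language] = fetch(language)
    return _guide_cache[language]

calls = []
def fake_fetch(lang):
    calls.append(lang)
    return f"guide:{lang}"

get_invoke_guide_once("js", fake_fetch)
get_invoke_guide_once("js", fake_fetch)
print(len(calls))  # the second call hits the cache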
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description discloses it is 'Cheap, no embedding' and lists contents. It implies read-only behavior, though it does not explicitly state no side effects.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with no wasted words. Front-loaded with purpose, then usage, contents, and cost. Every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description explains what the guide covers (result modifiers, invokeNative vs hash, etc.) and when to use it. Lacks output format but is adequate.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter fully described. The description adds context about language choice but no new syntactic or semantic details beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Load the calling-convention reference for RedM/RDR3 natives' with specific verb and resource. It distinguishes from siblings like 'lookup_native' and 'get_document' by focusing on the invocation guide for JS and Lua.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Call ONCE per session before writing native-calling code' and explains why JS/TS authors need it. However, it doesn't explicitly state when not to use it (e.g., for Lua-only users).
grep_docs: Literal/regex grep over raw doc files
Find an EXACT literal token in raw doc files (markdown + lua). Use for specific weapon/ped/animation/prop/interior/zone names (weapon_pistol_volcanic, a_c_bear_01, p_campfire01x), known hashes (0x020D13FF), walkstyles/clipsets (MP_Style_Casual, mech_loco_m@), or any string you'd grep for. NOT for behavior/concept queries (use semantic_search) or script-native hash/name lookup (use lookup_native). REQUIRED for tokens inside the largest rdr3_discoveries data tables (audio_banks, ingameanims_list, cloth_drawable, cloth_hash_names, object_list, megadictanims, entity_extensions, imaps_with_coords, propsets_list, vehicle_bones) — only preview-indexed for embeddings, so semantic_search will NOT find tokens in them. Returns matched lines with path + line number. Lines >400 chars are truncated — fetch full context via get_document({path}).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| pattern | Yes | JS regex pattern. Case-insensitive by default. | |
| category | No | Limit to a doc category (e.g. discoveries, natives). | |
| pathSubstring | No | Substring filter on relative doc path, e.g. 'weapons' or 'clothes/cloth_hash_names'. | |
| caseInsensitive | No | Set false for case-sensitive match. | true |
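grep_docs takes a JS regex, but for the literal tokens the description recommends, Python's re behaves the same way, so the case-insensitivity default can be sketched locally (the sample line is illustrative, not a real doc excerpt):

```python
import re

def grep_line(pattern: str, line: str, case_insensitive: bool = True) -> bool:
    """Mimic grep_docs matching semantics for literal tokens:
    case-insensitive unless caseInsensitive=false is passed."""
    flags = re.IGNORECASE if case_insensitive else 0
    return re.search(pattern, line, flags) is not None

sample = "weapon_pistol_volcanic"  # illustrative line content
print(grep_line("WEAPON_PISTOL_VOLCANIC", sample))  # True: default is case-insensitive
print(grep_line("WEAPON_PISTOL_VOLCANIC", sample, case_insensitive=False))  # False
# Escape tokens that contain regex metacharacters before grepping:
print(grep_line(re.escape("mech_loco_m@"), "clipset mech_loco_m@ entry"))  # True
```

Escaping with re.escape is the safe default whenever the token might contain characters like `.` or `@`.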
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully informs: returns matched lines with path and line number, truncation at >400 chars, references get_document for full context. No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is focused and front-loaded with purpose. Each sentence serves a purpose (scope, examples, exclusions, special tables, return format). Slightly verbose but justified by the information density needed.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters, no output schema, and no annotations, the description covers all essential aspects: exact usage, return format, truncation, alternatives, and critical use case for large tables. Includes cross-reference to get_document for full context.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 80%. The description adds value by clarifying the pattern type (JS regex, though the schema states this too), the case-insensitivity default, and pathSubstring usage with an example. However, some parameter details (like limit bounds) are already in the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
States exact functionality ('Find an EXACT literal token in raw doc files (markdown + lua)') with concrete examples. Clearly distinguishes from sibling tools by specifying it is NOT for behavior/concept queries (semantic_search) or script-native hash/name lookup (lookup_native).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'when to use' examples (specific weapon/ped/animation names, known hashes, flag constants) and 'when not to use' alternatives (semantic_search, lookup_native). Also notes REQUIRED use for certain data tables where semantic_search fails.
list_namespaces: List RedM doc namespaces
Orient yourself: list available doc categories and their namespaces. Use once at session start (or when unsure) before applying a category= / namespace= filter to browse / semantic_search. NOT a content search. Categories: natives (PLAYER, ENTITY, VEHICLE, …), vorp, rsgcore, oxmysql, discoveries (AI, weapons, peds, animations, clothes, objects, …), jo_libs (menu, notification, callback, framework-bridge, …, dev_resources, redm_scripts), guides, learnings.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the tool is a harmless orientation action (read-only) and clarifies it is not a content search, but does not detail the output format or any potential side effects, though none are expected.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise yet informative, with a front-loaded purpose statement and essential usage guidance, including a comprehensive list of categories, without unnecessary verbosity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose, usage, and parameter values well. However, without an output schema, a brief note on the output format (e.g., list of namespaces) would enhance completeness, though the implication is clear.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description enriches the input schema by listing and explaining the enum values for the category parameter, including subcategories (e.g., 'natives (PLAYER, ENTITY, VEHICLE, …)'), which adds significant meaning beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists available doc categories and their namespaces, distinguishing it from sibling tools like browse and semantic_search by explicitly noting it is not a content search.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises to use the tool at session start or when unsure before applying a category/namespace filter to browse or semantic_search, providing explicit guidance on when to use and context for alternatives.
lookup_native: Lookup RedM native by hash or name
Resolve a RedM/RDR3 SCRIPT native by hash or name — O(1), exact. Use whenever you see Citizen.InvokeNative(0x...), Citizen.invokeNative('0x...'), GetHashKey('NAME'), or a SCREAMING_SNAKE_CASE native name (e.g. SET_ENTITY_COORDS, GetPedHealth) in Lua/JS/TS. NOT for game-data hashes (weapon/ped/animation names) — use grep_docs. Pass hash (0x… optional, case-insensitive) or name (exact first, ILIKE substring fallback). Returns name, hash, namespace, return type, params, description, full content, plus findings[] — community gotchas linked to that native. Inspect findings[].id and call get_document({path: 'learning:<id>'}) for full body.
| Name | Required | Description | Default |
|---|---|---|---|
| hash | No | Native hash, e.g. 0x09C28F828EE674FA (case-insensitive, 0x optional) | |
| name | No | Native name, e.g. CAN_PLAYER_START_MISSION. Substring match if no exact hit. | |
| limit | No | ||
| namespace | No | Restrict to a namespace, e.g. PLAYER, ENTITY. Only used with `name`. |
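Because the tool accepts hashes with or without the 0x prefix and in either case, a client can normalize before calling. A minimal sketch (the server accepts both forms anyway, so this is tidying, not a requirement; the hash value is the schema's own example):

```python
def normalize_native_hash(hash_str: str) -> str:
    """Canonicalize a native hash: strip an optional 0x prefix,
    uppercase the hex digits, re-add the prefix."""
    h = hash_str.strip()
    if h.lower().startswith("0x"):
        h = h[2:]
    return "0x" + h.upper()

print(normalize_native_hash("0x09c28f828ee674fa"))  # 0x09C28F828EE674FA
print(normalize_native_hash("09C28F828EE674FA"))    # 0x09C28F828EE674FA
```

Pass either hash or name, not both; namespace only narrows name-based lookups.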
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses O(1) exact lookup, case-insensitivity, substring fallback, and detailed return structure including findings. Lacks explicit statement about safety (read-only), but it's implied.
Is the description appropriately sized, front-loaded, and free of redundancy?
Packed with useful information in a logical order (purpose, usage, exclusions, parameters, output). Slightly verbose, but each sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description fully details return fields (name, hash, namespace, return type, params, description, content, findings) and suggests follow-up action (call get_document). Complete for a lookup tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is high (75%), but description adds key meaning: explains that hash and name are alternatives (not both), substring fallback for name, and namespace restriction only with name.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool resolves RedM/RDR3 script natives by hash or name, with specific verb (resolve) and resource (native). It explicitly distinguishes from sibling tool 'grep_docs' for game-data hashes.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use examples (Citizen.InvokeNative, GetHashKey, SCREAMING_SNAKE_CASE) and when-not-to-use (game-data hashes) with alternative tool name (grep_docs).
semantic_search: Hybrid search RedM docs (semantic + lexical)
Search RedM/RDR3 docs by behavior, concept, OR exact token. Use when you don't have a specific native hash/name (use lookup_native) and the term isn't a known asset name in a large data table (use grep_docs). Hybrid mode (default) handles 'how do I X' queries ('teleport player', 'spawn vehicle', 'inventory add item') AND tokens ('addItem', 'weapon_pistol_volcanic', 'CPED_CONFIG_FLAG_') — fused via RRF over vector + BM25. Returns ranked snippets (path, breadcrumb, heading, snippet, score). Call get_document({path, heading}) for full chunk content. mode=semantic for pure vector; mode=lexical for pure BM25. Filter via category=vorp|rsgcore|oxmysql|natives|discoveries|jo_libs|learnings or namespace. Community findings merged by default; category=learnings returns only findings.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Retrieval mode. Default hybrid (recommended). | |
| limit | No | How many ranked snippets to return. Default 20 (Anthropic contextual-retrieval research: top-20 outperforms top-5/10 before reranking). | |
| query | Yes | Natural language or token query | |
| category | No | Limit to one doc category | |
| namespace | No | Limit to a native namespace, e.g. PLAYER, ENTITY | |
| responseFormat | No | `concise` (default): 400-char snippet per hit — cheap, browse-style. `detailed`: full chunk content — use when you need an answer in one round-trip and want to skip the `get_document` follow-up. | concise |
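The description says hybrid mode fuses vector and BM25 rankings via RRF. For intuition, here is a minimal reciprocal rank fusion sketch; k=60 is the common default from the original RRF formulation, and the server's actual constant, weighting, and the doc paths below are assumptions:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: each list contributes 1/(k + rank)
    per document; documents ranked well by both lists rise to the top."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from the two retrievers:
vector_hits = ["vorp/a.md", "vorp/b.md", "natives/c.md"]
bm25_hits = ["vorp/a.md", "natives/c.md", "guides/d.md"]
print(rrf_fuse([vector_hits, bm25_hits]))
```

A document appearing in both lists (here vorp/a.md and natives/c.md) outranks one that appears in only a single list, which is why hybrid handles both concept queries and exact tokens.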
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description thoroughly discloses behavior: hybrid mode fuses via RRF, returns ranked snippets with specific fields, merges community findings by default, and explains category=learnings behavior. No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is moderately lengthy but each sentence adds value. It front-loads main purpose and usage. Slight verbosity in detailed-mode explanation could be trimmed, but structure is logical and efficient overall.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 6 parameters with enums and no output schema, yet description explains everything: tool purpose, usage context, parameter details, return fields (path, breadcrumb, heading, snippet, score), and follow-up via get_document. Highly complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3, but the description adds significant meaning: defaults (hybrid mode, limit 20), mode semantics, filter usage, response format differences, and even research justification for default limit. Far exceeds baseline.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches RedM/RDR3 docs by behavior, concept, or exact token. It distinguishes from siblings like lookup_native and grep_docs by specifying when to use each.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided: use when you lack a specific native hash/name (use lookup_native) and the term isn't a known asset name in a large data table (use grep_docs). Also explains modes, filters, and response formats.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
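Before publishing, you can sanity-check the file locally. A minimal structural check based on the example above (the authoritative schema lives at the $schema URL, so treat this as a rough pre-flight, not validation):

```python
import json

def looks_like_glama_manifest(text: str) -> bool:
    """Rough structural check for a /.well-known/glama.json file:
    valid JSON with a non-empty maintainers list of {email} objects."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    maintainers = data.get("maintainers")
    return (
        isinstance(maintainers, list)
        and len(maintainers) > 0
        and all(isinstance(m, dict) and "@" in m.get("email", "") for m in maintainers)
    )

manifest = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
print(looks_like_glama_manifest(manifest))  # True
```

Remember the file must be served at the server's own domain under /.well-known/glama.json for verification to succeed.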
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.