Server Details

RedM / RDR3 docs MCP server: native lookups, semantic search, VORP, RSGCore, oxmysql.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Cmoen11/redm-mcp-public
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.6/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: browse for discovering paths, get_document for fetching content, grep_docs for exact token lookups, semantic_search for concept searches, lookup_native for specific natives, etc. No two tools overlap in function, and descriptions clarify boundaries.

Naming Consistency: 5/5

All tool names follow a consistent lowercase_underscore pattern, using verb_noun or noun_phrase conventions (e.g., list_namespaces, get_document, semantic_search). No mixing of styles like camelCase or vague verbs.

Tool Count: 5/5

With 8 tools, the server is well-scoped for its purpose of querying RedM/RDR3 documentation. Each tool serves a necessary role without redundancy or bloat, fitting the typical 3-15 range comfortably.

Completeness: 5/5

The tool surface covers the full lifecycle of documentation interaction: orientation (list_namespaces), discovery (browse), retrieval (get_document), search (semantic_search, grep_docs), specific native lookup (lookup_native), guidance (get_invoke_guide), and contribution (share_finding). No obvious gaps.
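To make that lifecycle concrete, here is a minimal sketch of one lookup session. It assumes the official TypeScript MCP SDK (@modelcontextprotocol/sdk) with its Streamable HTTP transport and a placeholder endpoint URL; the tool names and argument shapes are taken from the descriptions on this page, but the client setup is this sketch's assumption, not something the page documents.

```typescript
// Minimal sketch of one documentation session, assuming the TypeScript MCP SDK
// (@modelcontextprotocol/sdk) and a placeholder endpoint URL.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "redm-docs-example", version: "0.0.1" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.invalid/mcp")), // placeholder URL
);

// Orientation: list categories and their namespaces.
const namespaces = await client.callTool({ name: "list_namespaces", arguments: {} });

// Discovery: enumerate doc paths in a category.
const paths = await client.callTool({ name: "browse", arguments: { category: "vorp" } });

// Retrieval: fetch a document by a path returned above (example path from this page).
const doc = await client.callTool({
  name: "get_document",
  arguments: { path: "vorp/vorp_core_docs.md" },
});
console.log(namespaces, paths, doc);
```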

Available Tools

8 tools
browse: Browse RedM doc paths (A)

Enumerate doc paths in a category/namespace. Use to discover what exists before calling get_document or a targeted grep_docs. NOT a content search — use semantic_search for behavior/concept lookups or grep_docs for token lookups. Returns {path, title, chunks}[].

Parameters (JSON Schema):
  category (optional)
  namespace (optional)
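
A hedged example call, reusing the connected `client` from the sketch above; the category and namespace values come from list_namespaces' description, and the return shape is the one this tool's description promises.

```typescript
// Assumes the connected `client` from the earlier sketch.
// Category/namespace values are examples from this page, not a fixed list.
const pages = await client.callTool({
  name: "browse",
  arguments: { category: "natives", namespace: "PLAYER" },
});
// Per the description, the result is a {path, title, chunks}[] listing,
// meant to feed get_document or a targeted grep_docs.
```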
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries burden. Discloses return format as {path, title, chunks}[]. Lacks details on pagination or ordering, but for a browse tool this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. Front-loads purpose and usage guidelines immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description explicitly states return format. Covers use case and alternatives. Complete for a discovery tool given sibling context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%. Description mentions 'category/namespace' linking to parameters but does not explain each parameter's meaning or default behavior. Provides basic context that they are used together.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb (enumerate) and resource (doc paths). Distinguishes from siblings by specifying that it's for discovery before get_document or grep_docs, not for content search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides when to use (discover before fetching) and when not (for content or token lookups, redirecting to semantic_search and grep_docs).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_document: Get full RedM doc (A)

Fetch full markdown of a doc by path (as returned by browse, semantic_search, or grep_docs). Use to retrieve full content after a search snippet looks promising. Pass heading (full breadcrumb like Character Management > Inventory Management, or just the leaf — case-insensitive, fuzzy) to fetch only that section. Deep-heading matches auto-prepend the H2 parent's intro for context. For individual script natives prefer lookup_native. For code symbols (addItem) or content inside the largest rdr3_discoveries lua data tables (preview-only here) use grep_docs. Community findings use learning:N paths, not learnings/<slug>.md. On 404 returns available headings + cross-file hints.

Parameters (JSON Schema):
  path (required): Doc path. Two valid shapes: (a) `<category>/<file>.md` for docs, e.g. `vorp/vorp_core_docs.md`; (b) `learning:<id>` for community findings, e.g. `learning:11`. Use the path returned by `browse`/`semantic_search`/`grep_docs` verbatim — do not invent `learnings/<slug>.md`.
  heading (optional): Optional prose heading from the doc, e.g. `Add Item to User` or `Character Management > Inventory Management`. Case-insensitive, fuzzy match on the leaf (text after the final `>`). NOT for code symbols — `addItem`, `getPlayerPed` etc. won't match; use `grep_docs` for those.
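
Both path shapes can be exercised as below, reusing the connected `client` from the earlier sketch; the path, heading, and learning id are the examples given in this parameter table.

```typescript
// Assumes the connected `client` from the earlier sketch.
// Fetch only one section of a doc via a (fuzzy, case-insensitive) heading.
const section = await client.callTool({
  name: "get_document",
  arguments: {
    path: "vorp/vorp_core_docs.md",                          // shape (a): <category>/<file>.md
    heading: "Character Management > Inventory Management",  // full breadcrumb; the leaf alone also works
  },
});

// Community findings use shape (b): learning:<id>, never learnings/<slug>.md.
const finding = await client.callTool({
  name: "get_document",
  arguments: { path: "learning:11" },
});
```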
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description explains behavior: optional heading with fuzzy match, deep-heading prepends H2 intro, returns headings on 404. Lacks explicit read-only statement, but implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every sentence is informative and earns its place. Front-loaded with main purpose, then details. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description explains return values (full markdown, headings on 404). Also covers edge cases like deep-heading and lua data table fallback.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already covers both parameters (100% coverage). Description adds meaning: heading behavior details (fuzzy match, breadcrumb, deep-heading context).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches full markdown of a doc by path, and distinguishes from siblings like browse, semantic_search, grep_docs, and lookup_native.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (after a search snippet looks promising) and when to use alternatives (lookup_native for natives, grep_docs for lua data). Also notes that path comes from specific sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_invoke_guide: Get native invocation guide for a language (A)

Load the calling-convention reference for RedM/RDR3 natives in js or lua. Call ONCE per session before writing native-calling code — every native doc page only shows Lua examples, so JS/TS authors need this to translate correctly. Covers result modifiers (Citizen.resultAsInteger/Float/String/Vector), Citizen.invokeNative vs invokeNativeByHash, type mapping, pointer-arg gotchas, worked examples. Cheap, no embedding.

Parameters (JSON Schema):
  language (required): Target language: 'js' or 'lua'
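
A minimal call, reusing the connected `client` from the earlier sketch; per the description it is worth issuing once per session before writing native-calling JS/TS.

```typescript
// Assumes the connected `client` from the earlier sketch. Call once per
// session: native doc pages show only Lua examples, so JS/TS authors need
// this guide (result modifiers, invokeNative vs invokeNativeByHash, etc.).
const guide = await client.callTool({
  name: "get_invoke_guide",
  arguments: { language: "js" }, // 'js' or 'lua'
});
```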
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but the description discloses it is 'Cheap, no embedding' and lists contents. It implies read-only behavior, though it does not explicitly state no side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with no wasted words. Front-loaded with purpose, then usage, contents, and cost. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema, the description explains what the guide covers (result modifiers, invokeNative vs hash, etc.) and when to use it. Lacks output format but is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter fully described. The description adds context about language choice but no new syntactic or semantic details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Load the calling-convention reference for RedM/RDR3 natives' with specific verb and resource. It distinguishes from siblings like 'lookup_native' and 'get_document' by focusing on the invocation guide for JS and Lua.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Call ONCE per session before writing native-calling code' and explains why JS/TS authors need it. However, it doesn't explicitly state when not to use it (e.g., for Lua-only users).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

grep_docs: Literal/regex grep over raw doc files (A)

Find an EXACT literal token in raw doc files (markdown + lua). Use for specific weapon/ped/animation/prop/interior/zone names (weapon_pistol_volcanic, a_c_bear_01, p_campfire01x), known hashes (0x020D13FF), walkstyles/clipsets (MP_Style_Casual, mech_loco_m@), or any string you'd grep for. NOT for behavior/concept queries (use semantic_search) or script-native hash/name lookup (use lookup_native). REQUIRED for tokens inside the largest rdr3_discoveries data tables (audio_banks, ingameanims_list, cloth_drawable, cloth_hash_names, object_list, megadictanims, entity_extensions, imaps_with_coords, propsets_list, vehicle_bones) — only preview-indexed for embeddings, so semantic_search will NOT find tokens in them. Returns matched lines with path + line number. Lines >400 chars are truncated — fetch full context via get_document({path}).

Parameters (JSON Schema):
  limit (optional)
  pattern (required): JS regex pattern. Case-insensitive by default.
  category (optional): Limit to a doc category (e.g. discoveries, natives).
  pathSubstring (optional): Substring filter on relative doc path, e.g. 'weapons' or 'clothes/cloth_hash_names'.
  caseInsensitive (optional): Default true. Set false for case-sensitive match.
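
An example search, reusing the connected `client` from the earlier sketch; the token and filters are taken from this page's examples. Since `pattern` is a JS regex, literal tokens containing metacharacters should be escaped.

```typescript
// Assumes the connected `client` from the earlier sketch. All values below
// are examples from this page; escape regex metacharacters in literal tokens.
const hits = await client.callTool({
  name: "grep_docs",
  arguments: {
    pattern: "weapon_pistol_volcanic", // exact literal token (JS regex)
    category: "discoveries",           // limit to one doc category
    pathSubstring: "weapons",          // substring filter on the relative path
    caseInsensitive: true,             // the default; false for case-sensitive
  },
});
// Matched lines >400 chars are truncated; fetch full context with
// get_document({ path }) using the path in each hit.
```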
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully informs: returns matched lines with path and line number, truncation at >400 chars, references get_document for full context. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is focused and front-loaded with purpose. Each sentence serves a purpose (scope, examples, exclusions, special tables, return format). Slightly verbose but justified by the information density needed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters, no output schema, and no annotations, the description covers all essential aspects: exact usage, return format, truncation, alternatives, and critical use case for large tables. Includes cross-reference to get_document for full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 80%. The description adds value by clarifying the pattern type (JS regex, though the schema already states this), the case-insensitivity default, and a pathSubstring usage example. However, some parameter details (like limit bounds) are already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States exact functionality ('Find an EXACT literal token in raw doc files (markdown + lua)') with concrete examples. Clearly distinguishes from sibling tools by specifying it is NOT for behavior/concept queries (semantic_search) or script-native hash/name lookup (lookup_native).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'when to use' examples (specific weapon/ped/animation names, known hashes, flag constants) and 'when not to use' alternatives (semantic_search, lookup_native). Also notes REQUIRED use for certain data tables where semantic_search fails.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_namespaces: List RedM doc namespaces (A)

Orient yourself: list available doc categories and their namespaces. Use once at session start (or when unsure) before applying a category= / namespace= filter to browse / semantic_search. NOT a content search. Categories: natives (PLAYER, ENTITY, VEHICLE, …), vorp, rsgcore, oxmysql, discoveries (AI, weapons, peds, animations, clothes, objects, …), jo_libs (menu, notification, callback, framework-bridge, …, dev_resources, redm_scripts), guides, learnings.

Parameters (JSON Schema):
  category (optional)
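
Both forms, reusing the connected `client` from the earlier sketch; the category value is one of those enumerated in the description.

```typescript
// Assumes the connected `client` from the earlier sketch.
const everything = await client.callTool({ name: "list_namespaces", arguments: {} });
const discoveries = await client.callTool({
  name: "list_namespaces",
  arguments: { category: "discoveries" }, // e.g. AI, weapons, peds, animations, ...
});
```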
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the tool is a harmless orientation action (read-only) and clarifies it is not a content search, but does not detail the output format or any potential side effects, though none are expected.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise yet informative, with a front-loaded purpose statement and essential usage guidance, including a comprehensive list of categories, without unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers purpose, usage, and parameter values well. However, without an output schema, a brief note on the output format (e.g., list of namespaces) would enhance completeness, though the implication is clear.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description enriches the input schema by listing and explaining the enum values for the category parameter, including subcategories (e.g., 'natives (PLAYER, ENTITY, VEHICLE, …)'), which adds significant meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists available doc categories and their namespaces, distinguishing it from sibling tools like browse and semantic_search by explicitly noting it is not a content search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description advises to use the tool at session start or when unsure before applying a category/namespace filter to browse or semantic_search, providing explicit guidance on when to use and context for alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_native: Lookup RedM native by hash or name (A)

Resolve a RedM/RDR3 SCRIPT native by hash or name — O(1), exact. Use whenever you see Citizen.InvokeNative(0x...), Citizen.invokeNative('0x...'), GetHashKey('NAME'), or a SCREAMING_SNAKE_CASE native name (e.g. SET_ENTITY_COORDS, GetPedHealth) in Lua/JS/TS. NOT for game-data hashes (weapon/ped/animation names) — use grep_docs. Pass hash (0x… optional, case-insensitive) or name (exact first, ILIKE substring fallback). Returns name, hash, namespace, return type, params, description, full content, plus findings[] — community gotchas linked to that native. Inspect findings[].id and call get_document({path: 'learning:<id>'}) for full body.

Parameters (JSON Schema):
  hash (optional): Native hash, e.g. 0x09C28F828EE674FA (case-insensitive, 0x optional)
  name (optional): Native name, e.g. CAN_PLAYER_START_MISSION. Substring match if no exact hit.
  limit (optional)
  namespace (optional): Restrict to a namespace, e.g. PLAYER, ENTITY. Only used with `name`.
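
Hedged examples of both lookup modes, reusing the connected `client` from the earlier sketch; the hash, name, and namespace values are the examples from this parameter table, and the findings[] follow-up mirrors the description.

```typescript
// Assumes the connected `client` from the earlier sketch. Pass hash OR name;
// the values below are the examples from this page's parameter table.
const byHash = await client.callTool({
  name: "lookup_native",
  arguments: { hash: "0x09C28F828EE674FA" }, // 0x prefix optional, case-insensitive
});
const byName = await client.callTool({
  name: "lookup_native",
  arguments: { name: "CAN_PLAYER_START_MISSION", namespace: "PLAYER" }, // namespace only applies with name
});
// Per the description, each result carries findings[]; fetch a finding's full
// body with get_document({ path: `learning:${id}` }).
```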
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses O(1) exact lookup, case-insensitivity, substring fallback, and detailed return structure including findings. Lacks explicit statement about safety (read-only), but it's implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Packed with useful information in a logical order (purpose, usage, exclusions, parameters, output). Slightly verbose, but each sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description fully details return fields (name, hash, namespace, return type, params, description, content, findings) and suggests follow-up action (call get_document). Complete for a lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is high (75%), but description adds key meaning: explains that hash and name are alternatives (not both), substring fallback for name, and namespace restriction only with name.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool resolves RedM/RDR3 script natives by hash or name, with specific verb (resolve) and resource (native). It explicitly distinguishes from sibling tool 'grep_docs' for game-data hashes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use examples (Citizen.InvokeNative, GetHashKey, SCREAMING_SNAKE_CASE) and when-not-to-use (game-data hashes) with alternative tool name (grep_docs).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

share_finding: Share a verified finding back to the docs (A)

Share a verified finding back to the docs corpus so the next agent can find it. Use AFTER solving a non-trivial problem to record what would have saved you time: a gotcha, a working parameter combo, an undocumented constraint, a relationship between two natives that isn't obvious. Other agents will find this via semantic_search (findings are merged into default results; category: 'learnings' returns only findings).

WHEN to use:

  • You burned multiple iterations on something not in the docs.

  • You discovered an undocumented quirk (param order, hash collision, framework export that isn't in vorp/rsgcore).

  • You verified that a specific combination works (e.g. native A + flag B for behavior C).

WHEN NOT to use:

  • The information is already in the docs (verify with semantic_search/grep_docs first).

  • You're guessing — only contribute verified findings.

  • It's project-specific (your repo's auth flow, your DB schema). Keep it general to RedM/RDR3.

Keep title short and searchable. body should explain WHY, not just WHAT — context, the trap, the fix.

Parameters (JSON Schema):
  body (required): Markdown explaining WHY: context, the trap, the fix, verified behavior.
  tags (optional): Up to 8 lowercase tags, e.g. ['weapons', 'damage'].
  title (required): Short, searchable summary of the finding.
  source (optional): Optional short identifier of the contributing agent.
  category (optional): Optional doc category this relates to.
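
A sketch of a contribution call, reusing the connected `client` from the earlier sketch; the finding content is a deliberately generic placeholder, since only verified, RedM/RDR3-general findings should be contributed.

```typescript
// Assumes the connected `client` from the earlier sketch. Everything below is
// a placeholder; contribute only verified, non-project-specific findings.
await client.callTool({
  name: "share_finding",
  arguments: {
    title: "Placeholder: short, searchable summary",                  // required
    body: "Markdown explaining WHY: the context, the trap, the fix.", // required
    tags: ["example", "placeholder"],                                 // up to 8 lowercase tags
    source: "example-agent",                                          // optional contributor id
  },
});
```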
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explains that findings are merged into default results and can be filtered by category 'learnings', providing useful behavioral context. Lacks details on auth or rate limits but is generally informative.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with headings and front-loaded key purpose. While thorough, it is somewhat lengthy but each part earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of sharing findings (5 params, no output schema), the description adequately covers when and how to use, integration with search, and content guidelines. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage. The description adds value beyond schema by advising on how to write title (short/searchable) and body (WHY not WHAT), and gives examples for tags.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool shares a verified finding back to the docs corpus, specifying the resource and action. It distinguishes itself from sibling tools like semantic_search, which retrieves findings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides when to use (non-trivial problems, undocumented quirks) and when not to use (info already in docs, guessing, project-specific). References alternatives like semantic_search and grep_docs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
