Glama
Ownership verified

Server Details

RedM (Red Dead Redemption 2 multiplayer) / RDR3 modding. Hosted HTTP endpoint: native lookups (hash ↔ name), semantic search over framework docs (VORP, RSGCore, oxmysql), and grep over rdr3_discoveries community data tables (peds, weapons, animations, AI flags, props). No install, no auth.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.5/5 across 8 of 8 tools scored.

Server Coherence (Grade: A)

Disambiguation: 5/5

Each tool has a clearly distinct purpose: browse lists paths, get_document fetches content, grep_docs does exact token search, semantic_search handles concept queries, lookup_native resolves hashes/names, get_invoke_guide provides calling conventions, list_namespaces gives orientation, and share_finding contributes to the corpus. No overlap in functionality.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (e.g., get_document, grep_docs, list_namespaces) using lowercase with underscores. The naming is predictable and easy to remember.

Tool Count: 5/5

With 8 tools, the server covers all necessary operations for a documentation system without being bloated. Each tool earns its place, covering navigation, search, retrieval, native lookup, and community contribution.

Completeness: 5/5

The tool set provides a complete lifecycle for documentation: orientation (list_namespaces), discovery (browse), retrieval (get_document), multiple search methods (grep_docs, semantic_search), native resolution (lookup_native), a guide for invoking natives (get_invoke_guide), and even a mechanism for users to contribute findings (share_finding). No obvious gaps.

Available Tools

8 tools
browse: Browse RedM doc paths (Grade: A)

Enumerate doc paths in a category/namespace. Use to discover what exists before calling get_document or a targeted grep_docs. NOT a content search — use semantic_search for behavior/concept lookups or grep_docs for token lookups. Returns {path, title, chunks}[].

Parameters (JSON Schema):
- category (optional)
- namespace (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must carry the full burden. It discloses that the tool returns an array of objects with path, title, and chunks, implying a read-only listing operation. While not exhaustive (e.g., no mention of permissions or limitations), it is sufficient for an enumeration tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long, front-loading the purpose and providing actionable guidance without any redundant or extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity of the tool (enumeration with only two parameters), no annotations, and no output schema, the description adequately covers when, how, and what the tool returns. It also references sibling tools appropriately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not explain the individual parameters (category, namespace). The category enum provides some context, but namespace is left undefined. The description fails to add meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Enumerate doc paths in a category/namespace', providing a specific verb and resource. It distinguishes itself from siblings by explicitly stating when to use browse versus get_document, grep_docs, or semantic_search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the tool ('Use to discover what exists before calling get_document or a targeted grep_docs') and when not to ('NOT a content search — use semantic_search for behavior/concept lookups or grep_docs for token lookups').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_document: Get full RedM doc (Grade: A)

Fetch full markdown of a doc by path (as returned by browse, semantic_search, or grep_docs). Use to retrieve full content after a search snippet looks promising. Pass heading (full breadcrumb like Character Management > Inventory Management, or just the leaf — case-insensitive, fuzzy) to fetch only that section. Deep-heading matches auto-prepend the H2 parent's intro for context. For individual script natives prefer lookup_native. For code symbols (addItem) or content inside the largest rdr3_discoveries lua data tables (preview-only here) use grep_docs. Community findings use learning:N paths, not learnings/<slug>.md. On 404 returns available headings + cross-file hints.

Parameters (JSON Schema):
- path (required): Doc path. Two valid shapes: (a) `<category>/<file>.md` for docs, e.g. `vorp/vorp_core_docs.md`; (b) `learning:<id>` for community findings, e.g. `learning:11`. Use the path returned by `browse`/`semantic_search`/`grep_docs` verbatim — do not invent `learnings/<slug>.md`.
- heading (optional): Prose heading from the doc, e.g. `Add Item to User` or `Character Management > Inventory Management`. Case-insensitive, fuzzy match on the leaf (text after the final `>`). NOT for code symbols — `addItem`, `getPlayerPed` etc. won't match; use `grep_docs` for those.
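The heading matching rule described above (case-insensitive, matched on the leaf after the final `>`) can be sketched roughly as follows. This is a hypothetical approximation for illustration only; the server's actual matcher also applies fuzziness that this sketch omits.

```javascript
// Hypothetical sketch of the described heading match: compare only the
// leaf (text after the final '>'), ignoring case.
function leafOf(heading) {
  const parts = heading.split('>').map((s) => s.trim());
  return parts[parts.length - 1];
}

function matchesHeading(docHeading, query) {
  return leafOf(docHeading).toLowerCase() === leafOf(query).toLowerCase();
}

// Both the full breadcrumb and the bare leaf select the same section:
matchesHeading('Character Management > Inventory Management',
               'inventory management'); // → true
```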
Behavior: 4/5

No annotations provided, but the description discloses the 404 behavior (returns headings + hints) and the deep-heading auto-prepend logic. Missing details on authorization or rate limits, but overall good coverage.

Conciseness: 4/5

The description is fairly concise but packs multiple pieces of information into a single paragraph. Could be slightly more structured but remains efficient.

Completeness: 4/5

No output schema, but the description mentions the return format (markdown) and error handling (404 hints). Covers the key behaviors for a document retrieval tool.

Parameters: 4/5

Schema coverage is 100% with parameter descriptions. The description adds value by explaining heading matching (case-insensitive, fuzzy, full breadcrumb) and deep-heading behavior beyond the schema.

Purpose: 5/5

The description clearly states that the tool fetches the full markdown of a doc by path, and distinguishes it from siblings like lookup_native and grep_docs by specifying use cases.

Usage Guidelines: 5/5

Explicit guidance on when to use the tool (after a search snippet looks promising), when not to (prefer lookup_native for natives, grep_docs for lua data tables), and details on heading usage with fallback behavior.

get_invoke_guide: Get native invocation guide for a language (Grade: A)

Load the calling-convention reference for RedM/RDR3 natives in js or lua. Call ONCE per session before writing native-calling code — every native doc page only shows Lua examples, so JS/TS authors need this to translate correctly. Covers result modifiers (Citizen.resultAsInteger/Float/String/Vector), Citizen.invokeNative vs invokeNativeByHash, type mapping, pointer-arg gotchas, worked examples. Cheap, no embedding.

Parameters (JSON Schema):
- language (required): Target language: 'js' or 'lua'
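One concrete reason JS authors need this guide: RDR3 native hashes are 64-bit values, beyond JavaScript's Number.MAX_SAFE_INTEGER, which is why JS code conventionally passes the hash to Citizen.invokeNative as a string rather than a numeric literal (Lua, with 64-bit integers, can use the literal directly). A hypothetical BigInt helper for producing that string form:

```javascript
// Hypothetical helper: format a 64-bit native hash (BigInt) as the
// '0x…' string form used in JS invokeNative calls. A plain Number
// would silently lose precision above 2^53.
function hashToString(hash) {
  return '0x' + hash.toString(16).toUpperCase();
}

hashToString(0x020D13FFn); // → '0x20D13FF' (leading zeros are dropped)
```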
Behavior: 4/5

With no annotations, the description carries the full burden. It discloses that the tool is cheap, involves no embedding, and should be called once. This gives reasonable behavioral insight, though it could mention idempotency or caching.

Conciseness: 5/5

The description is concise: five sentences covering purpose, usage, rationale, content, and cost. No redundancy, front-loaded with the key action, efficient.

Completeness: 5/5

Given the simple single-parameter tool with no output schema, the description fully covers when to use it, what it includes, and why. Complete for the tool's complexity.

Parameters: 3/5

Schema coverage is 100% (enum for language). The description adds motivation (JS/TS authors need this) but does not add parameter semantics beyond the schema. The baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states that the tool loads the calling-convention reference for RedM/RDR3 natives in js or lua, distinguishing it from sibling tools like get_document or lookup_native, which operate on different content.

Usage Guidelines: 4/5

The description explicitly advises calling once per session before writing code and explains why the guide is needed (only Lua examples elsewhere), providing clear usage context. It does not enumerate alternatives, but the guidance is sufficient.

grep_docs: Literal/regex grep over raw doc files (Grade: A)

Find an EXACT literal token in raw doc files (markdown + lua). Use for specific weapon/ped/animation/prop/interior/zone names (weapon_pistol_volcanic, a_c_bear_01, p_campfire01x), known hashes (0x020D13FF), walkstyles/clipsets (MP_Style_Casual, mech_loco_m@), or any string you'd grep for. NOT for behavior/concept queries (use semantic_search) or script-native hash/name lookup (use lookup_native). REQUIRED for tokens inside the largest rdr3_discoveries data tables (audio_banks, ingameanims_list, cloth_drawable, cloth_hash_names, object_list, megadictanims, entity_extensions, imaps_with_coords, propsets_list, vehicle_bones) — only preview-indexed for embeddings, so semantic_search will NOT find tokens in them. Returns matched lines with path + line number. Lines >400 chars are truncated — fetch full context via get_document({path}).

Parameters (JSON Schema):
- limit (optional)
- pattern (required): JS regex pattern. Case-insensitive by default.
- category (optional): Limit to a doc category (e.g. discoveries, natives).
- pathSubstring (optional): Substring filter on the relative doc path, e.g. 'weapons' or 'clothes/cloth_hash_names'.
- caseInsensitive (optional): Default true. Set false for a case-sensitive match.
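The described behavior (JS regex, case-insensitive by default, matched lines returned with path and line number, long lines truncated) amounts to a plain grep over the corpus. A minimal sketch, assuming a hypothetical in-memory `docs` map rather than the server's real storage, and using `…` as an assumed truncation marker:

```javascript
// Hypothetical in-memory grep mirroring the documented defaults.
function grepDocs(docs, pattern, { caseInsensitive = true, limit = 50 } = {}) {
  const re = new RegExp(pattern, caseInsensitive ? 'i' : '');
  const hits = [];
  for (const [path, text] of Object.entries(docs)) {
    text.split('\n').forEach((line, i) => {
      if (hits.length < limit && re.test(line)) {
        hits.push({
          path,
          line: i + 1, // 1-based line number, as in the returned results
          text: line.length > 400 ? line.slice(0, 400) + '…' : line,
        });
      }
    });
  }
  return hits;
}

const docs = { 'discoveries/weapons.md': 'WEAPON_PISTOL_VOLCANIC\nweapon_revolver_cattleman' };
grepDocs(docs, 'weapon_pistol_volcanic');
// matches the first line despite the case difference
```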
Behavior: 4/5

No annotations provided, so the description carries the full burden. It explains the return format (matched lines with path + line number) and the truncation behavior with a workaround. However, there is a slight inconsistency: the description says 'EXACT literal token' while the input schema says 'JS regex pattern', which could confuse agents expecting literal matching.

Conciseness: 5/5

Well-structured: begins with the core purpose, then usage guidance (do/don't), required cases, return format, and a limitation. Every sentence adds value, no redundancy.

Completeness: 5/5

Given the tool's complexity (5 params, no output schema), the description sufficiently explains purpose, usage, return values, and a practical limitation (truncation with a workaround). It covers when the tool is necessary (certain data tables) and what to expect.

Parameters: 3/5

Schema description coverage is 80%, so the baseline is 3. The description does not add significant new parameter information beyond what the schema provides; the case-insensitivity mention is already in the schema.

Purpose: 5/5

The description clearly states that the tool finds exact literal tokens in raw doc files, provides concrete examples (weapon names, hashes, flags), and explicitly distinguishes it from the siblings semantic_search and lookup_native.

Usage Guidelines: 5/5

It explicitly states when to use the tool (specific known tokens), when not to (behavior/concept queries, script-native lookups), names alternatives, and highlights required usage for certain data tables that semantic_search does not cover.

list_namespaces: List RedM doc namespaces (Grade: A)

Orient yourself: list available doc categories and their namespaces. Use once at session start (or when unsure) before applying a category= / namespace= filter to browse / semantic_search. NOT a content search. Categories: natives (PLAYER, ENTITY, VEHICLE, …), vorp, rsgcore, oxmysql, discoveries (AI, weapons, peds, animations, clothes, objects, …), jo_libs (menu, notification, callback, framework-bridge, …, dev_resources, redm_scripts), guides, learnings.

Parameters (JSON Schema):
- category (optional)
Behavior: 4/5

No annotations provided, but the description sufficiently discloses the tool's scope (orienting, non-search, category listing). Minor gap: no mention of output structure or side effects, but for a read-only listing it is adequate.

Conciseness: 4/5

The description is compact and front-loaded with essential information, though it could benefit slightly from bullet points for readability. No redundant sentences.

Completeness: 4/5

Given no output schema, the description hints at the return structure ('list available...') and enumerates categories, which is sufficient for a simple listing tool. Slightly lacking in detail about what the response looks like.

Parameters: 5/5

Despite 0% schema description coverage, the description adds rich meaning to the single optional enum parameter by listing categories and providing examples of sub-namespaces, fully compensating for the schema gap.

Purpose: 5/5

The description uses a specific verb ('list') and resource ('doc categories and their namespaces'), explicitly states it is NOT a content search, and implicitly distinguishes it from sibling tools like browse and semantic_search.

Usage Guidelines: 5/5

Explicitly says 'Use once at session start (or when unsure) before applying a category=/namespace= filter to browse / semantic_search', providing clear context and excluding search use.

lookup_native: Lookup RedM native by hash or name (Grade: A)

Resolve a RedM/RDR3 SCRIPT native by hash or name — O(1), exact. Use whenever you see Citizen.InvokeNative(0x...), Citizen.invokeNative('0x...'), GetHashKey('NAME'), or a SCREAMING_SNAKE_CASE native name (e.g. SET_ENTITY_COORDS, GetPedHealth) in Lua/JS/TS. NOT for game-data hashes (weapon/ped/animation names) — use grep_docs. Pass hash (0x… optional, case-insensitive) or name (exact first, ILIKE substring fallback). Returns name, hash, namespace, return type, params, description, full content, plus findings[] — community gotchas linked to that native. Inspect findings[].id and call get_document({path: 'learning:<id>'}) for full body.

Parameters (JSON Schema):
- hash (optional): Native hash, e.g. 0x09C28F828EE674FA (case-insensitive, 0x optional)
- name (optional): Native name, e.g. CAN_PLAYER_START_MISSION. Substring match if no exact hit.
- limit (optional)
- namespace (optional): Restrict to a namespace, e.g. PLAYER, ENTITY. Only used with `name`.
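The input rules above (0x prefix optional, case-insensitive hash; exact name match first, then ILIKE-style substring fallback) suggest a normalization step roughly like the following. This is a hypothetical sketch of the documented contract, not the server's actual implementation:

```javascript
// Hypothetical normalization for the hash parameter:
// '0x' prefix optional, case-insensitive.
function normalizeHash(input) {
  const hex = input.toLowerCase().startsWith('0x') ? input.slice(2) : input;
  return '0x' + hex.toUpperCase();
}

// Hypothetical name resolution: exact match first,
// substring (ILIKE-style) fallback otherwise.
function resolveName(natives, name) {
  const upper = name.toUpperCase();
  const exact = natives.find((n) => n.name === upper);
  if (exact) return [exact];
  return natives.filter((n) => n.name.includes(upper));
}

normalizeHash('09c28f828ee674fa'); // → '0x09C28F828EE674FA'
```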
Behavior: 4/5

No annotations provided, so the description carries the full burden. It discloses the O(1) exact match, the return fields including findings, and how to use findings. It does not mention potential errors or rate limits, but as a read-only lookup this is acceptable.

Conciseness: 4/5

Information-dense but well-structured, front-loaded with the core purpose. Slightly verbose for a single tool description, but every sentence serves a purpose.

Completeness: 5/5

Despite no output schema, the description fully explains the return structure (name, hash, namespace, return type, params, description, full content, findings[]) and how to use findings with `get_document`. This is comprehensive for a lookup tool.

Parameters: 4/5

The description adds significant context beyond the input schema: it explains usage contexts for hash and name, case-insensitivity, exact match first with ILIKE fallback, and that the namespace restriction applies only with name. The limit parameter is covered in the schema, so there is no need to repeat it.

Purpose: 5/5

Clearly states that the tool resolves RedM/RDR3 SCRIPT natives by hash or name with an O(1) exact match, and distinguishes itself from the sibling `grep_docs` by specifying what it is NOT for (game-data hashes).

Usage Guidelines: 5/5

Explicitly describes when to use the tool (when seeing Citizen.InvokeNative, GetHashKey, etc.) and when not to (use grep_docs for game-data hashes). Also explains the input options and fallback behavior.

share_finding: Share a verified finding back to the docs (Grade: A)

Share a verified finding back to the docs corpus so the next agent can find it. Use AFTER solving a non-trivial problem to record what would have saved you time: a gotcha, a working parameter combo, an undocumented constraint, a relationship between two natives that isn't obvious. Other agents will find this via semantic_search (findings are merged into default results; category: 'learnings' returns only findings).

WHEN to use:

  • You burned multiple iterations on something not in the docs.

  • You discovered an undocumented quirk (param order, hash collision, framework export that isn't in vorp/rsgcore).

  • You verified that a specific combination works (e.g. native A + flag B for behavior C).

WHEN NOT to use:

  • The information is already in the docs (verify with semantic_search/grep_docs first).

  • You're guessing — only contribute verified findings.

  • It's project-specific (your repo's auth flow, your DB schema). Keep it general to RedM/RDR3.

Keep title short and searchable. body should explain WHY, not just WHAT — context, the trap, the fix.

Parameters (JSON Schema):
- body (required): Markdown explaining WHY: context, the trap, the fix, verified behavior.
- tags (optional): Up to 8 lowercase tags, e.g. ['weapons', 'damage'].
- title (required): Short, searchable summary of the finding.
- source (optional): Optional short identifier of the contributing agent.
- category (optional): Optional doc category this relates to.
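A hypothetical call payload following the constraints above, with a client-side sanity check for the tag rule (at most 8, all lowercase). The title, body, and tag values here are placeholders, not real findings:

```javascript
// Hypothetical share_finding payload (placeholder content).
const finding = {
  title: 'Example: short, searchable summary of a verified gotcha',
  body: [
    '## Context',
    'Placeholder body: explain WHY, not just WHAT —',
    'the context, the trap, and the verified fix.',
  ].join('\n'),
  tags: ['weapons', 'damage'], // up to 8 lowercase tags
  category: 'discoveries',     // optional doc category
};

// Client-side check matching the documented tag constraint.
function validTags(tags) {
  return tags.length <= 8 && tags.every((t) => t === t.toLowerCase());
}

validTags(finding.tags); // → true
```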
Behavior: 3/5

Despite no annotations, the description explains how findings are merged into default semantic search results and how to retrieve only learnings. However, it omits behavioral details such as whether the operation creates or updates documents, required authentication, or rate limits, which are important for a write tool.

Conciseness: 4/5

The description is well-structured with bullet points and clear sections, but somewhat lengthy. Every sentence adds value, and the first sentence captures the purpose concisely; minor redundancy could be trimmed, but it is effective overall.

Completeness: 4/5

The description covers when to use the tool, content guidelines, parameter semantics, and visibility in search results. It lacks detail on output or side effects, but given the tool's simplicity and the absence of an output schema, it is complete enough for an agent to use correctly.

Parameters: 5/5

The description adds significant meaning to the parameters beyond the input schema: it advises keeping 'title' short and searchable, and explains that 'body' should explain WHY with context. This enriches the schema descriptions, which already have 100% coverage.

Purpose: 5/5

The description clearly states the tool's purpose: 'Share a verified finding back to the docs corpus so the next agent can find it.' It specifies the resource ('finding') and action ('share back'), and distinguishes it from siblings like 'semantic_search' and 'get_document' by detailing when to share findings.

Usage Guidelines: 5/5

Explicit 'WHEN to use' and 'WHEN NOT to use' sections provide clear guidance, including examples (undocumented quirks, working combos) and alternatives ('verify with semantic_search/grep_docs first'). This leaves no ambiguity about appropriate usage.
