
GraphRAG Llama Index MCP Server

by T-NhanNguyen

Tool: search

Search a knowledge graph with three modes: keyword lookup for exact terms like tickers, entity connections for relationships, and thematic overview for broad patterns.

Instructions

Search the GraphRAG knowledge base with THREE distinct modes:

CRITICAL: For ticker symbols (ASTS, RKLB, NBIS) or acronyms, ALWAYS use 'keyword_lookup' FIRST. Semantic search (entity_connections/thematic_overview) can MISS exact ticker matches.

Workflow:

  1. Ticker/Acronym Query → Use 'keyword_lookup' to find raw mentions

  2. If found → Extract entity names from results

  3. Then use 'entity_connections' or 'thematic_overview' with full entity names for deeper analysis
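The three-step workflow above can be sketched as pseudo-client code. The `call_tool` helper, its canned responses, and the chunk format are illustrative assumptions, not the real MCP client API or the server's actual output.

```python
import re

# Hypothetical stand-in for an MCP client call to the 'search' tool.
# A real client would send {"query": ..., "mode": ..., "topK": ...} over MCP.
def call_tool(query, mode="entity_connections", topK=10):
    canned = {  # canned responses for illustration only
        "keyword_lookup": ["AST SpaceMobile (NASDAQ: ASTS) reported subscriber growth"],
        "entity_connections": ["AST SpaceMobile -- partner --> AT&T"],
    }
    return canned.get(mode, [])[:topK]

# Step 1: ticker query -> keyword_lookup for raw mentions.
chunks = call_tool("ASTS", mode="keyword_lookup")

# Step 2: extract the full entity name from the returned chunk.
entity = re.search(r"([A-Z][\w ]+?) \(NASDAQ: ASTS\)", chunks[0]).group(1)

# Step 3: deeper analysis with the full entity name.
relations = call_tool(entity + " competitors partners", mode="entity_connections")
```

The point of the two-phase pattern is that step 1 tolerates exact-but-obscure tokens (tickers), while step 3 benefits from a fully spelled-out entity name.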

Mode Selection Guide:

  • 'keyword_lookup': Direct BM25 retrieval for EXACT terms (tickers, acronyms, specific names) → Returns: Raw text chunks containing the literal search term → Use for: ASTS, NYSE:RKLB, "Direct-to-Cell", specific product names

  • 'entity_connections': Find entities and their knowledge graph relationships → Returns: Entities + relationships + supporting chunks → Use for: Company relationships, competitive landscape, partnerships → Example: After finding 'AST SpaceMobile' via keyword_lookup, search 'AST SpaceMobile competitors partners'

  • 'thematic_overview': High-level patterns and trends across corpus → Returns: Broad thematic context → Use for: Industry trends, macro narratives, sector analysis → Example: 'satellite telecommunications market trends'
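The 'keyword_lookup' mode is described as BM25 retrieval. As a rough illustration of why lexical BM25 surfaces exact tickers that embedding-based search can miss, here is a minimal self-contained BM25 scorer. The parameters k1=1.5 and b=0.75 are conventional defaults, and the whitespace tokenizer is a simplification; neither is taken from the server's code.

```python
import math

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score whitespace-tokenized docs against query terms with BM25."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    scores = []
    for doc in tokenized:
        score = 0.0
        for term in query_terms:
            t = term.lower()
            tf = doc.count(t)                              # term frequency in this doc
            df = sum(1 for d in tokenized if t in d)       # document frequency
            idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

docs = [
    "ASTS filed its quarterly report with strong subscriber growth",
    "satellite telecommunications market trends remain positive",
]
scores = bm25_scores(["ASTS"], docs)
# The chunk containing the literal token "ASTS" scores strictly higher.
```

A document with zero occurrences of the query term contributes nothing to the score, which is exactly the hard exact-match behavior the description recommends for tickers.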

Input Schema

  • query (required): Search query string

  • mode (optional, default 'entity_connections'): **IMPORTANT**: Use 'keyword_lookup' for ticker symbols and acronyms. Use 'entity_connections' for relationships. Use 'thematic_overview' for broad patterns.

  • topK (optional, default 10): Number of results
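A minimal sketch of how a client might validate a request and fill in the documented defaults before calling the tool. The defaulting logic is assumed from the schema descriptions above, not taken from the server's implementation.

```python
def build_search_request(query, mode=None, topK=None):
    """Apply documented defaults: mode='entity_connections', topK=10."""
    if not isinstance(query, str) or not query:
        raise ValueError("query is required and must be a non-empty string")
    allowed = {"keyword_lookup", "entity_connections", "thematic_overview"}
    mode = mode or "entity_connections"
    if mode not in allowed:
        raise ValueError("mode must be one of: " + ", ".join(sorted(allowed)))
    return {"query": query, "mode": mode, "topK": topK if topK is not None else 10}
```

For example, `build_search_request("ASTS", mode="keyword_lookup")` yields the request body for the ticker workflow described earlier.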
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the critical workflow for ticker symbols, the risk of semantic search missing exact matches, and what each mode returns (e.g., raw text chunks for keyword_lookup). However, it lacks details on error handling, rate limits, or authentication needs, which are minor gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (critical note, workflow, mode selection guide), and every sentence adds value by explaining usage or behavior. It is appropriately sized for a complex tool with three modes, though the mode explanations repeat some guidance and could be trimmed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (three modes, no annotations, no output schema), the description is mostly complete. It covers purpose, usage, behavioral traits, and parameter semantics effectively. However, it lacks information on output format details (beyond high-level returns like 'raw text chunks') and potential limitations or errors, which would enhance completeness for a search tool without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds significant value by explaining the semantics of the 'mode' parameter in detail (e.g., keyword_lookup for exact terms, entity_connections for relationships), providing examples and use cases that go beyond the schema's enum descriptions. It clarifies the practical implications of mode selection, though it doesn't add much for 'query' or 'topK' beyond what the schema already covers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches a GraphRAG knowledge base with three distinct modes, specifying the exact functionality (searching with keyword_lookup, entity_connections, and thematic_overview). It distinguishes itself from siblings by focusing on search operations rather than exploration or statistics gathering, making the purpose specific and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use each mode, including a critical workflow for ticker symbols/acronyms (use keyword_lookup first), and clear examples for each mode. It explicitly states alternatives within the tool itself (the three modes) and gives detailed when/when-not instructions, such as avoiding semantic search for exact ticker matches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
