
Context Engine MCP Server

by Kirachon

get_context_for_prompt

Retrieves relevant codebase context including file summaries, code snippets, related files, and past session memories to understand features, prepare for changes, and explore unfamiliar code.

Instructions

Get relevant codebase context optimized for prompt enhancement. This is the primary tool for understanding code and gathering context before making changes.

Returns:

  • File summaries and relevance scores

  • Smart-extracted code snippets (most relevant parts)

  • Related file suggestions for dependency awareness

  • Relevant memories from previous sessions (preferences, decisions, facts)

  • Token-aware output (respects context window limits)

Use this tool when you need to:

  • Understand how a feature is implemented

  • Find relevant code before making changes

  • Get context about a specific concept or pattern

  • Explore unfamiliar parts of the codebase

  • Recall user preferences and past decisions

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Description of what you need context for (e.g., "authentication logic", "database schema", "how user registration works") | |
| max_files | No | Maximum number of files to include (max: 20) | 5 |
| token_budget | No | Maximum tokens for the entire context. Adjust based on your context window. | 8000 |
| include_related | No | Include related/imported files for better context | true |
| min_relevance | No | Minimum relevance score (0-1) to include a file | 0.3 |
| bypass_cache | No | Bypass caches (forces fresh retrieval; useful for benchmarking/debugging). | |
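The schema above can be exercised with a small client-side helper. This is a hypothetical sketch, not part of the server: the function name `build_context_request` is ours, and the defaults and bounds (max_files up to 20, min_relevance in 0-1) are taken directly from the table.

```python
# Hypothetical sketch: building and validating arguments for a
# get_context_for_prompt call, based solely on the input schema above.
# Only the defaults and bounds come from the schema; the helper is ours.

def build_context_request(query, max_files=5, token_budget=8000,
                          include_related=True, min_relevance=0.3,
                          bypass_cache=False):
    """Return a validated argument dict for the tool call."""
    if not query or not query.strip():
        raise ValueError("query is required, e.g. 'authentication logic'")
    if not 1 <= max_files <= 20:
        raise ValueError("max_files must be between 1 and 20")
    if not 0.0 <= min_relevance <= 1.0:
        raise ValueError("min_relevance must be between 0 and 1")
    return {
        "query": query,
        "max_files": max_files,
        "token_budget": token_budget,
        "include_related": include_related,
        "min_relevance": min_relevance,
        "bypass_cache": bypass_cache,
    }

args = build_context_request("how user registration works", max_files=10)
```

Validating locally before the call avoids a round trip to the server when an agent picks an out-of-range value.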
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool returns multiple structured outputs (file summaries, snippets, related files, memories), respects token limits, and is optimized for prompt enhancement. However, it doesn't mention potential side effects like performance impact or caching behavior beyond the 'bypass_cache' parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, returns, usage guidelines) and uses bullet points for readability. Every sentence adds value, though it could be slightly more concise by integrating the 'Returns' list more tightly with the purpose statement.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema, no annotations), the description provides strong context on purpose, usage, and behavioral outputs. It covers what the tool returns and when to use it, but lacks details on error handling or output format specifics, which would be helpful for a tool with no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all six parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. This meets the baseline of 3 since the schema does the heavy lifting, but no extra semantic context is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Get relevant codebase context optimized for prompt enhancement' and specifies it's 'the primary tool for understanding code and gathering context before making changes.' It distinguishes from siblings like 'get_file' (single file retrieval) and 'semantic_search' (general search) by emphasizing comprehensive, optimized context gathering for prompt enhancement.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit 'Use this tool when you need to:' guidelines with five specific scenarios (e.g., 'Understand how a feature is implemented,' 'Find relevant code before making changes'). This clearly differentiates when to use this tool versus alternatives like 'get_file' for single files or 'semantic_search' for general searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
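The review contrasts this tool with sibling tools named 'get_file' and 'semantic_search'. A minimal sketch of the routing an agent might apply, assuming those sibling tools exist as described (the heuristic rules here are illustrative, not part of the server):

```python
# Hypothetical routing sketch. The sibling tool names (get_file,
# semantic_search) come from the review text; the rules are ours.

def choose_tool(task: str) -> str:
    """Pick a context tool for a task description (illustrative heuristic)."""
    task = task.lower()
    if task.startswith("read file ") or task.endswith((".py", ".ts", ".go")):
        return "get_file"            # a single, known file
    if "find all" in task or "search" in task:
        return "semantic_search"     # broad, general search
    return "get_context_for_prompt"  # default: rich context gathering

print(choose_tool("understand how user registration works"))
```

The explicit "use when" list in the description is what makes this kind of default-to-context routing reliable.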

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Kirachon/context-engine'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.