prepare_context

Generates focused code context by analyzing tasks to identify relevant files, symbols, and hotspots within a token budget, replacing manual repository indexing and filtering steps.

Instructions

The recommended tool for coding tasks. Give it your task and get back token-budgeted context with the right files, symbols, and hotspots, all in one call. Proven +18.6% file-prediction improvement (p=0.049*, n=45) and a 95% helpful rate. Use this instead of calling index_repo → focus → blast_radius manually.

task: describe what you're working on. Two modes are auto-selected:
  - PR/commit titles ("Merge pull request #123 from org/fix-auth-bug",
    "fix: prevent null pointer in handler", "Fix teardown callbacks (#5928)")
    → keyword extraction from branch name → per-keyword symbol focus → KEY FILES list
    → proven +7% file prediction improvement on real PRs (canonical n=159, p=0.035*)
  - General coding tasks ("add pagination to user list", "refactor database layer")
    → fuzzy symbol search → overview fallback if no match
task_type: optional hint. "changelocal" forces the keyword-extraction path regardless
           of task format; also accepts "debug", "feature", "refactor", and "review".
max_tokens: total token budget for the response (default 6000)
exclude_dirs: comma-separated directory prefixes to skip
baseline_predicted_files: optional list of files already predicted by the model
  (for adaptive injection). Two skip conditions:
  1. If len(baseline) ≥ 3 → returns "" (with 3+ predictions the model is highly confident;
     injected context disagrees more often than it helps). Evidence: falcon bl=1.000 with 3
     correct predictions → av2 without this guard injected anyway → F1 dropped 1.0→0.5
     (commits 988960b/d4eb3c8).
  2. If overlap(baseline ∩ KEY FILES) ≥ 50% → returns "" (model already knows the files).
  Otherwise: returns full context (model needs the structural graph bridge).
  Bench (canonical): python3 -m bench.changelocal.analyze --canonical --conditions baseline,tempograph_adaptive
  Canonical result (n=159 Python+JS): +6.9% F1 (p=0.035*). Cost: 2× inference for ~37% of tasks.
precision_filter: if True, skip context when >4 key files are found (topic too broad).
  Canonical bench: python3 -m bench.changelocal.analyze --canonical --conditions baseline,tempograph_precision
  Canonical result (n=159 Python+JS): +3.7% F1 (p=0.21, ns). Default False: plain tempograph
  (+6.0%) outperforms precision_filter on the canonical corpus. Enable only for high-baseline repos.
definition_first: if True, when a keyword produces too-broad focus (>10 files) and no path match,
  fall back to the *defining file* of the top-ranked symbol (requires score≥10 and ≤2 defining files).
  Handles "redirect" → flask/helpers.py instead of injecting nothing.
  Phase 5.31 bench: +16.0% F1 (p=0.012*, n=93). Default True (enabled).
output_format: "text" (default) or "json" for structured response
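As a minimal sketch, the two baseline_predicted_files skip conditions above can be expressed as follows. This is an illustrative reimplementation, not TempoGraph's code; the function name and the assumption that the overlap ratio is measured against the size of the baseline list are mine:

```python
def should_skip_injection(baseline_files, key_files):
    """Illustrative sketch of the two documented skip conditions.

    Not TempoGraph's actual implementation; the overlap denominator
    (size of the baseline list) is an assumption.
    """
    # Condition 1: 3+ baseline predictions means the model is already
    # confident, so injected context disagrees more than it helps.
    if len(baseline_files) >= 3:
        return True
    # Condition 2: at least 50% of the baseline predictions already
    # appear in the KEY FILES list, so the model already knows the files.
    if baseline_files:
        overlap = len(set(baseline_files) & set(key_files)) / len(baseline_files)
        if overlap >= 0.5:
            return True
    # Otherwise the model needs the structural graph bridge: inject.
    return False
```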

Returns: overview summary + focused context + KEY FILES + hotspot warnings,
all within the token budget. JSON format adds `key_files` (parsed list) and `injected` (bool).
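For illustration, a call requesting the JSON format might be assembled like this. The client object and call_tool name are placeholders for whatever MCP client you use, not part of this tool's API; only the parameter names and defaults come from the documentation above:

```python
# Hypothetical request payload for prepare_context. Parameter names and
# defaults come from the docs above; the client API is a placeholder.
request = {
    "repo_path": "/path/to/repo",
    "task": "fix: prevent null pointer in handler",
    "task_type": "changelocal",   # force the keyword-extraction path
    "max_tokens": 6000,           # default token budget
    "output_format": "json",      # adds key_files (list) and injected (bool)
}
# result = client.call_tool("prepare_context", request)
# result["key_files"] -> parsed KEY FILES list
# result["injected"]  -> False when a skip condition suppressed injection
```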

Input Schema

Name                     | Required | Description                                                     | Default
-------------------------|----------|-----------------------------------------------------------------|--------
repo_path                | Yes      | Path to the repository to analyze                               |
task                     | Yes      | Description of the task you are working on                      |
task_type                | No       | Hint: "changelocal", "debug", "feature", "refactor", "review"   |
max_tokens               | No       | Total token budget for the response                             | 6000
exclude_dirs             | No       | Comma-separated directory prefixes to skip                      |
baseline_predicted_files | No       | Files already predicted by the model (adaptive injection)       |
precision_filter         | No       | Skip context when more than 4 key files are found               | False
definition_first         | No       | Fall back to the defining file of the top-ranked symbol         | True
output_format            | No       | "text" or "json"                                                | text

Output Schema

Name   | Required | Description                                                        | Default
-------|----------|--------------------------------------------------------------------|--------
result | Yes      | Overview summary, focused context, KEY FILES, and hotspot warnings |
