pack_context

Pack project context into a single document for external LLMs using intelligent graph-based selection to fit within token budgets. Choose strategies for relevance, architectural focus, or compact outlines across 68 languages.

Instructions

Pack project context into a single document for external LLMs. Intelligent selection by graph importance, fits within token budget. Better than Repomix for focused context. Strategies: most_relevant (default — feature/PageRank ranked), core_first (PageRank always wins, surfaces architecturally central code), compact (signatures only — drops source bodies, lets outlines cover much more of the repo per token).

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| scope | Yes | Scope: `project` (whole repo), `module` (subdirectory), or `feature` (natural-language query) | — |
| path | No | Subdirectory path (for `module` scope) | — |
| query | No | Natural-language query (for `feature` scope) | — |
| format | No | Output format | markdown |
| max_tokens | No | Token budget | 50000 |
| include | No | Sections to include | outlines + source + routes |
| compress | No | Strip function bodies, keep signatures | true |
| strategy | No | Packing strategy: `most_relevant`, `core_first` (PageRank always wins), or `compact` (drops source bodies, allows much wider outline coverage) | most_relevant |
| include_budget_report | No | Include per-section token breakdown and headroom in the result | false |
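
To make the schema concrete, here is a hedged sketch of the JSON-RPC `tools/call` request an MCP client would send to invoke `pack_context`. The request envelope follows the Model Context Protocol's standard tool-call shape; the argument values (the query string, budget, and strategy choice) are illustrative assumptions, not values from this page.

```python
import json

# Hypothetical tools/call request for pack_context. Parameter names and
# defaults mirror the input schema above; the argument values are made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "pack_context",
        "arguments": {
            "scope": "feature",                   # natural-language scope
            "query": "user authentication flow",  # required for feature scope
            "max_tokens": 30000,                  # tighter than the 50000 default
            "strategy": "compact",                # signatures only, wider coverage
            "include_budget_report": True,        # per-section token breakdown
        },
    },
}

print(json.dumps(request, indent=2))
```

Note that `path` is omitted because it only applies to `module` scope, while `query` is supplied because `feature` scope requires it.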
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does an excellent job describing key behavioral traits: the intelligent selection mechanism ('graph importance', 'PageRank ranked'), token budget constraints, and three distinct packing strategies with clear explanations of how each works. It also mentions the output format options and what gets included/dropped in different strategies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly sized and front-loaded. The first sentence establishes the core purpose, followed by key behavioral details. Every sentence earns its place by adding important information about selection mechanisms, token constraints, comparison to alternatives, and strategy explanations. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, no output schema, no annotations), the description covers the essential context well. It explains the tool's purpose, selection methodology, token constraints, and strategy differences. The main gap is the absence of any information about return values or output structure, which matters precisely because there is no output schema to fall back on. Still, for a tool with this level of parameter complexity, the description provides strong contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds some value by explaining the three strategy options in more detail than the schema's enum descriptions, particularly clarifying what 'core_first' and 'compact' mean. However, it doesn't provide additional semantic context for most parameters beyond what's already documented in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('pack', 'intelligent selection') and resources ('project context', 'single document for external LLMs'). It distinguishes from sibling tools by mentioning 'Better than Repomix for focused context' and differentiates from other context-related tools like get_context_bundle, get_feature_context, or get_domain_context by focusing on packing/selection rather than retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear cues for when to use this tool ('for external LLMs', 'fits within token budget') and names an alternative ('Better than Repomix for focused context'). However, it never states when NOT to use it, and it offers no guidance for choosing between this and sibling context tools such as get_context_bundle or get_feature_context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
