Glama

session_summary

Generate end-of-session snapshots for AI coding agents by capturing recent activity, mature suggestions, and store statistics to support session logging and memory updates.

Instructions

End-of-session snapshot: what was learned, what's mature, and housekeeping.

    Call this at the end of an agent session to get a one-call overview
    suitable for a session log or memory append: recent activity (last
    24h), top mature suggestions, and overall store stats.

    Side effect: also runs consolidate() and rebuilds the FTS search
    index. If you want a pure read-only summary, use stats() + suggest()
    separately.

    Args:
        project: Project fingerprint to scope the summary. Empty string
            auto-detects from cwd (recommended).

    Returns:
        Dict with keys: "session" (patterns_last_24h + recent list of
        up to 10), "suggestions" (count + top 5 by confidence),
        "stats" (full stats() payload), "consolidation"
        (promotion counts from the consolidate() call).
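Based on the return structure described in the docstring, a consumer might render a one-line log entry from the payload like this. This is a sketch: the top-level keys and "patterns_last_24h", "recent", and "count" come from the docstring above, while the helper name, the "top" and "promoted" keys, and the example values are assumptions.

```python
def summarize_payload(payload: dict) -> str:
    """Render a one-line log entry from a session_summary result (sketch)."""
    session = payload.get("session", {})
    suggestions = payload.get("suggestions", {})
    return (
        f"{session.get('patterns_last_24h', 0)} patterns in last 24h, "
        f"{len(session.get('recent', []))} recent entries, "
        f"{suggestions.get('count', 0)} mature suggestions"
    )

# Illustrative payload only; real values come from the tool call.
example = {
    "session": {"patterns_last_24h": 3, "recent": [{"id": 1}]},
    "suggestions": {"count": 2, "top": []},
    "stats": {},
    "consolidation": {"promoted": 0},
}
print(summarize_payload(example))
```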
    

Input Schema

    Name       Required    Description    Default
    project    No          (none)         (none)

Output Schema

No arguments
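Since the schema leaves the single parameter undocumented beyond its name, a concrete call may help. With MCP's JSON-RPC framing, a client invocation of this tool looks roughly like the sketch below; the tool name comes from this page, and the empty-string project triggers cwd auto-detection per the docstring.

```python
import json

# A minimal MCP tools/call request for this tool (sketch; the
# JSON-RPC framing follows the MCP specification).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "session_summary",
        "arguments": {"project": ""},  # empty string -> auto-detect from cwd
    },
}
print(json.dumps(request, indent=2))
```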

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels at behavioral disclosure. It reveals important side effects ('also runs consolidate() and rebuilds the FTS search index'), explains the auto-detection behavior of the project parameter, and describes the comprehensive return structure. This goes well beyond what the minimal input schema provides.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and front-loaded, with every sentence earning its place. It begins with the core purpose, then provides usage guidelines, reveals side effects, documents parameters, and describes returns, all in a logical flow without wasted words. The bullet-like formatting enhances readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (side effects, comprehensive returns) and the presence of an output schema, the description provides exactly what's needed. It explains the tool's comprehensive nature, side effects, parameter behavior, and return structure without duplicating what the output schema will specify. This is complete for an end-of-session summary tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description fully compensates by explaining the single parameter's behavior: 'Project fingerprint to scope the summary. Empty string auto-detects from cwd (recommended).' This provides crucial context about the parameter's purpose and default behavior that the schema alone doesn't capture.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('get a one-call overview') and resources ('session log or memory append'), distinguishing it from siblings like stats() and suggest(). It explicitly identifies this as an 'End-of-session snapshot' tool that provides comprehensive session data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Call this at the end of an agent session') and when to use alternatives ('If you want a pure read-only summary, use stats() + suggest() separately'). It clearly distinguishes this from sibling tools by explaining its comprehensive nature versus more focused alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
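The guidance above amounts to a simple decision at session end. Sketched in Python, with a hypothetical client object whose method names mirror the tools mentioned on this page:

```python
def end_of_session(client, want_side_effects: bool) -> dict:
    """Pick the right tool(s) at session end, per the guidance above.

    `client` is a hypothetical wrapper exposing the tools named on this
    page as methods. session_summary also runs consolidate() and rebuilds
    the FTS index, so avoid it when a read-only summary is wanted.
    """
    if want_side_effects:
        # One call: recent activity, top suggestions, stats, plus
        # consolidation and FTS rebuild as side effects.
        return client.session_summary(project="")
    # Pure read-only alternative: combine the focused sibling tools.
    return {"stats": client.stats(), "suggestions": client.suggest()}
```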


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/yakuphanycl/instinct'
