session_note

Read and write session notes to persist design state across turns. Use for plans, decisions, and retrospective notes like failures and learnings.

Instructions

Read / write your own session scratchpad. Persists across turns within one design session ("New Design" resets it). Notes are how state carries from this turn into the next: the next turn loads them as context.

Actions:

  • action: "read" → read({key}) returns the current value (empty string if unset)

  • action: "write" → write({key, value}) replaces the note, or deletes it (pass value: "" to delete)

  • action: "list" → list({}) returns [{key, chars}] for all existing notes
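A minimal in-memory sketch of these action semantics (hypothetical Python mock; the real tool is an MCP call, and the function and dict names here are illustrative only):

```python
# Hypothetical mock of session_note action semantics:
# read returns "" when unset; write with value="" deletes; list returns key + chars.
notes = {}

def session_note(action, key=None, value=None):
    if action == "read":
        return notes.get(key, "")          # empty string if unset
    if action == "write":
        if value == "":
            notes.pop(key, None)           # empty value deletes the note
        else:
            notes[key] = value
        return None
    if action == "list":
        return [{"key": k, "chars": len(v)} for k, v in notes.items()]
    raise ValueError(f"unknown action: {action}")
```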

Slots — FORWARD-LOOKING (what we plan / commit to):

  • plan — this turn's intent + step outline (write at turn start)

  • decisions — locked choices: style picked + reason, accent token, font scale, hero treatment, etc. (write BEFORE jsx)

  • brand — durable brand notes pulled from a project design.md (if user supplied one)

  • todo — TRULY unfinished work for the next turn (omit if everything shipped)

Slots — BACKWARD-LOOKING (what happened, write AT TURN END — AUTO-MERGE on write):

  • failures — tool calls that failed this turn + how you worked around them. Example: "jsx items='stretch' rejected (DSL valid: center|start|end|space-between|baseline); retried with 'center'." If a failure repeats a class you've seen before, name the class.

  • gotchas — validator warnings you noticed but chose not to fix + why. Example: "4 LOW_CONTRAST on nav links (2.5:1) — deliberate for ambient-grey style; revisit if user complains." Also: magic numbers / hand-tuned positions and what motivated them. Example: "Glow ellipses at (-180, 220) / (1060, 70) — placed half outside frame to bleed in."

  • learnings — surprises about this codebase / DSL / Figma API. Example: "radial-gradient(circle at X% Y%) rejected — DSL only takes the simple form. Same trap as CSS-prior bleed elsewhere."

BACKWARD-LOOKING slots auto-merge: writing to failures / gotchas / learnings appends to prior content after a "---" divider — your new value never silently overwrites accumulated retrospective notes. To replace from scratch, first write({value: ""}) to clear, then write again. The data.merged=true flag in the write result confirms when a merge happened.
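The auto-merge rule can be sketched as follows (hypothetical Python; slot names come from the doc, everything else is illustrative):

```python
# Sketch of the auto-merge rule for backward-looking slots:
# writes to these keys append after a "---" divider instead of overwriting.
MERGING_SLOTS = {"failures", "gotchas", "learnings"}
notes = {}

def write_note(key, value):
    if value == "":
        notes.pop(key, None)                          # explicit clear
        return {"merged": False}
    if key in MERGING_SLOTS and key in notes:
        notes[key] = notes[key] + "\n---\n" + value   # append, never overwrite
        return {"merged": True}
    notes[key] = value
    return {"merged": False}
```

Note the clear-then-write path: only after an explicit empty-value write does the next write land fresh with merged=False.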

REQUIRED behavior:

  • First turn: write at least decisions BEFORE any jsx (commit-before-act).

  • Subsequent turns: read decisions and learnings BEFORE any jsx — those slots survive across turns and your conversationHistory won't carry the full content reliably. Your turn's adjacent snapshot also surfaces current notes, but explicit read proves you considered them.

  • Every turn end: write at least ONE of failures / gotchas / learnings if ANY of these happened this turn:

      • a tool call returned an error

      • a tool call returned warnings you chose not to fix

      • you hand-tuned a coordinate / size / color away from a value the model would have picked

      • you found a DSL behavior that surprised you

    "All clean, no carry-over" is almost never accurate — at least one backward slot belongs.

  • jsx that uses a color / font / size should round-trip through decisions (token traceability).
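The required turn cycle can be sketched as a skeleton (hypothetical Python; run_turn, produce_jsx, and the slot values are illustrative stand-ins for the agent's actual tool calls):

```python
# Sketch of the commit-before-act / write-at-turn-end cycle.
def run_turn(session_note, first_turn, produce_jsx):
    if first_turn:
        # First turn: decisions written BEFORE any jsx.
        session_note(action="write", key="decisions",
                     value="Style: fintech-dark.\nAccent: #3B82F6.")
    else:
        # Subsequent turns: re-read durable slots before any jsx;
        # conversationHistory may not carry their full content.
        session_note(action="read", key="decisions")
        session_note(action="read", key="learnings")
    errors = produce_jsx()  # stand-in for the actual jsx tool calls
    if errors:
        # Turn end: any failure this turn belongs in a backward-looking slot.
        session_note(action="write", key="failures", value="\n".join(errors))
```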

Examples:

  session_note({action: "write", key: "decisions", value: "Style: fintech-dark.\nAccent: #3B82F6 (style.accent — NOT indigo).\nFont: Space Grotesk display / Inter body.\nHero treatment: split (overrides anatomy VERTICAL — desktop convention).\nH1 size: 32 (style.display)."})

  session_note({action: "write", key: "failures", value: "jsx #6 align='stretch' rejected (DSL valid: center|start|end|space-between|baseline) — retried with 'center'. Same CSS-prior class as gradient: model has CSS values that DSL doesn't accept."})

  session_note({action: "write", key: "gotchas", value: "8 LOW_CONTRAST warnings on nav links + secondary CTA (2.5:1 against dark bg). Left as-is — user prompt didn't require WCAG pass; flagging in case next iteration tightens contrast."})

  session_note({action: "write", key: "learnings", value: "layoutPositioning='absolute' children render correctly inside auto-layout frame, but parent needs clipsContent=true to suppress overflow on big glow ellipses."})

  session_note({action: "read", key: "decisions"})

  session_note({action: "list"})

Input Schema

  • action (required): read | write | list

  • key (optional): Note key. Required for read/write. Recommended slots: plan, decisions, brand, todo.

  • value (optional): Markdown body for write. Pass "" to delete. Ignored for read/list.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully bears the burden of behavioral disclosure. It explains persistence across turns, auto-merge behavior for backward-looking slots, deletion semantics, and the merged flag on writes. This leaves no ambiguity about tool behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Actions, Slots, Backward-looking slots, Required behavior, Examples) and front-loaded with the core purpose. Though lengthy, every part earns its place given the tool's complexity; it could be slightly shorter but remains efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (three actions, slot conventions, auto-merge, turn-cycle requirements, examples) and the absence of an output schema, the description covers all necessary behavioral aspects. It ensures an agent can correctly invoke the tool in any scenario, including edge cases like deletion and auto-merge.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema coverage is 100%, the description adds significant meaning beyond the schema: it defines the three actions with expected parameters, explains slot conventions (plan, decisions, etc.), details auto-merge mechanics, and provides examples. This greatly enriches semantic understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Read / write your own session scratchpad.' It differentiates from sibling tools by emphasizing its role as a persistent state carrier across turns. The three actions (read, write, list) are explicitly defined, making the purpose unmistakable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit, actionable usage guidelines: first turn, write decisions before jsx; subsequent turns, read decisions and learnings before jsx; every turn end, write at least one backward-looking slot. It also covers edge behavior (clearing a slot before overwriting it) and offers examples of correct invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
