save_note

Store notes by exact name for precise later retrieval. Save structured data, meeting notes, or API documentation with unique identifiers, enabling direct access without semantic search.

Instructions

Save a named note for later retrieval.

Notes are stored by name and can be retrieved exactly with get_note. Unlike memories (which are searched by meaning), notes are accessed by their exact name — like files in a folder.

Use this when:

  • Saving structured data: save_note("api-endpoints", "GET /users, POST /auth, ...")

  • Preserving analysis results: save_note("perf-audit-2024", "P95 latency: 240ms...")

  • Storing meeting notes: save_note("standup-mar-15", "Discussed auth migration...")

  • Attaching work to a session: save_note("findings", "...", session_id="s001")

If a note with the same name already exists, it is overwritten.

Args:

  • name: A short, descriptive name (e.g., "api-auth-flow", "meeting-notes-q1"). Use lowercase with hyphens for consistency.

  • content: The note content. Markdown is supported. No size limit, but keep it focused — use multiple notes for different topics.

  • session_id: Optional session to attach this note to. Notes attached to sessions appear when that session is resumed.

Returns: Confirmation that the note was saved. Returns an error if name or content is empty.

Input Schema

| Name       | Required | Description | Default |
|------------|----------|-------------|---------|
| name       | Yes      |             |         |
| content    | Yes      |             |         |
| session_id | No       |             |         |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |
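Per the schema tables above, a conforming request and response might look like the following. Field names come from the schemas; the concrete values, and the exact wording of the result string, are invented examples, since the schemas document only field names:

```python
# Hypothetical payloads matching the input/output schemas.
# Field names are from the schema tables; values are made-up examples.

input_payload = {
    "name": "standup-mar-15",               # required
    "content": "Discussed auth migration",  # required; markdown allowed
    "session_id": "s001",                   # optional
}

output_payload = {
    "result": "Saved note 'standup-mar-15'",  # required; wording assumed
}

# Quick structural check: required fields are present.
assert {"name", "content"} <= input_payload.keys()
assert "result" in output_payload
```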
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, disclosing overwrite semantics, session attachment behavior, supported markdown formatting, and error conditions (empty name/content). Could be improved by explicitly stating idempotency or persistence guarantees, but covers primary behavioral traits well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with progressive disclosure: summary → conceptual model → usage scenarios → constraints → detailed parameter specs → returns. Every sentence provides unique value; examples are concrete and immediately illustrative without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 0% schema coverage and medium complexity (distinction from memory system), the description is comprehensive: covers purpose, sibling differentiation, parameter documentation (compensating for schema), return value summary (sufficient since output schema exists), and error cases. No significant gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Critical compensation for 0% schema description coverage: the Args section provides rich semantics for all three parameters including naming conventions (lowercase-hyphens), format support (markdown), size guidance, and functional behavior (session attachment). Effectively documents what the schema omits entirely.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with specific verb+resource ('Save a named note') and immediately distinguishes from siblings by contrasting notes (exact name access) with memories (semantic search), explicitly naming `get_note` as the retrieval counterpart.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains an explicit 'Use this when:' section with four concrete scenarios including code examples, clarifies the overwrite behavior ('If a note with the same name already exists, it is overwritten'), and contrasts with memory-based tools (remember/recall) in the sibling set.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
