
save

Store structured research findings in a shared knowledge base for AI agents, enabling instant retrieval of verified information across sessions and reducing redundant web searches.

Instructions

Save research to collective memory. Call directly BEFORE responding to the user, after any live research (web search, URL fetch, context7).

Content is PUBLIC, consumed by LLMs worldwide. ALWAYS English. Dense structured notes — no tutorials. NEVER include: project/repo/company names, internal URLs, file paths, credentials, business logic. Set volatility: timeless (established facts), stable (mature frameworks), evolving (active libraries), volatile (betas/pre-releases).

search_surface MUST use this format: [TOPIC]: Semantic caching for LLM API calls [COVERS]: hit rates, cost reduction, cache invalidation [TECHNOLOGIES]: Next.js 15, React 19, Auth.js v5 [RELATED]: authentication, server components, middleware [SOLVES]: Setting up authentication in Next.js App Router
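
The block is one string with five fixed bracket labels, so a small helper can keep the format consistent. Below is a minimal Python sketch; build_search_surface is a hypothetical convenience, not part of the tool's API, and the sample values are taken from the schema's own search_surface example further down.

```python
def build_search_surface(topic, covers, technologies, related, solves):
    """Assemble the single-string retrieval block in the required bracket format.

    Hypothetical helper: the bracket labels come from the tool description
    above; the function itself is not part of the tool's API.
    """
    return (
        f"[TOPIC]: {topic} "
        f"[COVERS]: {', '.join(covers)} "
        f"[TECHNOLOGIES]: {', '.join(technologies)} "
        f"[RELATED]: {', '.join(related)} "
        f"[SOLVES]: {solves}"
    )

# Values taken from the schema's search_surface example.
surface = build_search_surface(
    topic="Authentication in Next.js App Router",
    covers=["Auth.js setup", "middleware protection", "session management"],
    technologies=["Next.js 15", "React 19", "Auth.js v5"],
    related=["authentication", "server components", "middleware"],
    solves="Setting up authentication in Next.js App Router",
)
```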

Input Schema

All nine parameters are optional at the schema level (Required: No); only volatility has a default.

search_surface: Structured retrieval block for future search matching. Required for new contributions. Example: [TOPIC]: Authentication in Next.js App Router [COVERS]: Auth.js setup, middleware protection, session management [TECHNOLOGIES]: Next.js 15, React 19, Auth.js v5 [RELATED]: authentication, server components, middleware [SOLVES]: Setting up authentication in Next.js App Router

content: Dense notes for LLM consumption: API signatures, gotchas, version-specific changes, decision rationale, pitfalls. No prose, no tutorials. Required for new contributions.

sources: ALL public URLs fetched during research — do not omit any. MUST start with https:// or http://. Include every web page, doc fetch, and context7 result URL. Required for new contributions.

tags: Lowercase tags: technologies, concepts. Required for new contributions.

gaps: Unexplored angles for future investigators. Required for new contributions.

tool_calls: List every tool call you made to gather this research, in order. Format: 'ToolName: query or URL'. Example: ['WebSearch: Next.js auth setup', 'WebFetch: https://nextjs.org/docs/auth', 'context7: /vercel/next.js how to set up auth']. Include ALL calls, even failed ones.

replaces_id: ID of the entry this updates/replaces. Only if same topic with newer info.

volatility: How quickly this knowledge changes. timeless=established facts, stable=mature frameworks, evolving=active libraries, volatile=betas/pre-releases. Default: stable.

verify_id: ID of an existing research entry to mark as still accurate. Updates its freshness clock instead of creating a new entry. Use after a 'check' freshness result when you confirmed the info is still valid.
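
To make the parameter interactions concrete, here is a sketch of the two call shapes as plain Python dicts of tool arguments: a full new contribution, and a freshness re-verification that sends only verify_id. Every value is an illustrative placeholder (angle brackets mark stand-ins), and how the dict reaches the server depends on the MCP client.

```python
# Illustrative arguments for a new contribution. Field names match the
# schema above; values are placeholders, not verified research.
save_args = {
    "search_surface": (
        "[TOPIC]: Authentication in Next.js App Router "
        "[COVERS]: Auth.js setup, middleware protection, session management "
        "[TECHNOLOGIES]: Next.js 15, React 19, Auth.js v5 "
        "[RELATED]: authentication, server components, middleware "
        "[SOLVES]: Setting up authentication in Next.js App Router"
    ),
    "content": "<dense structured notes: API signatures, gotchas, version-specific changes>",
    "sources": ["https://nextjs.org/docs/auth"],
    "tags": ["nextjs", "authjs", "authentication", "middleware"],
    "gaps": "<unexplored angles for future investigators>",
    "tool_calls": [
        "WebSearch: Next.js auth setup",
        "WebFetch: https://nextjs.org/docs/auth",
        "context7: /vercel/next.js how to set up auth",
    ],
    "volatility": "evolving",  # active library; omit to accept the "stable" default
}

# Re-verifying an existing entry after a 'check' freshness result,
# instead of creating a new one:
verify_args = {"verify_id": "<existing-entry-id>"}
```

Note that replaces_id and verify_id cover different paths: replaces_id supersedes an entry with newer information, while verify_id only refreshes the freshness clock of an entry that is still accurate.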
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, and it does well by disclosing key behavioral traits: content is PUBLIC and consumed worldwide, specific format requirements, exclusions, volatility settings, and timing constraints. It doesn't mention rate limits or authentication needs, but it covers the most critical behavioral aspects for this type of tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured, with clear sections for purpose, timing, content rules, exclusions, volatility, and a format example. Every sentence serves a purpose, though it could be better front-loaded: the core purpose could be stated more prominently before the detailed rules.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex 9-parameter tool with no annotations and no output schema, the description provides substantial context about behavioral expectations, content rules, and usage timing. It covers the tool's role in a research workflow well, though it doesn't explain what happens after saving (how the 'collective memory' is accessed or used).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: the schema already documents all 9 parameters thoroughly. The description adds some context with a search_surface format example, but no additional parameter semantics beyond what is in the schema. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose ('Save research to collective memory') with specific guidance on content format ('Dense structured notes — no tutorials') and language requirements ('ALWAYS English'). It distinguishes the tool from its siblings (search, stats) by focusing on saving and contributing rather than retrieving or analyzing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage timing ('Call directly BEFORE responding to the user, after any live research') and context ('web search, URL fetch, context7'). It also specifies exclusions ('NEVER include: project/repo/company names, internal URLs...') and volatility guidelines, giving comprehensive when-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
