cachly — AI Cognitive Brain

knowledge_decay

Evaluate confidence in learned lessons using decay scores (0–100%) based on age, recall frequency, and outcome. Identify trustworthy lessons and those needing re-validation before refactoring.

Instructions

Confidence scoring for every lesson in your Brain — because old knowledge rots. Computes a decay score (0–100%) per lesson based on age, recall frequency, and outcome. Lessons recalled recently score high. Lessons from 90 days ago never recalled score low. Returns a ranked list with visual confidence bars: "████░░░░ 40%". Use this before a big refactor to know which lessons to trust and which to re-validate.
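The scoring behavior described above can be sketched in Python. The decay constants, weights, and the outcome term below are illustrative assumptions, not cachly's actual implementation; only the inputs (age, recall frequency, outcome) and the 0–100% bar output come from the description.

```python
def decay_score(age_days: float, recalls: int, last_recall_days: float,
                success_rate: float) -> float:
    """Illustrative confidence score (0-100) combining age, recall
    frequency, and outcome. Constants here are assumptions."""
    age_factor = 0.5 ** (age_days / 90)              # halve confidence every ~90 days
    recall_factor = min(1.0, 0.4 + 0.15 * recalls)   # frequent recall boosts trust
    recency_factor = 0.5 ** (last_recall_days / 30)  # recent recall counts extra
    outcome_factor = success_rate                    # fraction of successful outcomes
    score = 100 * age_factor * max(recall_factor, recency_factor) * outcome_factor
    return round(min(100.0, score), 1)

def confidence_bar(score: float, width: int = 8) -> str:
    """Render a score as the visual bar format the tool returns."""
    filled = round(width * score / 100)
    return "█" * filled + "░" * (width - filled) + f" {score:.0f}%"
```

Under these assumed constants, a lesson recalled today with several successful recalls scores 100%, while a 90-day-old lesson never recalled drops to 20% — matching the ranking behavior the description promises.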

Input Schema

Name          Required  Description                                                Default
instance_id   Yes       UUID of the cache instance                                 -
min_age_days  No        Only include lessons older than N days (0 = all)           0
show_top      No        Number of entries to return, lowest confidence first       20
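Under this schema, a call's arguments might look like the following sketch; the instance_id value is a placeholder UUID, not a real instance.

```python
# Hypothetical arguments for a knowledge_decay call.
arguments = {
    "instance_id": "123e4567-e89b-12d3-a456-426614174000",  # required: cache instance UUID
    "min_age_days": 30,  # optional: only lessons older than 30 days (default 0 = all)
    "show_top": 10,      # optional: 10 lowest-confidence entries (default 20)
}
```

Omitting the two optional keys falls back to the defaults shown in the table: all lessons, twenty entries.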
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the key behavioral traits: the tool is read-only (computes scores, no side effects), the scoring factors (age, recall frequency, outcome), and the output format (ranked list with visual confidence bars). It does not mention authentication or rate limits, but those are likely handled at the server level.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well structured: it opens with the core purpose, explains the scoring factors, gives examples, and ends with a usage recommendation. It stays concise, and the 'old knowledge rots' analogy adds context without unnecessary fluff. It is clear and efficiently organized, earning a 4.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the input schema complexity (3 params, 1 required) and no output schema, the description fully explains the output format (ranked list with visual confidence bars) and the scoring logic. It addresses typical questions like how age and recall affect scores. The description is self-contained and provides enough information for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value by explaining that 'show_top' returns entries sorted by lowest confidence first and that 'min_age_days' filters by older lessons. It also describes the output format (confidence bars), which is absent from the schema. This goes beyond mere repetition of the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool computes a decay score (0–100%) per lesson based on age, recall frequency, and outcome. It uses specific verbs like 'computes' and 'scores', and names a specific resource: 'every lesson in your Brain'. It distinguishes itself from siblings by its unique focus on knowledge decay, though it doesn't explicitly name alternative tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises using the tool 'before a big refactor to know which lessons to trust and which to re-validate.' This provides clear context and a concrete use case. However, it does not mention when not to use the tool or suggest alternatives, so it loses a point for lack of exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/cachly-dev/cachly-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.