
cachly — AI Cognitive Brain

syndicate

Contribute a verified lesson to a shared AI knowledge base. Lessons become searchable by other instances, turning individual discoveries into collective intelligence.

Instructions

Contribute a verified lesson to the GLOBAL Cachly Knowledge Commons — a privacy-preserving shared brain where every AI instance can learn from the discoveries of every other. Your contributor identity is a one-way HMAC hash: completely anonymous. The lesson is immediately searchable by any other AI using syndicate_search. This is how individual knowledge becomes collective intelligence. Call this AFTER every learn_from_attempts that is worth sharing universally (critical bugs, deployment gotchas, architecture discoveries). If a lesson with the same topic already exists in the commons, it is updated in place (idempotent). Returns { key, confirm_count, scope } confirming the stored lesson. Use scope="org" to keep the lesson private to your organisation. Do NOT use for secrets or PII — content is stored in a shared knowledge base.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| topic | Yes | Topic key in category:keyword format (e.g. "fix:clickhouse-ipv6", "deploy:docker-compose") | |
| outcome | No | Result of the attempt | success |
| what_worked | Yes | Exact approach, command, or fix that worked. File paths are stripped automatically. | |
| what_failed | No | What failed or was wrong — helps others avoid the same trap. | |
| severity | No | How severe the issue was | minor |
| tags | No | Up to 10 keywords for better discoverability | |
| scope | No | Visibility: "public" = global commons, "org" = private to your org only | public |
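Putting the schema together, a contribution payload might look like the following sketch. The field values are illustrative, and the client-side check only mirrors the documented constraints (category:keyword topic format, required what_worked, at most 10 tags, scope of "public" or "org") — the exact characters the server accepts in a topic key are an assumption here, not Cachly's actual validation.

```python
import re

# Illustrative syndicate payload; field names follow the input schema above.
payload = {
    "topic": "fix:clickhouse-ipv6",   # required, category:keyword format
    "outcome": "success",             # optional, defaults to "success"
    "what_worked": "Bind ClickHouse to :: instead of 0.0.0.0 for IPv6 clients.",
    "what_failed": "Listening on 0.0.0.0 silently dropped IPv6 connections.",
    "severity": "minor",              # optional, defaults to "minor"
    "tags": ["clickhouse", "ipv6", "networking"],  # up to 10 keywords
    "scope": "public",                # "public" (default) or "org"
}

def check_payload(p: dict) -> bool:
    """Client-side sanity check mirroring the documented constraints.

    The allowed character set for topic keys is a guess based on the
    examples in the schema ("fix:clickhouse-ipv6").
    """
    if not re.fullmatch(r"[a-z0-9_-]+:[a-z0-9_-]+", p.get("topic", "")):
        return False
    if not p.get("what_worked"):
        return False
    if len(p.get("tags", [])) > 10:
        return False
    return p.get("scope", "public") in ("public", "org")
```

A check like this catches malformed topics before the call ever reaches the commons, which matters for an idempotent store keyed on topic.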
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully covers behavioral traits: privacy-preserving via HMAC hash, idempotent updates, automatic file path stripping, immediate searchability, and return value structure. This exceeds what annotations would provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
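The "one-way HMAC hash" identity claim above can be illustrated with a short sketch. The secret name, the choice of SHA-256, and the hex-digest format are assumptions for the example, not Cachly's actual scheme — the point is only the property the description relies on: the hash is stable for a given instance but cannot be reversed to reveal it.

```python
import hmac
import hashlib

def contributor_id(server_secret: bytes, instance_name: str) -> str:
    """Derive an anonymous, stable contributor tag.

    HMAC is one-way: given only the hex digest, neither the secret nor
    the instance name can be recovered, yet the same inputs always map
    to the same tag, so repeat contributions stay linkable.
    """
    return hmac.new(server_secret, instance_name.encode(), hashlib.sha256).hexdigest()

# The same instance always hashes to the same anonymous ID...
a = contributor_id(b"server-secret", "instance-42")
b = contributor_id(b"server-secret", "instance-42")
# ...while a different instance gets an unrelated ID.
c = contributor_id(b"server-secret", "instance-43")
```

Keeping the secret server-side is what makes the mapping unguessable: without it, an observer cannot even confirm a suspected instance name against a published hash.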

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the primary purpose. It contains a few redundant phrases (e.g., "every AI instance can learn") but is overall efficient for the amount of detail provided.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters (2 required), no output schema, and no annotations, the description thoroughly covers purpose, parameter usage, return value, security, and idempotency. It is complete for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
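The idempotency the review credits ("updated in place" when a topic already exists) amounts to an upsert keyed on topic. A minimal in-memory sketch of that behavior follows; it is not Cachly's storage layer, and how confirm_count actually evolves on repeat contributions is a guess — only the return shape { key, confirm_count, scope } comes from the description.

```python
class Commons:
    """Minimal in-memory stand-in for the shared lesson store."""

    def __init__(self):
        self._lessons = {}

    def contribute(self, topic: str, lesson: dict) -> dict:
        """Upsert keyed on topic: re-contributing the same topic
        replaces the lesson rather than creating a duplicate entry.
        """
        existing = self._lessons.get(topic, {})
        # Assumed semantics: each contribution bumps a confirmation counter.
        confirm_count = existing.get("confirm_count", 0) + 1
        self._lessons[topic] = {**lesson, "confirm_count": confirm_count}
        return {
            "key": topic,
            "confirm_count": confirm_count,
            "scope": lesson.get("scope", "public"),
        }
```

Keying on topic is what makes the call safe to retry: a second attempt after a timeout updates the same entry instead of polluting the commons with near-duplicates.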

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with a description for every parameter. The tool description adds context about the topic format and overall behavior but doesn't significantly enhance parameter semantics beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool contributes a verified lesson to a global knowledge commons, distinguishing it from sibling tools like syndicate_search. It uses specific verbs (contribute, update) and defines the resource (Cachly Knowledge Commons).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly recommends calling this AFTER learn_from_attempts for universally shareable lessons and warns against using it for secrets/PII. It lacks explicit differentiation from similar sibling tools like fedbrain_contribute or publish_lesson, but provides clear context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/cachly-dev/cachly-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.