cachly — AI Cognitive Brain
Server Configuration
Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| CACHLY_JWT | Yes | Your Keycloak JWT from cachly.dev/settings | |
| CACHLY_API_URL | No | API base URL override for local development | https://api.cachly.dev |
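For illustration, a minimal client-side sketch (assuming the official MCP Python SDK) that launches the server with the variables above and lists its tools. The launch command and package name (`npx -y cachly-mcp`) are placeholders, not confirmed by this page — use the command from the cachly docs or your MCP client configuration.

```python
# Sketch only — the launch command and package name below are assumptions.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="npx",                # placeholder launch command
    args=["-y", "cachly-mcp"],    # placeholder package name
    env={
        "CACHLY_JWT": os.environ["CACHLY_JWT"],      # required: JWT from cachly.dev/settings
        "CACHLY_API_URL": "https://api.cachly.dev",  # optional: override for local dev
    },
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```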
Capabilities
Features and capabilities supported by this server
| Capability | Details |
|---|---|
| tools | {} |
Tools
Functions exposed to the LLM to take actions
| Name | Description |
|---|---|
| list_instances | List all your cachly cache instances with their status and connection details. Read-only. Returns an array of instance objects — each with id, name, tier, status, region, RAM, and redis:// connection string. Returns an empty array if no instances exist. No pagination: all instances are returned in one call (typical accounts have < 20). Use this first to discover instance UUIDs required by get_instance, cache_get, cache_set, and all other cache tools. Use get_instance to retrieve full metadata for a single instance. |
| create_instance | Create a new managed Valkey/Redis cache instance on cachly.dev. Free tier provisions in ~30 seconds. Paid tiers return a Stripe checkout URL. Available tiers: free (25 MB), dev (200 MB, €19/mo), pro (900 MB, €49/mo), speed (900 MB Dragonfly + Semantic Cache, €79/mo), business (7 GB, €199/mo). |
| get_instance | Get full metadata for a specific cache instance: name, tier, status (provisioning / running / paused), region, RAM limit, Redis connection string, created_at, and expiry. Read-only. Returns an error if the instance_id is not found or belongs to another account. Call list_instances first to discover valid UUIDs. Use get_connection_string instead if you only need the redis:// URL for your app config. |
| get_connection_string | Get the Redis/Valkey connection string (redis:// URL) for a running instance. Use this to configure your application or set environment variables. |
| delete_instance | Permanently delete a cache instance. Deprovisions the Kubernetes workload and removes all data. This action is irreversible. |
| cache_get | Get a value from a running cache instance by key. Returns the stored value (string or deserialized JSON object) or null if the key does not exist or has expired. Read-only — no side effects. Use cache_mget when you need multiple keys in one round-trip. Use cache_exists to check existence without retrieving the value. Use semantic_search when you need fuzzy/vector search across stored values. |
| cache_set | Set a key-value pair in a running cache instance. Overwrites any existing value at the key — not idempotent for new data. Returns "OK" on success; returns an error if the instance_id is invalid or the instance is paused. Value can be a string or a JSON-serialized object. Optionally set a TTL in seconds (omit for no expiry). Use cache_mset instead for setting multiple keys in a single pipeline round-trip. Use cache_stream_set instead for caching LLM token streams (ordered string chunks). A basic read/write sketch follows this table. |
| cache_delete | Permanently delete one or more keys from a running cache instance (uses Redis DEL). This operation is destructive and irreversible — deleted keys cannot be recovered. Deleting a non-existent key is safe and returns 0 for that key (no error). Returns the count of keys that were actually deleted (existing keys only). Use this to explicitly remove stale entries; prefer cache_set with a short TTL for auto-expiring data. Do NOT use this to clear an entire instance — use the dashboard or delete_instance for that. |
| cache_exists | Check whether one or more keys exist in a running cache instance (uses Redis EXISTS). Read-only — no side effects. Returns the count of keys that currently exist (integer 0 to N). If none of the keys exist, returns 0. If all exist, returns the total key count passed in. Duplicate keys in the input array are each counted separately (Redis behavior). Use this to check presence before a cache_get to avoid null handling, or to verify a cache warm-up completed. Use cache_get instead if you also need the value; use cache_ttl if you need expiry info. |
| cache_ttl | Get the remaining time-to-live (TTL) of a key in seconds. Returns -1 if the key exists but has no expiry, -2 if the key does not exist. Read-only — no side effects. Use cache_set with a ttl parameter to set or update the expiry. |
| cache_keys | List keys in a cache instance matching an optional glob pattern (e.g. "user:\*", "session:\*"). Uses SCAN to avoid blocking the server. Returns at most |
| cache_stats | Get real-time stats for a cache instance: memory usage, hit/miss rate, commands/sec, connected clients, keyspace info, and uptime. Read-only — no side effects. The instance_id identifies the target instance (obtain from list_instances). Use this for monitoring, capacity planning, or debugging performance issues — not for reading cached values (use cache_get for that). Use cache_exists or cache_ttl if you only need key-level information. |
| semantic_search | Find cached entries that are semantically similar to a natural-language query. Read-only — no side effects. Returns an array of objects, each with: key, value, similarity_score (0–1), and namespace. Returns an empty array if no entries meet the similarity threshold. Requires OPENAI_API_KEY (or compatible provider) and the Speed/Business tier with CACHLY_VECTOR_URL. Embeddings are computed server-side and never leave Germany (pgvector HNSW index). Example: "find all cached responses about password reset" or "what did we answer about pricing?". Use cache_get for exact key lookup; use smart_recall for brain lessons. |
| detect_namespace | Classify a prompt into one of 5 semantic namespaces using text heuristics. Overhead: <0.1 ms, no embedding required. Useful to understand which namespace cachly will use for a given prompt. Returns one of: cachly:sem:code, cachly:sem:translation, cachly:sem:summary, cachly:sem:qa, cachly:sem:creative. |
| cache_warmup | Pre-warm the semantic cache with a list of prompt/value pairs. For each entry: computes an embedding, checks if a similar entry already exists (similarity ≥ 0.98), and writes new entries to Valkey + pgvector index. Use this to seed FAQ responses, product descriptions, or known-good LLM answers before the first real user traffic. Requires OPENAI_API_KEY. |
| index_project | Index local source files into the cachly semantic cache so AI assistants can use semantic_search to find relevant files instead of re-reading the whole codebase every time. Walks a directory recursively, reads each matching file, and stores a summary + path as a semantic cache entry (prompt = file path + content excerpt, value = relative path). Requires an embedding provider (OPENAI_API_KEY or CACHLY_EMBED_PROVIDER + key). Run once, then re-run after major refactors. TTL=86400 (24h) keeps entries fresh. |
| cache_mset | Set multiple key-value pairs in a single pipeline round-trip. Supports per-key TTL – unlike native MSET. Uses one TCP round-trip for N keys via Redis pipeline. Each item overwrites any existing value for that key. On partial failure the successfully pipelined keys are committed; a per-key error list is returned for any that failed. Returns a summary: { set: N, errors: [...] }. Use cache_set for a single key; use cache_stream_set for large streaming payloads. |
| cache_mget | Retrieve multiple keys in one round-trip using native Redis MGET. Returns values in the same order as the keys array; missing keys are null. |
| cache_lock_acquire | Acquire a distributed lock using Redis SET NX PX (Redlock-lite). Returns a fencing token on success. The lock auto-expires after ttl_ms to prevent deadlocks. Use cache_lock_release to free the lock early. A lock usage sketch follows this table. |
| cache_lock_release | Release a previously acquired distributed lock. Uses a Lua script for atomic release – only deletes the key if the fencing token matches. |
| get_api_status | Check the cachly API health and your authentication status. Returns whether the JWT is valid, your user ID (sub claim), token expiry, and the auth provider (keycloak). Use this to debug connection issues or verify your CACHLY_JWT is correct. |
| remember_context | Save context information to the cache so you can recall it later without re-computing. Perfect for caching: codebase overviews, file summaries, project structure, frequently-accessed data, or "thinking" results like dependency analysis. The AI assistant can use this to avoid re-reading the entire codebase every time. Overwrites any existing value stored under the same key. Returns { key, stored_at, ttl } confirming the saved context. Example: remember_context("project overview", "This is a Next.js app with...") then later: recall_context("project overview"). Use recall_context to retrieve; use list_remembered to see all stored keys. |
| recall_context | Retrieve previously saved context from the cache. Returns the saved content or null if not found. Use this at the START of any task to check if you already have relevant context cached, before doing expensive operations like reading many files. Supports glob patterns: "file:\*" matches all file summaries, "arch\*" matches architecture-related keys. |
| list_remembered | List all cached context entries for this project. Shows what knowledge the AI assistant has already cached, so you can decide whether to recall existing context or refresh it. Returns: key, category, size, TTL remaining, and a content preview. |
| forget_context | Delete one or more cached context entries. Use when context is stale or you want to force a fresh analysis. Supports glob patterns: "file:*" deletes all file summaries. |
| learn_from_attempts | Store a lesson learned from a failed or successful attempt. Call this AFTER completing any non-trivial task (deploy, debug, fix, architecture decision). The lesson will be recalled automatically in future sessions via recall_best_solution. Fields: topic (short slug like "deploy:web"), outcome ("success"\|"failure"), what_worked (what solved it), what_failed (what did NOT work), context (extra details). Supports structured metadata: severity, file_paths (files involved), commands (working commands), tags. Deduplication: if a lesson for this topic already exists, it is updated with full audit trail. Contradiction detection: warns if new outcome conflicts with existing lesson outcome. Confidence: lesson starts at 1.0, decays after 5d (→0.7) and 10d (→0.5) without recall. Example: learn_from_attempts(topic="deploy:api", outcome="success", what_worked="nohup docker compose up -d --build", what_failed="docker compose up hangs on SSH timeout", severity="critical", commands=["nohup docker compose up -d --build"]) |
| recall_best_solution | Recall the best known solution for a topic from past lessons. Call this BEFORE attempting any task that might have been done before. Returns the most recent successful lesson for the topic, with confidence indicator. ⚠️ badge = lesson is >5d old (verify before applying). 🔴 = >10d old (likely stale!). Recalling a lesson resets its confidence clock to 1.0 (marks as recently verified). Example: recall_best_solution(topic="deploy:web") → returns the working deploy command. |
| smart_recall | Semantically search cached context using natural language. Instead of exact key matching, finds context by meaning. Example: smart_recall("how does authentication work") → returns cached auth architecture summary. Falls back to remember_context keys if no semantic match is found. |
| session_start | Single-call session briefing. Call this at the START of every session INSTEAD of multiple separate smart_recall/recall_best_solution calls. Returns: last session summary, recent lessons sorted by recency, relevant lessons for your focus area, open failures (topics with only failure outcomes), brain health stats, team telepathy (what teammates learned this week), predictive pre-warnings (if your focus area has known failure patterns), and memory crystals (compressed wisdom from old sessions). Also saves a session start marker so session_end can compute duration. A session-loop sketch follows this table. |
| session_end | Save a session summary when you finish working. Records what was accomplished, files changed, and lesson count. The next session_start will show this summary as "Last session". Call this when ending a work session, before going idle, or before summarizing. Ambient Learning: if workspace_path is provided, reads git log since session start and auto-learns from commits. |
| session_handoff | Save a detailed handoff for the NEXT chat window / session. Stores: current progress, TODO list (done + remaining), changed files with descriptions, instructions for the next assistant, and any incomplete work. The next session_start automatically includes this handoff so the new window knows EXACTLY what happened and what remains. Call this BEFORE closing a chat window, especially if work is incomplete. This prevents the "continue" problem where new windows lose context, skip tasks, or produce broken code. |
| session_ping | Lightweight checkpoint — call this every ~5 tool calls or whenever you complete a significant step. Stores the current task + files touched so session_start on the NEXT provider can reconstruct what happened even if session_end was never called (e.g. Claude context limit hit, window crashed). This solves the provider-switching problem: Claude → Copilot → Cursor all see the same last checkpoint. Extremely fast — one Redis SET, no blocking operations. |
| auto_learn_session | Auto-learn from a list of session observations WITHOUT explicit learn_from_attempts calls. Pass what happened (commands run, errors seen, solutions found) and the brain classifies and stores lessons automatically. Use at session_end to capture everything you did, even if you forgot to call learn_from_attempts. Returns a summary of what was auto-stored. |
| sync_file_changes | Associate recent file changes with brain knowledge. Pass a list of changed file paths (from |
| team_learn | Store a lesson in a shared team brain so all team members benefit. Like learn_from_attempts, but REQUIRES an author name for attribution. Shows up in team_recall with "by {author}" so the team knows who learned it. |
| team_recall | Recall lessons from a shared team brain, showing who learned what. Works on any shared instance (all team members using the same instance_id). Shows author, recency, and severity for each lesson. Use this to onboard new team members or find who knows about a topic. |
| team_synthesize | Team Brain Synthesis — merge multiple contributors' lessons on the same topic into one canonical version. When 2+ developers store lessons for the same topic with different details, this proposes the best merged version. Shows: all contributions by author, what worked (consensus), what failed (union), canonical lesson to store. Use this when onboarding new team members or before documenting a process. |
| memory_crystalize | Compress the last 30-50 sessions and auto-learned lessons into a dense Memory Crystal. A crystal is a compact, structured summary of everything the brain learned — grouped by category (deploy, fix, debug, …). Crystals survive session cleanup and appear in session_start once enough sessions have accumulated. Run this monthly or after a big milestone to preserve institutional knowledge. Returns a digest of what was crystallized. |
| roadmap_add | Add a new item to the persistent project roadmap stored in the Brain. Items survive across sessions and editors — the roadmap is always up to date. Use for features, bugs, refactors, or any planned work. Call roadmap_list to see all open items, roadmap_next to get the next actionable item. |
| roadmap_update | Update the status, priority, or details of a roadmap item. Use to move items through the lifecycle: planned → in-progress → done (or blocked/cancelled). Also use to add notes/findings while working on an item. |
| roadmap_list | List all roadmap items, optionally filtered by status, priority, tag, or milestone. Returns items sorted by priority then creation date. Called automatically by session_start to show open work. |
| roadmap_next | Get the single most important next actionable roadmap item. Returns the highest-priority in-progress item first, then planned items, sorted by priority. Call at session start to immediately know what to work on next. |
| brain_doctor | Check the health of your AI Brain and get actionable recommendations. Reports: lesson count, context entries, last session age, open failures, quality score, effective IQ boost, stale index. Returns a prioritized list of issues with fix instructions. |
| global_learn | Store a lesson that applies across ALL your projects (cross-project knowledge). Idempotent: if a lesson with the same topic already exists, it is updated in place — no duplicates are created. Returns a confirmation with the stored lesson key. No rate limits. Global lessons are stored with the prefix cachly:global:lesson: and recalled from any instance via global_recall. Use for tool preferences, personal workflows, platform quirks, and universal gotchas. Example: global_learn(topic="bash:macos-arrays", lesson="Arrays work differently on macOS bash 3.2"). Use learn_from_attempts for project-specific session lessons; use team_learn to share lessons with your team. |
| global_recall | Read-only retrieval of cross-project lessons stored via global_learn. No side effects. Returns a list of matching global lesson objects, each with topic, lesson text, severity, and tags. If no topic is provided, returns all global lessons (up to 50). If topic is provided, returns all lessons whose topic key contains that string (partial match). Use this for lessons that apply universally across all projects (tool quirks, shell gotchas, platform behavior). Use recall_best_solution instead for project-specific lessons; use team_recall for org-scoped lessons. |
| publish_lesson | Publish a lesson to the Cachly Public Brain (anonymized community knowledge base). Published lessons can be imported by other developers via import_public_brain. PII is stripped automatically. Visible under the framework/category tag. Returns { lesson_id, topic, framework, published_at } confirming the publish. Irreversible — once published to the public brain, lessons cannot be deleted via the MCP interface. Use learn_from_attempts or global_learn for private lessons; use syndicate for anonymized global sharing without framework tagging. |
| import_public_brain | Import community lessons from the Cachly Public Brain for a framework. Non-destructive: existing lessons with the same topic key are not overwritten. Returns the count of lessons imported and their topic slugs. Available frameworks: nextjs, fastapi, go, docker, kubernetes, react, typescript, python, rust, laravel, rails, spring. Use this to bootstrap a new brain with battle-tested community knowledge before your first session_start. Use publish_lesson to contribute your own lessons to the Public Brain; use learn_from_attempts for storing lessons from your own sessions. |
| recall_at | Brain Archaeology — see what a lesson looked like at a specific point in time. "What did we know about deployments 3 months ago?" Returns the history of a topic filtered to entries before the given date. Shows how the lesson evolved: failure → partial → success. Also useful to understand WHY old code decisions were made. |
| trace_dependency | Causal Chain — find all lessons that depend on a given prerequisite. "What lessons are affected if node version changes?" When a dependency changes (new version, different provider, new OS), call this to see which lessons need review. Lessons store dependencies via the depends_on field in learn_from_attempts. |
| list_orgs | List your Cachly organizations (team/org plans). Returns each org with plan, seat count, and member info. Org plans (Team €99, Business €299, Enterprise custom) are billed separately from cache tiers. |
| create_org | Create a new Cachly organization for team collaboration. After creation, invite team members with invite_member and upgrade the plan via the billing portal. Org plans: Team (€99/mo, 10 seats), Business (€299/mo, 50 seats), Enterprise (custom). |
| invite_member | Invite a team member to a Cachly organization by email. They will receive an invite email and can join via the dashboard. Roles: owner (full access), admin (manage members + instances), member (read + cache ops). |
| get_org_plan | Get the current org plan, seat usage, and billing info for an organization. Shows: plan name, price, seats used/max, next billing date. To upgrade: use the billing portal URL returned by this tool. |
| setup_ai_memory | One-shot setup of the cachly 3-layer AI Memory system for a project. Layer 1 — Storage: your cachly instance (Valkey, persistent across sessions). Layer 2 — Tools: learn_from_attempts + recall_best_solution + smart_recall (the memory API). Layer 3 — Autopilot: generates a copilot-instructions.md / .github/copilot-instructions.md that instructs any MCP-compatible AI to recall known solutions BEFORE each task and save lessons AFTER — fully automatic, zero manual effort. Returns the copilot-instructions.md content + provider-specific .mcp.json snippet. Optionally writes copilot-instructions.md directly to the project directory. |
| cache_stream_set | Cache a list of string chunks (e.g. LLM token stream) via Redis RPUSH. Each chunk is stored as a separate list element under cachly:stream:{key}. Replay with cache_stream_get. |
| cache_stream_get | Retrieve a previously cached stream as an ordered list of string chunks. Returns null on cache miss (key absent or empty list). Stored under cachly:stream:{key}. |
| memory_consolidate | Cognitive memory consolidation — the weekly garbage collector for your AI Brain. Scans all lessons, detects contradictions (same topic with conflicting outcomes), merges duplicates, flags stale entries (not recalled in 90+ days), and computes a health score. Returns a full consolidation report with conflicts resolved, duplicates merged, and a before/after count. Run weekly or when brain_doctor reports > 20 lessons. Like git gc for knowledge. |
| brain_diff | git log for your AI Brain — see exactly what changed since a point in time. Returns a structured changelog: new lessons added, lessons updated (outcome changed), lessons recalled (hit count increased), and lessons that decayed. Perfect for weekly reviews: "What did my AI learn this week?" Example: brain_diff(instance_id="...", since="7d") → "12 new · 4 updated · 2 stale" |
| causal_trace | Root Cause Analysis through memory: given a problem description, traces the causal chain from root cause through intermediate failures to the current symptom, then surfaces the exact solution that worked before. Read-only — does not modify any stored data. Requires prior learning: brain must have lessons stored via learn_from_attempts or brain_from_git. Returns an ordered chain of concepts with confidence scores plus the matching solution; returns an empty chain with a message if no causal path is found. Example: causal_trace(problem="auth breaks after restart") → "Root: k8s:namespace-terminating → keycloak:jwks-race → Solution: PollUntilContextTimeout 3min". Use recall_best_solution for direct topic lookup, syndicate_search for community patterns, and causal_trace when you have a symptom and need the full root-cause chain. |
| knowledge_decay | Confidence scoring for every lesson in your Brain — because old knowledge rots. Computes a decay score (0–100%) per lesson based on age, recall frequency, and outcome. Lessons recalled recently score high. Lessons from 90 days ago that were never recalled score low. Returns a ranked list with visual confidence bars: "████░░░░ 40%". Use this before a big refactor to know which lessons to trust and which to re-validate. |
| autopilot | Generate a CLAUDE.md / copilot-instructions.md that makes any AI self-managing forever. Writes a configuration file to disk — will overwrite an existing file at the target path. No auth required beyond a valid instance_id. The generated file instructs Claude, Cursor, Copilot, Windsurf, or Gemini to automatically call session_start at window open, learn_from_attempts after every fix, and session_end before closing — without being asked. Returns the generated file content as a string and the path where it was written. Use style="minimal" for just the three hooks; style="full" for the complete ruleset with examples. One command. Every AI. Always on. Use setup_ai_memory instead if you want an interactive one-shot setup that also creates an instance. |
| syndicate | Contribute a verified lesson to the GLOBAL Cachly Knowledge Commons — a privacy-preserving shared brain where every AI instance can learn from the discoveries of every other. Your contributor identity is a one-way HMAC hash: completely anonymous. The lesson is immediately searchable by any other AI using syndicate_search. This is how individual knowledge becomes collective intelligence. Call this AFTER every learn_from_attempts that is worth sharing universally (critical bugs, deployment gotchas, architecture discoveries). If a lesson with the same topic already exists in the commons, it is updated in place (idempotent). Returns { key, confirm_count, scope } confirming the stored lesson. Use scope="org" to keep the lesson private to your organisation. Do NOT use for secrets or PII — content is stored in a shared knowledge base. |
| syndicate_search | Search the GLOBAL Cachly Knowledge Commons for solutions contributed by the entire community. Returns lessons ranked by confirm_count (trust score) then recency. Use this BEFORE debugging any unknown issue — someone in the global brain likely solved it already. Example: syndicate_search(q="clickhouse localhost connection refused") → "fix: use 127.0.0.1 not localhost when IPv6 is disabled · confirmed by 47 instances" |
| syndicate_stats | Show the health of the global Knowledge Commons: total lessons, total confirms, top categories, most-trusted lessons, growth in the last 7 days, and top contributors (anonymous scores). Use for weekly reviews or to explore what the community knows. |
| syndicate_trending | Show the TRENDING lessons in the global Knowledge Commons — those with the fastest confirmation velocity in the last 7 days (confirm_count / age_in_days). Use this at the start of a session or weekly review to see what the community is actively validating. Lessons need at least 2 independent confirms to appear here. |
| brain_search | BM25+ full-text search over ALL brain data: lessons, context entries, session history, CKG nodes, roadmap items. Unlike smart_recall (which focuses on lessons + context), brain_search casts a wider net. Use when smart_recall returns nothing or when you want to find anything the brain knows about a topic. |
| ckg_inspect | Inspect the Causal Knowledge Graph (CKG) for a concept. Shows all typed edges (fixes, requires, co-occurs, causes) with Bayesian confidence scores. Use to understand what the brain knows about a topic and which fixes have the highest confidence. Also shows related concepts via graph traversal. |
| brain_predict | Predictive Pre-fetch Engine (PPE): given your current context (what you're working on), traverses the CKG to predict likely failures and pre-load relevant fixes. Returns top predicted pitfalls + highest-confidence fixes. Call at session_start when working on a specific feature or debugging area. |
| madc_deliberate | Multi-Agent Deliberation Chamber (MADC — Layer 3): When conflicting lessons exist for a topic, run deliberation between 6 specialist expert agents (InfraAgent, AuthAgent, DeployAgent, DatabaseAgent, DebugAgent, APIAgent). Each agent votes based on its domain CKG coverage. Unanimous vote → loser superseded. Split vote → contested flag, causal_trace required before acting. Resolution stored as permanent CKG node. Called automatically when learn_from_attempts detects a contradiction. |
| cls_ingest | Continuous Learning Stream (CLS — Layer 5): Ingest learning signals WITHOUT explicit session_end calls. Sources: git_commit (commit message + files → CKG edges), ci_outcome (green/red build → confirms fix), ide_diagnostic (compiler error + fix pair → instant lesson). Install automatic ingestion with cls_install_hooks — brain learns from every commit and CI run. |
| cls_install_hooks | Output a ready-to-install git post-commit hook + GitHub Actions step for Continuous Learning. Once installed, every git commit and CI build automatically feeds the brain — no session_end needed. Run once per repository. |
| fedbrain_contribute | FedBrain (Layer 6): Contribute a lesson to the global Knowledge Commons with a cryptographic knowledge certificate. Certificate includes: domain fingerprint, confidence, outcome chain hash. Lessons with 10+ independent confirmations become Gold Standard. Context-weighted: other brains with similar tech stacks see your lesson ranked higher in fedbrain_search. |
| fedbrain_search | FedBrain context-weighted search: Search the global commons, weighting results by tech-stack similarity. Brains with matching domain context (Go/Kubernetes/Postgres) rank higher than unrelated stacks. Shows certificate provenance, confirm_count, and Gold Standard badges. |
| fedbrain_confirm | Confirm that a syndicated lesson from the global commons worked for you. Propagates confirmation back — increments confirm_count on the knowledge certificate. Also updates your local CKG confidence. At 10 independent confirmations → Gold Standard. |
| fedbrain_status | Show your FedBrain federation status: lessons contributed to global commons, recent confirmations, Gold Standard lessons, pending propagations. Use to track your brain's global knowledge contribution. |
| crystal_view | Inspect the current Memory Crystal — the compressed wisdom distilled from all past sessions. Shows top patterns per category, lesson count, and when the crystal was last refreshed. Call after session_start when you want to quickly see accumulated wisdom across all past work. |
| compact_recover | Call FIRST after any context limit hit / compaction. Reconstructs full context from Memory Crystal + recent sessions + WIP registry + open failures. Returns a condensed briefing so the new context window starts exactly where the previous one left off — no lost progress. |
| brain_from_git | Bootstrap brain lessons from git history. Parses commit messages and infers fix/feature/refactor lessons automatically. Great for onboarding an existing codebase — run once and the brain instantly knows your team's accumulated patterns. Supports limit and branch options. |
| brain_predict_failures | Pre-deploy failure prediction with probability percentages. Given a change context (e.g. "upgrading Keycloak 21→24" or "deploying Redis 7 to prod"), returns the top likely failure modes ranked by probability, with pre-loaded fixes. Uses CKG causal edges + lesson history. Call before any significant deploy, migration, or infrastructure change. |
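To make the cache tools above concrete, here is a hedged sketch of the basic read/write flow (cache_set, cache_get, cache_exists, cache_mget), reusing a `session` connected as in the configuration sketch earlier. Argument names such as `instance_id`, `key`, `value`, `ttl`, and `keys` are inferred from the descriptions above, not from a published schema.

```python
from mcp import ClientSession

async def cache_roundtrip(session: ClientSession, instance_id: str) -> None:
    # instance_id is a UUID obtained from list_instances.

    # Write a value with a 1-hour TTL (overwrites any existing value at the key).
    await session.call_tool("cache_set", arguments={
        "instance_id": instance_id,
        "key": "user:42:profile",
        "value": '{"name": "Ada"}',
        "ttl": 3600,
    })

    # Read it back; a missing or expired key comes back as null.
    profile = await session.call_tool("cache_get", arguments={
        "instance_id": instance_id,
        "key": "user:42:profile",
    })
    print(profile.content)

    # Check existence without fetching, then fetch several keys in one round-trip.
    await session.call_tool("cache_exists", arguments={
        "instance_id": instance_id,
        "keys": ["user:42:profile", "user:43:profile"],
    })
    await session.call_tool("cache_mget", arguments={
        "instance_id": instance_id,
        "keys": ["user:42:profile", "user:43:profile"],
    })
```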
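The lock tools follow the usual SET NX PX pattern: acquire with a TTL, run the critical section, release with the fencing token. A sketch under the same assumptions; the `token` argument name and the shape of the returned content are guesses based on the fencing-token description.

```python
from mcp import ClientSession

async def run_exclusively(session: ClientSession, instance_id: str) -> None:
    # Acquire a lock that auto-expires after 30 s in case we crash.
    lock = await session.call_tool("cache_lock_acquire", arguments={
        "instance_id": instance_id,
        "key": "locks:nightly-report",
        "ttl_ms": 30_000,
    })
    token = lock.content[0].text  # fencing token (result shape assumed)
    try:
        pass  # critical section: only one holder at a time
    finally:
        # Release early; the server-side Lua script only deletes the key if the token matches.
        await session.call_tool("cache_lock_release", arguments={
            "instance_id": instance_id,
            "key": "locks:nightly-report",
            "token": token,
        })
```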
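Finally, the memory tools are built around a simple session loop: brief at the start, recall before acting, learn after finishing, summarize at the end. A sketch of that loop using the field names quoted in the learn_from_attempts description; the `focus` and `summary` arguments are assumptions.

```python
from mcp import ClientSession

async def brain_session(session: ClientSession, instance_id: str) -> None:
    # 1. Single-call briefing at the start of the session.
    await session.call_tool("session_start", arguments={
        "instance_id": instance_id,
        "focus": "deploy",  # hypothetical focus area
    })

    # 2. Before attempting a task, check whether a solution is already known.
    known = await session.call_tool("recall_best_solution", arguments={
        "instance_id": instance_id,
        "topic": "deploy:api",
    })
    print(known.content)

    # ... do the actual work here ...

    # 3. Record what worked and what failed so future sessions can recall it.
    await session.call_tool("learn_from_attempts", arguments={
        "instance_id": instance_id,
        "topic": "deploy:api",
        "outcome": "success",
        "what_worked": "nohup docker compose up -d --build",
        "what_failed": "docker compose up hangs on SSH timeout",
    })

    # 4. Save a summary so the next session_start shows it as "Last session".
    await session.call_tool("session_end", arguments={
        "instance_id": instance_id,
        "summary": "Fixed API deploy; lesson stored under deploy:api",
    })
```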
Prompts
Interactive templates invoked by user choice
| Name | Description |
|---|---|
| No prompts | |
Resources
Contextual data attached and managed by the client
| Name | Description |
|---|---|
| No resources | |
MCP directory API
We provide all the information about MCP servers via our MCP API.
```sh
curl -X GET 'https://glama.ai/api/mcp/v1/servers/cachly-dev/cachly-mcp'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.