T1nker-1220

Knowledge Graph Memory Server

get_lesson_recommendations

Retrieve relevant lessons based on your current context to enhance learning and problem-solving with personalized recommendations.

Instructions

Get relevant lessons based on the current context

Input Schema

Name    | Required | Description                                      | Default
context | Yes      | The current context to find relevant lessons for | —

Implementation Reference

  • Main handler in the KnowledgeGraphManager class that implements the get_lesson_recommendations tool logic: it loads lessons from all lesson files, scores each against the provided context using error-pattern, observation, relation, and success-rate signals, then filters by a minimum relevance threshold and sorts by score.
    async getLessonRecommendations(context: string): Promise<LessonEntity[]> {
      // Load all files containing lessons
      const lessonFiles = await this.fileManager.getFilesForEntityType('lesson');
      const allLessons: LessonEntity[] = [];
    
      // Load and merge lessons from all files
      for (const filePath of lessonFiles) {
        try {
          const fileContent = await fs.readFile(filePath, 'utf-8');
          const fileGraph = JSON.parse(fileContent);
          const lessons = fileGraph.entities.filter((e: Entity): e is LessonEntity =>
            e.entityType === 'lesson'
          );
          allLessons.push(...lessons);
        } catch (error) {
          console.error(`Error loading lessons from ${filePath}:`, error);
        }
      }
    
      // Calculate relevance scores for each lesson
      const scoredLessons = await Promise.all(
        allLessons.map(async (lesson) => {
          let score = 0;
    
          // Check error pattern fields
          if (lesson.errorPattern) {
            score += this.calculateSimilarity(lesson.errorPattern.type, context) * 0.3;
            score += this.calculateSimilarity(lesson.errorPattern.message, context) * 0.3;
            score += this.calculateSimilarity(lesson.errorPattern.context, context) * 0.2;
          }
    
          // Check observations
          const observationScores = lesson.observations.map(obs =>
            this.calculateSimilarity(obs, context)
          );
          if (observationScores.length > 0) {
            score += Math.max(...observationScores) * 0.2;
          }
    
          // Check related lessons
          const relatedLessons = await this.getRelatedLessons(lesson.name);
          if (relatedLessons.length > 0) {
            score *= 1.2; // Boost score for lessons with relations
          }
    
          // Consider success rate
          const successRate = lesson.metadata?.successRate ?? 0;
          score *= (1 + successRate) / 2; // Weight by success rate
    
          return { lesson, score };
        })
      );
    
      // Filter lessons with a minimum relevance score and sort by score
      return scoredLessons
        .filter(({ score }) => score > 0.1) // Minimum relevance threshold
        .sort((a, b) => b.score - a.score)
        .map(({ lesson }) => lesson);
    }
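    As a rough illustration of the weighting above, the per-lesson scoring can be sketched as a standalone function. This is not the server's code: the weights (0.3/0.3/0.2 for the error pattern, 0.2 for the best observation, a 1.2× relation boost, and the success-rate factor) mirror the handler, but the input shape and sample values are assumptions for illustration.

```typescript
// Standalone sketch of the per-lesson relevance scoring shown above.
// The similarity values are assumed to come from calculateSimilarity().
interface ScoringInput {
  errorPatternScores: [number, number, number]; // similarities for type, message, context
  observationScores: number[];                  // similarity per observation
  hasRelations: boolean;                        // lesson has related lessons
  successRate: number;                          // 0..1, from lesson.metadata
}

function scoreLesson(input: ScoringInput): number {
  const [type, message, ctx] = input.errorPatternScores;
  let score = type * 0.3 + message * 0.3 + ctx * 0.2;
  if (input.observationScores.length > 0) {
    score += Math.max(...input.observationScores) * 0.2; // best observation only
  }
  if (input.hasRelations) score *= 1.2;   // boost lessons with relations
  score *= (1 + input.successRate) / 2;   // weight by historical success
  return score;
}

// A lesson whose error pattern matches the context exactly, with one strong
// observation, related lessons, and an 80% success rate:
const score = scoreLesson({
  errorPatternScores: [1, 1, 1],
  observationScores: [0.8],
  hasRelations: true,
  successRate: 0.8,
});
console.log(score); // (0.8 + 0.16) * 1.2 * 0.9
```

    Note that the relation boost and success-rate factor are multiplicative, so a lesson with zero similarity stays at zero regardless of its metadata.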
  • index.ts:1197-1210 (registration)
    Tool registration in the ListToolsRequestSchema handler, defining name, description, and input schema.
    {
      name: "get_lesson_recommendations",
      description: "Get relevant lessons based on the current context",
      inputSchema: {
        type: "object",
        properties: {
          context: {
            type: "string",
            description: "The current context to find relevant lessons for"
          }
        },
        required: ["context"]
      }
    }
  • Dispatch case in CallToolRequestSchema handler that invokes the getLessonRecommendations method.
    case "get_lesson_recommendations":
      return { content: [{ type: "text", text: JSON.stringify(await knowledgeGraphManager.getLessonRecommendations(args.context as string), null, 2) }] };
  • Type definition for LessonEntity used in the tool's return type.
    interface LessonEntity extends Entity {
      errorPattern: ErrorPattern;
      metadata: Metadata;
      verificationSteps: VerificationStep[];
    }
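    The referenced ErrorPattern, Metadata, and VerificationStep types are not shown on this page. A plausible shape can be inferred from how the handler reads them (errorPattern.type/message/context and metadata.successRate); everything beyond those fields is an assumption for illustration.

```typescript
// Hypothetical shapes for the types referenced by LessonEntity.
// Only errorPattern.type/message/context and metadata.successRate are
// actually read by getLessonRecommendations; the rest is illustrative.
interface ErrorPattern {
  type: string;     // e.g. an error class name
  message: string;  // error message matched against the context
  context: string;  // where or how the error occurred
}

interface Metadata {
  successRate?: number; // 0..1, used to weight recommendations
}

interface VerificationStep {
  description: string; // a step to confirm the lesson's fix worked
}

const sample: ErrorPattern = {
  type: "TypeError",
  message: "Cannot read properties of undefined",
  context: "parsing API response",
};
console.log(sample.type);
```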
  • Helper function used by getLessonRecommendations to compute similarity between context and lesson components.
    private calculateSimilarity(str1: string, str2: string): number {
      const s1 = str1.toLowerCase();
      const s2 = str2.toLowerCase();
    
      // Exact match
      if (s1 === s2) return 1;
    
      // Contains full string
      if (s1.includes(s2) || s2.includes(s1)) return 0.8;
    
      // Split into words and check for word matches
      const words1 = s1.split(/\s+/);
      const words2 = s2.split(/\s+/);
    
      const commonWords = words1.filter(w => words2.includes(w));
      if (commonWords.length > 0) {
        return 0.5 * (commonWords.length / Math.max(words1.length, words2.length));
      }
    
      // Partial word matches
      const partialMatches = words1.filter(w1 =>
        words2.some(w2 => w1.includes(w2) || w2.includes(w1))
      );
    
      return 0.3 * (partialMatches.length / Math.max(words1.length, words2.length));
    }
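    To see the scoring tiers in action, the helper can be lifted out as a standalone function (same logic, `private` dropped); the sample strings are illustrative.

```typescript
// Standalone copy of calculateSimilarity for experimentation.
function calculateSimilarity(str1: string, str2: string): number {
  const s1 = str1.toLowerCase();
  const s2 = str2.toLowerCase();
  if (s1 === s2) return 1;                              // exact match
  if (s1.includes(s2) || s2.includes(s1)) return 0.8;   // substring match
  const words1 = s1.split(/\s+/);
  const words2 = s2.split(/\s+/);
  const common = words1.filter(w => words2.includes(w));
  if (common.length > 0) {
    return 0.5 * (common.length / Math.max(words1.length, words2.length));
  }
  const partial = words1.filter(w1 =>
    words2.some(w2 => w1.includes(w2) || w2.includes(w1))
  );
  return 0.3 * (partial.length / Math.max(words1.length, words2.length));
}

console.log(calculateSimilarity("timeout error", "timeout error")); // 1
console.log(calculateSimilarity("timeout", "timeout error"));       // 0.8 (substring)
console.log(calculateSimilarity("null pointer", "pointer null"));   // 0.5 (all words shared)
```

    Because the substring check runs before word matching, short contexts tend to score 0.8 against any lesson field that contains them, which is worth keeping in mind when interpreting the 0.1 relevance threshold.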
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool 'Get[s] relevant lessons' but doesn't disclose behavioral traits like whether it's read-only, requires authentication, has rate limits, returns structured data, or handles errors. For a tool with no annotations, this leaves significant gaps in understanding its operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. It avoids unnecessary words, but could be more structured by including key details like usage context or output format. Overall, it's appropriately sized with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (inference-based recommendations), lack of annotations, and no output schema, the description is incomplete. It doesn't explain what 'relevant' entails, how lessons are selected, the return format, or error handling. For a recommendation tool with no structured support, more detail is needed to guide the agent effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage: the single parameter 'context' is documented as 'The current context to find relevant lessons for'. The tool description adds no meaning beyond this, merely repeating 'based on the current context'. With full schema coverage, the baseline score of 3 is appropriate, since the schema alone provides adequate parameter detail.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get relevant lessons based on the current context' clearly states the verb 'Get' and resource 'lessons', but it's vague about what 'relevant' means and doesn't differentiate from sibling tools like 'search_nodes' or 'find_similar_errors'. It specifies the action but lacks precision in scope or method.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'search_nodes' or 'find_similar_errors'. The description implies usage driven by the 'current context' but doesn't specify scenarios, prerequisites, or exclusions, leaving the agent to guess when this tool is the appropriate choice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
