
AGI MCP Server

by QuixiAI

create_memory

Store structured AI memories with typed content and embeddings to enable persistent knowledge retention across conversations.

Instructions

Create a new memory with optional type-specific metadata

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| type | Yes | Type of memory to create (`episodic`, `semantic`, `procedural`, or `strategic`) | — |
| content | Yes | The main content/text of the memory | — |
| embedding | Yes | Vector embedding for the memory content | — |
| importance | No | Importance score (0.0 to 1.0) | 0.0 |
| metadata | No | Type-specific metadata (action_taken, context, confidence, etc.) | {} |
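
Based on the schema above, a hypothetical set of tool arguments might look like the following. The field values, including the toy 4-dimensional embedding, are invented for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.

```javascript
// Hypothetical arguments for a create_memory tool call.
// The embedding is a placeholder; real vectors are much longer.
const args = {
  type: "episodic",                      // one of: episodic, semantic, procedural, strategic
  content: "User asked for a summary of the Q3 report.",
  embedding: [0.12, -0.03, 0.44, 0.09], // placeholder vector
  importance: 0.6,                      // optional, 0.0 to 1.0
  metadata: {
    action_taken: "summarized_report",  // episodic-specific field
    context: "weekly review meeting",
    emotional_valence: 0.2
  }
};

console.log(args.type); // selects which type-specific table the server writes to
```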

Implementation Reference

  • mcp.js:536-544 (handler)
    MCP tool handler for 'create_memory': extracts tool arguments and invokes memoryManager.createMemory(), serializing the returned memory object as JSON in the response content.
    case "create_memory":
      const memory = await memoryManager.createMemory(
        args.type,
        args.content,
        args.embedding,
        args.importance || 0.0,
        args.metadata || {}
      );
      return { content: [{ type: "text", text: JSON.stringify(memory, null, 2) }] };
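
The dispatch above can be exercised in isolation with a stubbed `memoryManager`. The stub below and its return value are invented for illustration; it only demonstrates the shape of the response envelope the handler produces.

```javascript
// Minimal stand-in for the real MemoryManager, used only to show
// the handler's response shape: a JSON-serialized memory object
// wrapped in an MCP text content item.
const memoryManager = {
  async createMemory(type, content, embedding, importance, metadata) {
    return { id: 1, type, content, embedding, importance, metadata };
  }
};

// Mirrors the 'create_memory' case from the handler above.
async function handleCreateMemory(args) {
  const memory = await memoryManager.createMemory(
    args.type,
    args.content,
    args.embedding,
    args.importance || 0.0,
    args.metadata || {}
  );
  return { content: [{ type: "text", text: JSON.stringify(memory, null, 2) }] };
}

handleCreateMemory({
  type: "semantic",
  content: "Paris is the capital of France.",
  embedding: [0.1, 0.2]
}).then((res) => console.log(res.content[0].type)); // prints "text"
```

Note that when `importance` and `metadata` are omitted, the `||` fallbacks fill in `0.0` and `{}` before the call reaches `createMemory`.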
  • mcp.js:29-61 (registration)
    Registration of the 'create_memory' tool in the ListToolsRequestHandler response, including name, description, and detailed input schema with required fields and validation.
      name: "create_memory",
      description: "Create a new memory with optional type-specific metadata",
      inputSchema: {
        type: "object",
        properties: {
          type: {
            type: "string",
            enum: ["episodic", "semantic", "procedural", "strategic"],
            description: "Type of memory to create"
          },
          content: {
            type: "string",
            description: "The main content/text of the memory"
          },
          embedding: {
            type: "array",
            items: { type: "number" },
            description: "Vector embedding for the memory content"
          },
          importance: {
            type: "number",
            description: "Importance score (0.0 to 1.0)",
            default: 0.0
          },
          metadata: {
            type: "object",
            description: "Type-specific metadata (action_taken, context, confidence, etc.)",
            default: {}
          }
        },
        required: ["type", "content", "embedding"]
      }
    },
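
The `required` list and the `type` enum above are normally enforced by the MCP host's JSON Schema validation, but the same checks are easy to sketch by hand. This validator is illustrative only and is not part of the server; the importance range check is inferred from the schema's description rather than a formal `minimum`/`maximum` constraint.

```javascript
// Hand-rolled check mirroring the inputSchema's required fields
// and the `type` enum. Illustrative, not part of the server.
const MEMORY_TYPES = ["episodic", "semantic", "procedural", "strategic"];

function validateCreateMemoryArgs(args) {
  const errors = [];
  for (const field of ["type", "content", "embedding"]) {
    if (args[field] === undefined) errors.push(`missing required field: ${field}`);
  }
  if (args.type !== undefined && !MEMORY_TYPES.includes(args.type)) {
    errors.push(`invalid type: ${args.type}`);
  }
  if (args.importance !== undefined && (args.importance < 0 || args.importance > 1)) {
    errors.push(`importance out of range: ${args.importance}`);
  }
  return errors;
}

// returns ["missing required field: embedding", "invalid type: working"]
validateCreateMemoryArgs({ type: "working", content: "x" });
```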
  • Helper function in the MemoryManager class that implements the core logic: it inserts the memory into the main memories table and into the appropriate type-specific table (episodic, semantic, procedural, or strategic) inside a single database transaction.
    async createMemory(type, content, embedding, importance = 0.0, metadata = {}) {
      try {
        // Start transaction
        const result = await this.db.transaction(async (tx) => {
          // Insert main memory record
          const [memory] = await tx.insert(schema.memories).values({
            type,
            content,
            embedding: embedding,
            importance,
            decayRate: metadata.decayRate || 0.01
          }).returning();
    
          // Insert type-specific details
          switch (type) {
            case 'episodic':
              await tx.insert(schema.episodicMemories).values({
                memoryId: memory.id,
                actionTaken: metadata.action_taken || null,
                context: metadata.context || null,
                result: metadata.result || null,
                emotionalValence: metadata.emotional_valence || 0.0,
                eventTime: metadata.event_time || new Date(),
                verificationStatus: metadata.verification_status || null
              });
              break;
    
            case 'semantic':
              await tx.insert(schema.semanticMemories).values({
                memoryId: memory.id,
                confidence: metadata.confidence || 0.8,
                category: metadata.category || [],
                relatedConcepts: metadata.related_concepts || [],
                sourceReferences: metadata.source_references || null,
                contradictions: metadata.contradictions || null
              });
              break;
    
            case 'procedural':
              await tx.insert(schema.proceduralMemories).values({
                memoryId: memory.id,
                steps: metadata.steps || {},
                prerequisites: metadata.prerequisites || {},
                successCount: metadata.success_count || 0,
                totalAttempts: metadata.total_attempts || 0,
                failurePoints: metadata.failure_points || null
              });
              break;
    
            case 'strategic':
              await tx.insert(schema.strategicMemories).values({
                memoryId: memory.id,
                patternDescription: metadata.pattern_description || content,
                confidenceScore: metadata.confidence_score || 0.7,
                supportingEvidence: metadata.supporting_evidence || null,
                successMetrics: metadata.success_metrics || null,
                adaptationHistory: metadata.adaptation_history || null,
                contextApplicability: metadata.context_applicability || null
              });
              break;
          }
    
          return memory;
        });
    
        return result;
      } catch (error) {
        console.error('Error creating memory:', error);
        throw error;
      }
    }
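
One subtlety in the implementation above: the `||` fallbacks treat every falsy value as absent, so an explicit `0` is silently replaced by the default. For counters like `success_count` this is harmless, but for fields where `0` is meaningful, such as `confidence` or `emotional_valence`, the nullish coalescing operator `??` would preserve the caller's value. A quick comparison:

```javascript
// Demonstrates how `||` defaults clobber legitimate zero values,
// while `??` only falls back on null or undefined.
const metadata = { confidence: 0 }; // caller explicitly asserts zero confidence

const withOr = metadata.confidence || 0.8;      // 0 is falsy, so the default wins
const withNullish = metadata.confidence ?? 0.8; // 0 is kept

console.log(withOr);      // 0.8
console.log(withNullish); // 0
```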
