# refine_prompt
Refines prompts by applying semantic memory to improve context and efficiency, reducing token usage.
## Instructions
Refines a prompt using semantic memory to make it more contextual and efficient.
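The refinement flow is: embed the incoming prompt, retrieve the nearest stored memories, and append them to the prompt as a context block. The string-assembly step can be sketched as a pure function (a sketch only; `buildRefinedPrompt` is a hypothetical name and not part of src/index.ts, which does this inline in the handler):

```typescript
// Hypothetical helper mirroring the handler's inline string assembly.
// Given the original prompt and the memory texts returned by the vector
// search, it produces the refined prompt that the tool returns.
function buildRefinedPrompt(prompt: string, memories: string[]): string {
  let contextExtra = "";
  if (memories.length > 0) {
    contextExtra =
      "\n[Context Retrieved from Memory]:\n" +
      memories.map((m) => `- ${m}`).join("\n");
  }
  return `[Refined Prompt]: ${prompt}\n${contextExtra}`;
}
```

Keeping this step pure makes the prompt format easy to unit-test independently of the LanceDB lookup.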
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The original prompt that needs refinement. | (none) |
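For illustration, a `tools/call` invocation of this tool carries a single required argument. The envelope below is a sketch of the MCP request shape; the prompt text is hypothetical:

```typescript
// Sketch of an MCP tools/call request for refine_prompt.
// Only `prompt` is required; it has no default.
const request = {
  method: "tools/call",
  params: {
    name: "refine_prompt",
    arguments: {
      prompt: "Write a REST endpoint for user signup",
    },
  },
};
```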
## Implementation Reference
- src/index.ts:138-161 (handler): The CallTool handler for 'refine_prompt'. It takes the prompt argument, optionally retrieves relevant semantic memories via LanceDB vector search, and returns a refined prompt with context.
  ```typescript
  if (name === "refine_prompt") {
    const prompt = args?.prompt as string;
    let contextExtra = "";

    // Try to fetch relevant memories
    if (table) {
      const queryVector = await getEmbedding(prompt);
      const results = await table.vectorSearch(queryVector).limit(3).toArray();
      if (results.length > 0) {
        contextExtra =
          "\n[Context Retrieved from Memory]:\n" +
          results.map((r: any) => `- ${r.text}`).join("\n");
      }
    }

    return {
      content: [
        {
          type: "text",
          text: `[Refined Prompt]: ${prompt}\n${contextExtra}\n\n(Antigravity can now use the information above to generate a more precise response)`,
        },
      ],
    };
  }
  ```

- src/index.ts:76-88 (schema): Registration of 'refine_prompt' in the ListTools handler, including its description and inputSchema (requires a 'prompt' string).
  ```typescript
  name: "refine_prompt",
  description: "Refines a prompt using semantic memory to make it more contextual and efficient.",
  inputSchema: {
    type: "object",
    properties: {
      prompt: {
        type: "string",
        description: "The original prompt that needs refinement.",
      },
    },
    required: ["prompt"],
  },
  },
  ```

- src/index.ts:111-164 (registration): The CallToolRequestSchema handler that dispatches to 'refine_prompt' (alongside 'learn_context') based on the tool name.
  ```typescript
  server.setRequestHandler(CallToolRequestSchema, async (request) => {
    const { name, arguments: args } = request.params;

    if (name === "learn_context") {
      const info = args?.information as string;
      const category = (args?.category as string) || "general";
      const vector = await getEmbedding(info);
      const data = [{ vector, text: info, category, timestamp: new Date().toISOString() }];
      if (!table) {
        table = await db.createTable(TABLE_NAME, data);
      } else {
        await table.add(data);
      }
      return {
        content: [{ type: "text", text: `Learned and stored in semantic memory: "${info}"` }],
      };
    }

    if (name === "refine_prompt") {
      const prompt = args?.prompt as string;
      let contextExtra = "";

      // Try to fetch relevant memories
      if (table) {
        const queryVector = await getEmbedding(prompt);
        const results = await table.vectorSearch(queryVector).limit(3).toArray();
        if (results.length > 0) {
          contextExtra =
            "\n[Context Retrieved from Memory]:\n" +
            results.map((r: any) => `- ${r.text}`).join("\n");
        }
      }

      return {
        content: [
          {
            type: "text",
            text: `[Refined Prompt]: ${prompt}\n${contextExtra}\n\n(Antigravity can now use the information above to generate a more precise response)`,
          },
        ],
      };
    }

    throw new Error(`Tool not found: ${name}`);
  });
  ```

- src/index.ts:72-107 (registration): The ListToolsRequestSchema handler that registers 'refine_prompt' as an available tool with its input schema.
  ```typescript
  server.setRequestHandler(ListToolsRequestSchema, async () => {
    return {
      tools: [
        {
          name: "refine_prompt",
          description: "Refines a prompt using semantic memory to make it more contextual and efficient.",
          inputSchema: {
            type: "object",
            properties: {
              prompt: {
                type: "string",
                description: "The original prompt that needs refinement.",
              },
            },
            required: ["prompt"],
          },
        },
        {
          name: "learn_context",
          description: "Memorizes important information (preference, technical rule, context) for future use.",
          inputSchema: {
            type: "object",
            properties: {
              information: {
                type: "string",
                description: "The information to be remembered.",
              },
              category: {
                type: "string",
                description: "Information category (e.g., 'preference', 'architecture', 'style').",
              },
            },
            required: ["information"],
          },
        },
      ],
  ```

- src/index.ts:52-58 (helper): getEmbedding is a helper function that uses Ollama to generate embeddings for text, used by refine_prompt to perform vector similarity search.
  ```typescript
  async function getEmbedding(text: string): Promise<number[]> {
    const response = await ollama.embed({
      model: EMBEDDING_MODEL,
      input: text,
    });
    return response.embeddings[0]!;
  }
  ```
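`table.vectorSearch` ranks the stored rows by distance between this embedding and each memory's vector, so semantically related memories surface first. The underlying idea can be sketched with a plain cosine-similarity function (illustrative only; LanceDB performs the actual nearest-neighbour search internally, and its default metric is L2 distance):

```typescript
// Cosine similarity between two embedding vectors: 1 means identical
// direction, 0 means orthogonal (unrelated). Illustrative sketch only;
// LanceDB computes vector distances internally during vectorSearch.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i]! * b[i]!;
    normA += a[i]! * a[i]!;
    normB += b[i]! * b[i]!;
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```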