Glama
by ricleedo

search-memory

Find relevant information using semantic search by querying stored vector embeddings with natural language.

Instructions

Search for information in vector database

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| maxMatches | No | Maximum number of matches to return | |
| query | Yes | The search query | |

Input Schema (JSON Schema)

{
  "properties": {
    "maxMatches": {
      "description": "Maximum number of matches to return",
      "type": "number"
    },
    "query": {
      "description": "The search query",
      "type": "string"
    }
  },
  "required": ["query"],
  "type": "object"
}
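An MCP client invokes this tool with a standard JSON-RPC `tools/call` request whose `arguments` must satisfy the schema above. A sketch of such a request (the query text and `maxMatches` value are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search-memory",
    "arguments": {
      "query": "deployment checklist for the staging environment",
      "maxMatches": 5
    }
  }
}
```

Since `maxMatches` is optional, omitting it is also valid; `query` is the only required argument.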

Implementation Reference

  • The handler function for the 'search-memory' MCP tool. It sends the query to the EmbeddingApiClient's vectorSearch method, handles errors and empty results, and returns the retrieved context as text content.
    async ({ query, maxMatches }) => {
      const request = {
        prompt: query,
        match_count: maxMatches,
      };
      const response = await apiClient.vectorSearch(request);
      if (response.error) {
        return {
          isError: true,
          content: [
            {
              type: "text",
              text: `Error searching content: ${response.error}`,
            },
          ],
        };
      }
      if (!response.contextText || response.contextText.trim() === "") {
        return {
          content: [
            {
              type: "text",
              text: "No matching content found for your query.",
            },
          ],
        };
      }
      return {
        content: [
          {
            type: "text",
            text: response.contextText,
          },
        ],
      };
    }
  • Zod input schema defining parameters for the search-memory tool: 'query' (string) and optional 'maxMatches' (number).
    {
      query: z.string().describe("The search query"),
      maxMatches: z
        .number()
        .optional()
        .describe("Maximum number of matches to return"),
    },
  • src/index.ts:67-117 (registration)
    Full registration of the 'search-memory' tool on the MCP server, specifying name, description, input schema, and execution handler.
    server.tool(
      "search-memory",
      "Search for information in vector database",
      {
        query: z.string().describe("The search query"),
        maxMatches: z
          .number()
          .optional()
          .describe("Maximum number of matches to return"),
      },
      async ({ query, maxMatches }) => {
        const request = {
          prompt: query,
          match_count: maxMatches,
        };
        const response = await apiClient.vectorSearch(request);
        if (response.error) {
          return {
            isError: true,
            content: [
              {
                type: "text",
                text: `Error searching content: ${response.error}`,
              },
            ],
          };
        }
        if (!response.contextText || response.contextText.trim() === "") {
          return {
            content: [
              {
                type: "text",
                text: "No matching content found for your query.",
              },
            ],
          };
        }
        return {
          content: [
            {
              type: "text",
              text: response.contextText,
            },
          ],
        };
      }
    );
  • Supporting prompt template registered for 'search-memory', which generates a user message encouraging use of the tool for the given query.
    server.prompt(
      "search-memory",
      {
        query: z.string().describe("The search query"),
      },
      ({ query }) => ({
        messages: [
          {
            role: "user",
            content: {
              type: "text",
              text: `Please search for information about: ${query}\n\nYou can use the search-memory tool to find relevant information.`,
            },
          },
        ],
      })
    );
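Given that handler, a call to the tool yields one of three result shapes: an error result flagged with `isError`, a "no matches" message, or the retrieved context as a single text content item. As an illustration (the context text below is made up):

```json
{
  "content": [
    { "type": "text", "text": "Retrieved context from the vector database goes here" }
  ]
}
```

When the EmbeddingApiClient reports a failure, the result is flagged so the client can surface it rather than treat it as context:

```json
{
  "isError": true,
  "content": [
    { "type": "text", "text": "Error searching content: <error message from the API>" }
  ]
}
```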


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ricleedo/Knowledge-EmbeddingAPI-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.