Send chat messages to AI models with optional RAG capabilities for retrieving information from documents. Configure parameters like temperature and system prompts to customize responses.
Send chat messages to Prem AI models with optional RAG for retrieving context from documents. Configure system prompts, models, and generation parameters.
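A minimal sketch of the kind of chat request the two RAG-enabled chat entries above might send: a model name, system prompt, and sampling temperature alongside an optional document source. Every field name here (model, system_prompt, repositories) is an illustrative assumption, not the servers' actual schema.

```python
import json

def build_chat_request(question: str, repository_ids: list[int] | None = None) -> dict:
    """Assemble a chat payload with an optional RAG document source (hypothetical schema)."""
    payload = {
        "model": "gpt-4o-mini",                  # hypothetical model name
        "system_prompt": "Answer using only the retrieved context.",
        "temperature": 0.2,                      # lower = more deterministic responses
        "messages": [{"role": "user", "content": question}],
    }
    if repository_ids:
        # When document repositories are supplied, the server retrieves matching
        # chunks and prepends them to the prompt (RAG).
        payload["repositories"] = {"ids": repository_ids, "limit": 3}
    return payload

print(json.dumps(build_chat_request("Summarise the onboarding guide", [42]), indent=2))
```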
Update project indexes incrementally for RAG, keeping semantic search across codebases current by processing modified files and refreshing their embeddings.
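A minimal sketch of the incremental re-indexing idea, assuming a hypothetical embed_and_upsert backend: only files whose content hash changed since the last run are re-embedded.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path(".rag_index_manifest.json")   # records the last-seen hash per file

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def embed_and_upsert(path: Path) -> None:
    # Placeholder: a real implementation would chunk the file, compute
    # embeddings, and upsert them into the vector store.
    print(f"re-embedding {path}")

def update_index(root: str = ".") -> None:
    manifest = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    for path in Path(root).rglob("*.py"):
        digest = file_hash(path)
        if manifest.get(str(path)) != digest:    # new or modified file only
            embed_and_upsert(path)
            manifest[str(path)] = digest
    MANIFEST.write_text(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    update_index()
```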
Provides retrieval-augmented generation (RAG) by ingesting local files, directories, and URLs in a variety of document formats into a persistent ChromaDB vector store, with semantic search and retrieval powered by either OpenAI or Ollama embeddings.
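A minimal sketch of that ingestion-and-retrieval pattern using ChromaDB's persistent client with OpenAI embeddings (Ollama embeddings could be swapped in the same way); the collection name and storage path are illustrative.

```python
import os
import chromadb
from chromadb.utils import embedding_functions

openai_ef = embedding_functions.OpenAIEmbeddingFunction(
    api_key=os.environ["OPENAI_API_KEY"],
    model_name="text-embedding-3-small",
)
client = chromadb.PersistentClient(path="./rag_store")   # persists vectors to disk
docs = client.get_or_create_collection("docs", embedding_function=openai_ef)

# Ingest: chunks extracted from local files, directories, or fetched URLs.
docs.add(
    ids=["guide-001"],
    documents=["ChromaDB stores embeddings on disk when PersistentClient is used."],
    metadatas=[{"source": "local_file"}],
)

# Semantic retrieval for a natural-language query.
hits = docs.query(query_texts=["How is the vector store persisted?"], n_results=1)
print(hits["documents"][0])
```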
A server that integrates Retrieval-Augmented Generation (RAG) with the Model Context Protocol (MCP) to provide web search capabilities and document analysis for AI assistants.
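A minimal sketch of how such a server might expose a web-search tool over MCP, using the official Python SDK's FastMCP helper; the search_backend call is a hypothetical placeholder for whatever search API the server actually wraps.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("rag-web-search")

def search_backend(query: str, limit: int) -> list[dict]:
    # Placeholder: a real server would call a search API and return
    # title/url/snippet records for the assistant to ground answers on.
    return [{"title": "Example result", "url": "https://example.com", "snippet": query}][:limit]

@mcp.tool()
def web_search(query: str, limit: int = 5) -> list[dict]:
    """Search the web and return results the assistant can cite."""
    return search_backend(query, limit)

if __name__ == "__main__":
    mcp.run()   # serves the tool over stdio by default
```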
An advanced MCP server that provides RAG-enabled memory through a knowledge graph with vector search, supporting intelligent information storage, semantic retrieval, and document processing.
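A minimal sketch of graph-backed memory combined with vector search, assuming a hypothetical embed function; a real server would use a proper embedding model and vector store rather than brute-force cosine search over graph nodes.

```python
import numpy as np
import networkx as nx

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Placeholder embedding: deterministic pseudo-random unit vector per text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(dim)
    return vec / np.linalg.norm(vec)

graph = nx.DiGraph()   # nodes = entities with observations, edges = typed relations

def remember(entity: str, observation: str) -> None:
    graph.add_node(entity, text=observation, vector=embed(observation))

def relate(src: str, dst: str, relation: str) -> None:
    graph.add_edge(src, dst, relation=relation)

def recall(query: str, k: int = 2) -> list[str]:
    # Rank stored entities by cosine similarity to the query embedding.
    q = embed(query)
    scored = [(float(q @ data["vector"]), node) for node, data in graph.nodes(data=True)]
    return [node for _, node in sorted(scored, reverse=True)[:k]]

remember("Ada Lovelace", "Wrote the first published algorithm for a machine.")
remember("Analytical Engine", "Mechanical general-purpose computer designed by Babbage.")
relate("Ada Lovelace", "Analytical Engine", "wrote programs for")
print(recall("early computer programs"))
```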