Implements the Chain of Draft reasoning approach, which produces minimal intermediate reasoning drafts during problem solving, significantly reducing token usage while maintaining accuracy.
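Below is a minimal sketch of the Chain of Draft prompting style, assuming an OpenAI-compatible chat endpoint; the exact prompt wording, model name, and answer delimiter used by this server are assumptions for illustration.

```python
# Sketch of a Chain of Draft style call (assumption: the server's actual
# prompt wording and model may differ). Each reasoning step is kept to a
# terse draft instead of a full chain-of-thought paragraph.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COD_SYSTEM_PROMPT = (
    "Think step by step, but keep only a minimal draft for each step, "
    "five words at most. Return the final answer after '####'."
)

def solve_with_chain_of_draft(question: str) -> str:
    """Ask the model to reason in terse drafts, then extract the final answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": COD_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    text = response.choices[0].message.content
    # Everything after '####' is treated as the final answer.
    return text.split("####")[-1].strip()

print(solve_with_chain_of_draft(
    "A pen and a notebook cost $11 total; the notebook costs $10 more than the pen. "
    "How much is the pen?"
))
```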
Provides RAG capabilities for semantic document search using the Qdrant vector database and Ollama or OpenAI embeddings, allowing users to add, search, list, and delete documentation with metadata support.
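A minimal sketch of the add/search flow, assuming the `qdrant-client` and `ollama` Python packages, a local Qdrant instance, and the `nomic-embed-text` model (768-dimensional embeddings); the collection name and payload fields are illustrative, not this server's actual schema.

```python
# Sketch of adding and searching documentation with Qdrant + Ollama embeddings.
# Collection name, model, and payload fields are illustrative assumptions.
import uuid

import ollama
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")
COLLECTION = "documentation"

if not client.collection_exists(COLLECTION):
    client.create_collection(
        collection_name=COLLECTION,
        vectors_config=VectorParams(size=768, distance=Distance.COSINE),
    )

def embed(text: str) -> list[float]:
    """Generate an embedding with Ollama (OpenAI embeddings could be swapped in)."""
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def add_document(text: str, metadata: dict) -> None:
    """Store a documentation chunk with its metadata as the point payload."""
    client.upsert(
        collection_name=COLLECTION,
        points=[PointStruct(
            id=str(uuid.uuid4()),
            vector=embed(text),
            payload={"text": text, **metadata},
        )],
    )

def search(query: str, limit: int = 3):
    """Return the closest documentation chunks with their similarity scores."""
    hits = client.search(collection_name=COLLECTION, query_vector=embed(query), limit=limit)
    return [(hit.score, hit.payload) for hit in hits]

add_document("Qdrant stores vectors in named collections.", {"source": "qdrant-docs"})
print(search("Where does Qdrant keep vectors?"))
```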
A server that allows AI assistants to browse and read files from specified GitHub repositories, providing access to repository contents via the Model Context Protocol.
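A minimal sketch of how such a repository-reading tool could be exposed over MCP, assuming the official `mcp` Python SDK (FastMCP), `httpx`, and the public GitHub contents API; the tool and parameter names are illustrative, not necessarily this server's interface.

```python
# Sketch of a GitHub file-reading tool served over MCP. Tool name, parameters,
# and the unauthenticated GitHub API call are illustrative assumptions.
import base64

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("github-repo-reader")

@mcp.tool()
def read_file(owner: str, repo: str, path: str, ref: str = "main") -> str:
    """Return the decoded contents of a file from a GitHub repository."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    response = httpx.get(
        url,
        params={"ref": ref},
        headers={"Accept": "application/vnd.github+json"},
    )
    response.raise_for_status()
    data = response.json()
    # The contents API returns file bodies base64-encoded.
    return base64.b64decode(data["content"]).decode("utf-8")

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```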
An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.
Uses Ollama or OpenAI to generate embeddings.
Docker files are included for containerized deployment.
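As an illustration of how an assistant could consume a documentation-search server like this, here is a minimal sketch of an MCP client session calling a search tool over stdio with the official `mcp` Python SDK; the server script path (`docs_server.py`), the tool name (`search_documentation`), and its arguments are hypothetical placeholders rather than this server's actual interface.

```python
# Sketch of a host calling a hypothetical documentation-search MCP tool.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command for the documentation server.
server_params = StdioServerParameters(command="python", args=["docs_server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Hypothetical tool name and arguments.
            result = await session.call_tool(
                "search_documentation",
                {"query": "how do I configure retries?", "limit": 3},
            )
            print(result.content)

asyncio.run(main())
```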
Integrates DuckDuckGo search capabilities for LLMs, supporting comprehensive web search, regional filtering, multiple result types, and safe browsing, with result caching and customizable search parameters.
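A minimal sketch of a cached DuckDuckGo text search, assuming the third-party `duckduckgo_search` package; the parameter names mirror that package and may differ from the server's actual implementation, and the `lru_cache` here is only a simple stand-in for its caching behaviour.

```python
# Sketch of a cached DuckDuckGo text search with regional filtering and safe
# browsing, assuming the duckduckgo_search package; parameters are illustrative.
from functools import lru_cache

from duckduckgo_search import DDGS

@lru_cache(maxsize=128)  # simple stand-in for result caching
def web_search(
    query: str,
    region: str = "us-en",
    safesearch: str = "moderate",
    max_results: int = 5,
) -> tuple:
    """Run a DuckDuckGo text search and return (title, url, snippet) tuples."""
    with DDGS() as ddgs:
        results = ddgs.text(
            query, region=region, safesearch=safesearch, max_results=max_results
        )
    # lru_cache needs a hashable return value, so freeze the result list.
    return tuple((r["title"], r["href"], r["body"]) for r in results)

for title, url, snippet in web_search("model context protocol"):
    print(title, url)
```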