TxtAI MCP Server

by neuml

txtai is an all-in-one AI framework for semantic search, LLM orchestration and language model workflows.


The key component of txtai is an embeddings database, which is a union of vector indexes (sparse and dense), graph networks and relational databases.

This foundation enables vector search and/or serves as a powerful knowledge source for large language model (LLM) applications.

Build autonomous agents, retrieval augmented generation (RAG) processes, multi-model workflows and more.

Summary of txtai features:

  • 🔎 Vector search with SQL, object storage, topic modeling, graph analysis and multimodal indexing

  • 📄 Create embeddings for text, documents, audio, images and video

  • 💡 Pipelines powered by language models that run LLM prompts, question-answering, labeling, transcription, translation, summarization and more

  • ↪️️ Workflows to join pipelines together and aggregate business logic. txtai processes can be simple microservices or multi-model workflows.

  • 🤖 Agents that intelligently connect embeddings, pipelines, workflows and other agents together to autonomously solve complex problems

  • ⚙️ Web and Model Context Protocol (MCP) APIs. Bindings available for JavaScript, Java, Rust and Go.

  • 🔋 Batteries included with defaults to get up and running fast

  • ☁️ Run local or scale out with container orchestration

txtai is built with Python 3.10+, Hugging Face Transformers, Sentence Transformers and FastAPI. txtai is open-source under an Apache 2.0 license.


Why txtai?


New vector databases, LLM frameworks and everything in between are sprouting up daily. Why build with txtai?

```python
# Get started in a couple lines
import txtai

embeddings = txtai.Embeddings()
embeddings.index(["Correct", "Not what we hoped"])
embeddings.search("positive", 1)
# [(0, 0.29862046241760254)]
```
  • Built-in API makes it easy to develop applications using your programming language of choice

```yaml
# app.yml
embeddings:
  path: sentence-transformers/all-MiniLM-L6-v2
```

```bash
CONFIG=app.yml uvicorn "txtai.api:app"
curl -X GET "http://localhost:8000/search?query=positive"
```
  • Run local - no need to ship data off to disparate remote services

  • Work with micromodels all the way up to large language models (LLMs)

  • Low footprint - install additional dependencies and scale up when needed

  • Learn by example - notebooks cover all available functionality

Use Cases

The following sections introduce common txtai use cases. A comprehensive set of over 60 example notebooks and applications is also available.

Semantic Search

Build semantic/similarity/vector/neural search applications.


Traditional search systems use keywords to find data. Semantic search has an understanding of natural language and identifies results that have the same meaning, not necessarily the same keywords.


Get started with the following examples.

| Notebook | Description |
|----------|-------------|
| Introducing txtai ▶️ | Overview of the functionality provided by txtai |
| Similarity search with images | Embed images and text into the same space for search |
| Build a QA database | Question matching with semantic search |
| Semantic Graphs | Explore topics, data connectivity and run network analysis |

LLM Orchestration

Autonomous agents, retrieval augmented generation (RAG), chat with your data, pipelines and workflows that interface with large language models (LLMs).


See below to learn more.

| Notebook | Description |
|----------|-------------|
| Prompt templates and task chains | Build model prompts and connect tasks together with workflows |
| Integrate LLM frameworks | Integrate llama.cpp, LiteLLM and custom generation frameworks |
| Build knowledge graphs with LLMs | Build knowledge graphs with LLM-driven entity extraction |
| Parsing the stars with txtai | Explore an astronomical knowledge graph of known stars, planets, galaxies |

Agents

Agents connect embeddings, pipelines, workflows and other agents together to autonomously solve complex problems.


txtai agents are built on top of the smolagents framework. Agents can use any LLM txtai supports (Hugging Face, llama.cpp, OpenAI / Claude / AWS Bedrock via LiteLLM).

See the link below to learn more.

| Notebook | Description |
|----------|-------------|
| Analyzing Hugging Face Posts with Graphs and Agents | Explore a rich dataset with Graph Analysis and Agents |
| Granting autonomy to agents | Agents that iteratively solve problems as they see fit |
| Analyzing LinkedIn Company Posts with Graphs and Agents | Exploring how to improve social media engagement with AI |

Retrieval augmented generation

Retrieval augmented generation (RAG) reduces the risk of LLM hallucinations by constraining the output with a knowledge base as context. RAG is commonly used to "chat with your data".


A novel feature of txtai is that it can provide both an answer and a source citation.

| Notebook | Description |
|----------|-------------|
| Build RAG pipelines with txtai | Guide on retrieval augmented generation including how to create citations |
| Chunking your data for RAG | Extract, chunk and index content for effective retrieval |
| Advanced RAG with graph path traversal | Graph path traversal to collect complex sets of data for advanced RAG |
| Speech to Speech RAG ▶️ | Full cycle speech to speech workflow with RAG |

Language Model Workflows

Language model workflows, also known as semantic workflows, connect language models together to build intelligent applications.


While LLMs are powerful, there are plenty of smaller, more specialized models that work better and faster for specific tasks. This includes models for extractive question-answering, automatic summarization, text-to-speech, transcription and translation.

| Notebook | Description |
|----------|-------------|
| Run pipeline workflows ▶️ | Simple yet powerful constructs to efficiently process data |
| Building abstractive text summaries | Run abstractive text summarization |
| Transcribe audio to text | Convert audio files to text |
| Translate text between languages | Streamline machine translation and language detection |

Installation


The easiest way to install is via pip and PyPI.

```bash
pip install txtai
```

Python 3.10+ is supported. Using a Python virtual environment is recommended.

See the detailed install instructions for more information covering optional dependencies, environment specific prerequisites, installing from source, conda support and how to run with containers.

Model guide


The recommended models all allow commercial use and offer a blend of speed and performance.

Models can be loaded as either a path from the Hugging Face Hub or a local directory. Model paths are optional, defaults are loaded when not specified. For tasks with no recommended model, txtai uses the default models as shown in the Hugging Face Tasks guide.


Powered by txtai

The following applications are powered by txtai.


| Application | Description |
|-------------|-------------|
| rag | Retrieval Augmented Generation (RAG) application |
| ragdata | Build knowledge bases for RAG |
| paperai | Semantic search and workflows for medical/scientific papers |
| annotateai | Automatically annotate papers with LLMs |

In addition to this list, there are also many other open-source projects, published research and closed proprietary/commercial projects that have built on txtai in production.

Further Reading


Documentation

Full documentation on txtai is available, including configuration settings for embeddings, pipelines, workflows and the API, along with a FAQ covering common questions and issues.

Contributing

For those who would like to contribute to txtai, please see this guide.


hybrid server

The server is able to function both locally and remotely, depending on the configuration or use case.

txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows. All functionality can be served via its API, and the API supports MCP.

Docs: https://neuml.github.io/txtai/api/mcp/
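Per the linked documentation, MCP support is enabled through the API configuration. A minimal sketch follows; the `mcp` flag and its behavior are assumptions to verify against the current docs:

```yaml
# app.yml - assumes the mcp flag exposes an MCP endpoint alongside the web API
mcp: True

embeddings:
  path: sentence-transformers/all-MiniLM-L6-v2
  content: True
```

The API is then started the same way as the plain web API: `CONFIG=app.yml uvicorn "txtai.api:app"`.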

1. Use Cases
    1. Semantic Search
    2. LLM Orchestration
    3. Language Model Workflows
2. Installation
3. Model guide
4. Powered by txtai
5. Further Reading
    1. Documentation
    2. Contributing

MCP directory API

We provide all the information about MCP servers via our MCP API.

```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/neuml/txtai'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.