
TxtAI MCP Server

by neuml
build.yml
# GitHub Actions build workflow
name: build

on: ["push", "pull_request"]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]

    timeout-minutes: 60

    steps:
      - name: Checkout code
        uses: actions/checkout@v5

      - name: Install Python
        uses: actions/setup-python@v6
        with:
          python-version: "3.10"

      - name: Install Java
        uses: actions/setup-java@v5
        with:
          distribution: "zulu"
          java-version: 21

      - name: Install dependencies - Linux
        run: sudo apt-get update && sudo apt-get install libportaudio2 libsndfile1
        if: matrix.os == 'ubuntu-latest'

      - name: Install dependencies - macOS
        run: |
          echo "OMP_NUM_THREADS=1" >> $GITHUB_ENV
          echo "PYTORCH_MPS_DISABLE=1" >> $GITHUB_ENV
          echo "LLAMA_NO_METAL=1" >> $GITHUB_ENV
          echo "TIKA_STARTUP_SLEEP=30" >> $GITHUB_ENV
          echo "TIKA_STARTUP_MAX_RETRY=10" >> $GITHUB_ENV
          brew install portaudio
          sudo xcode-select -s "/Applications/Xcode_16.app"
        if: matrix.os == 'macos-latest'

      - name: Install dependencies - Windows
        run: |
          "PYTHONIOENCODING=utf-8" >> $env:GITHUB_ENV
          choco install wget
        if: matrix.os == 'windows-latest'

      - name: Build
        run: |
          pip install -U wheel
          pip install .[all,dev]
          pip cache purge
          python -c "import nltk; nltk.download(['punkt', 'punkt_tab', 'averaged_perceptron_tagger_eng'])"
          python --version
          make data coverage
        env:
          HF_HUB_ETAG_TIMEOUT: 100
          HF_HUB_DOWNLOAD_TIMEOUT: 100
          HF_XET_CHUNK_CACHE_SIZE_BYTES: 0

      - uses: pre-commit/action@v3.0.1
        if: matrix.os == 'ubuntu-latest'

      - name: Test Coverage
        run: coveralls --service=github
        if: matrix.os == 'ubuntu-latest'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

MCP directory API

We provide all the information about MCP servers via our MCP directory API. For example, the txtai server entry can be fetched with:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/neuml/txtai'
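
The same endpoint can also be queried programmatically. The sketch below is a minimal Python example, assuming the endpoint returns a JSON document; the exact fields in the response are not documented here, so the output is simply pretty-printed rather than parsed into specific attributes.

import json
from urllib.request import urlopen

# URL taken from the curl example above; the JSON structure of the
# response is an assumption and may differ from the live API.
URL = "https://glama.ai/api/mcp/v1/servers/neuml/txtai"

with urlopen(URL) as response:
    data = json.load(response)

# Pretty-print whatever metadata the directory returns for this server.
print(json.dumps(data, indent=2))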

If you have feedback or need assistance with the MCP directory API, please join our Discord server.