Dockerfile
# Set base image
ARG BASE_IMAGE=neuml/txtai-cpu
FROM $BASE_IMAGE

# Application script to copy into image
ARG APP=api.py

# Install Lambda Runtime Interface Client and Mangum ASGI bindings
RUN pip install awslambdaric mangum

# Copy configuration
COPY config.yml .

# Run local API instance to cache models in container
RUN python -c "from txtai.api import API; API('config.yml', False)"

# Copy application
COPY $APP ./app.py

# Start runtime client using default application handler
ENV CONFIG "config.yml"
ENTRYPOINT ["python", "-m", "awslambdaric"]
CMD ["app.handler"]
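
The CMD above tells awslambdaric to invoke app.handler, so the copied api.py must expose a handler object. A minimal sketch of such a script, using Mangum to adapt an ASGI application to Lambda's event/context interface; the txtai imports are assumptions and may differ across txtai versions:

# api.py - Lambda entry point (sketch)
from mangum import Mangum

# Assumption: txtai.api exposes its FastAPI instance as `app` and an
# initializer `start()` that reads the CONFIG environment variable
from txtai.api import app, start

# Load the API configuration (config.yml, via the CONFIG env var set above)
start()

# Mangum wraps the ASGI app; awslambdaric calls this as app.handler
handler = Mangum(app)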
MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/neuml/txtai'
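
For programmatic use, the same request can be made from Python. A minimal sketch, assuming the endpoint returns a JSON document:

import json
import urllib.request

# Fetch the directory entry for the txtai MCP server
URL = "https://glama.ai/api/mcp/v1/servers/neuml/txtai"

with urllib.request.urlopen(URL) as response:
    server = json.load(response)

print(json.dumps(server, indent=2))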

If you have feedback or need assistance with the MCP directory API, please join our Discord server.