ZenML MCP Server

Official
by zenml-io
docker-publish.yml (1.84 kB)
name: Docker Publish

on:
  push:
    branches: [main]
  workflow_dispatch:

env:
  IMAGE_NAME: docker.io/zenmldocker/mcp-zenml

permissions:
  contents: read

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    concurrency:
      group: docker-publish-${{ github.ref }}
      cancel-in-progress: true
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Extract metadata (tags, labels)
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.IMAGE_NAME }}
          tags: |
            type=raw,value=latest,enable={{is_default_branch}}
            type=sha,format=short,prefix=sha-
          labels: |
            org.opencontainers.image.title=ZenML MCP Server
            org.opencontainers.image.description=Model Context Protocol server for ZenML
            org.opencontainers.image.source=https://github.com/${{ github.repository }}
            org.opencontainers.image.licenses=MIT
            io.modelcontextprotocol.server.name=io.github.zenml-io/mcp-zenml

      - name: Build and push
        id: build
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./Dockerfile
          push: true
          platforms: linux/amd64,linux/arm64
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      # metadata-action does not expose a digest output; the digest comes
      # from the build-push step, so reference that step's id instead.
      - name: Image digest
        run: echo "Digest -> ${{ steps.build.outputs.digest }}"
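The two `tags` rules in the metadata step produce a floating `latest` tag on the default branch plus an immutable `sha-` tag for every push. A minimal Python sketch of that tagging logic (an assumed simplification; the real `docker/metadata-action` supports many more tag types):

```python
def derive_tags(ref: str, sha: str, default_branch: str = "main") -> list[str]:
    """Mimic the two configured tag rules for a given git ref and commit SHA."""
    tags = []
    # type=raw,value=latest,enable={{is_default_branch}}
    if ref == f"refs/heads/{default_branch}":
        tags.append("latest")
    # type=sha,format=short,prefix=sha-  (short SHA is the first 7 characters)
    tags.append("sha-" + sha[:7])
    return tags


print(derive_tags("refs/heads/main", "0123456789abcdef"))
# → ['latest', 'sha-0123456']
print(derive_tags("refs/heads/feature-x", "0123456789abcdef"))
# → ['sha-0123456']
```

Because the build step pushes `platforms: linux/amd64,linux/arm64`, each of these tags points at a multi-arch manifest list rather than a single image.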

MCP directory API

We provide metadata for every server in the MCP directory via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/zenml-io/mcp-zenml'
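The same lookup can be done from Python with the standard library. The endpoint URL is taken from the curl command above; the shape of the returned JSON is an assumption, so the helper simply returns whatever the API sends back:

```python
import json
import urllib.request

# Directory endpoint for this server (owner/repo slug is part of the path).
URL = "https://glama.ai/api/mcp/v1/servers/zenml-io/mcp-zenml"


def fetch_server_info(url: str = URL) -> dict:
    """Fetch the directory entry for an MCP server and parse it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


# Usage (requires network access):
#   info = fetch_server_info()
#   print(json.dumps(info, indent=2))
```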

If you have feedback or need assistance with the MCP directory API, please join our Discord server.