
MCP Proxy Server

cd.yaml
name: Publish to container registries

on:
  release:
    types: [published]
  workflow_dispatch:
  push:
    branches:
      - main
    paths:
      - src/**
      - Dockerfile
      - pyproject.toml
  pull_request:
    paths:
      - src/**
      - Dockerfile
      - pyproject.toml

jobs:
  docker-hub:
    name: Push multi-arch Docker image to Docker Hub
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Set up QEMU
        uses: docker/setup-qemu-action@4574d27a4764455b42196d70a065bc6853246a25 # v3.4.0
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@f7ce87c1d6bead3e36075b2ce75da1f6cc28aaca # v3.9.0
      - name: Extract tags and labels for Docker
        id: meta
        uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
        with:
          images: sparfenyuk/mcp-proxy
          tags: |
            type=sha,format=short,prefix=commit-
            type=ref,event=tag
          labels: |
            org.opencontainers.image.authors=Sergey Parfenyuk
            org.opencontainers.image.source=https://github.com/sparfenyuk/mcp-proxy
            org.opencontainers.image.url=https://github.com/sparfenyuk/mcp-proxy
            org.opencontainers.image.title=mcp-proxy
            org.opencontainers.image.description=Connect to MCP servers that run on SSE transport, or expose stdio servers as an SSE server using the MCP Proxy server.
            org.opencontainers.image.licenses=MIT
            org.opencontainers.image.base.name=docker.io/library/python:3.13-alpine
      - name: Log in to Docker Hub
        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
        if: github.event_name != 'pull_request'
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and push Docker image
        uses: docker/build-push-action@ca877d9245402d1537745e0e356eab47c3520991 # v6.13.0
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          annotations: ${{ steps.meta.outputs.annotations }}
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache
      - name: Clean Docker cache
        if: github.event_name != 'pull_request'
        run: |
          docker system prune --force

  ghcr-io:
    name: Push multi-arch Docker image to ghcr.io
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Set up QEMU
        uses: docker/setup-qemu-action@4574d27a4764455b42196d70a065bc6853246a25 # v3.4.0
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@f7ce87c1d6bead3e36075b2ce75da1f6cc28aaca # v3.9.0
      - name: Extract tags and labels for Docker
        id: meta
        uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
        with:
          images: ghcr.io/sparfenyuk/mcp-proxy
          tags: |
            type=sha,format=short,prefix=commit-
            type=ref,event=tag
          labels: |
            org.opencontainers.image.authors=Sergey Parfenyuk
            org.opencontainers.image.source=https://github.com/sparfenyuk/mcp-proxy
            org.opencontainers.image.url=https://github.com/sparfenyuk/mcp-proxy
            org.opencontainers.image.title=mcp-proxy
            org.opencontainers.image.description=Connect to MCP servers that run on SSE transport, or expose stdio servers as an SSE server using the MCP Proxy server.
            org.opencontainers.image.licenses=MIT
            org.opencontainers.image.base.name=docker.io/library/python:3.13-alpine
      - name: Log in to GHCR
        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
        if: github.event_name != 'pull_request'
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@ca877d9245402d1537745e0e356eab47c3520991 # v6.13.0
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          annotations: ${{ steps.meta.outputs.annotations }}
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache
      - name: Clean Docker cache
        if: github.event_name != 'pull_request'
        run: |
          docker system prune --force
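Per the metadata-action configuration above, every push to main produces an image tagged with the short commit SHA (prefixed with commit-), and published releases add a tag matching the Git tag. A minimal usage sketch, where the SHA and the release tag shown are hypothetical placeholders:

# Pull a commit-tagged build from Docker Hub (replace abc1234 with an actual short SHA)
docker pull sparfenyuk/mcp-proxy:commit-abc1234

# Pull a release-tagged build from GitHub Container Registry (replace v1.0.0 with an actual release tag)
docker pull ghcr.io/sparfenyuk/mcp-proxy:v1.0.0

Pull requests run the same build for both registries but skip the login and push steps, so the workflow only validates that the multi-arch image still builds.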

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/sparfenyuk/mcp-proxy'
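The endpoint returns JSON; a minimal sketch for inspecting the response, assuming jq is installed locally:

# Fetch the mcp-proxy server record and pretty-print whatever JSON comes back
curl -s -X GET 'https://glama.ai/api/mcp/v1/servers/sparfenyuk/mcp-proxy' | jq '.'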

If you have feedback or need assistance with the MCP directory API, please join our Discord server.