compose.yaml
services:
  paddleocr-vl-api:
    image: ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddleocr-vl:${API_IMAGE_TAG_SUFFIX}
    container_name: paddleocr-vl-api
    ports:
      - 8080:8080
    depends_on:
      paddleocr-vlm-server:
        condition: service_healthy
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [gpu]
    # TODO: Allow using a regular user
    user: root
    restart: unless-stopped
    environment:
      - VLM_BACKEND=${VLM_BACKEND:-vllm}
    command: /bin/bash -c "paddlex --serve --pipeline /home/paddleocr/pipeline_config_${VLM_BACKEND}.yaml"
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"]
  paddleocr-vlm-server:
    image: ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddleocr-genai-${VLM_BACKEND}-server:${VLM_IMAGE_TAG_SUFFIX}
    container_name: paddleocr-vlm-server
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [gpu]
    # TODO: Allow using a regular user
    user: root
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"]
      start_period: 300s
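This compose file defines two GPU services: paddleocr-vl-api, which serves the PaddleX pipeline on port 8080, and paddleocr-vlm-server, the VLM inference backend the API waits on via its health check. Three variables are expected from the shell or a .env file next to the compose file: API_IMAGE_TAG_SUFFIX and VLM_IMAGE_TAG_SUFFIX select the image tags, and VLM_BACKEND picks both the inference-server image and the pipeline config passed to paddlex. Note that only the API service falls back to vllm (${VLM_BACKEND:-vllm}); the server image reference uses ${VLM_BACKEND} with no default, so set it explicitly. A minimal .env sketch follows; the tag values are placeholders, not tags confirmed to exist in the registry:

    # .env (placeholder values; substitute the tags actually published for your release)
    API_IMAGE_TAG_SUFFIX=latest
    VLM_IMAGE_TAG_SUFFIX=latest
    # vllm is the default baked into the api service; other backends (e.g. sglang)
    # are assumptions here, so check which paddleocr-genai-*-server images exist
    VLM_BACKEND=vllm

With those set, bring the stack up and probe the published API port. The VLM server's 300s start_period gives the model time to load before failed health checks count against the container:

    docker compose up -d
    curl -f http://localhost:8080/health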
