---
comments: true
---
# PP-OCR Deployment
## Paddle Deployment Introduction
Paddle provides a variety of deployment schemes to meet the requirements of different scenarios. Please choose the one that fits your situation:

PP-OCR supports multiple deployment schemes. Click a link below for the corresponding tutorial.
- [Python Inference](./python_infer.en.md)
- [C++ Inference](./cpp_infer.en.md)
- [Serving (Python/C++)](./paddle_server.en.md)
- [Paddle-Lite (ARM CPU/OpenCL ARM GPU)](../../legacy/lite.en.md)
- [Paddle2ONNX](../../legacy/paddle2onnx.en.md)
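For a quick first test before working through the tutorials above, the `paddleocr` pip package wraps the Python inference pipeline in a few lines. This is a hedged sketch: the package name and the `PaddleOCR`/`ocr` API are from the pip distribution, not from the linked tutorials, which cover the lower-level `tools/infer` scripts.

```python
# Minimal sketch of Python inference via the `paddleocr` pip package
# (assumption: `pip install paddleocr paddlepaddle` has been run).
try:
    from paddleocr import PaddleOCR
except ImportError:
    PaddleOCR = None  # package not installed; see the Python Inference tutorial


def run_ocr(image_path: str):
    """Return a list of (text, confidence) pairs, or None if paddleocr is unavailable."""
    if PaddleOCR is None:
        return None
    # Models are downloaded automatically on first use.
    engine = PaddleOCR(use_angle_cls=True, lang="en")
    result = engine.ocr(image_path, cls=True)
    return [(line[1][0], line[1][1]) for page in result for line in page]
```

For production scenarios (C++, serving, mobile, ONNX export), follow the dedicated tutorials linked above instead.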

If you need deployment tutorials for academic algorithm models other than PP-OCR, please go directly to the main page of the corresponding algorithm: [entrance](../../algorithm/overview.en.md).