nim-embed-deploy-commands.txt
#!/bin/bash
set -e

echo "Checking Docker..."
sudo systemctl status docker --no-pager || sudo systemctl start docker

echo "Stopping existing NIM container..."
sudo docker stop nim-embedding 2>/dev/null || true
sudo docker rm nim-embedding 2>/dev/null || true

echo "Starting NVIDIA NIM Embedding Service..."
sudo docker run -d --name nim-embedding --gpus all --restart unless-stopped \
  -p 8080:8000 nvcr.io/nim/nvidia/nv-embedqa-e5-v5:latest \
|| sudo docker run -d --name nim-embedding --restart unless-stopped \
  -p 8080:8000 -e MODEL=nvidia/nv-embedqa-e5-v5 nvcr.io/nvidia/nv-ingest:latest

sleep 15

echo "Checking service..."
sudo docker logs nim-embedding --tail 30
curl -s http://localhost:8080/v1/models || echo "Starting up..."
echo "NIM Embedding Service deployed on port 8080"
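Once the container is up, the service can be queried over HTTP. The sketch below shows a hypothetical embedding request against the deployed endpoint; the `/v1/embeddings` route and the `model`/`input`/`input_type` fields follow NVIDIA's OpenAI-compatible embedding API for NIM, but the exact payload shape should be checked against the model's documentation. The request is only sent if the service is actually reachable.

```shell
#!/bin/bash
# Sketch of an embedding request to the NIM service started above.
# Assumption: the service exposes an OpenAI-compatible /v1/embeddings route
# and accepts an NVIDIA-specific "input_type" field ("query" or "passage").

NIM_URL="http://localhost:8080/v1/embeddings"
PAYLOAD='{"model": "nvidia/nv-embedqa-e5-v5", "input": ["What is GraphRAG?"], "input_type": "query"}'

# Only POST if the service answers on /v1/models; otherwise show what would be sent.
if curl -sf http://localhost:8080/v1/models >/dev/null 2>&1; then
  curl -s -X POST "$NIM_URL" -H "Content-Type: application/json" -d "$PAYLOAD"
else
  echo "Service not reachable; would POST: $PAYLOAD"
fi
```

Passages indexed for retrieval would use `"input_type": "passage"` instead, since nv-embedqa-e5-v5 embeds queries and passages asymmetrically.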

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/isc-tdyar/medical-graphrag-assistant'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.