GPT Researcher MCP Server

by assafelovic
docker-compose.yml
```yaml
version: '3.8'

services:
  gptr-mcp:
    build: .
    ports:
      - "8000:8000"
    environment:
      - MCP_TRANSPORT=sse
      - DOCKER_CONTAINER=true
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - TAVILY_API_KEY=${TAVILY_API_KEY}
      - PYTHONUNBUFFERED=1
    env_file:
      - .env
    volumes:
      # Mount logs directory for persistent logging
      - ./logs:/app/logs
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # Optional: Add a reverse proxy for production
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - gptr-mcp
    restart: unless-stopped
    profiles:
      - production

networks:
  gptr-mcp-net:
    driver: bridge

# To connect to existing n8n network, run:
# docker network connect n8n-mcp-net gptr-mcp
```
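The healthcheck in the compose file above probes `http://localhost:8000/health` with `curl -f`, which exits non-zero for HTTP error statuses. A minimal sketch of those semantics, with a rough worst-case estimate of how long Docker takes to flag the container unhealthy (assuming failures only start counting after the `start_period` grace window, which is a simplification of Docker's actual behavior):

```python
def probe_ok(status_code: int) -> bool:
    """Mirror `curl -f` semantics: the probe succeeds only for
    non-error HTTP statuses (anything below 400)."""
    return status_code < 400


def seconds_until_unhealthy(interval: int = 30, retries: int = 3,
                            start_period: int = 40) -> int:
    """Rough worst-case seconds before the container is marked
    unhealthy: the start_period grace window plus `retries`
    consecutive failed probes, one per interval."""
    return start_period + retries * interval


# With the values in the compose file: 40 + 3 * 30 = 130 seconds.
```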

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/assafelovic/gptr-mcp'
```
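The same request can be made from Python with only the standard library. This is a sketch based on the URL shown above; the endpoint pattern `servers/{owner}/{name}` is inferred from that single example, and the JSON response shape is not documented here, so the result is returned as-is:

```python
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1"


def server_endpoint(owner: str, name: str) -> str:
    """Build the directory-API URL for a server, following the
    pattern of the curl example (inferred, not documented)."""
    return f"{BASE}/servers/{owner}/{name}"


def fetch_server_info(owner: str, name: str) -> dict:
    """GET the server record and decode the JSON body."""
    with urllib.request.urlopen(server_endpoint(owner, name)) as resp:
        return json.load(resp)
```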

If you have feedback or need assistance with the MCP directory API, please join our Discord server.