All information about MCP servers is available via our MCP API. For example:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/AdamGustavsson/ClaimsMCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.
env.example
# API Keys (optional if client supports MCP sampling)
OPENAI_API_KEY="your-openai-api-key-here"
# LLM Configuration
LLM_MODEL="gpt-4o-mini" # Model that supports structured outputs (for OpenAI API fallback)
# MCP Sampling Configuration
SAMPLING_MAX_TOKENS_SELECTION="500" # Max tokens for selection stage
SAMPLING_MAX_TOKENS_DISAMBIGUATION="400" # Max tokens for disambiguation stage
SAMPLING_MAX_TOKENS_DECOMPOSITION="800" # Max tokens for decomposition stage
SAMPLING_MAX_RETRIES="2" # Max retries for malformed JSON responses
# Logging Configuration
LOG_LLM_CALLS="true" # Set to "false" to disable logging
LOG_OUTPUT="stderr" # "stderr" or "file" - where to send logs
LOG_FILE="claimify_llm.log" # Used only if LOG_OUTPUT="file"
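To illustrate how these variables might be consumed, here is a minimal Python sketch that reads each setting from the environment with the same defaults documented in env.example. The variable names come from the file above; the `load_claimify_config` helper itself is hypothetical and not part of the ClaimsMCP codebase.

```python
import os


def load_claimify_config() -> dict:
    """Hypothetical loader: reads the env.example settings with
    the defaults documented in that file."""
    return {
        # Optional when the MCP client supports sampling
        "openai_api_key": os.getenv("OPENAI_API_KEY"),
        # Fallback model for direct OpenAI API calls
        "llm_model": os.getenv("LLM_MODEL", "gpt-4o-mini"),
        # Per-stage token budgets for MCP sampling
        "max_tokens": {
            "selection": int(os.getenv("SAMPLING_MAX_TOKENS_SELECTION", "500")),
            "disambiguation": int(os.getenv("SAMPLING_MAX_TOKENS_DISAMBIGUATION", "400")),
            "decomposition": int(os.getenv("SAMPLING_MAX_TOKENS_DECOMPOSITION", "800")),
        },
        # Retries for malformed JSON responses
        "max_retries": int(os.getenv("SAMPLING_MAX_RETRIES", "2")),
        # Logging behavior
        "log_llm_calls": os.getenv("LOG_LLM_CALLS", "true").lower() == "true",
        "log_output": os.getenv("LOG_OUTPUT", "stderr"),  # "stderr" or "file"
        "log_file": os.getenv("LOG_FILE", "claimify_llm.log"),
    }
```

Calling `load_claimify_config()` with none of these variables set yields the documented defaults, so the server can start without an .env file as long as the client supports MCP sampling.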