Data Planning Agent

by opendedup
.env.example
# =============================================================================
# Data Planning Agent - Environment Variables Example
# =============================================================================
# Copy this file to .env and fill in your actual values
# NEVER commit the .env file to git - it contains sensitive information!
# =============================================================================

# -----------------------------------------------------------------------------
# Gemini Configuration
# -----------------------------------------------------------------------------
# Gemini API Key for conversational AI
GEMINI_API_KEY=your-gemini-api-key-here

# Gemini model to use (gemini-2.5-pro recommended for planning tasks)
GEMINI_MODEL=gemini-2.5-pro

# -----------------------------------------------------------------------------
# Output Configuration
# -----------------------------------------------------------------------------
# Default output directory for generated Data PRPs
# Supports local paths: ./output or /absolute/path
# Supports GCS paths: gs://your-bucket/planning-sessions
OUTPUT_DIR=./output

# -----------------------------------------------------------------------------
# MCP Service Configuration
# -----------------------------------------------------------------------------
# MCP server name
MCP_SERVER_NAME=data-planning-agent

# MCP server version
MCP_SERVER_VERSION=1.0.0

# MCP transport mode (stdio or http)
# - stdio: For local development and subprocess communication (default)
# - http: For containerized deployment and remote connections
MCP_TRANSPORT=stdio

# Host address for HTTP server (only used when MCP_TRANSPORT=http)
# Use 0.0.0.0 in containers to accept connections from any interface
MCP_HOST=0.0.0.0

# Port for MCP HTTP service (only used when MCP_TRANSPORT=http)
MCP_PORT=8080

# -----------------------------------------------------------------------------
# Optional Configuration
# -----------------------------------------------------------------------------
# Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
LOG_LEVEL=INFO

# Maximum conversation turns before forcing completion (safety limit)
MAX_CONVERSATION_TURNS=10

# -----------------------------------------------------------------------------
# Context Configuration (Optional)
# -----------------------------------------------------------------------------
# Directory containing organizational context markdown files
# Supports local paths: ./context or /absolute/path/to/context
# Supports GCS paths: gs://your-bucket/context/
# Context files are prepended to all AI prompts to customize agent behavior
# CONTEXT_DIR=./context

# -----------------------------------------------------------------------------
# Vertex AI Search Datastore Configuration (for grounding)
# -----------------------------------------------------------------------------
# GCP project ID where the Vertex AI Search datastore is located
# This datastore should contain the data catalog from data-discovery-agent
VERTEX_PROJECT_ID=your-gcp-project-id

# Location of the Vertex AI Search datastore (typically 'global')
VERTEX_DATASTORE_LOCATION=global

# Vertex AI Search datastore ID for data grounding
# This enables Gemini to access the data catalog for context-aware planning
# The datastore should be created by the data-discovery-agent
VERTEX_DATASTORE_ID=data-discovery-metadata

# -----------------------------------------------------------------------------
# Search Fan-out Configuration
# -----------------------------------------------------------------------------
# Enable intelligent search fan-out when no direct matches are found
# When enabled, the system generates related queries to broaden the search
ENABLE_SEARCH_FANOUT=true

# Number of related queries to generate for fan-out search
# Recommended: 3-5 queries for good coverage without being too slow
SEARCH_FANOUT_COUNT=4

# -----------------------------------------------------------------------------
# Question Reflection Configuration
# -----------------------------------------------------------------------------
# Enable question reflection/self-correction for higher quality questions
# When enabled, the AI reviews and refines its own questions before presenting them
# This helps eliminate redundant questions and ensures focus on product structure
# Recommendation: Keep enabled (true) for production, disable for faster testing
ENABLE_QUESTION_REFLECTION=true
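After copying the file to .env, the agent reads these variables at startup. As a minimal sketch of how such settings might be consumed, the snippet below loads the documented variables with their defaults and validates the transport mode. The `load_settings` helper is hypothetical (not part of the agent's actual code) and uses only `os.getenv`, the assumption being that the variables are already exported or loaded by a tool such as python-dotenv.

```python
import os

def load_settings() -> dict:
    """Hypothetical helper: read the documented env vars with their defaults."""
    transport = os.getenv("MCP_TRANSPORT", "stdio")
    if transport not in ("stdio", "http"):
        raise ValueError(f"MCP_TRANSPORT must be 'stdio' or 'http', got {transport!r}")

    output_dir = os.getenv("OUTPUT_DIR", "./output")
    return {
        "gemini_model": os.getenv("GEMINI_MODEL", "gemini-2.5-pro"),
        "output_dir": output_dir,
        # GCS paths start with gs://; anything else is treated as a local path
        "output_is_gcs": output_dir.startswith("gs://"),
        "transport": transport,
        "port": int(os.getenv("MCP_PORT", "8080")),
        "fanout_enabled": os.getenv("ENABLE_SEARCH_FANOUT", "true") == "true",
        "fanout_count": int(os.getenv("SEARCH_FANOUT_COUNT", "4")),
    }

settings = load_settings()
```

With no variables set, `load_settings()` falls back to the defaults shown in the file above (stdio transport, port 8080, local ./output directory).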
