initialize_mcts
Initializes Monte Carlo Tree Search (MCTS) analysis of a question using a specified LLM provider and model, supporting structured conversational exploration.
Instructions
Initialize MCTS for a question
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| chat_id | No | Unique identifier for this conversation | default |
| model | No | Model name (optional) | |
| provider | No | LLM provider | gemini |
| question | Yes | The question to analyze | |
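A caller only needs to supply `question`; the other fields fall back to the defaults in the table above. A minimal sketch of how those defaults apply (the `apply_defaults` helper is hypothetical, not part of the server):

```python
# Hypothetical helper illustrating the schema defaults from the table above.
SCHEMA_DEFAULTS = {"chat_id": "default", "provider": "gemini"}

def apply_defaults(arguments: dict) -> dict:
    """Merge caller-supplied arguments over the schema defaults."""
    if "question" not in arguments:
        raise ValueError("'question' is required")
    return {**SCHEMA_DEFAULTS, **arguments}

args = apply_defaults({"question": "What limits MCTS convergence?"})
print(args["provider"], args["chat_id"])  # → gemini default
```

Note that `model` has no schema default; the handler itself substitutes `gemini-2.0-flash-lite` when it is omitted.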
Implementation Reference
- src/mcts_mcp_server/server.py:172-236 (handler): The primary handler for the `initialize_mcts` tool. It validates the input parameters, checks for the required Gemini API key, updates the global `server_state` dictionary with the provided question, chat_id, provider, and model, resets the MCTS progress counters, sets the `initialized` flag to `True`, logs the initialization, and returns a status dictionary with configuration details.

```python
async def initialize_mcts(
    question: str,
    chat_id: str = "default",
    provider: str = "gemini",
    model: str | None = None
) -> dict[str, Any]:
    """
    Initialize MCTS for a question.

    Args:
        question: The question or topic to analyze
        chat_id: Unique identifier for this conversation session
        provider: LLM provider to use (currently only 'gemini' supported)
        model: Specific model name to use (optional, defaults to gemini-2.0-flash-lite)

    Returns:
        Dict containing initialization status, configuration, and any error messages

    Raises:
        Exception: If initialization fails due to missing API key or other errors
    """
    try:
        # Validate inputs
        if not question.strip():
            return {"error": "Question cannot be empty", "status": "error"}

        if provider.lower() != "gemini":
            return {"error": "Only 'gemini' provider is currently supported", "status": "error"}

        # Check if API key is available
        api_key = os.getenv("GEMINI_API_KEY") or os.getenv("GOOGLE_API_KEY")
        if not api_key:
            return {
                "error": "GEMINI_API_KEY or GOOGLE_API_KEY environment variable required",
                "status": "error",
                "setup_help": "Set your API key with: export GEMINI_API_KEY='your-key-here'"
            }

        # Update state
        server_state.update({
            "current_question": question,
            "chat_id": chat_id,
            "provider": provider.lower(),
            "model": model or "gemini-2.0-flash-lite",
            "iterations_completed": 0,
            "best_score": 0.0,
            "best_analysis": "",
            "initialized": True
        })

        logger.info(f"Initialized MCTS for question: {question[:50]}...")

        return {
            "status": "initialized",
            "question": question,
            "chat_id": chat_id,
            "provider": server_state["provider"],
            "model": server_state["model"],
            "message": "MCTS initialized successfully. Use run_mcts_search to begin analysis."
        }
    except Exception as e:
        logger.error(f"Error initializing MCTS: {e}")
        return {"error": f"Initialization failed: {e!s}", "status": "error"}
```
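The early-exit order in the handler matters: a blank question is rejected first, then an unsupported provider, then a missing API key, and only then is `server_state` touched. A self-contained sketch of that validation flow (`validate_init` is a hypothetical stand-in for the real handler's checks, omitting the state update and logging):

```python
import asyncio
import os

async def validate_init(question: str, provider: str = "gemini") -> dict:
    """Mirror the handler's validation order: question, then provider, then API key."""
    if not question.strip():
        return {"error": "Question cannot be empty", "status": "error"}
    if provider.lower() != "gemini":
        return {"error": "Only 'gemini' provider is currently supported", "status": "error"}
    if not (os.getenv("GEMINI_API_KEY") or os.getenv("GOOGLE_API_KEY")):
        return {"error": "GEMINI_API_KEY or GOOGLE_API_KEY environment variable required",
                "status": "error"}
    return {"status": "initialized"}

print(asyncio.run(validate_init("", "gemini")))      # blank question -> error dict
print(asyncio.run(validate_init("Why?", "openai")))  # unsupported provider -> error dict
```

Because every failure path returns an error dictionary rather than raising, a client can branch on the `status` key without wrapping the call in a try/except.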
- src/mcts_mcp_server/server.py:85-98 (registration): MCP tool registration within the `list_tools` handler. Defines the tool name, description, and input schema, with `question` required and `chat_id`, `provider`, and `model` optional.

```python
types.Tool(
    name="initialize_mcts",
    description="Initialize MCTS for a question",
    inputSchema={
        "type": "object",
        "properties": {
            "question": {"type": "string", "description": "The question to analyze"},
            "chat_id": {"type": "string", "description": "Unique identifier for this conversation", "default": "default"},
            "provider": {"type": "string", "description": "LLM provider", "default": "gemini"},
            "model": {"type": "string", "description": "Model name (optional)"}
        },
        "required": ["question"]
    }
),
```
- src/mcts_mcp_server/server.py:88-97 (schema): Input schema definition for the `initialize_mcts` tool, outlining the expected JSON object structure with its properties and requirements.

```python
inputSchema={
    "type": "object",
    "properties": {
        "question": {"type": "string", "description": "The question to analyze"},
        "chat_id": {"type": "string", "description": "Unique identifier for this conversation", "default": "default"},
        "provider": {"type": "string", "description": "LLM provider", "default": "gemini"},
        "model": {"type": "string", "description": "Model name (optional)"}
    },
    "required": ["question"]
}
```
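Since the schema is plain JSON, its `required` and `type` rules can be enforced with the standard library alone. A sketch of such a check (the `check_arguments` helper is hypothetical; a real server would typically lean on the MCP SDK or a JSON Schema library instead):

```python
# The schema below copies the tool's inputSchema, minus the description strings.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "question": {"type": "string"},
        "chat_id": {"type": "string", "default": "default"},
        "provider": {"type": "string", "default": "gemini"},
        "model": {"type": "string"},
    },
    "required": ["question"],
}

def check_arguments(args: dict, schema: dict = INPUT_SCHEMA) -> list[str]:
    """Return a list of validation errors; an empty list means the call is valid."""
    errors = [f"missing required field: {name}"
              for name in schema["required"] if name not in args]
    for name, value in args.items():
        prop = schema["properties"].get(name)
        if prop is None:
            errors.append(f"unexpected field: {name}")
        elif prop["type"] == "string" and not isinstance(value, str):
            errors.append(f"{name} must be a string")
    return errors

print(check_arguments({"question": "How does UCT balance exploration?"}))  # → []
```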