
launch_optuna_dashboard

Launch the Optuna dashboard to visualize and analyze hyperparameter optimization results, enabling interactive exploration via the Optuna MCP Server.

Instructions

Launch the Optuna dashboard

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| port | No       |             | 58080   |
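Based on the schema above, a tool call that overrides the optional `port` might carry a payload like this (illustrative only; the exact envelope depends on the MCP client):

```json
{
  "name": "launch_optuna_dashboard",
  "arguments": { "port": 58080 }
}
```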

Implementation Reference

  • The handler function for the 'launch_optuna_dashboard' tool. It checks if the dashboard is already running, determines the storage from the study or class attribute, starts the optuna_dashboard.run_server in a daemon thread, stores the thread and port, and returns the dashboard URL.
    ```python
    @mcp.tool(structured_output=True)
    def launch_optuna_dashboard(port: int = 58080) -> str:
        """Launch the Optuna dashboard"""
        storage: str | optuna.storages.BaseStorage | None = None
        if mcp.dashboard_thread_port is not None:
            return f"Optuna dashboard is already running. Open http://127.0.0.1:{mcp.dashboard_thread_port[1]}."
        if mcp.study is not None:
            storage = mcp.study._storage
        elif mcp.storage is not None:
            storage = mcp.storage
        else:
            raise McpError(
                ErrorData(
                    code=INTERNAL_ERROR,
                    message="No study has been created. Please create a study first.",
                )
            )

        def runner(storage: optuna.storages.BaseStorage | str, port: int) -> None:
            try:
                optuna_dashboard.run_server(storage=storage, host="127.0.0.1", port=port)
            except Exception as e:
                print(f"Error starting the dashboard: {e}", file=sys.stderr)
                sys.exit(1)

        # TODO(y0z): Consider better implementation
        thread = threading.Thread(
            target=runner,
            args=(storage, port),
            daemon=True,
        )
        thread.start()
        mcp.dashboard_thread_port = (thread, port)
        return f"Optuna dashboard is running at http://127.0.0.1:{port}"
    ```
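The daemon-thread pattern the handler uses can be sketched in isolation. This is a minimal stand-alone illustration, not the server's actual code: the stdlib `http.server` stands in for `optuna_dashboard.run_server`, and `launch_in_daemon_thread` is a hypothetical helper mirroring how the tool stores `(thread, port)` in `mcp.dashboard_thread_port`:

```python
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler


class _Handler(BaseHTTPRequestHandler):
    """Trivial stand-in for the dashboard server."""

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        # Silence per-request logging to keep output clean.
        pass


def launch_in_daemon_thread(port: int = 0) -> tuple[threading.Thread, int]:
    """Start a server in a daemon thread and return (thread, port).

    port=0 asks the OS for a free ephemeral port; the real tool
    defaults to a fixed port (58080) instead.
    """
    server = HTTPServer(("127.0.0.1", port), _Handler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    return thread, server.server_address[1]


thread, port = launch_in_daemon_thread()
print(f"Dashboard sketch running at http://127.0.0.1:{port}")
```

Because the thread is a daemon, it does not block interpreter shutdown, which is why the handler can return the URL immediately while the server keeps serving in the background.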
  • The call to register_tools(mcp) in main(), which registers all tools including 'launch_optuna_dashboard'.
    ```python
    mcp = register_tools(mcp)
    ```
  • Test function that verifies the 'launch_optuna_dashboard' tool works correctly, including mocking the dashboard server start.
    ```python
    @pytest.mark.anyio
    async def test_launch_optuna_dashboard(
        mcp: OptunaMCP, port: int | None, expected_port: int
    ) -> None:
        await mcp.call_tool("create_study", arguments={"study_name": "test_study"})
        assert mcp.study is not None
        mcp.study.optimize(
            lambda trial: trial.suggest_float("x", 0.0, 1.0)
            + trial.suggest_float("y", 0.0, 1.0),
            n_trials=10,
        )
        with patch.object(optuna_dashboard, "run_server", return_value=None):
            arguments = {} if port is None else {"port": port}
            result = await mcp.call_tool("launch_optuna_dashboard", arguments=arguments)
            assert isinstance(result, Sequence)
            assert len(result) == 2
            assert isinstance(result[0], list)
            assert isinstance(result[0][0], TextContent)
            assert result[0][0].text.endswith(f":{expected_port}")
            assert isinstance(result[1], dict)
            assert result[1]["result"].endswith(f":{expected_port}")
        assert mcp.dashboard_thread_port is not None
        assert mcp.dashboard_thread_port[0] is not None
        assert mcp.dashboard_thread_port[1] == expected_port
    ```
