
read_live_algorithm

Retrieve live algorithm statistics, runtime data, and performance details from QuantConnect to monitor and analyze trading strategies in real-time.

Instructions

Read comprehensive live algorithm statistics, runtime data, and details.

Args:
- project_id: ID of the project with the live algorithm
- deploy_id: Optional deploy ID for a specific algorithm (omit to get the latest)

Returns: Dictionary containing detailed live algorithm statistics, runtime data, charts, and files
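The two arguments map directly onto the JSON body of the underlying API call. A minimal sketch of that mapping, assuming a hypothetical helper name (`build_live_read_payload` is not part of the tool's actual code):

```python
from typing import Any, Dict, Optional

def build_live_read_payload(project_id: int, deploy_id: Optional[str] = None) -> Dict[str, Any]:
    """Mirror the Args above: projectId is required, deployId is optional."""
    payload: Dict[str, Any] = {"projectId": project_id}
    if deploy_id:  # omit to read the latest deployment
        payload["deployId"] = deploy_id
    return payload

print(build_live_read_payload(12345))            # latest deployment for the project
print(build_live_read_payload(12345, "L-6f2a"))  # a specific (illustrative) deploy ID
```

The deploy ID shown is invented for illustration; real IDs are assigned by QuantConnect at deployment time.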

Input Schema

Name        Required  Description  Default
project_id  Yes
deploy_id   No

Output Schema

(No arguments)

Implementation Reference

  • The primary handler function for the 'read_live_algorithm' MCP tool. It authenticates with QuantConnect, makes a POST request to the 'live/read' API endpoint, parses the response, and returns comprehensive live algorithm details including status, runtime statistics, charts, and files.
    @mcp.tool()
    async def read_live_algorithm(
        project_id: int, deploy_id: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Read comprehensive live algorithm statistics, runtime data, and details.
    
        Args:
            project_id: ID of the project with the live algorithm
            deploy_id: Optional deploy ID for specific algorithm (omit to get latest)
    
        Returns:
            Dictionary containing detailed live algorithm statistics, runtime data, charts, and files
        """
        auth = get_auth_instance()
        if auth is None:
            return {
                "status": "error",
                "error": "QuantConnect authentication not configured. Use configure_auth() first.",
            }
    
        try:
            # Prepare request data
            request_data = {"projectId": project_id}
            if deploy_id:
                request_data["deployId"] = deploy_id
    
            # Make API request
            response = await auth.make_authenticated_request(
                endpoint="live/read", method="POST", json=request_data
            )
    
            # Parse response
            if response.status_code == 200:
                data = response.json()
    
                if data.get("success", False):
                    # Extract all the detailed information from LiveAlgorithmResults
                    deploy_id = data.get("deployId")
                    status = data.get("status")
                    message = data.get("message")
                    clone_id = data.get("cloneId")
                    launched = data.get("launched")
                    stopped = data.get("stopped")
                    brokerage = data.get("brokerage")
                    security_types = data.get("securityTypes")
                    project_name = data.get("projectName")
                    data_center = data.get("dataCenter")
                    public = data.get("public")
                    files = data.get("files", [])
                    runtime_statistics = data.get("runtimeStatistics", {})
                    charts = data.get("charts", {})
                    
                    return {
                        "status": "success",
                        "project_id": project_id,
                        "deploy_id": deploy_id,
                        "live_status": status,
                        "message": message,
                        "clone_id": clone_id,
                        "launched": launched,
                        "stopped": stopped,
                        "brokerage": brokerage,
                        "security_types": security_types,
                        "project_name": project_name,
                        "data_center": data_center,
                        "public": public,
                        "files": files,
                        "runtime_statistics": runtime_statistics,
                        "charts": charts,
                        "total_files": len(files),
                        "has_runtime_stats": bool(runtime_statistics),
                        "response": f"Successfully read live algorithm {deploy_id} for project {project_id}",
                    }
                else:
                    # API returned success=false
                    errors = data.get("errors", ["Unknown error"])
                    return {
                        "status": "error",
                        "error": "Failed to read live algorithm",
                        "details": errors,
                        "project_id": project_id,
                        "deploy_id": deploy_id,
                    }
    
            elif response.status_code == 401:
                return {
                    "status": "error",
                    "error": "Authentication failed. Check your credentials and ensure they haven't expired.",
                }
    
            else:
                return {
                    "status": "error",
                    "error": f"API request failed with status {response.status_code}",
                    "response_text": (
                        response.text[:500]
                        if hasattr(response, "text")
                        else "No response text"
                    ),
                }
    
        except Exception as e:
            return {
                "status": "error",
                "error": f"Failed to read live algorithm: {str(e)}",
                "project_id": project_id,
                "deploy_id": deploy_id,
            }
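Every branch of the handler above returns a dictionary carrying a "status" key, so a caller can branch on that contract. A hedged sketch of such a consumer (`summarize_result` is an illustrative name, not part of the server):

```python
from typing import Any, Dict

def summarize_result(result: Dict[str, Any]) -> str:
    """Turn a read_live_algorithm result dict into a one-line summary."""
    if result.get("status") == "error":
        # Error payloads carry an 'error' message and sometimes 'details'.
        return f"error: {result.get('error', 'unknown')}"
    stats = result.get("runtime_statistics", {})
    return (
        f"{result.get('project_name', 'unknown project')} is "
        f"{result.get('live_status', 'unknown')} with {len(stats)} runtime statistics"
    )

ok = {"status": "success", "project_name": "MyAlgo", "live_status": "Running",
      "runtime_statistics": {"Equity": "$100k", "Unrealized": "$0"}}
print(summarize_result(ok))  # MyAlgo is Running with 2 runtime statistics
```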
  • Registers the live_tools module (containing read_live_algorithm) by calling register_live_tools(mcp) during server initialization in the main entrypoint.
    safe_print("🔧 Registering QuantConnect tools...")
    register_auth_tools(mcp)
    register_project_tools(mcp)
    register_file_tools(mcp)
    register_backtest_tools(mcp)
    register_live_tools(mcp)
    register_optimization_tools(mcp)
  • The same registration call for the live_tools module in server.py, executed during server setup.
    safe_print("🔧 Registering QuantConnect tools...")
    register_auth_tools(mcp)
    register_project_tools(mcp)
    register_file_tools(mcp)
    register_backtest_tools(mcp)
    register_live_tools(mcp)
    register_optimization_tools(mcp)
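Both registration snippets follow one pattern: each register_* function attaches its tools to the shared mcp server instance via the @mcp.tool() decorator. A minimal sketch of that pattern, with an invented StubMCP class standing in for the real server:

```python
class StubMCP:
    """Stand-in for the real MCP server: records tools registered via @tool()."""
    def __init__(self):
        self.tools = {}

    def tool(self):
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator

def register_live_tools(mcp):
    @mcp.tool()
    async def read_live_algorithm(project_id: int, deploy_id=None):
        ...  # full handler body shown in the Implementation Reference above

mcp = StubMCP()
register_live_tools(mcp)
print(sorted(mcp.tools))  # ['read_live_algorithm']
```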
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states it's a read operation, which is clear, but lacks details on permissions, rate limits, error conditions, or whether it requires the algorithm to be active. This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. The Args and Returns sections are structured but could be better integrated. Every sentence adds value, though the wording is slightly verbose: "comprehensive" and the repeated "live algorithm" could be trimmed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, the absence of annotations, and the presence of an output schema (which covers return values), the description is partially complete. It explains parameters and returns at a high level but lacks behavioral context and usage guidance, leaving gaps that hinder effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains that 'project_id' identifies the project and 'deploy_id' is optional for a specific algorithm (omit for latest), adding meaningful context beyond the schema's basic types. However, it doesn't clarify data formats or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool reads "comprehensive live algorithm statistics, runtime data, and details," a specific verb+resource combination. It is implicitly distinguishable from siblings such as 'list_live_algorithms' (which likely lists multiple deployments) and 'read_live_chart' (which focuses on chart data only), though the description never states these distinctions explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose it over 'list_live_algorithms' for overviews or 'read_live_chart' for specific data, nor does it specify prerequisites like needing a running algorithm.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/taylorwilsdon/quantconnect-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server