
read_compilation_result

Retrieve compilation results for QuantConnect trading algorithms, including state, logs, and error details to verify code compilation status.

Instructions

Read the result of a compilation job in QuantConnect.

Args:

  • project_id: The ID of the project that was compiled.
  • compile_id: The compile ID returned from compile_project.

Returns: A dictionary containing the compilation result with state, logs, and errors.
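Based on the handler implementation reproduced under Implementation Reference below, a caller can branch on the returned `status` field. A minimal sketch; the field names and the example payload shapes are taken from that implementation, not from an official schema:

```python
def summarize_compile(result: dict) -> str:
    """Return a one-line summary of a read_compilation_result payload.

    Field names ("status", "state", "errors", "warnings") mirror the
    handler implementation shown on this page.
    """
    if result.get("status") == "success":
        return f"Build OK (state={result.get('state')})"
    errors = result.get("errors", [])
    warnings = result.get("warnings", [])
    return f"Build failed: {len(errors)} errors, {len(warnings)} warnings"

# Example payloads shaped like the handler's return values:
ok = {"status": "success", "state": "BuildSuccess", "logs": [], "errors": []}
bad = {"status": "error", "state": "BuildError",
       "errors": ["CS1002: ; expected"], "warnings": []}
```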

Input Schema

Name         Required   Description   Default
project_id   Yes
compile_id   Yes

Output Schema

No fields are defined in the output schema.

Implementation Reference

  • The handler function, decorated with @mcp.tool(), that implements the read_compilation_result tool. It fetches compilation results from the QuantConnect API, handles authentication, scans the logs for warnings, and returns a structured result.

```python
from typing import Any, Dict

# mcp (the server instance) and get_auth_instance are defined elsewhere
# in the server module.


@mcp.tool()
async def read_compilation_result(project_id: int, compile_id: str) -> Dict[str, Any]:
    """
    Read the result of a compilation job in QuantConnect.

    Args:
        project_id: The ID of the project that was compiled.
        compile_id: The compile ID returned from compile_project.

    Returns:
        A dictionary containing the compilation result with state, logs, and errors.
    """
    auth = get_auth_instance()
    if auth is None:
        return {
            "status": "error",
            "error": "QuantConnect authentication not configured. Use configure_auth() first.",
        }

    try:
        # Send the project ID and compile ID in the JSON payload
        request_data = {"projectId": project_id, "compileId": compile_id}

        response = await auth.make_authenticated_request(
            endpoint="compile/read", method="POST", json=request_data
        )

        if response.status_code == 200:
            data = response.json()
            if data.get("success"):
                logs = data.get("logs", [])
                errors = data.get("errors", [])
                state = data.get("state")

                # Collect compilation warnings surfaced in the logs
                warnings = [log for log in logs if "Warning" in log]

                # Treat warnings or explicit errors as a compilation failure
                if warnings or errors:
                    return {
                        "status": "error",
                        "compile_id": data.get("compileId"),
                        "state": state,
                        "project_id": data.get("projectId"),
                        "signature": data.get("signature"),
                        "signature_order": data.get("signatureOrder", []),
                        "logs": logs,
                        "errors": errors,
                        "warnings": warnings,
                        "message": f"Compilation completed with {len(warnings)} warnings and {len(errors)} errors. Code issues must be fixed before proceeding.",
                        "error": f"Compilation failed: {len(warnings)} warnings, {len(errors)} errors found",
                    }

                return {
                    "status": "success",
                    "compile_id": data.get("compileId"),
                    "state": state,
                    "project_id": data.get("projectId"),
                    "signature": data.get("signature"),
                    "signature_order": data.get("signatureOrder", []),
                    "logs": logs,
                    "errors": errors,
                    "message": f"Compilation result retrieved successfully. State: {state}",
                }
            else:
                return {
                    "status": "error",
                    "error": "Failed to read compilation result.",
                    "details": data.get("errors", []),
                    "project_id": project_id,
                    "compile_id": compile_id,
                }
        elif response.status_code == 401:
            return {
                "status": "error",
                "error": "Authentication failed. Check your credentials and ensure they haven't expired.",
            }
        else:
            return {
                "status": "error",
                "error": f"API request failed with status {response.status_code}",
                "response_text": response.text[:500] if hasattr(response, "text") else "No response text",
            }
    except Exception as e:
        return {
            "status": "error",
            "error": f"An unexpected error occurred: {e}",
            "project_id": project_id,
            "compile_id": compile_id,
        }
```
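Compilation runs asynchronously on QuantConnect's side, so a client typically calls this tool repeatedly until the build leaves the queue. A minimal polling sketch; the stub below stands in for the real tool call, and the terminal state names ("BuildSuccess", "BuildError") are assumptions about the QuantConnect API:

```python
import asyncio

# Assumed terminal compile states; "InQueue" is assumed to mean still pending.
TERMINAL_STATES = {"BuildSuccess", "BuildError"}


async def wait_for_compile(read_result, project_id: int, compile_id: str,
                           interval: float = 0.0, max_polls: int = 10) -> dict:
    """Poll read_result(...) until the compile reaches a terminal state."""
    for _ in range(max_polls):
        result = await read_result(project_id, compile_id)
        if result.get("state") in TERMINAL_STATES:
            return result
        await asyncio.sleep(interval)
    raise TimeoutError(f"Compile {compile_id} still pending after {max_polls} polls")


# Stub that simulates a build finishing on the third poll.
async def fake_read(project_id: int, compile_id: str) -> dict:
    fake_read.calls += 1
    state = "InQueue" if fake_read.calls < 3 else "BuildSuccess"
    return {"status": "success", "state": state}
fake_read.calls = 0

final = asyncio.run(wait_for_compile(fake_read, 12345, "abc"))
```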
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that the tool reads compilation results but omits important behavioral aspects: whether the operation is read-only, whether it requires authentication, any rate limits, and what happens if the compilation is still in progress. The description adds minimal context beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
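One structured way to disclose such behavior is through MCP tool annotations. The hint names below come from the MCP specification; attaching them to this particular server is hypothetical, so they are sketched here as a plain dictionary rather than SDK calls:

```python
# Hypothetical annotations for read_compilation_result. The hint names
# (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) come from
# the MCP tool-annotations spec; how they are attached depends on the SDK.
READ_COMPILATION_RESULT_ANNOTATIONS = {
    "readOnlyHint": True,       # only reads compile status; no side effects
    "destructiveHint": False,   # never deletes or mutates project state
    "idempotentHint": True,     # repeated reads of a finished build agree
    "openWorldHint": True,      # talks to the external QuantConnect API
}
```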

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement followed by Args and Returns sections. Every sentence adds value: the first establishes context, the parameter explanations are necessary, and the return statement provides output expectations. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has an output schema (though not shown here), the description doesn't need to detail return values. It covers the essential purpose and parameters adequately. However, for a tool with no annotations and two required parameters, it could benefit from more behavioral context about authentication requirements or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining both parameters: 'project_id: The ID of the project that was compiled' and 'compile_id: The compile ID returned from compile_project'. This adds crucial semantic meaning that the schema alone doesn't provide, though it doesn't cover format details like integer constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
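As an illustration of closing that gap, the input schema could carry per-parameter descriptions directly. A hypothetical sketch (the schema shown earlier on this page has none), expressed as a plain dictionary:

```python
# Hypothetical enriched input schema; the published one has empty descriptions.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "project_id": {
            "type": "integer",
            "description": "ID of the QuantConnect project that was compiled.",
        },
        "compile_id": {
            "type": "string",
            "description": "Compile job ID returned by compile_project.",
        },
    },
    "required": ["project_id", "compile_id"],
}


def missing_required(schema: dict, args: dict) -> list:
    """Return the required fields absent from a call's arguments."""
    return [name for name in schema["required"] if name not in args]
```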

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Read the result of a compilation job') and resource ('in QuantConnect'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this tool from other 'read_' siblings such as read_backtest or read_optimization, which a score of 5 would require.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by referencing 'compile_id returned from compile_project', suggesting this tool should be used after compilation. However, it doesn't provide explicit guidance on when to use this versus alternatives like checking project status directly, nor does it mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
