
create_backtest

Launch a new backtest for a compiled trading strategy to test its performance using historical data and customizable parameters.

Instructions

Create a new backtest for a compiled project.

Args:
    project_id: ID of the project to backtest
    compile_id: Compile ID from a successful project compilation
    backtest_name: Name for the backtest
    parameters: Optional dictionary of parameters for the backtest (e.g., {"ema_fast": 10, "ema_slow": 100})

Returns: Dictionary containing backtest creation result and backtest details
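
For illustration, a direct call might look like this (the project and compile IDs below are hypothetical placeholders; the parameters example comes from the docstring):

    result = await create_backtest(
        project_id=12345678,          # hypothetical project ID
        compile_id="compile-abc123",  # hypothetical; returned by compile_project
        backtest_name="EMA crossover test",
        parameters={"ema_fast": 10, "ema_slow": 100},
    )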

Input Schema

Name           Required
project_id     Yes
compile_id     Yes
backtest_name  Yes
parameters     No

(The schema defines no per-field descriptions or defaults.)
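
The page's JSON Schema view is not reproduced above; inferred from the tool's function signature below, it presumably looks roughly like this (a sketch, not the server's exact schema):

    {
      "type": "object",
      "properties": {
        "project_id": {"type": "integer"},
        "compile_id": {"type": "string"},
        "backtest_name": {"type": "string"},
        "parameters": {"type": "object"}
      },
      "required": ["project_id", "compile_id", "backtest_name"]
    }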

Output Schema

No output fields are defined; the tool returns an unstructured dictionary.

Implementation Reference

  • The core implementation of the 'create_backtest' tool. This async function, decorated with @mcp.tool(), handles authentication, constructs the API request to QuantConnect's backtests/create endpoint, processes parameters, and returns structured success/error responses with backtest details.
    @mcp.tool()
    async def create_backtest(
        project_id: int,
        compile_id: str,
        backtest_name: str,
        parameters: Optional[Dict[str, Any]] = None,
    ) -> Dict[str, Any]:
        """
        Create a new backtest for a compiled project.
    
        Args:
            project_id: ID of the project to backtest
            compile_id: Compile ID from a successful project compilation
            backtest_name: Name for the backtest
            parameters: Optional dictionary of parameters for the backtest (e.g., {"ema_fast": 10, "ema_slow": 100})
    
        Returns:
            Dictionary containing backtest creation result and backtest details
        """
        auth = get_auth_instance()
        if auth is None:
            return {
                "status": "error",
                "error": "QuantConnect authentication not configured. Use configure_auth() first.",
            }
    
        try:
            # Prepare request data
            request_data = {
                "projectId": project_id,
                "compileId": compile_id,
                "backtestName": backtest_name,
            }
    
            # Add parameters if provided
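        # Each entry is flattened into a bracketed form key
        # (e.g. request_data["parameters[ema_fast]"] = 10) rather
        # than being sent as a nested object.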
            if parameters:
                for key, value in parameters.items():
                    request_data[f"parameters[{key}]"] = value
    
            # Make API request
            response = await auth.make_authenticated_request(
                endpoint="backtests/create", method="POST", json=request_data
            )
    
            # Parse response
            if response.status_code == 200:
                data = response.json()
    
                if data.get("success", False):
                    backtest_results = data.get("backtest", [])
                    debugging = data.get("debugging", False)
    
                    if backtest_results:
                        backtest = backtest_results[0]
                        return {
                            "status": "success",
                            "project_id": project_id,
                            "compile_id": compile_id,
                            "backtest_name": backtest_name,
                            "backtest": backtest,
                            "debugging": debugging,
                            "message": f"Successfully created backtest '{backtest_name}' for project {project_id}",
                        }
                    else:
                        return {
                            "status": "success",
                            "project_id": project_id,
                            "compile_id": compile_id,
                            "backtest_name": backtest_name,
                            "debugging": debugging,
                            "message": f"Backtest '{backtest_name}' created but no results yet",
                            "note": "Backtest may still be initializing",
                        }
                else:
                    # API returned success=false
                    errors = data.get("errors", ["Unknown error"])
                    if any("Compile id not found" in e for e in errors):
                        return {
                            "status": "error",
                            "error": "Compile ID not found. Please compile the project first using the 'compile_project' tool.",
                            "details": errors,
                            "project_id": project_id,
                            "compile_id": compile_id,
                        }
                    return {
                        "status": "error",
                        "error": "Backtest creation failed",
                        "details": errors,
                        "project_id": project_id,
                        "compile_id": compile_id,
                        "backtest_name": backtest_name,
                    }
    
            elif response.status_code == 401:
                return {
                    "status": "error",
                    "error": "Authentication failed. Check your credentials and ensure they haven't expired.",
                }
    
            else:
                return {
                    "status": "error",
                    "error": f"API request failed with status {response.status_code}",
                    "response_text": (
                        response.text[:500]
                        if hasattr(response, "text")
                        else "No response text"
                    ),
                }
    
        except Exception as e:
            return {
                "status": "error",
                "error": f"Failed to create backtest: {str(e)}",
                "project_id": project_id,
                "compile_id": compile_id,
                "backtest_name": backtest_name,
            }
  • Top-level registration call in the main entry point that invokes register_backtest_tools(mcp), thereby registering the 'create_backtest' handler to the FastMCP server.
    register_backtest_tools(mcp)
  • Registration call in the server module that registers backtest tools including 'create_backtest' to the MCP instance.
    register_backtest_tools(mcp)
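
For orientation, a successful call returns a dictionary shaped like the sketch below (field names are taken from the handler above; the placeholder values are hypothetical, and the backtest payload comes straight from QuantConnect's API):

    {
        "status": "success",
        "project_id": 12345678,
        "compile_id": "compile-abc123",
        "backtest_name": "EMA crossover test",
        "backtest": {...},  # raw backtest record returned by the API
        "debugging": False,
        "message": "Successfully created backtest 'EMA crossover test' for project 12345678",
    }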
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It states that this is a creation operation but doesn't disclose behavioral traits such as whether this is an async, long-running process, what permissions are required, what the error conditions are, or what happens if parameters conflict with project settings. The description mentions 'successful project compilation' as a prerequisite but doesn't elaborate on failure modes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
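
By way of illustration, a description that disclosed these traits might read (a hypothetical rewrite, not the tool's actual text):

    Create a new backtest for a compiled project. Requires QuantConnect
    credentials configured via configure_auth and a compile_id from a
    successful compile_project run; otherwise returns an error dictionary.
    The call returns once the backtest is created, but the backtest runs
    asynchronously and may still be initializing when results are first
    queried.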

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized, with clear sections (purpose, args, returns). The purpose statement is front-loaded, and each parameter explanation earns its place. A minor improvement would be to integrate the example more naturally rather than parenthetically.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With four parameters that have no schema descriptions, and an output schema already defined (so return values don't need explanation), the description covers parameter semantics adequately. However, as a creation tool with no annotations, it should provide more behavioral context about the operation's nature (sync/async, side effects, error handling) to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining all four parameters: project_id identifies the project, compile_id comes from a successful compilation, backtest_name names the backtest, and parameters is an optional dictionary, illustrated with an example. This adds significant meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool creates a new backtest for a compiled project, specifying the verb (create) and resource (backtest). That distinguishes it from siblings like list_backtests (read) and delete_backtest (delete), but the description doesn't explicitly differentiate the tool from create_optimization, a related creation operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'compiled project' and requiring a compile_id from a successful compilation, suggesting prerequisites. However, it doesn't explicitly state when to use this tool versus alternatives like create_optimization, or when not to use it (e.g., if the project isn't compiled).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
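
As an illustration, an explicit guidance line might read (hypothetical wording, assuming create_optimization is the parameter-sweep sibling the review refers to):

    Use create_backtest to run a single backtest against one parameter set;
    use create_optimization to sweep a range of parameter values. Do not
    call this tool before compile_project has returned a successful
    compile_id.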

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/taylorwilsdon/quantconnect-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.