read_file

Access and retrieve file contents from QuantConnect projects to analyze trading strategies, research data, or implementation code for informed decision-making.

Instructions

Read a specific file from a project or all files if no name provided.

Args:
  • project_id: ID of the project to read files from
  • name: Optional name of a specific file to read. If not provided, reads all files.

Returns: Dictionary containing file content(s) or error information
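Based on the handler shown under Implementation Reference below, the returned dictionaries take roughly these shapes (the project ID, file names, and content here are hypothetical illustrations, not real API output):

```python
# Reading a single file (name provided) yields a "file" key:
single = {
    "status": "success",
    "project_id": 12345,  # hypothetical project ID
    "file": {"name": "main.py", "content": "# strategy code"},
    "message": "Successfully read file 'main.py' from project 12345",
}

# Reading all files (no name) yields "files" and "total_files":
all_files = {
    "status": "success",
    "project_id": 12345,
    "files": [{"name": "main.py", "content": "# strategy code"}],
    "total_files": 1,
    "message": "Successfully read 1 files from project 12345",
}

# Errors use a uniform envelope with "status" and "error":
error = {
    "status": "error",
    "error": "File 'missing.py' not found in project 12345",
}
```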

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_id | Yes | — | — |
| name | No | — | — |
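As a sketch, argument payloads matching this schema might look like the following (the project ID and file name are hypothetical):

```python
# Hypothetical argument payloads for the read_file tool.
read_one = {"project_id": 12345, "name": "main.py"}  # read a single named file
read_all = {"project_id": 12345}                     # omit "name" to read every file in the project
```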

Output Schema


No arguments

Implementation Reference

  • The core handler implementation for the 'read_file' MCP tool. This async function handles reading specific files or all files from a QuantConnect project via authenticated API calls to 'files/read'. Includes input validation via type hints, comprehensive error handling, and structured JSON responses. Registered directly via @mcp.tool() decorator.
    # Assumes the surrounding module provides: `from typing import Any, Dict, Optional`,
    # the FastMCP instance `mcp`, and the helper `get_auth_instance()`.
    @mcp.tool()
    async def read_file(project_id: int, name: Optional[str] = None) -> Dict[str, Any]:
        """
        Read a specific file from a project or all files if no name provided.
    
        Args:
            project_id: ID of the project to read files from
            name: Optional name of specific file to read. If not provided, reads all files.
    
        Returns:
            Dictionary containing file content(s) or error information
        """
        auth = get_auth_instance()
        if auth is None:
            return {
                "status": "error",
                "error": "QuantConnect authentication not configured. Use configure_auth() first.",
            }
    
        try:
            # Prepare request data
            request_data: Dict[str, Any] = {"projectId": project_id}
            if name is not None:
                request_data["name"] = name
    
            # Make API request
            response = await auth.make_authenticated_request(
                endpoint="files/read", method="POST", json=request_data
            )
    
            # Parse response
            if response.status_code == 200:
                data = response.json()
    
                if data.get("success", False):
                    files = data.get("files", [])
    
                    # If specific file was requested
                    if name is not None:
                        if files:
                            file_data = files[0]
                            return {
                                "status": "success",
                                "project_id": project_id,
                                "file": file_data,
                                "message": f"Successfully read file '{name}' from project {project_id}",
                            }
                        else:
                            return {
                                "status": "error",
                                "error": f"File '{name}' not found in project {project_id}",
                            }
    
                    # If all files were requested
                    else:
                        return {
                            "status": "success",
                            "project_id": project_id,
                            "files": files,
                            "total_files": len(files),
                            "message": f"Successfully read {len(files)} files from project {project_id}",
                        }
                else:
                    # API returned success=false
                    errors = data.get("errors", ["Unknown error"])
                    return {
                        "status": "error",
                        "error": "File read failed",
                        "details": errors,
                        "project_id": project_id,
                        "file_name": name,
                    }
    
            elif response.status_code == 401:
                return {
                    "status": "error",
                    "error": "Authentication failed. Check your credentials and ensure they haven't expired.",
                }
    
            else:
                return {
                    "status": "error",
                    "error": f"API request failed with status {response.status_code}",
                    "response_text": (
                        response.text[:500]
                        if hasattr(response, "text")
                        else "No response text"
                    ),
                }
    
        except Exception as e:
            return {
                "status": "error",
                "error": f"Failed to read file(s): {str(e)}",
                "project_id": project_id,
                "file_name": name,
            }
  • Server initialization calls register_file_tools(mcp), which defines and registers the read_file tool (along with other file tools) using FastMCP's @mcp.tool() decorators.
    register_file_tools(mcp)
  • Entry point script also registers file tools, importing mcp from server.py, providing an alternative invocation path.
    register_file_tools(mcp)
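The registration pattern referenced in these bullets can be illustrated with a minimal stand-in for FastMCP. FakeMCP, the stubbed register_file_tools, and the payload it returns are all simplified assumptions for illustration, not the server's actual code:

```python
import asyncio
from typing import Any, Callable, Dict, Optional

class FakeMCP:
    """Minimal stand-in for FastMCP: records functions registered via @mcp.tool()."""
    def __init__(self) -> None:
        self.tools: Dict[str, Callable] = {}

    def tool(self) -> Callable:
        def decorator(fn: Callable) -> Callable:
            self.tools[fn.__name__] = fn  # register under the function's name
            return fn
        return decorator

def register_file_tools(mcp) -> None:
    # The real server defines read_file (and sibling file tools) here;
    # this stub only demonstrates the decorator-based registration pattern.
    @mcp.tool()
    async def read_file(project_id: int, name: Optional[str] = None) -> Dict[str, Any]:
        return {"status": "success", "project_id": project_id, "file_name": name}

mcp = FakeMCP()
register_file_tools(mcp)
```

After registration, the server can dispatch a tool call by name, e.g. `asyncio.run(mcp.tools["read_file"](12345))`.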
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool reads files, implying a read-only operation, but doesn't disclose behavioral traits like permissions needed, error handling details, rate limits, or whether it returns raw content or metadata. For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by a structured 'Args' and 'Returns' section. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 parameters, no nested objects) and the presence of an output schema (which handles return values), the description is mostly complete. It covers the dual behavior and parameter meanings. However, it lacks context on errors or operational limits, which could be useful despite the output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics: 'project_id' is explained as 'ID of the project to read files from', and 'name' as 'Optional name of specific file to read. If not provided, reads all files.' This clarifies purpose and default behavior, though it doesn't detail formats (e.g., string constraints). Given the coverage gap, it does well but not perfectly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Read a specific file from a project or all files if no name provided.' It specifies the verb ('Read') and resource ('file from a project'), and distinguishes its dual behavior (specific vs. all files). However, it doesn't explicitly differentiate from siblings like 'read_project' or 'read_backtest', which lowers it from a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the parameter explanation: use with 'name' for a specific file or without for all files. However, it lacks explicit guidance on when to choose this tool over alternatives (e.g., 'read_project' for project metadata) or any prerequisites. This makes it adequate but with gaps in sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
