
mean_variance_optimize

Calculate optimal portfolio weights using Mean-Variance Optimization to maximize Sharpe ratio for given tickers and lookback period.

Instructions

Calculates optimal portfolio weights using Mean-Variance Optimization (Max Sharpe).

Input Schema

Name      Required  Description  Default
tickers   Yes
lookback  No                     1y

Output Schema

Name    Required  Description  Default
result  Yes
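Since the schema leaves both parameters undocumented, a hypothetical tool-call payload can clarify the expected shape. The field names below come from the schemas above; the ticker symbols are purely illustrative:

```python
import json

# Hypothetical MCP tool-call arguments for mean_variance_optimize.
# Field names follow the input schema; the ticker values are examples.
arguments = {
    "tickers": ["AAPL", "MSFT", "GOOG"],  # required: list of ticker symbols
    "lookback": "1y",                     # optional: defaults to "1y"
}

payload = json.dumps({"name": "mean_variance_optimize", "arguments": arguments})
print(payload)
```

The tool returns a single formatted string (`result`) rather than structured data, so callers should expect to parse or display it as text.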

Implementation Reference

  • The core handler function implementing mean-variance portfolio optimization. Fetches historical prices, computes returns and covariance, then uses SciPy's minimize to find weights that maximize the Sharpe ratio (long-only constraints). Returns formatted allocation, expected return, volatility, and Sharpe.
    import json
    import logging
    from typing import List

    import numpy as np
    import pandas as pd
    from scipy.optimize import minimize

    logger = logging.getLogger(__name__)

    def mean_variance_optimize(tickers: List[str], lookback: str = "1y") -> str:
        """
        Calculates optimal portfolio weights using Mean-Variance Optimization (Max Sharpe).
        """
        try:
            logger.info(f"Starting Mean-Variance Optimization for: {tickers}")
            
            # 1. Fetch Data
            data = {}
            for ticker in tickers:
                # Fetch history for the requested lookback window
                history_json = get_price(ticker, period=lookback, interval="1d")
                history = json.loads(history_json)
                
                if not history:
                    logger.warning(f"Optimization skipped: No data for {ticker}")
                    return f"Could not fetch data for {ticker}"
                    
                df = pd.DataFrame(history)
                df['Date'] = pd.to_datetime(df['Date'])
                df.set_index('Date', inplace=True)
                data[ticker] = df['Close']
                
            prices = pd.DataFrame(data)
            returns = prices.pct_change().dropna()
            
            if returns.empty:
                logger.warning("Optimization failed: Insufficient data overlap")
                return "Insufficient data overlap for optimization."
                
            # 2. Optimization Setup (annualize using 252 trading days)
            n_assets = len(tickers)
            mean_returns = returns.mean() * 252
            cov_matrix = returns.cov() * 252
            
            # Objective: Maximize Sharpe Ratio (Minimize negative Sharpe)
            def negative_sharpe(weights):
                p_ret = np.sum(weights * mean_returns)
                p_vol = np.sqrt(np.dot(weights.T, np.dot(cov_matrix, weights)))
                # Handle potential division by zero or very small volatility
                if p_vol < 1e-6:
                    return 1e10 # Return a very large number to penalize near-zero volatility
                return -p_ret / p_vol
                
            # Constraints: Sum of weights = 1
            constraints = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1})
            # Bounds: 0 <= weight <= 1 (Long only)
            bounds = tuple((0, 1) for _ in range(n_assets))
            # Initial Guess: Equal weights
            init_guess = n_assets * [1. / n_assets]
            
            # 3. Run Optimization
            result = minimize(negative_sharpe, init_guess, method='SLSQP', bounds=bounds, constraints=constraints)
            
            if not result.success:
                logger.error(f"Optimization failed: {result.message}")
                return f"Optimization failed: {result.message}"
                
            optimal_weights = result.x
            
            # 4. Format Output
            allocation = {ticker: weight for ticker, weight in zip(tickers, optimal_weights) if weight > 0.01}
            
            p_ret = np.sum(optimal_weights * mean_returns)
            p_vol = np.sqrt(np.dot(optimal_weights.T, np.dot(cov_matrix, optimal_weights)))
            sharpe = p_ret / p_vol
            
            logger.info(f"Optimization completed. Max Sharpe: {sharpe:.2f}")
            
            return (
                f"Optimal Allocation (Max Sharpe):\n"
                f"{json.dumps(allocation, indent=2)}\n\n"
                f"Expected Annual Return: {p_ret:.2%}\n"
                f"Expected Volatility: {p_vol:.2%}\n"
                f"Sharpe Ratio: {sharpe:.2f}"
            )
            
        except Exception as e:
            logger.error(f"Optimization error: {e}", exc_info=True)
            return f"Error optimizing portfolio: {str(e)}"
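The max-Sharpe step at the heart of the handler can be exercised in isolation. The sketch below uses synthetic annualized inputs in place of fetched prices (the return and covariance numbers are illustrative, not real market data):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic annualized mean returns and covariance for three assets
# (illustrative values only).
mean_returns = np.array([0.08, 0.12, 0.10])
cov_matrix = np.array([
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.06],
])
n_assets = len(mean_returns)

def negative_sharpe(weights):
    # Portfolio return and volatility under the given weights
    p_ret = weights @ mean_returns
    p_vol = np.sqrt(weights @ cov_matrix @ weights)
    # Penalize near-zero volatility to avoid division by zero
    return 1e10 if p_vol < 1e-6 else -p_ret / p_vol

result = minimize(
    negative_sharpe,
    n_assets * [1.0 / n_assets],            # start from equal weights
    method="SLSQP",
    bounds=tuple((0, 1) for _ in range(n_assets)),  # long-only
    constraints=({"type": "eq", "fun": lambda w: np.sum(w) - 1},),
)
weights = result.x
print(weights)  # fully-invested long-only weights; -result.fun is the Sharpe
```

Because the constraint forces the weights to sum to one and the bounds keep them non-negative, the optimizer returns a fully invested, long-only allocation regardless of the inputs.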
  • Helper function to fetch historical price data using yfinance, formats as JSON list of {'Date': str, 'Close': float}, used internally by mean_variance_optimize.
    import yfinance as yf

    def get_price(ticker: str, period: str, interval: str) -> str:
        """
        Fetches historical price data for a given ticker and returns it as a JSON string.
        This is a placeholder to satisfy the new mean_variance_optimize function's dependency.
        In a real scenario, this might be an API call or a more sophisticated data fetcher.
        """
        try:
            data = yf.download(ticker, period=period, interval=interval, progress=False)
            if data.empty:
                return json.dumps([])
            # Newer yfinance releases return MultiIndex columns even for a
            # single ticker; flatten them so 'Close' is addressable by name.
            if isinstance(data.columns, pd.MultiIndex):
                data.columns = data.columns.get_level_values(0)
            # Move the DatetimeIndex into a 'Date' column, formatted as a string
            data_reset = data.reset_index()
            data_reset['Date'] = data_reset['Date'].dt.strftime('%Y-%m-%d')
            # Keep only the 'Date' and 'Close' columns that
            # mean_variance_optimize expects.
            history_list = data_reset[['Date', 'Close']].to_dict(orient='records')
            return json.dumps(history_list)
        except Exception as e:
            logger.error(f"Error fetching data for {ticker} with yfinance: {e}")
            return json.dumps([])
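The JSON shape produced by `get_price` round-trips into the price series that `mean_variance_optimize` consumes. A small sketch with made-up sample records:

```python
import json
import pandas as pd

# Sample records mimicking get_price's output shape; prices are invented.
history_json = json.dumps([
    {"Date": "2024-01-02", "Close": 100.0},
    {"Date": "2024-01-03", "Close": 101.5},
    {"Date": "2024-01-04", "Close": 100.8},
])

# The same parsing steps the handler applies per ticker
df = pd.DataFrame(json.loads(history_json))
df["Date"] = pd.to_datetime(df["Date"])
df = df.set_index("Date")
returns = df["Close"].pct_change().dropna()
print(returns)
```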
  • server.py:395-398 (registration)
    Registers the mean_variance_optimize tool (along with risk_parity) with the FastMCP server using the register_tools helper function, making it available via MCP protocol.
    register_tools(
        [mean_variance_optimize, risk_parity],
        "Portfolio Optimization"
    )
  • app.py:292-292 (registration)
    Lists mean_variance_optimize in the Gradio UI tools_map under 'Portfolio Opt' category for UI toolbox display (not MCP registration).
    "Portfolio Opt": [mean_variance_optimize, risk_parity],
  • Function signature with type annotations defining input schema (tickers: List[str], lookback: str='1y') and output str, used by MCP for tool schema generation.
    def mean_variance_optimize(tickers: List[str], lookback: str = "1y") -> str:
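A rough illustration of how a framework can derive an input schema from such a signature, using only the standard library. This is a simplified sketch; FastMCP's actual schema generation differs in detail:

```python
import inspect
from typing import List, get_type_hints

def mean_variance_optimize(tickers: List[str], lookback: str = "1y") -> str:
    ...

hints = get_type_hints(mean_variance_optimize)
sig = inspect.signature(mean_variance_optimize)

schema = {"type": "object", "properties": {}, "required": []}
for name, param in sig.parameters.items():
    # Map the annotation to a JSON Schema type (only the two cases used here)
    json_type = "array" if hints[name] == List[str] else "string"
    schema["properties"][name] = {"type": json_type}
    # Parameters without defaults become required
    if param.default is inspect.Parameter.empty:
        schema["required"].append(name)

print(schema)
```

This reproduces the table above: `tickers` is required with no default, while `lookback` is optional with default `"1y"`.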
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the optimization method ('Mean-Variance Optimization (Max Sharpe)') but lacks details on computational behavior, such as whether it requires historical data, handles constraints, or outputs specific metrics. For a tool with no annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence: 'Calculates optimal portfolio weights using Mean-Variance Optimization (Max Sharpe).' It is front-loaded with the core purpose and uses no unnecessary words, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (portfolio optimization with 2 parameters) and the presence of an output schema (which reduces the need to describe return values), the description is minimally adequate. However, with no annotations and 0% schema coverage, it lacks details on behavior and parameters. The description covers the basic purpose but falls short in providing a complete context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning parameters 'tickers' and 'lookback' are undocumented in the schema. The description adds no information about these parameters, such as what 'tickers' represents (e.g., stock symbols) or how 'lookback' is used (e.g., time period for data). With low coverage and no compensation in the description, this score reflects inadequate parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Calculates optimal portfolio weights using Mean-Variance Optimization (Max Sharpe).' It specifies the verb ('calculates'), resource ('optimal portfolio weights'), and method ('Mean-Variance Optimization (Max Sharpe)'), which distinguishes it from other portfolio-related tools like 'portfolio_risk' or 'risk_parity'. However, it doesn't explicitly differentiate from all siblings, such as 'monte_carlo_simulation' or 'run_backtest', which might also involve portfolio optimization.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, nor does it reference sibling tools like 'portfolio_risk' or 'risk_parity' that might be related. Without such guidance, users must infer usage based on the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/N-lia/MonteWalk'
