Chart Library

portfolio

Read-only · Idempotent

Run multi-holding cohort analysis to rank tail contributors by weight times p10, or get per-symbol track records with historical pattern accuracy.

Instructions

Portfolio-level analysis OR per-symbol track-record + Layer 5 memory.

Two modes:

  mode="basic" (default):
    Multi-holding conditional distribution. Runs per-holding cohorts
    in parallel, weight-averages the distributions, ranks tail
    contributors (weight × p10, most negative first). PM-agent
    primitive. Pass holdings=[{symbol, weight, date}].

  mode="symbol_intel":
    Per-symbol track record + Layer 5 memory — what does Chart
    Library know about this single symbol across all prior
    analyses? Returns prior cohort_observations, feature_reliability
    learned for the symbol, and the symbol's per-pattern accuracy
    history. Pass symbol=X, lookback_days=N.

Args:
    holdings: list of {symbol, weight, date} (mode="basic")
    symbol: ticker (mode="symbol_intel")
    mode: "basic" | "symbol_intel"
    horizons: forward horizons (mode="basic"; default [5, 10])
    top_k_per_holding: cohort size per holding (mode="basic")
    include_path_stats: include MAE/MFE (mode="basic"; slower)
    lookback_days: history window (mode="symbol_intel"; default 365)
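
For concreteness, here is a minimal sketch of the two call shapes, assuming a generic MCP client; the call_tool helper, ticker symbols, and dates are illustrative stand-ins, not part of the tool or its server.

# Hypothetical stand-in for an MCP client's tool-call method;
# replace with your client's actual call.
def call_tool(name: str, arguments: dict) -> dict:
    print(f"would call {name} with {arguments}")
    return {}

# mode="basic": multi-holding conditional distribution. Per-holding
# cohorts run in parallel; distributions are weight-averaged and tail
# contributors ranked by weight * p10, most negative first.
basic_args = {
    "mode": "basic",
    "holdings": [
        {"symbol": "AAPL", "weight": 0.6, "date": "2024-06-28"},
        {"symbol": "MSFT", "weight": 0.4, "date": "2024-06-28"},
    ],
    "horizons": [5, 10],          # forward horizons (the stated default)
    "include_path_stats": False,  # True adds MAE/MFE but is slower
}

# mode="symbol_intel": per-symbol track record + Layer 5 memory.
intel_args = {
    "mode": "symbol_intel",
    "symbol": "AAPL",
    "lookback_days": 365,  # history window (the stated default)
}

portfolio_view = call_tool("portfolio", basic_args)
symbol_history = call_tool("portfolio", intel_args)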

Input Schema

Name                Required  Description  Default
holdings            No
symbol              No
mode                No                     basic
horizons            No
top_k_per_holding   No
include_path_stats  No
lookback_days       No

Output Schema

Name    Required  Description  Default
result  Yes
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, establishing safety and repeatability. The description goes beyond by specifying parallel computation, weight-averaging of distributions, tail contribution ranking, and for symbol_intel mode, the return of prior observations, feature reliability, and per-pattern accuracy history. It also notes that include_path_stats adds MAE/MFE and is slower, providing operational insight.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
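
To make the ranking concrete, here is a minimal sketch of the tail-contribution ordering described above, assuming each holding's cohort yields a p10 forward-return percentile; the field names and numbers are assumptions for illustration, not the tool's actual output schema.

# Assumed shape of per-holding cohort output (illustrative only).
per_holding = [
    {"symbol": "AAPL", "weight": 0.6, "p10": -0.031},
    {"symbol": "MSFT", "weight": 0.4, "p10": -0.018},
]

# Tail contribution = weight * p10; sorting ascending puts the most
# negative contributor first, matching the description.
ranked = sorted(per_holding, key=lambda h: h["weight"] * h["p10"])
for h in ranked:
    print(h["symbol"], round(h["weight"] * h["p10"], 4))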

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-organized with a two-line summary, two clear mode sections, and a bullet-list argument summary. It is front-loaded with purpose. Every sentence adds value with no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has 7 parameters, two modes, parallel execution, and a declared output schema, the description covers all the necessary ground: mode selection, parameter roles, defaults, performance notes (parallelism, the slower path-stats option), and the PM-agent primitive context. No gaps remain for an agent to select and invoke the tool correctly on the first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the tool description fully compensates. It explains each parameter in the context of its mode: holdings as a list of {symbol, weight, date}, symbol as a ticker, mode as 'basic' | 'symbol_intel', horizons with default [5, 10], top_k_per_holding with default 300, include_path_stats as a boolean (slower), and lookback_days with default 365. This adds meaning and correct usage beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool operates in two distinct modes: basic for portfolio-level analysis and symbol_intel for per-symbol track record with Layer 5 memory. It uses specific verbs like 'runs', 'weight-averages', 'ranks', and 'returns', and distinguishes itself from siblings by describing a multi-holding conditional distribution and PM-agent primitive, which is unique among the listed sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly defines two modes with clear conditions: basic for multi-holding analysis and symbol_intel for single-symbol memory. It details the required arguments for each mode (holdings for basic, symbol for symbol_intel) and the defaults. However, it does not say when to avoid the tool or point to alternative sibling tools, so explicit exclusions and comparisons are missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
