QuantRisk
Server Details
Portfolio risk analytics — VaR, Monte Carlo, optimization, options Greeks, stress testing.
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | 78degrees/mcp-server |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3.6/5, with 10 of 10 tools scored.
Each tool has a clearly distinct purpose covering different aspects of risk analysis: risk metrics, option Greeks, portfolio comparison, correlations, simulation, optimization, attribution, historical data, sector exposure, and stress testing. No two tools overlap in function.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., analyze_risk, calculate_greeks, compare_portfolios). The naming is predictable and clearly indicates the action and subject.
10 tools is an appropriate number for a risk analysis server, covering the main areas without being overwhelming. Each tool earns its place and the set is well-scoped for the domain.
The tool set covers core risk analysis workflows comprehensively, including metrics, options, simulation, optimization, attribution, and stress testing. However, it lacks some advanced features like backtesting or factor model analysis, which are minor gaps.
Available Tools
10 tools

analyze_risk (Grade A)
Calculate core risk metrics for a portfolio — Value at Risk (VaR), Conditional VaR (CVaR), volatility, beta, and max drawdown.
| Name | Required | Description | Default |
|---|---|---|---|
| method | No | VaR calculation method. "historical" uses empirical return distribution, "parametric" assumes normality, "cornish_fisher" adjusts for skew and kurtosis. Default: "historical". | historical |
| benchmark | No | Benchmark ticker for beta calculation, e.g. SPY or QQQ. Default: SPY. | SPY |
| positions | Yes | Array of portfolio positions. Each entry needs a ticker and quantity. Free tier: max 20 positions. Paid tier: up to 500. | |
| horizon_days | No | Risk horizon in trading days. 1 = overnight, 21 ≈ 1 month, 252 ≈ 1 year. Default: 1. | 1 |
| lookback_days | No | Number of historical trading days to use. 252 ≈ 1 year, 756 ≈ 3 years. Range: 30-1260. Default: 252. | 252 |
| confidence_level | No | VaR confidence level as a decimal, e.g. 0.95 = 95%. Range: 0.01-0.99. Default: 0.95. | 0.95 |
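A minimal call sketch, assuming the position shape documented above (each entry carries a ticker and a quantity); tickers and values are illustrative:

```python
# Hypothetical analyze_risk arguments built from the parameter table above.
analyze_risk_args = {
    "positions": [                  # required; each entry needs ticker and quantity
        {"ticker": "AAPL", "quantity": 100},
        {"ticker": "MSFT", "quantity": 50},
    ],
    "method": "cornish_fisher",     # VaR adjusted for skew and kurtosis
    "confidence_level": 0.99,       # 99% VaR
    "horizon_days": 21,             # ~1 month
    "lookback_days": 756,           # ~3 years of history
    "benchmark": "QQQ",             # beta computed against QQQ instead of SPY
}
```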
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It lists metrics but does not disclose behavioral traits such as data-source reliance, whether calculations are reversible, or any side effects, and it gives no detail on how the tool processes inputs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, concise sentence that front-loads the core purpose and lists outputs. No redundant information. Earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description lists output metrics, it lacks details on return format, error handling, or limitations (e.g., free tier constraints mentioned in schema). Given no output schema, it should explain expected results more thoroughly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so schema already documents parameters well. The description adds no new parameter details; it only lists output metrics. Baseline of 3 is appropriate as schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it calculates core risk metrics and lists specific outputs (VaR, CVaR, volatility, beta, max drawdown). It distinguishes from sibling tools like compare_portfolios or optimize_portfolio by focusing on risk assessment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs. alternatives. The description implies it is for risk metrics, but gives no context about prerequisites or cases where a sibling tool (e.g., performance_attribution) is the better fit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calculate_greeks (Grade A)
Calculate option Greeks (delta, gamma, theta, vega, rho) for individual options or an options portfolio. Uses Black-Scholes for European, binomial for American style. Paid tier only.
| Name | Required | Description | Default |
|---|---|---|---|
| options | Yes | Array of option positions to calculate Greeks for. 1-100 options. Results include per-option Greeks and aggregated portfolio Greeks. | |
| risk_free_rate | No | Annualized risk-free rate as a decimal, e.g. 0.05 = 5%. Used in Black-Scholes and binomial pricing models. Default: 0.05. | 0.05 |
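A hedged call sketch: the schema above only says options is an array of 1-100 option positions, so every field name inside each option entry below is an assumption; only risk_free_rate is documented.

```python
# Hypothetical calculate_greeks arguments. All option-entry field names
# (underlying, type, strike, expiry, quantity) are assumed for illustration;
# the listing does not document the option object shape.
calculate_greeks_args = {
    "options": [
        {
            "underlying": "AAPL",    # assumed field name
            "type": "call",          # assumed: "call" or "put"
            "strike": 200.0,         # assumed field name
            "expiry": "2025-12-19",  # assumed field name and date format
            "quantity": 10,          # assumed field name
        },
    ],
    "risk_free_rate": 0.045,         # documented: annualized rate as a decimal
}
```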
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
It discloses the models used (Black-Scholes for European, binomial for American) and the paid-tier restriction, but lacks details on assumptions, data source, or performance characteristics.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words; effectively front-loads the purpose and key details.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is adequate for a calculation tool with a rich input schema, but it could briefly note the output structure since no output schema is provided.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds minimal value beyond the schema; it names the Greeks but does not elaborate on parameter behavior.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it calculates option Greeks (delta, gamma, theta, vega, rho) for individual options or portfolios, distinguishing it from sibling tools like analyze_risk or monte_carlo_simulation.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Paid tier only' as a usage restriction but does not provide explicit guidance on when to use versus alternatives, nor when not to use.
compare_portfolios (Grade A)
Compare two or more portfolio allocations head-to-head across all key risk and return metrics. Paid tier only.
| Name | Required | Description | Default |
|---|---|---|---|
| portfolios | Yes | Two to five named portfolios to compare head-to-head. Each needs a unique name and a list of positions. Min: 2, max: 5. | |
| period_days | No | Lookback period in trading days used for return and risk calculations. 252 = ~1 year. Range: 30-1260. Default: 252. | 252 |
| confidence_level | No | VaR confidence level as a decimal, e.g. 0.95 = 95%. Range: 0.01-0.99. Default: 0.95. | 0.95 |
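A call sketch under the documented constraints (2-5 named portfolios, each with a unique name and a list of positions); the ticker/quantity position shape is assumed to match analyze_risk:

```python
# Hypothetical compare_portfolios arguments.
compare_portfolios_args = {
    "portfolios": [  # min 2, max 5; each needs a unique name and positions
        {"name": "Current", "positions": [{"ticker": "AAPL", "quantity": 100}]},
        {"name": "Proposed", "positions": [{"ticker": "MSFT", "quantity": 80}]},
    ],
    "period_days": 504,        # ~2 years of trading days
    "confidence_level": 0.95,  # the default, stated explicitly
}
```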
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It mentions 'Paid tier only' as a constraint but does not disclose any behavioral traits like side effects or whether the tool is read-only. The description is adequate but lacks extra context beyond the basic operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—one sentence plus a pricing note—with no filler. It is front-loaded and every part earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Although the tool has moderate complexity and no output schema, the description only vaguely mentions 'all key risk and return metrics' without specifying the output format or listing metrics. This leaves the agent with incomplete context about what to expect.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value beyond schema by specifying 'head-to-head across all key risk and return metrics' and noting 'Paid tier only', which provides context not present in the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Compare') and resource ('portfolio allocations'), and clearly distinguishes the tool from siblings by focusing on head-to-head comparison across key metrics, which no other tool explicitly does.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly states when to use this tool (comparing two or more portfolios head-to-head), but does not provide explicit guidance on when not to use it or mention alternatives, though siblings are distinct enough that confusion is unlikely.
correlation_matrix (Grade B)
Compute the pairwise correlation matrix for a set of assets. Identifies highly correlated pairs and diversification opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| method | No | Correlation method. "pearson" = linear correlation (standard), "spearman" = rank-based (robust to outliers), "kendall" = concordance-based. Default: "pearson". | pearson |
| tickers | Yes | Tickers to include in the correlation matrix. Minimum 2, maximum 50. Free tier: max 10 tickers. Paid tier: up to 50. | |
| lookback_days | No | Historical window for computing correlations in trading days. 30 = ~6 weeks, 252 = ~1 year. Range: 30-1260. Default: 252. | 252 |
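An example payload, fully covered by the table above; the tickers are illustrative and stay within the free-tier cap of 10:

```python
# Hypothetical correlation_matrix arguments.
correlation_matrix_args = {
    "tickers": ["AAPL", "MSFT", "XOM", "JPM", "GLD"],  # min 2, max 50
    "method": "spearman",    # rank-based, robust to outliers
    "lookback_days": 504,    # ~2 years of history
}
```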
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility for disclosing behavioral traits. However, it fails to mention important details such as data source, handling of missing data, output format, rate limits, or computational constraints. The parameter descriptions include some behavioral hints (e.g., free/paid tier limits), but the primary description lacks transparency.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of two clear sentences without any redundant or unnecessary words. The main purpose is front-loaded, making it efficient for an agent to quickly understand the tool's function.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description is too minimal. It does not explain what the tool returns (e.g., a matrix object, a list of pairs, or a visualization), nor does it provide enough context for an agent to assess suitability beyond the basic purpose. The completeness is insufficient for a tool with three parameters and no structured output documentation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with each parameter (method, tickers, lookback_days) already documented with constraints, defaults, and explanations. The description text adds no further parameter-specific meaning, so a baseline of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Compute') and resource ('pairwise correlation matrix'), clearly stating the tool's primary function and adding value by mentioning identification of highly correlated pairs and diversification opportunities. It distinguishes itself from siblings like 'analyze_risk' and 'price_history'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives, nor does it mention any exclusions or prerequisites. The mention of 'diversification opportunities' vaguely implies portfolio analysis, but there is no comparison to sibling tools or 'when not to use'.
monte_carlo_simulation (Grade B)
Run Monte Carlo simulation on a portfolio to model the distribution of future returns, including percentile outcomes and probability of loss.
| Name | Required | Description | Default |
|---|---|---|---|
| seed | No | Random seed for reproducible results. Omit for a fresh random run each time. | |
| model | No | Stochastic process model. "gbm" = Geometric Brownian Motion (standard), "jump_diffusion" = adds jump risk for fat-tail scenarios. Default: "gbm". | gbm |
| num_paths | No | Number of simulation paths to run. More paths = more accurate but slower. Free tier: max 1,000. Paid tier: up to 100,000. Default: 10,000. | 10,000 |
| positions | Yes | Array of portfolio positions. Free tier: max 20 positions. Paid tier: up to 500. | |
| horizon_days | No | Simulation horizon in trading days. 21 ≈ 1 month, 63 ≈ 1 quarter, 252 ≈ 1 year. Default: 21. | 21 |
| lookback_days | No | Historical window used to estimate drift and volatility parameters. Range: 30-1260 trading days. Default: 252. | 252 |
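A call sketch from the table above; the position shape (ticker/quantity) is assumed to mirror analyze_risk:

```python
# Hypothetical monte_carlo_simulation arguments.
monte_carlo_args = {
    "positions": [{"ticker": "SPY", "quantity": 200}],
    "model": "jump_diffusion",  # adds jump risk for fat-tail scenarios
    "num_paths": 50_000,        # paid tier; free tier caps at 1,000
    "horizon_days": 63,         # ~1 quarter
    "seed": 42,                 # fixed seed for reproducible runs
}
```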
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full responsibility for behavioral disclosure. It only hints at outputs (percentile outcomes, probability of loss) but omits other important traits such as computational intensity, rate limits, data dependencies, or side effects. This is insufficient for a simulation tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, efficiently conveying the core purpose without extraneous information. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description provides useful context by mentioning percentile outcomes and probability of loss. However, it lacks details on the exact return format or structure of results. For a complex simulation tool, slightly more output context would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description does not add any parameter-specific meaning beyond what is already in the schema; it remains generic. Thus, the score stays at the baseline.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as running a Monte Carlo simulation on a portfolio to model future return distributions, including percentiles and loss probability. It is specific and distinct from sibling tools like analyze_risk or stress_test, which focus on other risk analytics.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives such as stress_test or analyze_risk. There are no hints about prerequisites, limitations, or use cases, leaving the agent to infer appropriateness.
optimize_portfolio (Grade B)
Find the optimal portfolio allocation using mean-variance optimization. Supports max Sharpe, min variance, and target return objectives. Paid tier only.
| Name | Required | Description | Default |
|---|---|---|---|
| tickers | Yes | Universe of tickers to optimize across. Must be 2-50 tickers. The optimizer will determine the best weights within this set. | |
| objective | No | Optimization objective. "max_sharpe" = maximize risk-adjusted return, "min_variance" = minimize portfolio volatility, "target_return" = hit a specific return with minimum risk. Default: "max_sharpe". | max_sharpe |
| constraints | No | Optional weight constraints. See ConstraintsInput for details. | |
| lookback_days | No | Historical window for estimating return and covariance. 252 = 1 year, 756 = 3 years, 1260 = 5 years. Range: 252-1260. Default: 756. | 756 |
| target_return | No | Required when objective is "target_return". Annualized return as a decimal, e.g. 0.12 = 12% annual return target. | |
| risk_free_rate | No | Annualized risk-free rate as a decimal, e.g. 0.05 = 5%. Used in Sharpe ratio calculation. Default: 0.05. | 0.05 |
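A sketch of a target-return run; constraints is omitted because the ConstraintsInput shape is not shown in this listing, and the tickers are illustrative:

```python
# Hypothetical optimize_portfolio arguments.
optimize_portfolio_args = {
    "tickers": ["AAPL", "MSFT", "XOM", "JPM", "GLD", "TLT"],  # 2-50 tickers
    "objective": "target_return",
    "target_return": 0.12,    # required with "target_return"; 12% annualized
    "lookback_days": 1260,    # ~5 years for return/covariance estimation
    "risk_free_rate": 0.05,   # used in Sharpe ratio calculation
}
```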
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals the paid tier restriction but does not mention any side effects, authentication requirements, rate limits, or output format. The behavioral description is minimal.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with only two sentences. The first sentence front-loads the core purpose and method, and the second adds key variations and a condition (paid tier). No redundant or unnecessary text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having 6 parameters including nested objects and no output schema, the description does not explain what the tool returns (e.g., optimal weights, expected return). For a complex optimization tool, this is a significant gap in completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning the input schema already documents all parameters with detailed descriptions. The description adds minimal extra value beyond listing the three objectives, so the baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find the optimal portfolio allocation using mean-variance optimization' and specifies supported objectives (max Sharpe, min variance, target return). It distinguishes from sibling tools like analyze_risk or correlation_matrix by focusing on optimization.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions supported objectives and 'Paid tier only' but lacks explicit guidance on when to use this tool versus alternatives (e.g., monte_carlo_simulation). No prerequisites or exclusions are mentioned, leaving usage context unclear.
performance_attribution (Grade B)
Break down portfolio performance into factor exposures, sector allocation, and position contributions. Computes Sharpe, Sortino, Treynor, Calmar, and Information ratios.
| Name | Required | Description | Default |
|---|---|---|---|
| benchmark | No | Benchmark ticker for relative performance metrics (Information Ratio, Tracking Error, Beta). Default: SPY. | SPY |
| positions | Yes | Array of portfolio positions. Free tier: max 20 positions (basic ratios only). Paid tier: up to 500 positions with full factor attribution. | |
| period_days | No | Measurement period in trading days. 252 = ~1 year. Range: 30-1260. Default: 252. | 252 |
| risk_free_rate | No | Annualized risk-free rate as a decimal, e.g. 0.05 = 5%. Used in Sharpe, Sortino, and Treynor ratios. Default: 0.05. | 0.05 |
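An example payload; the ticker/quantity position shape is assumed to match analyze_risk:

```python
# Hypothetical performance_attribution arguments.
performance_attribution_args = {
    "positions": [
        {"ticker": "AAPL", "quantity": 100},
        {"ticker": "TLT", "quantity": 150},
    ],
    "benchmark": "QQQ",      # Information Ratio, Tracking Error, Beta vs QQQ
    "period_days": 756,      # ~3 years
    "risk_free_rate": 0.05,  # feeds Sharpe, Sortino, and Treynor
}
```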
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It does not disclose important behavioral traits such as data requirements, tier limits (free vs. paid), or what the output contains beyond mentioning ratios. The schema indicates tier limits, but the description omits them.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no unnecessary words. Efficiently captures the core functionality and computed metrics.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lists ratios but does not describe output structure or provide examples. Lacks mention of free/paid tier constraint (only in schema) and does not clarify data sourcing. Adequate but with gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so each parameter already has a clear description. The tool description adds value by explaining the computed ratios, but it does not add extra meaning beyond the schema for the parameters themselves.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool breaks down performance into factor exposures, sector allocation, and position contributions, and computes several risk-adjusted ratios. It distinguishes from sibling tools like analyze_risk or optimize_portfolio by focusing on attribution, but does not explicitly contrast.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings or when not to use it. No mention of prerequisites or context that would help an agent decide.
price_history (Grade A)
Fetch historical OHLCV price data for one or more tickers. Free tier: 1 ticker, 252 days. Paid tier: up to 20 tickers, 1260 days.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Number of historical trading days to return. Free tier: max 252 days (~1 year). Paid tier: up to 1260 days (~5 years). Default: 252. | 252 |
| tickers | Yes | Ticker symbols to fetch price history for. Free tier: max 1 ticker. Paid tier: up to 20 tickers. | |
| interval | No | Price interval. "daily" returns one OHLCV row per trading day, "weekly" aggregates to weekly bars, "monthly" aggregates to monthly bars. Default: "daily". | daily |
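A minimal free-tier call sketch based on the documented parameters:

```python
# Hypothetical price_history arguments.
price_history_args = {
    "tickers": ["SPY"],     # free tier: max 1 ticker
    "days": 252,            # free tier max (~1 year)
    "interval": "weekly",   # aggregate daily OHLCV to weekly bars
}
```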
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided. The description implies read-only behavior but does not explicitly state it, and makes no mention of authorization, rate limits, or other behavioral traits. Adequate for a simple fetch operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Front-loaded with the main action and key constraints.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, and the description does not explain the return format (e.g., the OHLCV row structure). For a raw data tool, more specifics about the output would help an agent use the result effectively.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for each parameter. The description adds value by explaining tier-based restrictions on tickers and days, complementing the schema details.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it fetches historical OHLCV price data for tickers. Differentiates free vs paid tiers. However, it does not explicitly distinguish from sibling tools like analyze_risk, though the function is distinct.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes when to use (fetching historical prices) and tier limitations. Does not provide when-not-to-use or alternatives among sibling tools, but siblings are analytical, not data fetching.
sector_exposure (Grade A)
Break down portfolio exposure by GICS sector, market cap, and asset class. Returns concentration metrics including the Herfindahl-Hirschman Index.
| Name | Required | Description | Default |
|---|---|---|---|
| positions | Yes | Array of portfolio positions to analyze. Returns GICS sector weights, market cap breakdown, and concentration metrics. | |
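A single-parameter call sketch; the ticker/quantity position shape is assumed to match the other tools:

```python
# Hypothetical sector_exposure arguments.
sector_exposure_args = {
    "positions": [
        {"ticker": "AAPL", "quantity": 100},  # Information Technology
        {"ticker": "JPM", "quantity": 60},    # Financials
        {"ticker": "XOM", "quantity": 80},    # Energy
    ],
}
```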
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the tool returns concentration metrics and breaks down by sector/market cap/asset class, but does not mention computational cost, required permissions, or whether it is read-only. Adequate but not rich.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose. No wasted words. Efficient and to the point.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter analysis tool with no output schema, the description is reasonably complete. It explains inputs, outputs (concentration metrics, HHI), and breakdown dimensions. Could benefit from mentioning data source or assumptions, but overall adequate.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage with a clear description of the 'positions' parameter. The tool description adds minimal value beyond the schema, mentioning the breakdown dimensions. Baseline 3 is appropriate as schema already does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool breaks down portfolio exposure by GICS sector, market cap, and asset class, and returns concentration metrics including the Herfindahl-Hirschman Index. This specific verb-resource combination distinguishes it from sibling tools like analyze_risk or performance_attribution.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for sector exposure and concentration analysis but does not explicitly state when to use this tool over alternatives like analyze_risk or compare_portfolios. No guidance on prerequisites or exclusion criteria.
stress_test (Grade A)
Stress test a portfolio against historical crisis scenarios (GFC 2008, COVID 2020, etc.) or custom shocks (paid tier).
| Name | Required | Description | Default |
|---|---|---|---|
| positions | Yes | Array of portfolio positions. Free tier: max 20 positions and historical scenarios only. Paid tier: up to 500 positions plus custom shocks. | |
| scenarios | No | Historical scenarios to run. Available values: gfc_2008, covid_2020, dot_com_2000, black_monday_1987, taper_tantrum_2013, rate_hike_2022, volmageddon_2018, euro_crisis_2011. Default: [gfc_2008, covid_2020]. | gfc_2008, covid_2020 |
| custom_shocks | No | Custom shock definitions. PAID tier only. Each shock specifies ticker-level, sector-level, or market-wide price changes. | |
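A free-tier call sketch using documented scenario codes; custom_shocks is omitted because its shock object shape is not documented here, and the position shape is assumed per analyze_risk:

```python
# Hypothetical stress_test arguments.
stress_test_args = {
    "positions": [{"ticker": "SPY", "quantity": 200}],
    "scenarios": ["gfc_2008", "covid_2020", "rate_hike_2022"],  # documented codes
}
```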
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses free vs. paid tier limits (max positions, custom shocks) but omits whether the operation is read-only, auth requirements, and rate limits.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that front-loads the key purpose and scope (historical crises, paid tier custom shocks), with no wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Description covers main functionality and tier constraints, but lacks explanation of output/return values, which is needed since there is no output schema. For a tool with 3 parameters and nested objects, more context on expected results would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with detailed descriptions. Description adds value by summarizing free/paid tier constraints on positions and custom shocks, complementing the schema's detailed parameter docs.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool stress tests a portfolio and lists specific historical crises and custom shocks, differentiating it from sibling tools like monte_carlo_simulation or analyze_risk.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for stress testing against crises but does not explicitly guide when to use it vs. alternatives like analyze_risk or optimize_portfolio; position-count tier limits appear only in the schema (the description flags only the paid-tier custom shocks).
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.