flashalpha
Server Details
Real-time options analytics: GEX, exposure, greeks, volatility, VRP for US equities
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: FlashAlpha-lab/flashalpha-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 23 of 23 tools scored.
Most tools have distinct purposes, but some overlap exists: 'get_volatility' and 'get_advanced_volatility' both cover volatility analytics and could be confused. However, the descriptions clarify the differences, and tools like 'calculate_greeks' and 'solve_iv' are clearly separate mathematical functions.
All tools follow a consistent verb_noun naming pattern, primarily using 'get_' or 'calculate_' prefixes, with clear and descriptive names like 'get_option_quote' and 'calculate_kelly'. There are no deviations in style or convention, making the set highly predictable.
At 23 tools, the count is borderline high for a financial analytics server and may overwhelm agents. While the domain is complex, some tools could be consolidated (for example, the multiple exposure analytics tools) to reduce redundancy and improve focus.
The tool set provides comprehensive coverage for options trading and volatility analysis, including data retrieval (e.g., quotes, chains), mathematical calculations (e.g., greeks, IV solving), exposure analytics (e.g., GEX, DEX), and advanced features like narratives and historical data. No obvious gaps are present, supporting full agent workflows.
Available Tools
23 tools

calculate_greeks: Calculate Option Greeks [A, Read-only]
Calculate Black-Scholes option greeks (delta, gamma, theta, vega, rho, vanna, charm, speed, zomma, color). Pure math — no market data needed.
| Name | Required | Description | Default |
|---|---|---|---|
| dte | Yes | Days to expiration | |
| spot | Yes | Current stock price | |
| type | Yes | 'call' or 'put' | |
| sigma | Yes | Implied volatility as decimal (0.20 = 20%) | |
| apiKey | Yes | Your FlashAlpha API key | |
| strike | Yes | Strike price | |
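The tool's inputs map directly onto the Black-Scholes formula. As a rough illustration of the math behind a few of the listed greeks (a textbook sketch with the risk-free rate assumed zero, not the server's implementation):

```python
from math import erf, exp, log, pi, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bs_greeks(spot, strike, dte, sigma, opt_type, r=0.0):
    """Textbook Black-Scholes delta/gamma/vega. r defaults to 0 for brevity;
    the server presumably handles rates and the higher-order greeks too."""
    t = dte / 365.0
    d1 = (log(spot / strike) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    delta = norm_cdf(d1) if opt_type == "call" else norm_cdf(d1) - 1.0
    gamma = norm_pdf(d1) / (spot * sigma * sqrt(t))
    vega = spot * norm_pdf(d1) * sqrt(t) / 100.0  # per 1 vol point
    return {"delta": delta, "gamma": gamma, "vega": vega}

# An at-the-money call has delta slightly above 0.5
print(bs_greeks(spot=100, strike=100, dte=30, sigma=0.20, opt_type="call"))
```

Note the self-contained nature the description advertises: everything is computable from the six inputs, with no market data fetch.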
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, and the description adds valuable context: it's a 'pure math' calculation that doesn't require external market data, which clarifies the tool's self-contained nature. However, it doesn't mention computational limits, error handling, or output format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first states the purpose and scope, the second adds critical behavioral context. Every word earns its place, and the information is front-loaded with no unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a calculation tool with read-only annotations and full schema coverage, the description is mostly complete. It clearly explains the tool's mathematical nature and data independence. However, without an output schema, it doesn't describe the return format or structure of the greeks, leaving a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all 6 parameters well-documented in the schema. The description adds no additional parameter semantics beyond what's already in the schema, so it meets the baseline of 3 for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('calculate') and resource ('Black-Scholes option greeks'), listing all 10 greeks by name. It distinguishes itself from siblings by emphasizing 'pure math — no market data needed,' contrasting with data-fetching tools like get_option_quote or get_stock_quote.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool ('no market data needed') and implicitly when not to use it (when live market data is required). The context distinguishes it from siblings that fetch real-time or historical data, providing clear alternatives for different use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calculate_kelly: Calculate Kelly Sizing [A, Read-only]
Compute Kelly criterion optimal position sizing for an option trade. Uses BSM expected value vs premium to find edge-maximizing bet size.
| Name | Required | Description | Default |
|---|---|---|---|
| mu | Yes | Expected annual return of underlying as decimal (0.10 = 10%) | |
| dte | Yes | Days to expiration | |
| spot | Yes | Current stock price | |
| type | Yes | 'call' or 'put' | |
| sigma | Yes | Implied volatility as decimal (0.20 = 20%) | |
| apiKey | Yes | Your FlashAlpha API key | |
| strike | Yes | Strike price | |
| premium | Yes | Option premium paid | |
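The tool derives sizing from BSM expected value versus premium. As a simpler illustration of the Kelly criterion itself (a discrete win/lose model, not the server's BSM-based method):

```python
def kelly_fraction(p_win, win_mult, loss_mult=1.0):
    """Discrete Kelly bet size: the bankroll fraction maximizing E[log wealth]
    for a bet that returns +win_mult per unit staked with probability p_win
    and loses loss_mult per unit otherwise."""
    q = 1.0 - p_win
    edge = p_win * win_mult - q * loss_mult
    if edge <= 0:
        return 0.0  # negative edge: Kelly says bet nothing
    return edge / (win_mult * loss_mult)

# Classic even-money example: 60% win probability -> bet about 20% of bankroll
print(kelly_fraction(0.6, 1.0))
```

The option version replaces the discrete payoffs with the BSM expected payoff distribution against the premium paid, but the edge-over-odds structure is the same.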
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the agent knows this is a safe calculation tool. The description adds useful context about the mathematical approach ('Uses BSM expected value vs premium') and objective ('edge-maximizing bet size'), but doesn't disclose computational limits, accuracy constraints, or error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly focused sentences with zero waste. The first sentence states the purpose, the second explains the methodology and objective. Every word earns its place in this efficient description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a calculation tool with readOnlyHint annotation and comprehensive parameter documentation, the description provides adequate context about what it does and how it works. The main gap is the lack of output schema, but the description compensates somewhat by indicating it returns 'optimal position sizing' and 'edge-maximizing bet size' results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, all 8 parameters are well-documented in the schema. The description adds no additional parameter information beyond what's already in the schema, so it meets the baseline of 3 where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Compute Kelly criterion optimal position sizing') and resource ('for an option trade'), distinguishing it from siblings like 'calculate_greeks' or 'solve_iv' by focusing on position sizing rather than Greeks or implied volatility calculations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('for an option trade') and mentions the method ('Uses BSM expected value vs premium'), but provides no explicit guidance on when to use this tool versus alternatives like 'calculate_greeks' or other financial calculation tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account: Get Account Info [A, Read-only]
Get your account info: plan, daily quota limit, usage today, remaining calls.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful context about what specific account information is returned (plan, quota, usage), which goes beyond the annotations. However, it doesn't describe rate limits, authentication requirements beyond the API key parameter, or error behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and lists all returned data points without unnecessary words. Every element serves a clear purpose in informing the user about what the tool provides.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one well-documented parameter and no output schema, the description is reasonably complete. It specifies the exact data returned, which compensates for the lack of output schema. However, it could benefit from mentioning authentication context or typical use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter (apiKey) fully documented in the schema. The description adds no additional parameter information beyond what the schema provides, maintaining the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resource ('account info'), listing the exact data returned (plan, daily quota limit, usage today, remaining calls). It distinguishes from siblings by focusing on account metadata rather than financial calculations or market data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (such as needing an API key) or compare itself with other account-related tools; none exist among the siblings, but the description still lacks context for when this call is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_advanced_volatility: Get Advanced Volatility [A, Read-only]
Get advanced volatility analytics: SVI parameters, forward prices, total variance surface, arbitrage detection, greeks surfaces (vanna, charm, volga, speed), and variance swap fair values. Alpha tier required.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
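For context on the 'SVI parameters' the description mentions: raw SVI expresses total implied variance at log-moneyness k as w(k) = a + b(rho(k - m) + sqrt((k - m)^2 + sigma^2)). A sketch with made-up parameters (illustrative only, not server output):

```python
from math import sqrt

def svi_total_variance(k, a, b, rho, m, sigma):
    """Raw SVI slice: w(k) = a + b * (rho*(k - m) + sqrt((k - m)**2 + sigma**2))."""
    return a + b * (rho * (k - m) + sqrt((k - m) ** 2 + sigma ** 2))

def svi_implied_vol(k, t, a, b, rho, m, sigma):
    """Annualized implied vol at log-moneyness k and maturity t (in years)."""
    return sqrt(svi_total_variance(k, a, b, rho, m, sigma) / t)

# Illustrative slice with mild put skew (rho < 0); parameters are invented
params = dict(a=0.01, b=0.1, rho=-0.4, m=0.0, sigma=0.2)
print(svi_implied_vol(0.0, 0.25, **params))  # ATM vol for a 3-month slice
```

With rho < 0, downside strikes (k < 0) carry more total variance than upside strikes, which is the typical equity skew shape the fitted surface encodes.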
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds value by specifying the 'Alpha tier required' constraint, which is useful context beyond annotations. However, it lacks details on behavioral traits like rate limits, response format, or error handling, leaving gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first clause, followed by a detailed list of outputs and a prerequisite note. It uses two efficient sentences with zero waste, making it appropriately sized and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (advanced analytics with multiple outputs) and lack of output schema, the description does well by listing specific outputs. However, it could improve by mentioning response structure or limitations, as annotations only cover read-only status, leaving some behavioral aspects undocumented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with parameters 'symbol' and 'apiKey' fully documented in the schema. The description does not add any parameter-specific semantics beyond what the schema provides, such as format examples or constraints, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resource ('advanced volatility analytics'), listing detailed outputs like SVI parameters, forward prices, total variance surface, arbitrage detection, greeks surfaces, and variance swap fair values. It distinguishes from siblings by specifying 'advanced' analytics, unlike simpler tools like 'get_volatility' or 'get_option_chain'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'Alpha tier required', providing clear context on prerequisites. However, it does not specify when to use this tool versus alternatives like 'get_volatility' or 'calculate_greeks', missing explicit guidance on sibling differentiation beyond the 'advanced' qualifier.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_chex: Get Charm Exposure (CHEX) [B, Read-only]
Get charm exposure (CHEX) by strike. Shows how dealer delta hedging changes as time passes — reveals time-decay-driven flows.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
| expiration | No | Optional expiration date YYYY-MM-DD | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, which the description doesn't contradict. The description adds valuable behavioral context about what the tool reveals ('how dealer delta hedging changes as time passes — reveals time-decay-driven flows'), which goes beyond the annotations. However, it doesn't mention authentication requirements (the apiKey parameter), rate limits, or response format details, which would be helpful given that no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded. The first clause states the core purpose ('Get charm exposure (CHEX) by strike'), and the second clause provides valuable additional context about what the metric reveals. Every word earns its place with zero wasted text or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has good annotations (readOnlyHint) and 100% schema coverage, the description provides adequate purpose explanation. However, with no output schema and multiple similar sibling tools, the description should ideally provide more context about the return format and differentiation from alternatives. The current description is complete enough for basic understanding but leaves gaps in usage guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all three parameters (symbol, apiKey, expiration). The description doesn't add any parameter-specific information beyond what's in the schema. With complete schema coverage, the baseline score of 3 is appropriate as the description doesn't enhance parameter understanding but doesn't need to compensate for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get charm exposure (CHEX) by strike' with additional context about what it reveals ('how dealer delta hedging changes as time passes — reveals time-decay-driven flows'). It distinguishes from siblings by focusing on a specific financial metric (CHEX) rather than other calculations like greeks, volatility, or quotes. However, it doesn't explicitly contrast with the most similar sibling 'get_exposure_summary'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools for financial calculations and data retrieval (e.g., calculate_greeks, get_exposure_summary, get_gex), there's no indication of when CHEX analysis is preferred over other exposure or volatility metrics. The description only explains what the tool does, not when it should be selected.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_dex: Get Delta Exposure (DEX) [A, Read-only]
Get delta exposure (DEX) by strike. Shows net dealer delta and directional bias from options hedging.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
| expiration | No | Optional expiration date YYYY-MM-DD | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful context about what data is returned (net dealer delta and directional bias), but doesn't disclose rate limits, authentication requirements beyond the apiKey parameter, or response format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get delta exposure (DEX) by strike') followed by clarifying details about what data is shown. Every word earns its place with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with good annotations and full parameter documentation, the description provides adequate context about what data is returned. However, without an output schema, it could benefit from more detail about response structure or data format to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get delta exposure (DEX) by strike') and the resource (options hedging data), distinguishing it from siblings like 'get_exposure_summary' or 'get_gex' by focusing on net dealer delta and directional bias from options hedging specifically.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for analyzing options hedging delta exposure, but provides no explicit guidance on when to use this tool versus alternatives like 'get_exposure_summary' or 'get_gex', nor any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_exposure_summary: Get Exposure Summary [A, Read-only]
Get full exposure summary: net GEX/DEX/VEX/CHEX, gamma regime (positive/negative), key levels, hedging estimates, zero-DTE breakdown, top strikes.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond annotations by specifying the comprehensive nature of the summary (including gamma regime, key levels, hedging estimates, etc.) and hinting at computational complexity through the detailed output list. It doesn't contradict annotations and provides useful operational insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that efficiently lists all key output components without unnecessary words. It's front-loaded with the main purpose ('Get full exposure summary') and follows with specific details, making it highly concise and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with no output schema, the description provides a comprehensive list of return metrics, which compensates well for the missing output schema. It covers the tool's purpose and output scope effectively, though it could benefit from mentioning data sources or update frequency to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (apiKey and symbol). The description doesn't add any parameter-specific information beyond what's in the schema, such as format examples or constraints. However, given the high schema coverage, a baseline score of 3 is appropriate as the schema adequately documents the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full exposure summary') and lists the exact metrics returned (net GEX/DEX/VEX/CHEX, gamma regime, key levels, hedging estimates, zero-DTE breakdown, top strikes). It distinguishes this tool from siblings like get_gex, get_dex, get_vex, get_chex, and get_zero_dte by indicating it provides a comprehensive summary rather than individual components.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by listing the specific exposure metrics, suggesting this tool is for obtaining a consolidated view of market exposure data. However, it doesn't explicitly state when to use this versus alternatives like get_stock_summary or the individual metric tools (e.g., get_gex), nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_gex: Get Gamma Exposure (GEX) [A, Read-only]
Get gamma exposure (GEX) by strike. Shows dealer gamma positioning, gamma flip, call/put walls. Reveals where dealer hedging creates support/resistance.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker (e.g. SPY, QQQ) | |
| expiration | No | Optional expiration date YYYY-MM-DD. Omit for all. | |
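To ground the idea of per-strike GEX: one common convention (assumed here for illustration; the listing does not document FlashAlpha's actual methodology) multiplies each option's gamma by open interest, the contract multiplier, and spot, counting calls positive and puts negative on the heuristic that dealers are net long calls and short puts:

```python
def gex_by_strike(chain, spot, multiplier=100):
    """Naive per-strike dealer gamma exposure.

    chain: iterable of dicts with 'strike', 'type' ('call'/'put'),
    'gamma', and 'open_interest'. Sign convention assumed: dealers are
    long calls and short puts, so calls contribute positive gamma and
    puts negative. Real dealer positioning is unobservable; this is only
    one common heuristic, not necessarily what the server does.
    """
    gex = {}
    for opt in chain:
        sign = 1 if opt["type"] == "call" else -1
        contrib = sign * opt["gamma"] * opt["open_interest"] * multiplier * spot
        gex[opt["strike"]] = gex.get(opt["strike"], 0.0) + contrib
    return gex

chain = [
    {"strike": 100, "type": "call", "gamma": 0.05, "open_interest": 1000},
    {"strike": 100, "type": "put", "gamma": 0.05, "open_interest": 400},
]
print(gex_by_strike(chain, spot=100))  # net positive gamma at the 100 strike
```

Strikes with large positive net gamma are where hedging flows tend to dampen moves (the 'walls' the description mentions); negative net gamma amplifies them.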
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond annotations by explaining what the tool reveals ('dealer gamma positioning', 'gamma flip', 'call/put walls', 'where dealer hedging creates support/resistance'), which helps the agent understand the analytical nature of the output. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two tightly packed sentences that each earn their place. The first sentence states the core function and key outputs, while the second explains the practical significance. No wasted words, and the most important information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with full parameter documentation but no output schema, the description provides good contextual completeness by explaining what kind of gamma exposure data is returned and its market significance. However, it doesn't describe the return format or structure, which would be helpful given the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all three parameters (symbol, apiKey, expiration). The description doesn't add any parameter-specific semantics beyond what's in the schema, so it meets the baseline expectation without providing extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get', 'Shows', 'Reveals') and resources ('gamma exposure (GEX) by strike', 'dealer gamma positioning', 'gamma flip', 'call/put walls', 'dealer hedging support/resistance'). It distinguishes from siblings by focusing specifically on gamma exposure metrics rather than broader calculations like 'calculate_greeks' or data retrieval like 'get_option_chain'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for analyzing dealer hedging effects and support/resistance levels, but doesn't explicitly state when to use this tool versus alternatives like 'get_exposure_summary' or 'get_vex'. No explicit when-not-to-use guidance or named alternatives are provided, leaving usage context somewhat implied rather than clearly defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_historical_option_quote: Get Historical Option Quote (Grade A, Read-only)
Get historical option quote from a specific date. Filter by expiry, strike, and type. Data from ClickHouse tick archive.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Date YYYY-MM-DD | |
| time | No | Optional time HH:mm | |
| type | No | 'C' or 'P' | |
| apiKey | Yes | Your FlashAlpha API key | |
| expiry | No | Expiration date YYYY-MM-DD | |
| strike | No | Strike price | |
| symbol | Yes | Underlying ticker | |
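Assuming the standard MCP JSON-RPC transport (a `tools/call` request), a call to this tool might look like the following sketch. The ticker, dates, and strike are illustrative values, and `YOUR_KEY` is a placeholder for a real FlashAlpha API key:

```python
import json

# Hypothetical MCP tools/call payload for get_historical_option_quote.
# All argument values below are illustrative, not taken from the server's docs.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_historical_option_quote",
        "arguments": {
            "apiKey": "YOUR_KEY",    # required
            "symbol": "SPY",         # required: underlying ticker
            "date": "2024-06-14",    # required: YYYY-MM-DD
            "time": "10:30",         # optional: HH:mm
            "expiry": "2024-06-21",  # optional filter: YYYY-MM-DD
            "strike": 540,           # optional filter
            "type": "C",             # optional filter: 'C' or 'P'
        },
    },
}
print(json.dumps(payload, indent=2))
```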
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating safe read operations. The description adds value by specifying the data source ('ClickHouse tick archive'), which gives context about data freshness and reliability. However, it doesn't disclose behavioral traits like rate limits, authentication requirements beyond the apiKey parameter, or error handling. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get historical option quote from a specific date') and includes key details (filtering, data source) without redundancy. Every word earns its place, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (7 parameters, historical data retrieval) and annotations covering read-only safety, the description is adequate but has gaps. It lacks output details (no output schema provided), doesn't explain error cases or data availability, and could better differentiate from siblings. For a read-only tool with good schema coverage, it meets minimum viability but isn't fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema. The description mentions filtering by 'expiry, strike, and type', which aligns with optional parameters in the schema but doesn't add meaning beyond what's already there (e.g., format details or constraints). With high schema coverage, the baseline score of 3 is appropriate as the description doesn't significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get historical option quote from a specific date' with specific filtering capabilities (expiry, strike, type) and data source (ClickHouse tick archive). It is implicitly distinguished from siblings like 'get_option_quote' (likely current quotes) and 'get_historical_stock_quote' (stocks rather than options), though the distinction is never stated outright. The verb+resource combination is specific but could be more explicit about sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'historical' and filtering parameters, suggesting it's for retrieving past option data rather than current quotes. However, it doesn't explicitly state when to use this vs. alternatives like 'get_option_quote' or 'get_historical_stock_quote', nor does it mention prerequisites or exclusions. The guidance is present but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_historical_stock_quote: Get Historical Stock Quote (Grade A, Read-only)
Get historical stock quote from a specific date and time. Data from ClickHouse tick archive.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Date YYYY-MM-DD | |
| time | No | Optional time HH:mm (e.g. 10:30). Omit for full day. | |
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock ticker | |
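The full-day-versus-timestamp behavior described in the table (omit `time` for the whole day) can be sketched as a small client-side argument builder. The function name and placeholder key are hypothetical, not part of the server:

```python
def historical_stock_quote_args(symbol, date, api_key, time=None):
    """Build the arguments dict for a get_historical_stock_quote call.

    Per the parameter table, omitting `time` requests the full trading day.
    Function name and placeholder values are illustrative.
    """
    args = {"apiKey": api_key, "symbol": symbol, "date": date}
    if time is not None:
        args["time"] = time  # HH:mm, e.g. "10:30"
    return args

full_day = historical_stock_quote_args("AAPL", "2024-06-14", "YOUR_KEY")
at_open = historical_stock_quote_args("AAPL", "2024-06-14", "YOUR_KEY", time="09:30")
```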
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds useful context about the data source ('ClickHouse tick archive') and clarifies that omitting time returns full-day data, but doesn't disclose rate limits, error conditions, or response format. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste. The first sentence states the core purpose, and the second adds essential context about the data source. It's front-loaded and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has moderate complexity (historical data retrieval); annotations cover safety (readOnlyHint) and the schema fully documents the parameters, but no output schema exists. The description lacks details on return values (e.g., quote fields), error handling, or API limitations, leaving gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds minimal value beyond the schema, only implying date/time usage without detailing format or constraints. Baseline 3 is appropriate as the schema carries the burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get historical stock quote') and resource ('from a specific date and time'), distinguishing it from siblings like 'get_stock_quote' (current quote) and 'get_historical_option_quote' (options data). It specifies the data source ('ClickHouse tick archive'), adding precision.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for historical stock data at a specific date/time, but doesn't explicitly state when to use this vs. alternatives like 'get_stock_quote' for current data or 'get_historical_option_quote' for options. No guidance on prerequisites (e.g., API key setup) or exclusions is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_levels: Get Key Options Levels (Grade B, Read-only)
Get key options levels: gamma flip point, call wall, put wall, max pain, highest OI strike. These act as support/resistance from dealer hedging.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, which the description aligns with by using 'Get' (implying a read operation). The description adds context by explaining that the retrieved levels 'act as support/resistance from dealer hedging', which provides behavioral insight beyond the annotations. However, it doesn't disclose other traits like rate limits, error conditions, or data freshness, leaving some gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that lists the key options levels and their functional context. It's front-loaded with the core purpose and avoids unnecessary details. While very concise, it could potentially benefit from slightly more structure (e.g., separating the list for clarity), but it earns its place without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (retrieving calculated options levels), the description covers the purpose and adds some behavioral context. However, with no output schema and annotations only providing readOnlyHint, it lacks details on return format, data types, or example outputs. The description doesn't fully compensate for these gaps, making it adequate but incomplete for optimal agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with clear documentation for both parameters (apiKey and symbol). The description doesn't add any parameter-specific semantics beyond what the schema provides, such as format examples or constraints. According to the rules, with high schema coverage (>80%), the baseline score is 3, which is appropriate here as the description doesn't compensate but doesn't need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves specific options levels (gamma flip point, call wall, put wall, max pain, highest OI strike) and mentions their function as support/resistance from dealer hedging. It uses the verb 'Get' with the resource 'key options levels', making the purpose explicit. However, it doesn't explicitly differentiate from sibling tools like get_option_chain or get_advanced_volatility, which might also provide options-related data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, exclusions, or specific contexts for use. Given the many sibling tools related to options and volatility (e.g., get_option_chain, get_advanced_volatility, get_gex), the lack of differentiation leaves the agent without clear usage instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_narrative: Get GEX Narrative (Grade A, Read-only)
Get verbal GEX narrative analysis. Describes gamma regime, key levels, dealer positioning, and price action implications in plain English.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds context about the output format (plain English narrative analysis covering specific aspects), which is valuable beyond annotations. However, it doesn't mention rate limits, authentication needs beyond the apiKey parameter, or error behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose and output characteristics. Every word adds value, with no redundant or vague phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (financial analysis output), the description adequately covers what the tool does and its output format. However, without an output schema, it could benefit from more detail on the narrative structure or example output. Annotations help by indicating read-only behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (apiKey and symbol). The description doesn't add any additional parameter semantics beyond what the schema provides, such as format examples or constraints, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'verbal GEX narrative analysis', specifying it describes gamma regime, key levels, dealer positioning, and price action implications in plain English. This distinguishes it from sibling tools like 'get_gex' (likely raw data) and 'get_exposure_summary' (different focus).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a verbal analysis of GEX is needed, but doesn't explicitly state when to use this tool versus alternatives like 'get_gex' (which might provide raw data) or other analysis tools. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_option_chain: Get Option Chain (Grade A, Read-only)
Get option chain metadata: available expirations and strikes for a ticker.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, indicating a safe read operation. The description adds context by specifying the type of metadata returned (expirations and strikes), which is useful beyond the annotations. However, it does not disclose behavioral traits like rate limits, authentication needs (implied by apiKey but not stated), or response format, leaving gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get option chain metadata') and specifies the scope ('available expirations and strikes for a ticker'). There is no wasted verbiage, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema), the description is minimally complete. It states what the tool does but lacks details on output format, error handling, or integration with siblings. With annotations covering safety, it meets basic needs but leaves room for improvement in guiding effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (apiKey and symbol). The description adds no additional meaning beyond the schema, such as format details or usage examples. Baseline score of 3 is appropriate as the schema adequately documents parameters without extra value from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resource ('option chain metadata'), specifying what metadata is retrieved ('available expirations and strikes for a ticker'). It distinguishes itself from siblings like 'get_option_quote' (which retrieves quotes rather than chain metadata) and 'get_historical_option_quote' (historical data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention when to choose this over 'get_option_quote' (for specific option data) or 'calculate_greeks' (for derived metrics), nor does it specify prerequisites or exclusions, leaving usage context implied at best.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_option_quote: Get Option Quote (Grade A, Read-only)
Get live option quote with bid, ask, mid, IV, greeks, open interest, and volume. Filter by expiry, strike, and type.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | 'C' or 'P' (call or put) | |
| apiKey | Yes | Your FlashAlpha API key | |
| expiry | No | Expiration date YYYY-MM-DD | |
| strike | No | Strike price | |
| symbol | Yes | Underlying ticker (e.g. SPY, AAPL) | |
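As a sketch of how the optional filters compose, an agent-side argument builder might look like the following. The function name, validation logic, and example values are hypothetical, not part of the server:

```python
VALID_TYPES = {"C", "P"}  # call or put, per the parameter table

def option_quote_arguments(api_key, symbol, expiry=None, strike=None, opt_type=None):
    """Build arguments for a get_option_quote call; optional values act as filters.

    Illustrative helper: only the apiKey/symbol requirement and the
    'C'/'P' convention come from the parameter table above.
    """
    if opt_type is not None and opt_type not in VALID_TYPES:
        raise ValueError(f"type must be 'C' or 'P', got {opt_type!r}")
    args = {"apiKey": api_key, "symbol": symbol}
    for key, value in (("expiry", expiry), ("strike", strike), ("type", opt_type)):
        if value is not None:
            args[key] = value
    return args

args = option_quote_arguments(
    "YOUR_KEY", "SPY", expiry="2024-06-21", strike=540.0, opt_type="P"
)
```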
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, which the description aligns with by describing a data retrieval operation. The description adds valuable behavioral context beyond annotations by specifying it returns 'live' data (real-time/current), listing the specific fields returned (bid, ask, etc.), and mentioning filtering capabilities. It doesn't contradict annotations, but could mention rate limits or authentication requirements more explicitly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core purpose and efficiently lists both the returned data fields and filter parameters. Every element serves a clear purpose with zero wasted words, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with good annotations and full schema coverage, the description provides adequate context by specifying it's for live quotes and listing return fields. However, without an output schema, it could more explicitly describe the response structure or data formats. The sibling context is partially addressed through implicit differentiation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema. The description adds minimal value beyond the schema by mentioning the three filter parameters (expiry, strike, type) but doesn't provide additional semantic context like format examples for 'symbol' or explain parameter interactions. Baseline 3 is appropriate given the comprehensive schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get live option quote') and specifies the exact data returned (bid, ask, mid, IV, greeks, open interest, volume). It distinguishes itself from siblings like get_option_chain (which returns multiple options) and get_historical_option_quote (which returns historical data) by focusing on a single live quote with filtering capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the filter parameters (expiry, strike, type), suggesting this is for retrieving specific option quotes. However, it doesn't explicitly state when to use this versus alternatives like get_option_chain (for multiple options) or get_historical_option_quote (for historical data), nor does it mention prerequisites like needing an API key.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stock_quote: Get Stock Quote (Grade A, Read-only)
Get real-time stock quote (bid, ask, mid, last price) for a ticker symbol.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock ticker (e.g. SPY, AAPL, TSLA) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds value by specifying the real-time nature and the specific quote fields returned (bid, ask, mid, last price), which are not covered by annotations. However, it doesn't disclose behavioral aspects like rate limits, authentication needs beyond the apiKey parameter, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get real-time stock quote') and includes essential details (data fields and resource) without any wasted words. Every part of the sentence contributes directly to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 parameters, no output schema), the description is reasonably complete. It covers the purpose, real-time aspect, and returned data fields. However, without an output schema, it could benefit from more detail on the response structure, but the annotations and schema provide adequate support for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (apiKey and symbol). The description adds no additional parameter semantics beyond what the schema provides, such as format details or usage examples. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get real-time stock quote') and resource ('for a ticker symbol'), with explicit mention of the data fields returned (bid, ask, mid, last price). It distinguishes itself from get_historical_stock_quote by specifying 'real-time' and from get_stock_summary by focusing on quote data rather than summary information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'real-time' and 'ticker symbol', suggesting when to use this tool versus historical alternatives. However, it lacks explicit guidance on when not to use it or direct alternatives among the many sibling tools, leaving some ambiguity for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stock_summary: Get Stock Summary (Grade B, Read-only)
Get comprehensive stock summary: price, ATM IV, historical vol, VRP, skew, term structure, options flow, exposure data, and macro context (VIX, Fear & Greed, yield curve).
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF/index ticker (e.g. SPY, AAPL, SPX) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds value by specifying the scope of data returned (e.g., price, VRP, macro context), which isn't covered by annotations. However, it doesn't disclose behavioral traits like rate limits, error handling, or data freshness, leaving gaps despite the annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key action ('Get comprehensive stock summary') and lists data components without unnecessary words. It could be slightly more structured by grouping related data points, but it's appropriately sized and avoids redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (returning multiple data types) and lack of output schema, the description provides a good overview of what data to expect. However, it doesn't specify format, units, or how data is organized, which could be crucial for an AI agent. With annotations covering safety, it's adequate but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear documentation for both parameters (symbol and apiKey). The description doesn't add any parameter-specific details beyond what the schema provides, such as format examples or constraints. Baseline 3 is appropriate since the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('comprehensive stock summary'), listing detailed data components like price, ATM IV, VRP, and macro context. It distinguishes itself from simpler siblings like 'get_stock_quote' by emphasizing comprehensiveness, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for comprehensive stock analysis by listing extensive data points, suggesting it's for detailed summaries rather than basic quotes. However, it lacks explicit guidance on when to use this vs. alternatives like 'get_stock_quote' or 'get_exposure_summary', and doesn't mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tickers: List Available Tickers (Grade B, Read-only)
List all available stock/ETF tickers with live options data.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds context by specifying 'live options data', which hints at real-time or current data, but doesn't disclose behavioral traits like rate limits, pagination, or data freshness. With annotations covering safety, it adds some value but not rich behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and every part earns its place, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, read-only), no output schema, and good annotations, the description is adequate but incomplete. It doesn't cover aspects like return format (e.g., list structure, data fields) or potential limitations, which could help the agent use it more effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'apiKey' fully documented in the schema. The description doesn't add any meaning beyond the schema, such as explaining why the API key is needed or how it's used, so it meets the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'stock/ETF tickers with live options data', making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_stock_summary' or 'get_option_chain', which might also involve ticker information, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites like needing an API key or compare it to siblings such as 'get_stock_quote' or 'get_option_chain', leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vex: Get Vanna Exposure (VEX) (Grade B, Read-only)
Get vanna exposure (VEX) by strike. Shows how dealer hedging changes with volatility moves.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
| expiration | No | Optional expiration date YYYY-MM-DD | |
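For readers unfamiliar with the metric: vanna is the sensitivity of an option's delta to a change in implied volatility, which is why the description ties it to dealer hedging under volatility moves. The sketch below is illustrative background only, not the server's implementation; it assumes the standard BSM model with zero rates and dividends, and the `sign` field for dealer positioning is a hypothetical convention.

```python
import math

def bs_vanna(spot, strike, dte, vol, r=0.0):
    """BSM vanna (dDelta/dVol); identical for calls and puts."""
    t = dte / 365.0
    d1 = (math.log(spot / strike) + (r + 0.5 * vol**2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    phi = math.exp(-0.5 * d1 * d1) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    return -phi * d2 / vol

def vex_by_strike(chain, spot):
    """Aggregate vanna exposure per strike. Each row is a dict with
    strike, dte, iv, open_interest, and a hypothetical dealer sign
    (+1 dealer long, -1 dealer short), scaled by 100 shares/contract."""
    vex = {}
    for row in chain:
        v = bs_vanna(spot, row["strike"], row["dte"], row["iv"])
        vex[row["strike"]] = vex.get(row["strike"], 0.0) \
            + row["sign"] * v * row["open_interest"] * 100
    return vex
```

Positive aggregate vanna at a strike means dealer deltas grow as implied vol rises, forcing hedging flows in the direction of the move; the tool presumably returns a per-strike profile of this kind.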
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds context about what VEX measures ('how dealer hedging changes with volatility moves'), which is valuable beyond the annotations. However, it doesn't disclose behavioral traits like rate limits, authentication needs beyond the apiKey parameter, or what happens with invalid inputs. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two clear sentences that front-load the core purpose. Every word earns its place, with no redundant information. The structure moves from the basic function to the specific insight provided, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with good annotations and full schema coverage, the description provides adequate basic context. However, without an output schema, it doesn't explain what the tool returns (e.g., data format, structure, or sample values). Given the specialized financial nature of VEX and many sibling alternatives, more context about the output and use cases would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters (symbol, apiKey, expiration). The description adds no additional parameter semantics beyond what's in the schema. It mentions 'by strike', which might imply strike-related filtering, but since no strike parameter exists, this creates potential confusion rather than clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get vanna exposure (VEX) by strike' with the specific action 'shows how dealer hedging changes with volatility moves.' It distinguishes from siblings by focusing on VEX rather than other metrics like Greeks, volatility, or quotes. However, it doesn't explicitly contrast with similar exposure tools like get_gex or get_exposure_summary.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'by strike' but doesn't clarify if this is for single strikes, ranges, or all strikes. With many sibling tools for calculations, exposures, and quotes, there's no indication of when VEX is preferred over other exposure metrics or calculation tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_volatility: Get Volatility Analysis (Grade A, Read-only)
Get comprehensive volatility analysis: ATM IV, realized vol (5/10/20/30d), VRP, 25-delta skew, IV term structure, GEX by DTE, theta by DTE, hedging scenarios, liquidity metrics.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
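As background on the realized-vol windows (5/10/20/30d) the description lists, a standard close-to-close estimator can be sketched as follows. This is an assumption about methodology, not FlashAlpha's documented formula; servers vary in return convention (log vs. simple) and annualization factor.

```python
import math

def realized_vol(closes, window):
    """Annualized close-to-close realized volatility over the last
    `window` daily log returns (sample variance, 252 trading days/yr)."""
    rets = [math.log(closes[i] / closes[i - 1]) for i in range(1, len(closes))]
    tail = rets[-window:]
    mean = sum(tail) / len(tail)
    var = sum((r - mean) ** 2 for r in tail) / (len(tail) - 1)
    return math.sqrt(var) * math.sqrt(252)
```

Running this over the same close series with windows of 5, 10, 20, and 30 would reproduce the kind of multi-horizon realized-vol ladder the description enumerates.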
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable context by specifying the comprehensive set of metrics returned (e.g., ATM IV, realized vol, VRP, skew), which goes beyond annotations to clarify the tool's behavioral output. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that front-loads the purpose ('Get comprehensive volatility analysis') and efficiently lists all key metrics without unnecessary words. Every part earns its place by informing the user of the tool's scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple volatility metrics), annotations cover safety (read-only), and schema fully documents parameters, the description provides a clear overview of what analyses are included. However, without an output schema, it could benefit from more detail on return format or data structure to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (apiKey and symbol). The description doesn't add parameter-specific semantics beyond what the schema provides, such as symbol format examples or apiKey usage details, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Get comprehensive volatility analysis' followed by a detailed list of specific metrics (ATM IV, realized vol, VRP, skew, etc.), clearly indicating it retrieves multiple volatility-related analyses for a given symbol. This distinguishes it from sibling tools like get_vrp (single metric) or get_advanced_volatility (likely more complex).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for volatility analysis needs but doesn't explicitly state when to use this tool versus alternatives like get_vrp (for VRP only) or get_advanced_volatility. It provides context through the listed metrics but lacks explicit guidance on tool selection or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vrp: Get VRP Dashboard (Grade A, Read-only)
Get volatility risk premium (VRP) dashboard: live IV vs realized vol, VRP percentiles, term structure, regime classification, strategy scores, and macro context.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
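VRP itself is simply implied minus realized volatility. The regime thresholds below are hypothetical placeholders to show how a 'regime classification' like the one the dashboard advertises might work; the server's actual rules are not documented here.

```python
def vrp(atm_iv, rv):
    """Volatility risk premium: implied vol minus realized vol, in vol points."""
    return atm_iv - rv

def classify_regime(vrp_value, rich=0.04, cheap=-0.02):
    """Hypothetical cutoffs: options 'rich' when IV exceeds RV by 4+ vol pts,
    'cheap' when RV exceeds IV by 2+ vol pts, 'fair' in between."""
    if vrp_value >= rich:
        return "rich"    # premium selling favored
    if vrp_value <= cheap:
        return "cheap"   # long-vol favored
    return "fair"
```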
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, which the description aligns with by using 'Get' (a read operation). The description adds value beyond annotations by detailing the specific components of the dashboard (e.g., live IV vs realized vol, VRP percentiles), which helps the agent understand the behavioral output. However, it does not disclose additional traits like rate limits or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and lists dashboard components without unnecessary words. Every part of the sentence contributes to understanding the tool's functionality, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (providing a detailed dashboard) and lack of output schema, the description does a good job of outlining what the dashboard includes. However, it could be more complete by mentioning the format of the return data or any limitations. The annotations cover the read-only aspect, but additional context on output structure would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (apiKey and symbol). The description does not add meaning beyond the schema, as it focuses on the dashboard content rather than parameter usage. Baseline score is 3 since the schema adequately covers parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'volatility risk premium (VRP) dashboard', with specific details about what the dashboard includes: live IV vs realized vol, VRP percentiles, term structure, regime classification, strategy scores, and macro context. It distinguishes from sibling tools like 'get_vrp_history' by focusing on a current dashboard rather than historical data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining a comprehensive VRP dashboard, but it does not explicitly state when to use this tool versus alternatives like 'get_volatility' or 'get_vrp_history'. It provides context about the dashboard content but lacks explicit guidance on exclusions or prerequisites beyond the required parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vrp_history: Get VRP History (Grade A, Read-only)
Get historical VRP time series: daily ATM IV, realized vol (5/10/20/30d), VRP, straddle price, and expected move for charting and backtesting.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Number of days of history (default 30, max 365) | |
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
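The 'expected move' and 'straddle price' fields in the series are linked by well-known approximations. A sketch using standard textbook formulas, which may differ from the server's exact convention:

```python
import math

def expected_move(spot, atm_iv, dte):
    """One-standard-deviation move implied by ATM vol: S * sigma * sqrt(T)."""
    return spot * atm_iv * math.sqrt(dte / 365.0)

def atm_straddle_approx(spot, atm_iv, dte):
    """Brenner-Subrahmanyam approximation for an ATM straddle:
    roughly 0.7979 * S * sigma * sqrt(T)."""
    return 2.0 * spot * atm_iv * math.sqrt(dte / 365.0) / math.sqrt(2.0 * math.pi)
```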
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds valuable context by specifying the data types returned (time series with specific metrics) and the purpose (charting and backtesting), which helps the agent understand the tool's behavior beyond the annotations, though it lacks details on rate limits or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and lists the returned data points without unnecessary words, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (historical data retrieval with multiple metrics) and the absence of an output schema, the description adequately covers what data is returned and its purpose. However, it could be more complete by mentioning the response structure or any limitations, such as data availability or format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the parameters (symbol, apiKey, days). The description does not add any additional meaning or clarification about the parameters beyond what the schema provides, such as examples or constraints, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get historical VRP time series') and enumerates the exact data points returned (daily ATM IV, realized vol for multiple periods, VRP, straddle price, expected move), distinguishing it from sibling tools like 'get_vrp' or 'get_volatility' by emphasizing historical data for charting and backtesting purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for 'charting and backtesting,' which provides some context, but it does not explicitly state when to use this tool versus alternatives like 'get_vrp' or 'get_historical_stock_quote,' nor does it mention any exclusions or prerequisites beyond the required parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_zero_dte: Get Zero-DTE Analytics (Grade A, Read-only)
Get zero-days-to-expiration (0DTE) analytics: intraday gamma, time decay acceleration, pin risk, dealer hedging pressure for contracts expiring today.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | Yes | Your FlashAlpha API key | |
| symbol | Yes | Stock/ETF ticker | |
| strike_range | No | Strike range as decimal fraction of spot (default 0.03 = 3%) | |
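The 'decimal fraction of spot' phrasing for `strike_range` can be made concrete with a small sketch of the implied filtering; this is an assumption about the behavior, not confirmed by the docs:

```python
def filter_strikes(strikes, spot, strike_range=0.03):
    """Keep strikes within +/- strike_range of spot (default 0.03 = 3%),
    matching the documented default for this parameter."""
    lo, hi = spot * (1 - strike_range), spot * (1 + strike_range)
    return [k for k in strikes if lo <= k <= hi]
```

With spot at 100 and the default 3% band, strikes below 97 or above 103 would be dropped from the 0DTE analytics.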
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable context beyond annotations by specifying the analytics types returned (gamma, time decay acceleration, etc.) and the expiration constraint ('contracts expiring today'). It doesn't mention rate limits, authentication requirements (though apiKey is in schema), or response format details, but provides meaningful behavioral information about what data to expect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and key details. Every word earns its place: it defines the acronym (0DTE), lists the specific analytics returned, and specifies the expiration constraint. There's no wasted verbiage or redundancy with the title or schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with good annotations and 100% schema coverage, the description provides adequate context about what analytics are returned and their scope. The main gap is the lack of output schema, so the description doesn't explain the structure or format of returned data. However, given the tool's relatively straightforward purpose and good parameter documentation, the description is reasonably complete for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain symbol format or strike_range implications). With complete schema coverage, the baseline score of 3 is appropriate as the description doesn't enhance parameter understanding but doesn't need to compensate for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resource ('zero-days-to-expiration (0DTE) analytics') with precise scope ('for contracts expiring today'). It lists specific analytics types (intraday gamma, time decay acceleration, pin risk, dealer hedging pressure) that distinguish it from sibling tools like 'get_advanced_volatility' or 'get_volatility' which likely focus on different volatility metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'for contracts expiring today' and the analytics types mentioned, suggesting this is for intraday analysis of near-expiration options. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_option_chain' or 'get_advanced_volatility', nor does it mention any prerequisites or exclusions beyond the expiration timeframe.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
solve_iv: Solve Implied Volatility (Grade A, Read-only)
Solve for implied volatility from option market price. Reverse-engineers BSM to find what vol is priced in.
| Name | Required | Description | Default |
|---|---|---|---|
| dte | Yes | Days to expiration | |
| spot | Yes | Current stock price | |
| type | Yes | 'call' or 'put' | |
| price | Yes | Option market price | |
| apiKey | Yes | Your FlashAlpha API key | |
| strike | Yes | Strike price | |
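To illustrate what 'reverse-engineers BSM' means, here is a minimal bisection solver over the Black-Scholes price, assuming zero rates and dividends. The server's actual numerics (e.g. Newton's method, dividend handling) are not documented, so treat this as a sketch of the idea:

```python
import math

def bs_price(spot, strike, dte, vol, opt_type, r=0.0):
    """Black-Scholes-Merton price, zero dividends, flat rate r."""
    t = dte / 365.0
    d1 = (math.log(spot / strike) + (r + 0.5 * vol**2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    if opt_type == "call":
        return spot * N(d1) - strike * math.exp(-r * t) * N(d2)
    return strike * math.exp(-r * t) * N(-d2) - spot * N(-d1)

def solve_iv(price, spot, strike, dte, opt_type, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisect on vol until the model price matches the observed market price.
    Works because BSM price is monotone increasing in volatility."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_price(spot, strike, dte, mid, opt_type) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Round-tripping a price generated at a known vol through the solver recovers that vol, which is exactly the 'what vol is priced in' question the tool answers.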
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, confirming this is a safe read operation. The description adds behavioral context by specifying it 'reverse-engineers BSM to find what vol is priced in,' which clarifies the computational method. However, it doesn't disclose rate limits, error handling, or output format, leaving gaps in behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and highly concise, consisting of two efficient sentences that directly state the tool's purpose and method. Every word contributes value, with no wasted information, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mathematical calculation with 6 parameters) and lack of output schema, the description is minimally adequate. It explains the core function but omits details on return values, error conditions, or integration with sibling tools, leaving room for improvement in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters clearly documented in the input schema. The description adds no additional parameter semantics beyond implying the use of BSM model inputs, so it meets the baseline of 3 without compensating for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('solve for implied volatility') and resource ('from option market price'), distinguishing it from siblings like 'calculate_greeks' or 'get_volatility' by focusing on reverse-engineering the Black-Scholes-Merton (BSM) model. It precisely defines the tool's mathematical purpose without redundancy.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'get_volatility' or 'get_advanced_volatility', nor does it mention prerequisites like needing an API key or valid option data. Usage is implied only through the mathematical context, lacking explicit when/when-not instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.