InsightSentry MCP

by rezmeplxrf

Server Quality Checklist

58%
Profile completion
A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Tools are generally well distinguished by asset class and action. The main ambiguity is between get_symbol_history (deep archive) and get_symbol_series (recent data), whose subtle distinction could cause misselection. The options endpoints (get_options_expiration, get_options_strike, list_options) are well differentiated by filtering approach.

    Naming Consistency 4/5

    Follows a consistent verb_noun pattern, with the 'get_' prefix dominating (14 tools). Minor deviations include 'list_options' (which could be 'get_options' for consistency with 'get_documents') and 'search_symbols' (semantically distinct from the 'screen_' tools, but a third verb nonetheless). The 'screen_' and 'get_*_screener_params' patterns are applied consistently across asset classes.

    Tool Count 3/5

    At 29 tools, this exceeds the 25+ threshold where toolsets typically become unwieldy. While the financial data domain is broad, the count feels heavy—screeners and their param getters for each asset class (8 tools total) could potentially be consolidated, as could the two historical data endpoints. Agents may struggle to navigate the full surface area efficiently.

    Completeness 4/5

    Provides comprehensive coverage for financial market data: four asset classes with screeners (stocks, bonds, ETFs, crypto), options chains, fundamentals (meta, current, time-series), corporate calendars (earnings, dividends, IPOs), documents, and news. Minor gaps include the lack of a futures screener (despite having futures contract support) and no dedicated technical indicator endpoint outside of screening.

  • Average 4.2/5 across 29 of 29 tools scored. Lowest: 3.1/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v1.3.3

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 29 tools.
  • No known security issues or vulnerabilities reported.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation ('Retrieve') and lists the specific data fields returned, which helps compensate for the missing output schema. However, it lacks details on rate limits, data freshness, error conditions, or what 'session corrections' specifically entails.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two sentences, but the first ('Trading session information.') is a sentence fragment that merely restates the concept without adding value. The second sentence contains the actual operative content. The structure is front-loaded with fluff rather than substance.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a tool with only two parameters and no output schema, the description is minimally adequate. It compensates for the missing output schema by listing the data fields returned (holidays, timezone, etc.), but given the lack of annotations, it should provide more operational context or usage prerequisites.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, documenting both the symbol format (Exchange:Symbol) and the JSONata filter syntax. The description adds no additional parameter semantics, meeting the baseline expectation for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
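    To make the two documented parameters concrete, here is a minimal client-side sketch of how a call's arguments might be assembled, assuming the 'Exchange:Symbol' format and the JSONata `filter` parameter described above. The regex and the JSONata projection are illustrative assumptions, not the server's actual validation rules.

    ```python
    import re

    # Client-side sanity check for the "Exchange:Symbol" format documented in
    # the input schema (e.g. "NASDAQ:AAPL"). The pattern is an assumption
    # inferred from the examples, not an official rule.
    SYMBOL_RE = re.compile(r"^[A-Z0-9_]+:[A-Z0-9.\-]+$")

    def is_valid_symbol(symbol: str) -> bool:
        """Return True if `symbol` looks like Exchange:Symbol."""
        return bool(SYMBOL_RE.match(symbol))

    # A call's arguments might then look like this; `filter` takes a JSONata
    # expression per the schema (the projection below is illustrative).
    args = {
        "symbol": "NASDAQ:AAPL",
        "filter": '{"tz": timezone, "holidays": holidays}',
    }
    assert is_valid_symbol(args["symbol"])
    ```

    Validating the symbol shape before the call avoids a round trip on an obviously malformed input; the server's own error behavior is not documented in the description.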

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Retrieve') and clearly identifies the resource (trading session details). It enumerates specific data points returned (holidays, trading hours, timezone, session corrections), which distinguishes it from siblings like get_quotes or get_symbol_history. However, it lacks explicit differentiation from get_symbol_info.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like get_symbol_info or get_symbol_history. There are no prerequisites, exclusions, or contextual recommendations provided.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and successfully discloses the complete return structure (pagination fields, data array shape, field types), which compensates for the missing output schema. However, it omits other behavioral traits like caching duration or data freshness guarantees.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is information-dense and front-loaded with the return structure, which is valuable. However, the opening 'News feed.' fragment wastes space, and embedding the full JSON return type inline reduces readability compared to separating purpose from return value documentation.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of output schema, the description adequately compensates by detailing the return structure including pagination metadata and article fields. It covers the tool's functionality well for a 4-parameter read operation, though it could mention pagination behavior or data volume limits.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents all 4 parameters (keywords, limit, page, filter). The description mentions 'optional keyword filtering' which aligns with the keywords parameter, but adds no additional semantic context beyond what the schema already provides, warranting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves 'latest financial news' (specific verb + resource), distinguishing it from sibling screening and fundamental data tools. However, the prefix 'News feed.' is tautological given the tool name, preventing a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description mentions 'optional keyword filtering' but provides no explicit guidance on when to use this tool versus alternatives like 'search_symbols' or document retrieval tools, nor any prerequisites or rate limit warnings.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden effectively. It discloses the return structure (inline JSON), batch constraints for downstream get_quotes calls ('up to 10 codes'), and behavioral nuances like last_price inclusion when range is used. Missing auth requirements and rate limits prevent a 5.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is information-dense and front-loaded with purpose, but suffers from redundancy—mentioning the get_quotes 10-code limitation twice in slightly different ways. The inline return schema JSON is bulky but necessary given the lack of structured output schema.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For an 8-parameter options tool with no output schema, the description is nearly complete. It provides the full return structure inline, explains domain-specific fields (Greeks), and maps the workflow to sibling tools. The contradiction with schema regarding strike/range exclusivity slightly undermines completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (baseline 3), but the description adds significant value: concrete option code examples (OPRA:AAPL270617C230.0), clarification that range represents ±N% of current price, and workflow context for using the returned codes. This exceeds the schema's basic descriptions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
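    The option code format cited above (OPRA:AAPL270617C230.0) can be decomposed mechanically. The sketch below assumes an Exchange:ROOT + YYMMDD + C/P + strike shape inferred from that single example, not an official symbology spec.

    ```python
    import re
    from dataclasses import dataclass

    # Pattern inferred from the one example in the tool description
    # (OPRA:AAPL270617C230.0); real option codes may have edge cases this
    # sketch does not handle.
    OPTION_RE = re.compile(
        r"^(?P<exchange>[A-Z]+):(?P<root>[A-Z]+)"
        r"(?P<yy>\d{2})(?P<mm>\d{2})(?P<dd>\d{2})"
        r"(?P<cp>[CP])(?P<strike>\d+(?:\.\d+)?)$"
    )

    @dataclass
    class OptionCode:
        exchange: str
        root: str
        expiry: str   # ISO date, assuming a 20xx expiry year
        kind: str     # "call" or "put"
        strike: float

    def parse_option_code(code: str) -> OptionCode:
        m = OPTION_RE.match(code)
        if not m:
            raise ValueError(f"unrecognized option code: {code}")
        return OptionCode(
            exchange=m["exchange"],
            root=m["root"],
            expiry=f"20{m['yy']}-{m['mm']}-{m['dd']}",
            kind="call" if m["cp"] == "C" else "put",
            strike=float(m["strike"]),
        )
    ```

    A decomposition like this lets an agent confirm that codes returned by the chain tools match the expiration and strike it requested before feeding them to get_quotes.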

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states it retrieves 'option chain data filtered by strike price or price range' with specific verbs and resources. It distinguishes from siblings by explicitly naming get_quotes (for last price/volume) and get_symbol_series (for historical data), though it could better differentiate from list_options or get_options_expiration.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While it provides explicit alternatives (when to use get_quotes vs this tool), it contains misleading guidance stating strike and range can be provided as 'both', which directly contradicts the schema that specifies 'When provided, strike parameter is ignored.' This could confuse agents about parameter interaction.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It compensates well by disclosing the complete return data structure (total_count, range, data array with specific fields), since no output schema exists, and notes the default temporal scope (current week).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information density is high but structure is fragmented (opening sentence fragment, arrow notation for return type, terse parameter notes). The inline JSON return structure is functional but reduces readability. No wasted words, but could be better structured.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no output schema, the description adequately compensates by detailing the return structure. All 3 parameters are well-documented in schema. Tool scope is simple (data retrieval), and description provides sufficient context for invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage. The description adds value by mapping shorthand parameter names to actions ('w' for looking ahead, 'c' for country filtering), which complements the technical schema descriptions without redundancy.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb (Retrieve) and resource (economic events calendar data) and distinguishes from siblings like get_earnings or get_dividends by specifying 'Economic events' (macro/country-level vs corporate). However, could be clearer about what constitutes an economic event (e.g., central bank meetings, CPI releases).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implicit usage guidance through parameter hints ('Use w to look ahead', 'Filter by country with c') and notes the default behavior (current week). Lacks explicit when-to-use guidance comparing it to sibling tools like get_earnings or get_ipos, which also return calendar data.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Since no annotations exist, the description carries the full burden. It excellently compensates for the missing output schema by providing the complete return structure inline. It also discloses default behavior (current week) that is not visible in the input schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Despite being dense due to the inline JSON return structure, every element serves a critical purpose. Information is front-loaded with the purpose statement, followed by return schema, defaults, and parameter usage. The inline JSON is necessary given the lack of structured output schema.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of annotations and structured output schema, the description provides comprehensive coverage by including the return type structure, default behaviors, and parameter filtering capabilities. It adequately prepares an agent to invoke the tool successfully.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description largely restates schema content (e.g., 'NASDAQ:AAPL' example duplicates schema). It adds 'Default: current week' which is behavioral context for the 'w' parameter, but does not significantly expand on parameter semantics beyond the schema definitions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clearly states it retrieves 'earnings calendar data' with a specific time range scope. The prefix 'Earnings calendar' distinguishes it from sibling tools like get_dividends, get_ipos, and get_events by specifying the exact financial dataset.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implied usage guidance through default behavior ('Default: current week') and parameter hints ("Use 'w' to look ahead"), but lacks explicit guidance on when to choose this over get_events or get_fundamentals_series for earnings-related data.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It compensates well by detailing the return structure (available_fields, available_exchanges, sortOrder), the data format (flat string arrays vs objects), and domain constraints (no country filter). It could improve by mentioning caching, rate limits, or whether results are static or dynamic.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense with minimal waste. Front-loads purpose, details return structure inline using arrow notation (slightly informal but efficient), notes format constraints, and ends with actionable next step. Each sentence provides distinct value (purpose, return shape, data types, constraints, workflow).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for a single-parameter metadata tool. Compensates for missing output schema by explicitly describing return object structure and array contents. Covers crypto-specific domain logic (no country filtering). Lacking only operational details like rate limiting or freshness of data.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, providing complete documentation of the optional 'filter' JSONata parameter. The description does not mention the filter parameter, but the baseline score of 3 is appropriate since the schema fully documents inputs without needing supplementary text.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (get/retrieve) and resource (crypto screener parameters/fields). Distinguishes from sibling stock/bond/etf screener param tools by explicitly mentioning 'crypto' and noting the 'No country filter for crypto' constraint. Slightly vague on 'parameters' meaning without reading the return structure, but clarified in subsequent text.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly directs next step: 'Next: use screen_crypto with these fields,' establishing clear workflow sequencing. Implies when to use (before screening) and implicitly distinguishes from direct screening tools. Lacks explicit 'when not to use' exclusions, but the crypto-specific scope provides clear context for selection.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and successfully discloses key behavioral traits: it includes the complete return schema inline (compensating for no output_schema), explains that last_price is only returned when using date ranges, and clarifies that this tool returns greeks/theoretical prices but not real-time market data (which requires a separate get_quotes call).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description contains redundant sentences (repeating the expiration/from/to logic twice) and starts with a tautological phrase ('Option chain by expiration' followed by 'Retrieve option chain data by expiration'). However, the inline return schema, while verbose, is useful given the lack of output_schema, and the overall length is appropriate for the complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a complex financial tool with 9 parameters and no annotations, the description is nearly complete. It includes the return structure, documents the two-step workflow (chain retrieval then quote retrieval), and references relevant siblings. Minor gaps include missing rate limit information and no mention of the filter parameter's JSONata syntax examples, but these are acceptable omissions given the schema coverage.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description largely restates the mutual exclusivity logic already present in the schema (expiration vs from/to). It adds minimal semantic value beyond the schema, though the example option code format (OPRA:AAPL270617C230.0) in the context of get_quotes usage indirectly helps clarify the code parameter format.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves option chain data by expiration, distinguishing it from siblings like get_quotes (for last price/volume) and get_symbol_series (for historical data). However, it doesn't explicitly differentiate from similar sibling get_options_strike, and the opening 'Option chain by expiration' is somewhat telegraphic.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit guidance on when to use alternatives, specifically directing users to get_quotes for last price/volume and get_symbol_series for historical data. It clearly explains the mutual exclusivity logic between the expiration parameter and from/to date range parameters, and notes the 10-code limit for subsequent quote requests.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
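    The two behavioral rules praised here (expiration vs. from/to exclusivity, and the 10-code limit on follow-up get_quotes calls) translate into simple client-side guards. A sketch under those assumptions; the helper names and the exact parameter names are illustrative:

    ```python
    from typing import Iterator, Optional

    def chain_args(symbol: str, expiration: Optional[str] = None,
                   date_from: Optional[str] = None,
                   date_to: Optional[str] = None) -> dict:
        """Build arguments for the chain tool, enforcing the documented
        mutual exclusivity of `expiration` and the from/to range."""
        if expiration and (date_from or date_to):
            raise ValueError("pass either expiration or a from/to range, not both")
        args: dict = {"symbol": symbol}
        if expiration:
            args["expiration"] = expiration
        else:
            args.update({"from": date_from, "to": date_to})
        return args

    def quote_batches(codes: list[str], limit: int = 10) -> Iterator[list[str]]:
        """Chunk option codes for get_quotes, which per the description
        accepts at most `limit` codes per call."""
        for i in range(0, len(codes), limit):
            yield codes[i:i + limit]
    ```

    Encoding both rules in the caller avoids the parameter-interaction confusion the review flags elsewhere, where description and schema disagree on strike/range exclusivity.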

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden and notably includes the complete return structure schema (→ Returns {...}), revealing the data shape, field types, and nesting. However, it omits rate limits, caching behavior, or explicit read-only safety declarations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense and efficiently structured using the arrow symbol (→) to separate action from return value. The front-loaded 'IPO calendar' fragment and compact JSON return specification maximize information per token, though the return type string is lengthy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequately compensates for the missing output schema by detailing the return structure inline, and covers the primary filtering parameters. For a read-only retrieval tool with four optional parameters, the description provides sufficient context for correct invocation despite minor gaps (filter param, rate limits).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. The description adds value by providing a concrete example ('NASDAQ:AAPL') for the 'code' parameter and simplified usage guidance for 'w' and 'c'. It fails to mention the 'filter' parameter (JSONata expression) in the prose, though the schema documents it.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves 'IPO calendar data' (specific verb + resource) and specifies the time range scope. 'IPO calendar' distinctly identifies this tool's domain among financial data siblings like get_dividends and get_earnings.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides useful default context ('Default: current week') and hints for parameter usage ("Use 'w' to look ahead"), but lacks explicit guidance on when to prefer this over similar market data tools like get_earnings or get_events, and doesn't specify prerequisites or constraints.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Without annotations, the description carries the full burden. It successfully discloses the 'relatively recent' scope limitation and provides the complete return structure {base_code: string, contracts: [...]} since no output schema exists. It implies read-only behavior via 'Retrieve' and 'List', though an explicit safety declaration would strengthen it further.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The first two sentences are highly redundant ('List of contracts...' vs 'Retrieve list of contracts...'). While the return structure JSON is valuable given no output schema, the description could be tightened by merging the first two sentences. The third sentence is efficient and high-value.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of output schema, the description adequately compensates by documenting the return structure. For a 2-parameter tool with full schema coverage, it provides sufficient domain context (futures-specific) and workflow integration. It appropriately focuses on what the tool returns rather than repeating parameter details already covered by the schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description adds value by providing a futures-specific symbol example (CME_MINI:NQH2024) that clarifies the expected format for the 'symbol' parameter beyond the generic 'Exchange:Symbol' schema description. It does not add significant context for the 'filter' parameter.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
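
    The Exchange:Symbol convention discussed above can be sketched with a small validity check. This is a hypothetical helper, not part of the server's API; the symbol values come from the review's own examples.

```python
# Hypothetical argument payloads illustrating the Exchange:Symbol format.
# Tool and parameter names come from the review; the exact request shape
# the server expects is an assumption.
list_contracts_args = {
    "symbol": "CME_MINI:NQ",  # futures root in Exchange:Symbol form (assumed)
}

history_args = {
    "symbol": "CME_MINI:NQH2024",  # specific contract code for get_symbol_history
}

def is_exchange_symbol(code: str) -> bool:
    """Loose check for the EXCHANGE:SYMBOL shape used throughout the server."""
    exchange, _, ticker = code.partition(":")
    return bool(exchange) and bool(ticker) and exchange.isupper()

print(is_exchange_symbol(history_args["symbol"]))  # True
```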

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly identifies the resource (futures contracts and settlement dates) and distinguishes it from siblings by specifying 'futures symbols' (vs the generic get_symbol_* tools). However, the opening phrase 'List of contracts' is a grammatical fragment; the clear purpose statement ('Retrieve list...') appears only in the second sentence, slightly muddying the front-loading.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Excellent explicit guidance: 'Use a specific contract code (e.g., CME_MINI:NQH2024) with get_symbol_history for deep history.' This clearly delineates when to use this tool (listing available contracts) versus the sibling get_symbol_history (deep history on specific contracts), establishing a proper workflow.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and discloses the return structure ({last_update, last_price?, codes}), conditional behavior (last_price included only when range is provided), and filtering capabilities. It mentions the 10-code limit for subsequent quote requests.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with the core purpose but suffers from redundancy, mentioning the get_quotes workflow twice (once referencing an endpoint URL and later referencing the tool by name) and repeating the return structure information. It could be tightened significantly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of a formal output schema, the description compensates effectively by documenting the return structure and field conditions. It adequately covers the 6-parameter complexity and positions the tool correctly within the ecosystem of sibling options tools.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Although the input schema has 100% coverage, the description adds value by explaining the relationship between the 'range' parameter and the conditional inclusion of 'last_price' in the output. It also provides concrete examples of option code formats (OPRA:AAPL270617C230.0) that supplement the schema descriptions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
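
    The option code format quoted above (OPRA:AAPL270617C230.0) can be decomposed mechanically. The layout below (exchange, root, YYMMDD expiry, C/P flag, strike) is inferred from that single example and should be treated as an assumption, not a documented contract.

```python
import re

# Parse an option code of the assumed form EXCHANGE:ROOT + YYMMDD + C/P + strike.
OPTION_CODE = re.compile(
    r"^(?P<exchange>[A-Z_]+):"
    r"(?P<root>[A-Z]+)"
    r"(?P<expiry>\d{6})"   # YYMMDD, e.g. 270617 -> 2027-06-17 (assumed)
    r"(?P<cp>[CP])"        # call or put flag
    r"(?P<strike>\d+(?:\.\d+)?)$"
)

def parse_option_code(code: str) -> dict:
    m = OPTION_CODE.match(code)
    if m is None:
        raise ValueError(f"unrecognized option code: {code!r}")
    parts = m.groupdict()
    parts["strike"] = float(parts["strike"])
    return parts

print(parse_option_code("OPRA:AAPL270617C230.0"))
```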

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly identifies the resource (option contracts) and action (retrieve list) in the second sentence, and distinguishes this tool from siblings like get_options_expiration and get_options_strike by clarifying this returns codes without Greeks. However, the opening 'List available options' is somewhat tautological with the tool name.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly names alternative tools and their use cases: directs users to get_quotes for pricing/volume, and to get_options_expiration or get_options_strike for chains with Greeks. It establishes a clear workflow: use this tool first to get codes, then other tools for enrichment.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, but the description carries the burden well by disclosing dual return formats (JSON vs binary), PDF text extraction behavior, and the dependency on get_documents. It lacks rate limit and error handling details.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Suffers from noticeable repetition: 'Retrieve a specific document' appears twice, and return format documentation is duplicated (before and after the arrow symbol). Information is front-loaded but could be consolidated into fewer sentences.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Compensates well for lack of output schema by detailing return structures ({title, published_at, content}) and format variations. Covers the 4-parameter complexity adequately with workflow context and PDF handling specifics.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% coverage, establishing baseline 3. Description adds significant value beyond schema by strongly recommending text=true for readability and explaining the id/code workflow dependency, which aids agent decision-making.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
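
    The list-then-retrieve workflow above can be sketched as follows. The parameter names 'id' and 'text' and the {title, published_at, content} shape come from the review; the placeholder ID and the binary-fallback handling are assumptions.

```python
def build_get_document_args(doc_id: str, want_text: bool = True) -> dict:
    # Always request text=True, per the description's own recommendation.
    return {"id": doc_id, "text": want_text}

def extract_content(response: dict) -> str:
    # {title, published_at, content} is the documented JSON shape; treating
    # a missing 'content' key as the binary return format is an assumption.
    if "content" not in response:
        raise ValueError("binary payload; retry with text=True")
    return response["content"]

print(build_get_document_args("doc_123"))  # 'doc_123' is a placeholder ID
```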

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clearly states the tool retrieves a specific document's content and explicitly distinguishes itself from sibling 'get_documents' by stating 'Get document IDs from get_documents first,' establishing the list-then-retrieve workflow.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear workflow guidance (obtain IDs from get_documents first) and strong operational recommendations (**Always use text=true**). Lacks explicit 'when-not-to-use' exclusions (e.g., for listing/browsing), though this is partially implied.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure and succeeds by documenting the return structure: 'Returns {available_fields: string[], available_exchanges: string[], available_countries: string[], sortOrder: string[]}' and clarifying data format with 'All arrays are flat string arrays (field names, not objects).' This compensates for the missing output schema, though it omits details like caching behavior or rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with zero waste: it opens with purpose ('Get available stock screener parameters'), details the return payload structure, clarifies data types ('flat string arrays'), and concludes with actionable next steps ('Next: use screen_stocks'). Every sentence delivers distinct value without redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a metadata discovery tool with one optional parameter and no annotations, the description is comprehensive. It compensates for the absent output schema by explicitly documenting return fields and types, explains the data shape, and provides workflow closure by referencing the downstream tool (screen_stocks). It lacks only explicit safety hints (read-only nature), which would be ideal but are inferable from the 'Get' verb.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage for its single 'filter' parameter (documenting the JSONata expression purpose), establishing a baseline score of 3. The description text does not elaborate on parameter semantics, but given the schema's completeness, no additional compensation is necessary.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
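
    The 'filter' parameter is documented as a JSONata expression. A stdlib-only sketch of what a simple filter along the lines of `available_fields[$contains($, "dividend")]` would select from the documented flat-string-array shape; the field names come from the review, and the sample values are invented for illustration.

```python
# Sample response in the documented {available_fields, available_exchanges,
# available_countries, sortOrder} shape. All values here are invented.
params = {
    "available_fields": ["close", "dividend_yield", "dividends_paid", "volume"],
    "available_exchanges": ["NASDAQ", "NYSE"],
    "available_countries": ["US", "KR"],
    "sortOrder": ["asc", "desc"],
}

# Python equivalent of filtering the flat string array by substring.
matches = [f for f in params["available_fields"] if "dividend" in f]
print(matches)  # ['dividend_yield', 'dividends_paid']
```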

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description explicitly states 'Get available stock screener parameters' and 'Retrieve the list of available fields and parameters for stock screening,' clearly identifying the verb (Get/Retrieve) and resource (stock screener parameters). It effectively distinguishes from siblings like get_bond_screener_params and get_crypto_screener_params by specifying 'stock,' and differentiates from screen_stocks by clarifying this retrieves configuration metadata rather than performing the screening.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit workflow guidance with 'Next: use screen_stocks with these fields to filter the market,' establishing the correct sequence for market screening operations. While it clearly indicates the relationship to screen_stocks, it does not explicitly contrast with other asset-class screener parameter tools (bond/crypto/ETF variants), relying instead on the 'stock' qualifier in the name for differentiation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure, stating it returns a 'PNG image' and accepts a Chart.js configuration object. However, it lacks operational details such as error handling for invalid JSON configurations, rate limiting, or idempotency characteristics.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description efficiently delivers essential information across three sentences covering output format, input requirements, supported chart types, and usage context without redundant phrasing. Every clause serves a distinct purpose, from technical specification to workflow integration.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complex nature of Chart.js configurations and the absence of annotations or output schemas, the description adequately covers the essential contract by specifying the PNG output format and configuration requirements. It appropriately relies on the schema's embedded example for syntax details, though it could benefit from mentioning error handling for malformed configurations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3, but the description adds significant value by detailing the Chart.js configuration structure ('type, data, options') and enumerating supported chart types (line, bar, pie, etc.). This semantic context helps agents construct valid configuration strings beyond the raw schema requirements.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
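
    A minimal sketch of the Chart.js configuration object ('type, data, options') the description calls for, serialized to the string the tool presumably accepts. The series values and title are invented for illustration.

```python
import json

config = {
    "type": "line",  # one of the supported types (line, bar, pie, ...)
    "data": {
        "labels": ["Mon", "Tue", "Wed"],
        "datasets": [{"label": "Close", "data": [101.2, 102.8, 101.9]}],
    },
    "options": {"plugins": {"title": {"display": True, "text": "Sample series"}}},
}

# Pass the configuration as a JSON string, per the schema's string parameter.
config_str = json.dumps(config)
print(json.loads(config_str)["type"])  # line
```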

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states 'Render a Chart.js chart and return the PNG image,' providing a specific verb, resource, and output format. It clearly distinguishes itself from financial data siblings (get_quotes, screen_stocks, etc.) by explicitly stating 'Use this after fetching market data,' clarifying its role as a visualization utility rather than a data retrieval tool.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance with 'Use this after fetching market data to visualize trends, comparisons, or distributions,' establishing the correct sequence of operations relative to data retrieval siblings. While it implies the tool requires pre-fetched data, it could strengthen guidance by explicitly stating it does not perform data retrieval itself.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It compensates by documenting the complete return structure inline ('→ Returns {current_page...}'), including pagination behavior (has_more, current_page) and the platform-specific 'EXCHANGE:SYMBOL' format. It warns against guessing codes but does not mention error states or rate limiting, preventing a perfect score.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The opening sentence is unnecessarily repeated. The use of arrow notation (→) and ALL CAPS sections creates a somewhat cluttered, stream-of-consciousness structure. However, every sentence beyond the initial repetition earns its place by conveying workflow, return types, or platform-specific formatting rules.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite lacking an output schema and annotations, the description is comprehensive. It documents return values, explains the pagination mechanism, clarifies the platform-specific symbol format (InsightSentry's EXCHANGE:SYMBOL), and maps the tool's relationship to 20+ sibling tools. Sufficient for a complex search utility with multiple filter dimensions.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description adds value by explaining parameter interaction logic ('leave query empty' to filter by type/country) and providing concrete query syntax examples ('EXCHANGE:') that supplement the schema's generic 'Search query string' description for the query parameter.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly identifies this as a symbol search/lookup tool and distinguishes it from sibling data-retrieval tools (get_quotes, get_symbol_series) by positioning it as the necessary first step to find codes. The opening repetition ('Search for symbols. Search for Symbols.') is redundant and slightly reduces clarity, but the overall purpose is well-established through the workflow guidance and return value documentation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use ('ALWAYS start here to find the correct symbol code') and when-not-to-use ('unless you already know the correct symbol code'). Names specific sibling tools to use afterwards ('Use the returned code with get_quotes, get_symbol_series...'). Includes specific query syntax examples ('EXCHANGE:') for different use cases, creating a complete usage map.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
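
    The query modes described above (free-text lookup, 'EXCHANGE:' prefix listing, empty query plus filters) can be sketched as argument payloads. Any argument name other than 'query' (e.g. 'type') is an assumption.

```python
by_name = {"query": "apple"}                  # find a code by company name
by_exchange = {"query": "NASDAQ:"}            # list symbols on one exchange
by_filters = {"query": "", "type": "crypto"}  # filter-only search (assumed params)

def classify_query(args: dict) -> str:
    """Label which search mode an argument payload uses."""
    q = args.get("query", "")
    if q and q.endswith(":"):
        return "exchange listing"
    if not q:
        return "filter-only"
    return "free-text"

print(classify_query(by_exchange))  # exchange listing
```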

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and succeeds by disclosing the complete return data structure (JSON with total_count, data array, dividend fields) and default behavior (current week). Does not mention rate limits or auth, but covers the critical output contract missing from the schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Every sentence earns its place, but the structure is dense with a large JSON blob embedded in the text. The information is necessary given the lack of output schema, but readability suffers from the lack of formatting or separation between return structure and usage instructions.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Excellent completeness for a 4-parameter data retrieval tool with no output schema. The description manually documents the complex nested return structure (compensating for missing output_schema), explains defaults, and provides parameter examples, leaving no critical gaps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the schema has 100% coverage, the description adds valuable semantic context beyond the schema's technical definitions, such as explaining that 'w' is used to 'look ahead' and providing concrete examples (w=4 for a month out, 'US' for country, 'NASDAQ:AAPL' for code).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
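
    The 'w' look-ahead semantics described above (w=2 for next week, w=4 for roughly a month out) amount to a weeks-ahead window. The helper below is illustrative; the server's actual window arithmetic is an assumption.

```python
from datetime import date, timedelta
from typing import Optional

def window_end(weeks_ahead: int, start: Optional[date] = None) -> date:
    """End of the look-ahead window implied by the 'w' parameter."""
    start = start or date.today()
    return start + timedelta(weeks=weeks_ahead)

print(window_end(4, date(2024, 1, 1)))  # 2024-01-29
```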

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves 'dividend calendar data' with a specific verb and resource, distinguishing it from siblings like get_earnings or get_ipos through the explicit 'dividend calendar' focus.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear context that this is for dividend calendar data with specific time ranges, and explains parameter usage patterns (w=2 for next week, w=4 for month out). Lacks explicit 'when not to use' statements or named alternatives, but the domain is clearly delineated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It excellently specifies the return structure ({available_fields: string[], ...}) and clarifies data format ('All arrays are flat string arrays'), though it omits error handling or rate limit details.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Efficiently front-loaded with the core action, followed by return type specification and workflow instruction. Minor redundancy between 'Get available...' and 'Retrieve the list...' prevents a perfect 5, but overall structure is tight with every sentence earning its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Completely compensates for the lack of output schema by explicitly documenting the return object structure and field types. Given the tool's simple purpose (parameter discovery) and single optional parameter, the description provides sufficient context for correct invocation and result handling.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage for the single 'filter' parameter (documented as a JSONata expression). The description adds no additional parameter context, but with complete schema coverage, the baseline score of 3 is appropriate.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description explicitly states the tool retrieves 'available fields and parameters for bond screening' using specific verbs (Get/Retrieve) and clearly distinguishes itself from sibling screener tools (get_stock_screener_params, get_etf_screener_params) by specifying 'bond' in the purpose and return structure.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance with 'Next: use screen_bonds with these fields,' clearly indicating this is a prerequisite discovery tool that must be called before the sibling screen_bonds tool, establishing the correct sequence and relationship.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It compensates by disclosing the return structure: 'Returns {available_fields: string[], available_exchanges: string[], available_countries: string[], sortOrder: string[]}' and clarifies the data format ('All arrays are flat string arrays'). It does not mention rate limits or caching, but successfully documents the output shape.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with purpose and efficiently structured. There is minor redundancy between the first two sentences ('Get available ETF screener parameters' vs 'Retrieve the list of available fields and parameters'), but subsequent sentences earn their place by documenting return types, array formats, and next steps.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description comprehensively documents the return structure and data types. Combined with the workflow guidance (linking to screen_etfs) and the well-documented input parameter, the description provides complete context for this simple discovery tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage for its single 'filter' parameter, establishing a baseline of 3. The description does not add parameter-specific semantics, but none are needed given the comprehensive schema documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description explicitly states the tool 'Get[s] available ETF screener parameters' and 'Retrieve[s] the list of available fields and parameters for ETF screening.' It specifically mentions 'ETF' twice, clearly distinguishing it from sibling tools like get_stock_screener_params and get_bond_screener_params.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance with 'Next: use screen_etfs with these fields,' establishing the correct sequence (discovery before screening) and linking to the sibling tool screen_etfs. This clarifies that this tool is for metadata discovery, not the actual screening operation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden. It successfully documents the return structure inline (last_update, base, fundamental_series objects), clarifies 'No values — schema only,' and implies read-only behavior. Minor gap: doesn't explicitly state idempotency or caching characteristics.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense and front-loaded with purpose. The inline JSON return structure is compressed but functional. Examples (e.g., 'find cash flow fields, balance sheet metrics') efficiently communicate use cases. Slightly dense but every sentence serves tool selection or invocation.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite no formal output schema, the description comprehensively documents return values inline. Combined with clear sibling differentiation and discovery workflow examples, it provides complete context for an agent to use this metadata discovery tool effectively.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with the 'filter' parameter fully documented as a JSONata expression. The description adds no explicit parameter guidance, but none is needed given complete schema documentation, warranting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description explicitly states the tool retrieves 'the fundamentals schema and metadata' and clearly distinguishes it from siblings by specifying it returns 'layout and definitions only' and 'does not include actual data, which are provided by /v3/symbols/{symbol}/fundamentals' (get_symbol_fundamentals).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance: 'Use this to discover and search available fields when you're unsure which field IDs exist... Then call get_symbol_fundamentals...' and references get_fundamentals_series for historical data, creating a clear decision tree for agent tool selection.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses key behavioral traits: the non-real-time nature of data and that 'Only code is guaranteed' with other fields being asset-type dependent. It also embeds the complete return structure. Minor gap: lacks mention of rate limits, caching, or error behaviors.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Despite its length, owed to the inline return-schema JSON, the description is well-structured and front-loaded: summary → usage constraint → return schema → field guarantee note → alternatives. Every section earns its place, particularly the embedded return structure, which compensates for the lack of a formal output schema field.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool returns complex nested data and no structured output schema exists, the description provides comprehensive completeness by embedding the full return object structure, noting field variability ('Only code is guaranteed'), and specifying sibling alternatives. Nothing critical is missing.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% description coverage (symbol format and filter JSONata syntax are well documented in the schema). The description adds no additional parameter semantics, but with such high schema coverage, it doesn't need to. Baseline score appropriate.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves detailed symbol information (type, currency, market data, splits, options) and explicitly distinguishes itself from siblings by directing users to 'get_symbol_fundamentals' for fundamentals and 'list_options' for option chains. It also clarifies the non-real-time nature of the data.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states when NOT to use the tool ('Not intended for real-time data') and provides clear alternatives for related use cases ('For fundamentals use get_symbol_fundamentals', 'For option chains use list_options'). This gives the agent clear decision criteria.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Documents complete return structure inline (array of objects with field types), discloses document types covered (filings, transcripts, reports), and explains the relationship between returned IDs and subsequent tool calls.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three distinct information-dense segments: purpose declaration, return structure specification, and usage guidance. No redundant words; front-loaded with action verb and compensates for missing output schema efficiently.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 100% input schema coverage and absence of output schema, description provides complete inline documentation of return values, parameter relationships, and sibling tool coordination. Adequate for a 2-parameter listing tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (both 'code' and 'filter' fully documented in schema). Description mentions 'symbol' aligning with the code parameter but adds no additional semantic detail beyond what the schema already provides. Baseline score for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb-resource combination ('List documents for a symbol') with clear scope. Explicitly distinguishes from sibling 'get_document' by stating this returns a list of metadata vs. content retrieval.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly directs to sibling tool 'get_document' for reading content ('Use the id field with get_document to read content'), clearly delineating when to use the listing tool versus the content retrieval tool.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It discloses the return structure (code, total_items, data array) since no output schema exists, notes the max 5 indicator constraint, and warns that 'some parameters may not apply to certain Indicator IDs.' Deducted one point for omitting rate limits or error behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Contains repetition: 'A maximum of 5 Indicator IDs' appears twice, and 'not all indicators are available for every symbol' is restated. The arrow notation '→ Returns' is informal. However, all content serves the agent's understanding despite the repetition.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Excellent compensation for missing output schema by detailing the return JSON structure inline. Given no annotations and no output schema, the description provides prerequisites, constraints, return format, and sibling relationships—everything needed for correct invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description adds value by explaining the dependency workflow for the 'ids' parameter (must fetch valid IDs from sibling endpoints first) and noting parameter interaction edge cases ('some parameters may not apply to certain Indicator IDs').

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Opens with resource type ('Fundamental Data in Time Series format') and specific action ('Retrieve historical data for specific indicators'). Explicitly distinguishes from siblings get_fundamentals_meta and get_symbol_fundamentals by stating this tool requires pre-existing Indicator IDs while those provide the ID catalog.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit prerequisite workflow: 'If you don't know the available IDs, call get_fundamentals_meta or get_symbol_fundamentals first.' Also clarifies constraints that govern when the tool works ('not all indicators are available for every symbol') and parameter applicability limits.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
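The prerequisite-then-fetch pattern described above, together with the 5-Indicator-ID ceiling noted under Behavior, can be sketched as a small client-side helper. This is a minimal sketch: the function name and the idea of pre-validating against a locally held catalog are assumptions for illustration, not part of the server's actual API.

```python
def plan_series_requests(requested_ids, known_ids, limit=5):
    """Intersect requested indicator IDs with the catalog discovered via a
    prior metadata call (e.g. get_fundamentals_meta), then chunk the valid
    IDs into batches no larger than the documented 5-ID limit."""
    valid = [i for i in requested_ids if i in known_ids]
    return [valid[n:n + limit] for n in range(0, len(valid), limit)]
```

Each returned batch would map to one get_fundamentals_series call; unknown IDs are dropped before any request is made.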

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses the return structure in detail (total_items, data array with specific fields) and notes the 'up to 10 symbols' limit. It does not mention rate limits, error behavior on exceeding 10 symbols, or latency characteristics, but covers the essential behavioral contract.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Front-loaded with the core purpose ('Real-time quote data. Retrieve...'). The inline JSON return structure is verbose but necessary given the lack of a structured output schema. The sibling tool references are efficiently placed at the end. Dense, but every element is informative.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Compensates well for lack of annotations and structured output schema by providing detailed return structure inline. Covers the 10-symbol limit and points to related tools. Could be improved with error handling or rate limit notes, but sufficient for a data retrieval tool of this complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3. The description adds the critical constraint 'up to 10 symbols' which is not documented in the codes parameter schema. However, it does not elaborate on the boolean flags (settlement, badj, extended, dadj) beyond the schema descriptions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states 'Retrieve real-time quote data for up to 10 symbols': a specific verb (retrieve), resource (quote data), and scope constraint (10 symbols, real-time). It clearly distinguishes from siblings by naming get_symbol_series for historical data and get_symbol_info for company details.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly directs users to alternatives: 'For historical data use get_symbol_series. For company details use get_symbol_info.' This provides clear when-not-to-use guidance and names the correct sibling tools for those use cases.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
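The 10-symbol ceiling quoted above implies that a caller with a longer watchlist must split it into multiple quote calls. A minimal batching sketch, assuming only the documented limit (the helper name is hypothetical):

```python
def chunk_symbols(symbols, max_per_call=10):
    """Split a symbol list into batches no larger than the quote
    endpoint's documented 10-symbol ceiling, preserving order."""
    return [symbols[i:i + max_per_call]
            for i in range(0, len(symbols), max_per_call)]
```

A 23-symbol watchlist would yield three calls of 10, 10, and 3 symbols.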

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full behavioral disclosure burden. It details the complex return structure (code, data array with hundreds of fields, last_update timestamp) and explains the filtering mechanism to reduce token usage. Lacks explicit mention of rate limits or caching, but comprehensively describes the payload structure and filtering capabilities.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense but well-structured with clear progression: summary → return format → data volume warning → filter examples → prerequisite tools → sibling alternatives. Every sentence provides actionable guidance. Slightly dense but justified by the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool returns hundreds of fields with no structured output schema available, the description adequately compensates by documenting the return structure inline. It covers the discovery workflow (get_fundamentals_meta prerequisite), filtering strategies, and appropriate presentation patterns. Could mention pagination or rate limiting, but substantially complete for the complexity level.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the schema has 100% coverage (baseline 3), the description adds concrete JSONata syntax examples for the filter parameter: 'data[category='Valuation']' and '$distinct(data.category)'. These practical examples significantly enhance understanding of how to use the optional filter parameter effectively.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
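The two JSONata examples quoted above have simple semantics: a predicate that keeps matching rows, and a distinct projection over one field. As a sketch of what they compute (pure Python stand-ins, since a JSONata evaluator is not assumed to be available; the sample row shape is illustrative):

```python
def filter_by_category(data, category):
    """Stand-in for the JSONata predicate data[category='Valuation']:
    keep only rows whose 'category' field matches."""
    return [row for row in data if row.get("category") == category]

def distinct_categories(data):
    """Stand-in for $distinct(data.category): unique category values
    in first-seen order."""
    return list(dict.fromkeys(
        row["category"] for row in data if "category" in row))
```

Running `distinct_categories` first to see which categories exist, then `filter_by_category` to narrow the payload, mirrors the token-saving filter workflow the description recommends.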

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states it retrieves 'comprehensive fundamental data' and lists specific categories (valuation ratios, profitability metrics, balance sheet, etc.). It explicitly distinguishes from siblings by directing users to get_fundamentals_series for historical data and get_fundamentals_meta for field discovery.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use guidance: 'For historical fundamentals use get_fundamentals_series' and 'If you're unsure which fields exist, call get_fundamentals_meta first.' Also includes prescriptive guidance on presentation: 'Present only the fields relevant to the user's question — do NOT dump the entire response.'

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden and successfully discloses return structure (pagination fields, data array contents) and workflow dependencies. However, it omits explicit declaration of read-only safety, rate limits, or error handling behavior beyond the schema's ignore_invalid parameter.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense but well-structured with clear logical flow: purpose → return format → workflow → example. Every element serves a necessary function, particularly the inline return structure JSON which compensates for missing output schema. Slightly cluttered but justified by information value.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 8 parameters, no output schema, and no annotations, the description adequately compensates by documenting the complete response structure, pagination behavior, and required prerequisite call. Minor gap: it does not describe error states when the prerequisite is skipped, or rate-limiting concerns.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% coverage (baseline 3). The description adds significant value through a concrete JSON example showing valid field values and structure, plus clarifies the discovery workflow dependency for populating the fields parameter, which aids agent reasoning beyond raw schema definitions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description explicitly states 'Retrieve bond data based on specified filter criteria' with specific resource (bonds) and action (retrieve/screen). The 'Bond Screener' header and reference to bond-specific parameters clearly distinguish it from sibling screeners (screen_stocks, screen_crypto, screen_etfs).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit two-step workflow: '1) Call get_bond_screener_params to discover available fields. 2) POST with your chosen fields.' This clearly dictates when to use the sibling discovery tool versus this screening tool, establishing a mandatory prerequisite relationship.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
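The two-step workflow quoted above (discover fields, then POST with them) can be enforced client-side by refusing to build a request with fields that did not appear in the step-1 response. This is an illustrative sketch: the helper name and payload shape are assumptions, not the screener's actual request format.

```python
def build_screen_request(discovered_fields, wanted, filters, sort_by, page=1):
    """Assemble a screener POST body using only fields confirmed by a
    prior get_*_screener_params call (step 1), so step 2 never requests
    an unknown field."""
    unknown = [f for f in wanted if f not in discovered_fields]
    if unknown:
        raise ValueError(f"unknown screener fields: {unknown}")
    return {"fields": wanted, "filters": filters, "sort": sort_by, "page": page}
```

Raising early on unknown fields turns the 'mandatory prerequisite' the review describes into an explicit precondition rather than a runtime surprise from the API.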

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses critical behavioral traits: 30k data point limit, that abbr format 'does not affect response speed — only changes the output format,' and warns that 'Not all bar types include the same fields (e.g., tick data may only have [time, type, close]).' Could improve by stating idempotency/safety explicitly.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense and well-structured: opens with scope/limit, details return formats (necessary since no output schema exists), provides data quality warnings, then sibling alternatives. Slightly verbose due to inline JSON return structure documentation, but justified given missing output schema.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Excellent completeness compensating for missing output schema and annotations. Documents return structure for both abbr modes, specifies field availability variations by bar type, states explicit limits (30k bars), and provides clear migration paths to sibling tools.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (baseline 3). Description adds crucial semantic context: explains abbr parameter is for 'reduced LLM token usage,' links 'up to 30k bars' to dp parameter, and warns that tick bar_type has different field availability than standard OHLCV.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb ('Retrieve') + resource ('recent historical OHLCV data') + scope ('up to 30k bars'). Explicitly distinguishes from sibling get_symbol_history by stating its own 30k-bar limit, versus the alternative's coverage of 'more than recent 30k bars'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicit when-to-use guidance: 'For intra-day historical data (if you need more than recent 30k bars) use get_symbol_history instead.' Also directs to websocket resource for real-time streaming, clarifying this tool's polling-based real-time option (long_poll) vs true streaming.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
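The routing rule the description gives, with the 30k-bar boundary as the deciding factor, reduces to a one-line chooser. A sketch, assuming only the limit quoted in the review (the function name is hypothetical):

```python
def pick_history_tool(bars_needed, recent_limit=30_000):
    """Route to get_symbol_series for windows within the 30k-bar limit,
    and to get_symbol_history for deeper archive requests."""
    if bars_needed <= recent_limit:
        return "get_symbol_series"
    return "get_symbol_history"
```

Encoding the boundary this way keeps the tool choice deterministic instead of relying on the agent to re-derive it from prose each time.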

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, description carries full burden by disclosing return structure (pagination fields, data array schema), noting delay_seconds in response data, and explaining the dependency on get_crypto_screener_params. Lacks rate limits or auth details but covers essential behavioral traits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense and well-structured: leads with purpose, uses arrow notation (→) for return values, capitalizes WORKFLOW for scanability, and includes precise example. No wasted words despite covering 8 parameters plus return structure.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 8 parameters, no annotations, and no structured output schema, description comprehensively fills gaps by documenting return object structure, pagination behavior, and explicit workflow. Minor gap regarding error handling or rate limiting prevents a 5.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 100% schema coverage (baseline 3), description adds significant value through concrete JSON example showing valid field combinations and sort syntax, plus workflow context linking fields parameter to the sibling discovery tool.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description explicitly states 'Retrieve cryptocurrency data based on specified filter criteria' and distinguishes from siblings by noting 'country filtering NOT supported for crypto,' clearly delineating its domain against screen_stocks, screen_bonds, and screen_etfs.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit WORKFLOW with numbered steps (1. Call get_crypto_screener_params, 2. POST with chosen fields) and explicitly states limitations ('country filtering NOT supported'), giving clear prerequisites and when-not-to-use guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses the complete return structure including pagination fields (hasNext, total_page) and data array contents, which compensates for the missing output_schema. It also establishes the two-step workflow constraint. Does not mention rate limits or caching, preventing a 5.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense and well-structured: opens with purpose, documents return structure (necessary without output schema), provides workflow steps, and includes a concrete example. The inline JSON for return structure is lengthy but essential; no sentences are wasted.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for an 8-parameter tool with no annotations or output schema. The description documents the return payload structure, pagination behavior, prerequisite workflow steps, and provides a complete usage example, fully addressing the complexity gap.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3. The description adds significant value with a concrete JSON example showing valid field values ('close', 'volume', 'nav'), exchange codes ('NYSE', 'NASDAQ'), and sort configuration, clarifying how parameters interact in practice.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'Retrieve[s] ETF data based on specified filter criteria'—specific verb, resource, and method. It explicitly identifies the domain as 'ETF Screener,' distinguishing it from sibling tools like screen_bonds, screen_crypto, and screen_stocks.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit WORKFLOW with numbered steps: '1) Call get_etf_screener_params to discover available fields. 2) POST with your chosen fields.' This establishes the prerequisite dependency on get_etf_screener_params and the correct sequence of operations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations, description carries full burden and discloses return structure (hasNext, current_page, data array), pagination limits (1000 results/page), and response fields (delay_seconds). Missing explicit rate limits or error scenarios beyond ignore_invalid behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Efficiently structured with purpose first, followed by return-type arrow notation, workflow steps, pro tip, and example. Every sentence serves a distinct purpose (workflow, filtering advice, pagination limits). No redundancy despite high information density.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for an 8-parameter tool with no output schema. Documents return structure, pagination behavior, and the prerequisite discovery tool, and includes a working example. Compensates fully for the lack of annotations and output schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, so the baseline is 3. The description adds value with a concrete JSON example showing realistic field values (NYSE, NASDAQ, market_cap) and a practical tip about excluding penny stocks, enhancing semantic understanding of the exchanges parameter.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Explicitly states 'Stock Screener' with specific verb 'Retrieve stock data'. Clearly distinguishes from siblings (screen_bonds, screen_crypto, screen_etfs) by specifying 'stock' and referencing get_stock_screener_params for discovery.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit WORKFLOW with numbered steps naming the prerequisite sibling tool (get_stock_screener_params). Includes practical 'Tip' on using exchanges parameter to exclude OTC/penny stocks, giving clear when-to-use guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

    Behavior 5/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It comprehensively documents return value structures (both standard and abbr=true formats), warns about field variability ('Not all bar types include the same fields'), discloses pagination limits ('one month of data per call'), and explains archive depth ('20 years+').

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is information-dense and front-loaded with purpose. While lengthy due to inline JSON examples of return formats, this is necessary given the absence of a formal output schema. Every sentence conveys critical behavioral constraints or return format details with minimal redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (10 parameters, nested return objects, pagination requirements) and lack of structured output schema, the description is remarkably complete. It covers return formats, pagination strategy, sibling alternatives, field variability warnings, and archive limitations—all essential for successful agent invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% description coverage, establishing a baseline of 3. The description adds valuable usage semantics beyond the schema, specifically the iteration pattern for start_date ('Iterate start_date (YYYY-MM) for longer ranges') and clarifies the relationship between bar_type and available fields in the response series.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb phrase ('Retrieve historical data') and clearly defines the resource scope ('intra-day historical data', 'second/minute/hour'). It explicitly distinguishes itself from sibling 'get_symbol_series' by stating it supports 'second/minute/hour bars only' and contrasting use cases (archive vs. recent data).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit alternative tools with clear conditions: 'for daily/weekly/monthly, use get_symbol_series' and 'For recent data (up to 30k bars) use get_symbol_series instead.' Also includes pagination guidance ('Returns one month of data per call. Iterate start_date...') which is critical for correct invocation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
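The iteration pattern quoted above ('Iterate start_date (YYYY-MM) for longer ranges') amounts to stepping through month strings one call at a time. Here is a minimal sketch of that month iteration, assuming the YYYY-MM format described; the actual get_symbol_history call is omitted.

```python
# Sketch of iterating start_date month-by-month, assuming the
# YYYY-MM format and one-month-per-call behavior quoted from the
# tool description. Each yielded month would be one API call.

def month_range(start, end):
    """Yield 'YYYY-MM' strings from start to end inclusive."""
    y, m = map(int, start.split("-"))
    ey, em = map(int, end.split("-"))
    while (y, m) <= (ey, em):
        yield f"{y:04d}-{m:02d}"
        m += 1
        if m > 12:
            y, m = y + 1, 1

print(list(month_range("2023-11", "2024-02")))
# ['2023-11', '2023-12', '2024-01', '2024-02']
```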

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

insight_mcp MCP server

Copy to your README.md:

Score Badge

insight_mcp MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5, weighted across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
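The weighting scheme above can be sketched directly from the stated percentages. This is an illustrative reconstruction of the formula as described, not Glama's actual implementation; rounding and aggregation details are assumptions.

```python
# Illustrative reconstruction of the quality score formula from
# the stated weights. Not the official implementation.

DIM_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_score(dims):
    """Weighted 1-5 score for one tool (TDQS)."""
    return sum(dims[d] * w for d, w in DIM_WEIGHTS.items())

def overall_score(tools, coherence_dims):
    """70% definition quality (60% mean + 40% min TDQS) + 30% coherence."""
    tdqs = [tool_score(t) for t in tools]
    definition = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    coherence = sum(coherence_dims) / len(coherence_dims)
    return 0.7 * definition + 0.3 * coherence

def tier(score):
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, a server whose tools all score 4 on every dimension, with coherence scores of 4 across the board, lands at 4.0 overall, tier A.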


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rezmeplxrf/insight_mcp'
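The endpoint above follows an author/server slug pattern. The helper below simply assembles that URL; assuming the pattern generalizes to other servers is an inference, not documented behavior.

```python
# Assembles the MCP directory API URL from the author/server slug
# pattern seen above. The generalization to other servers is an
# assumption.

def server_endpoint(author, server):
    return f"https://glama.ai/api/mcp/v1/servers/{author}/{server}"

print(server_endpoint("rezmeplxrf", "insight_mcp"))
# https://glama.ai/api/mcp/v1/servers/rezmeplxrf/insight_mcp
```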

If you have feedback or need assistance with the MCP directory API, please join our Discord server.