Fintel Discovery — Financial Intelligence for AI Agents
Server Details
Delivers public regulatory and market data from 11 key sources, including FINRA, the SEC, the Census Bureau, and FRED
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 32 of 32 tools scored.
Most tools have distinct purposes targeting specific financial data sources or workflows, such as SEC filings, market data, or regulatory databases. However, some overlap exists between GetFundFees and GetFundProfile for expense ratios, and between LookupTicker and SearchFigiInstruments for symbol lookup, which could cause minor confusion.
Tool names follow a highly consistent verb_noun pattern throughout, with clear prefixes like 'Get', 'Search', or 'Lookup' followed by descriptive nouns. All names use PascalCase uniformly, making them predictable and easy to parse.
With 32 tools, the count feels excessive for a single server, even given the broad financial intelligence domain. Many tools could be consolidated or grouped, such as multiple SEC filing tools or overlapping market data functions, leading to potential cognitive overload for agents.
The tool set provides comprehensive coverage across financial domains, including market data, regulatory filings, fund analysis, economic indicators, and entity lookups. It supports end-to-end workflows with search and detail pairs, leaving no obvious gaps for the stated purpose of financial intelligence.
Available Tools
32 tools

Get13FHoldings — Get 13F Holdings (Full Parsed Infotable)
Read-only · Idempotent
Fetch and parse the complete equity holdings table from a specific SEC 13F-HR
filing. Any institution managing more than $100M in US equities must file
quarterly — this reveals their exact portfolio positions.
Returns one record per position:
- name_of_issuer — company name (e.g. 'APPLE INC')
- cusip — 9-character CUSIP identifier
- title_of_class — share class (e.g. 'COM', 'ADR')
- value_thousands — market value in thousands USD
- value_usd — market value in USD
- shares_or_principal — number of shares (SH) or principal amount (PRN)
- investment_discretion — SOLE, SHARED, or OTHER
- put_call — 'Put' or 'Call' for options; null for equities
- voting_sole/shared/none — voting authority breakdown
PRIMARY USE: Step 2 of institutional holdings workflow. Obtain cik and
accession_no from SearchEdgar13F or GetEdgarCompanyFilings, then call this
tool to get the actual positions.
Use min_value_thousands to filter noise (e.g. 1000 = positions ≥ $1M).
Use sort_by='value_desc' to see the largest positions first.
Use limit (default 100) and offset for pagination — large filers can
have 3,000+ positions. Check _has_more in the response to know if more
pages exist.
Source: SEC EDGAR Archives (13F infotable XML). No API key required.
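The pagination contract described above (limit/offset plus a `_has_more` flag) can be sketched as a loop. This is a hypothetical client-side helper, not server code: `call_tool` stands in for whatever MCP client invocation you use, and the `positions` key on the response is an assumption.

```python
# Hypothetical sketch: drain all pages from Get13FHoldings using the
# limit/offset parameters and the _has_more flag the description documents.
# `call_tool(name, params)` is a stand-in for your MCP client.

def fetch_all_holdings(call_tool, cik, accession_no, page_size=100,
                       min_value_thousands=1000):
    """Collect positions >= $1M, largest first, across all pages."""
    positions, offset = [], 0
    while True:
        result = call_tool("Get13FHoldings", {
            "cik": cik,
            "accession_no": accession_no,
            "min_value_thousands": min_value_thousands,  # 1000 => >= $1M
            "sort_by": "value_desc",
            "limit": page_size,
            "offset": offset,
        })
        positions.extend(result["positions"])  # assumed response key
        if not result.get("_has_more"):
            return positions
        offset += page_size
```

Because large filers can exceed 3,000 positions, checking `_has_more` rather than comparing page length to `limit` avoids a spurious extra round trip when the last page happens to be exactly full.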
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering basic safety. The description adds valuable behavioral context beyond annotations: it explains pagination behavior ('Use limit and offset for pagination — large filers can have 3,000+ positions. Check _has_more in the response'), discloses the data source ('SEC EDGAR Archives'), and notes 'No API key required.' It doesn't mention rate limits or authentication requirements, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and efficiently organized. It starts with the core purpose, lists return fields, provides primary usage guidelines, explains key parameters, and ends with source information. Every sentence serves a clear purpose: no redundant information, no unnecessary elaboration. The bulleted list of return fields is appropriately formatted for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (fetching and parsing SEC filings with multiple parameters) and the presence of an output schema (which handles return value documentation), the description is largely complete. It covers purpose, workflow integration, key parameters, and behavioral context. The main gap is incomplete parameter coverage, but with an output schema available, the description doesn't need to explain return values. It provides sufficient context for effective tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries full burden for parameter documentation. It provides meaningful context for several parameters: it explains min_value_thousands ('filter noise'), sort_by options, and limit/offset usage for pagination. However, it doesn't cover all parameters (e.g., wholesaler_ids, exclude_fillers, additional_display_fields) that appear in the schema. The description adds value but doesn't fully compensate for the complete lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Fetch and parse the complete equity holdings table from a specific SEC 13F-HR filing.' It specifies the exact resource (SEC 13F-HR filing holdings) and distinguishes it from siblings like SearchEdgar13F (which discovers filings) and GetEdgarCompanyFilings (which gets filing metadata). The description clarifies this is for 'Step 2 of institutional holdings workflow' after obtaining identifiers from those other tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'PRIMARY USE: Step 2 of institutional holdings workflow. Obtain cik and accession_no from SearchEdgar13F or GetEdgarCompanyFilings, then call this tool to get the actual positions.' It names specific alternative tools for preceding steps and clearly defines the workflow sequence, leaving no ambiguity about when this tool should be invoked.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetAdvisorBenchmarks — Kitces Advisor Practice Benchmarks
Read-only · Idempotent
Return Kitces Research advisor practice benchmark data for independent
and RIA-affiliated financial advisors. Covers median and top-quartile
metrics across five categories:
- revenue: revenue per client, total firm revenue, growth rates
- fees: AUM fee schedules, retainer and hourly rates
- technology: software adoption rates and tech spend
- staffing: headcount, capacity, and support ratios
- clients: household counts, AUM per client, retention rates
Set category='all' (default) to retrieve all categories at once.
Source: Kitces Research annual advisor benchmarking survey (2023–2024).
No API key required — data is embedded as curated static reference.
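Since `category` is the one documented parameter, a caller can validate it against the five categories listed above before shaping a request. A minimal sketch, assuming the request is a plain tool-name/params payload:

```python
# Hypothetical sketch: guard the category argument for GetAdvisorBenchmarks.
# The five category names and the 'all' default come from the description;
# the payload shape is an assumption.
VALID_CATEGORIES = {"revenue", "fees", "technology", "staffing", "clients", "all"}

def benchmark_request(category="all"):
    """Build a GetAdvisorBenchmarks call payload, rejecting bad categories."""
    if category not in VALID_CATEGORIES:
        raise ValueError(f"category must be one of {sorted(VALID_CATEGORIES)}")
    return {"tool": "GetAdvisorBenchmarks", "params": {"category": category}}
```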
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable context beyond annotations: it discloses that 'No API key required — data is embedded as curated static reference', which clarifies data freshness and access requirements. It also mentions the source and years (2023–2024), giving behavioral insight into data limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by category details and key behavioral notes. Every sentence adds value: the first states the action, the second lists categories, the third explains the default parameter, and the final sentences provide source and access context. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has annotations (read-only, non-destructive, idempotent) and an output schema (implied by context signals), the description is mostly complete. It covers purpose, data scope, source, and access method. However, it could better integrate with the input schema by acknowledging other parameters or explaining output format, though the output schema reduces this need.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains the 'category' parameter with options and default ('all'), covering the single parameter's semantics effectively. However, it doesn't address other schema parameters like 'wholesaler_ids' or 'exclude_fillers', leaving some gaps in parameter understanding despite the tool having only one primary parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('return', 'retrieve') and resources ('Kitces Research advisor practice benchmark data'), and distinguishes it from siblings by focusing on independent and RIA-affiliated financial advisor benchmarks. It explicitly lists the five categories covered, making the scope unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning the data source (Kitces Research annual survey) and that it's 'curated static reference', but it lacks explicit guidance on when to use this tool versus alternatives. No sibling tools are mentioned for comparison, and there's no advice on prerequisites or exclusions beyond the default category setting.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetAnalystRatings — Get Analyst Ratings & Price Targets
Read-only · Idempotent
Fetch analyst buy/sell/hold consensus ratings, current price targets
(low, high, mean, median), and the full history of analyst upgrades
and downgrades with firm name, fromGrade, toGrade, and action.
Use this tool when:
- You want to know the current Wall Street consensus on a stock
- You need analyst price target range (upside/downside to target)
- You are tracking rating changes from major research firms
Source: Yahoo Finance via yfinance. No API key required.
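The "upside/downside to target" use case above reduces to simple arithmetic over the returned target block. A hypothetical post-processing helper (the target field names are assumptions, not the server's documented schema):

```python
# Hypothetical sketch: convert GetAnalystRatings price targets (low, high,
# mean, median) into percent upside/downside from the current price.
def target_upside(current_price, targets):
    """Percent move from current_price to each analyst target."""
    return {name: round((price - current_price) / current_price * 100, 1)
            for name, price in targets.items()}
```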
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. While annotations declare readOnlyHint=true and idempotentHint=true (indicating safe, repeatable reads), the description discloses the data source ('Yahoo Finance via yfinance'), authentication requirements ('No API key required'), and scope of data returned (consensus ratings, price targets, upgrade/downgrade history). This provides practical implementation details that annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and economical: first sentence states purpose, bulleted list provides usage guidelines, final sentence gives implementation details. Every sentence earns its place with no redundancy or filler content. The information is front-loaded with the core functionality stated immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (readOnlyHint, idempotentHint), and existence of an output schema, the description is complete. It covers purpose, usage scenarios, data source, and authentication - all the contextual information an agent needs to select and invoke this tool correctly. The output schema will handle return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (the schema has detailed parameter descriptions but they're not counted in coverage), the description carries full burden but mentions no parameters. However, the tool has only one top-level parameter ('params'), and the description implies the 'symbol' parameter through examples like 'AAPL' in usage contexts. This provides minimal semantic value beyond what's obvious from the tool name and context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch') and resources ('analyst buy/sell/hold consensus ratings, current price targets, and full history of analyst upgrades and downgrades'). It distinguishes itself from siblings by focusing exclusively on analyst ratings and price targets, unlike tools like GetFinancials or GetEarningsHistory which cover different financial data aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit 'Use this tool when' guidelines with three specific scenarios: knowing Wall Street consensus, needing price target ranges, and tracking rating changes. This gives clear context for when to select this tool over alternatives like GetTickerInfo or GetFinancials, which might provide overlapping but different data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetBrokerCheckDetail — Get BrokerCheck Full Profile by CRD
Read-only · Idempotent
Retrieve the full FINRA BrokerCheck profile for one individual using
their CRD number. Returns complete employment history, exam qualifications,
licenses held, and all disclosure details.
Use this tool when:
- You have a CRD (from SearchBrokerCheck) and want full profile detail
- You need employment history, prior firms, or qualification data for a rep
- You are performing due diligence on an individual advisor
Source: FINRA BrokerCheck public API. No API key required.
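The search-then-detail sequence implied above (SearchBrokerCheck yields a CRD, which this tool expands into a full profile) can be sketched as one function. `call_tool` and the `crd` result field are assumptions standing in for your actual MCP client and the search tool's response shape:

```python
# Hypothetical sketch of the two-step due-diligence workflow:
# SearchBrokerCheck by name, then GetBrokerCheckDetail by CRD.
def broker_profile(call_tool, name):
    """Return the full BrokerCheck profile for the top search hit, or None."""
    hits = call_tool("SearchBrokerCheck", {"query": name})
    if not hits:
        return None
    crd = hits[0]["crd"]  # assumed field name on the search result
    return call_tool("GetBrokerCheckDetail", {"crd": crd})
```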
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies the data source (FINRA BrokerCheck public API), notes that no API key is required, and describes the comprehensive return data (employment history, exam qualifications, licenses, disclosure details). While annotations cover read-only, non-destructive, and idempotent properties, the description provides practical implementation details that help the agent understand the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with three distinct sections: purpose statement, usage guidelines in bullet points, and source information. Every sentence earns its place, providing essential information without redundancy. The front-loaded purpose statement immediately communicates the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters despite only 1 being required) and the presence of an output schema, the description does well by focusing on purpose, usage, and source context. It explains what data is returned and when to use the tool, which complements the structured fields. The main gap is insufficient parameter coverage, but the output schema likely handles return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries full burden for parameter documentation. It only mentions the CRD parameter explicitly ('using their CRD number'), providing some context about its purpose and relationship to SearchBrokerCheck. However, it doesn't address the other 7 parameters in the schema, leaving significant gaps in parameter understanding despite the schema's comprehensive descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Retrieve') and resource ('full FINRA BrokerCheck profile for one individual using their CRD number'), specifying the exact data returned (employment history, exam qualifications, licenses, disclosure details). It distinguishes from sibling tools like SearchBrokerCheck by focusing on detailed profile retrieval rather than search functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides three bullet points detailing when to use this tool: when you have a CRD from SearchBrokerCheck, need employment/qualification data, or are performing due diligence. It names the sibling tool SearchBrokerCheck as the source for obtaining CRDs, giving clear guidance on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetDividendsAndSplits — Get Dividends & Stock Splits
Read-only · Idempotent
Fetch the full history of cash dividends, stock splits, and combined
corporate actions for a ticker. Returns date, amount/ratio for each event.
Use this tool when:
- You need dividend history or yield calculation inputs
- You are researching dividend growth over time
- You want to verify stock split history for return calculations
Source: Yahoo Finance via yfinance. No API key required.
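For the dividend-growth use case above, the date/amount event records can be rolled up by year on the client side. A sketch under assumed field names (`date` as ISO `YYYY-MM-DD`, `amount` in dollars):

```python
# Hypothetical sketch: aggregate GetDividendsAndSplits cash-dividend events
# into per-year totals, a common input for dividend-growth analysis.
from collections import defaultdict

def annual_dividends(events):
    """Sum dividend amounts per calendar year from date/amount records."""
    totals = defaultdict(float)
    for event in events:
        totals[event["date"][:4]] += event["amount"]  # assumed ISO dates
    return dict(totals)
```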
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, destructiveHint=false, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations by specifying the data source ('Yahoo Finance via yfinance') and noting 'No API key required', which helps the agent understand accessibility and potential limitations. However, it doesn't mention rate limits or data freshness, leaving some behavioral aspects uncovered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded, starting with the core purpose, followed by usage guidelines and source information. Each sentence serves a distinct purpose without redundancy, and the bullet points in the guidelines enhance readability while maintaining brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (fetching historical corporate actions), the description is mostly complete: it clarifies the purpose, provides usage guidelines, and discloses the data source. With annotations covering safety and an output schema presumably detailing return values, key aspects are addressed. However, it lacks information on data limitations (e.g., historical depth, availability for all tickers) and error handling, which could be relevant for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, but the tool has only 1 parameter ('params'), which is a nested object. The description does not mention any parameters, so it adds no semantic information beyond what the schema provides. Given the single parameter structure, the baseline is 3, as the schema must carry the full burden of parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch the full history') and resources ('cash dividends, stock splits, and combined corporate actions for a ticker'), distinguishing it from siblings like GetPriceHistory or GetFinancials that focus on different financial data types. It explicitly mentions what is returned ('date, amount/ratio for each event'), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an explicit 'Use this tool when' section with three bullet points detailing specific scenarios (e.g., 'dividend history or yield calculation inputs', 'researching dividend growth over time', 'verify stock split history for return calculations'). This provides clear guidance on when to use this tool versus alternatives like GetFinancials or GetPriceHistory that might not cover corporate actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetEarningsHistory — Get Earnings History & Estimates
Read-only · Idempotent
Fetch earnings history (EPS actual vs estimate, surprise %) and upcoming
earnings dates with consensus estimates. Also returns forward EPS estimates
by quarter and fiscal year.
Use this tool when:
- You want to see how a company has performed vs EPS expectations
- You need the next earnings date and the consensus estimate
- You are analyzing earnings surprise trends or growth trajectory
Returns three sections: earnings_history, earnings_dates, earnings_estimate.
Source: Yahoo Finance via yfinance. No API key required.
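The surprise % in the earnings_history section is the actual-vs-estimate delta; recomputing it client-side is a one-liner, useful for sanity-checking or for rows where the field is missing. A sketch, not the server's formula:

```python
# Hypothetical sketch: EPS surprise as percent deviation of actual from
# the consensus estimate (abs() in the denominator handles negative EPS).
def surprise_pct(actual, estimate):
    """Percent by which actual EPS beat (+) or missed (-) the estimate."""
    return round((actual - estimate) / abs(estimate) * 100, 1)
```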
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations by specifying the data source (Yahoo Finance via yfinance) and noting 'No API key required.' While annotations already indicate read-only, non-destructive, and idempotent behavior, the description provides practical implementation details that help the agent understand data provenance and access requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose statement, usage guidelines, return format, and source information. Each sentence adds value with no redundancy. The bulleted usage guidelines are particularly effective for quick scanning while maintaining comprehensive coverage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema (which handles return values), annotations covering safety profile, and clear purpose/usage guidance, the description is mostly complete. The main gap is the lack of parameter explanation, which is significant since the schema has 0% description coverage. However, the description excels at explaining what the tool does and when to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the schema provides no parameter documentation. However, the description doesn't mention any parameters at all - it doesn't explain that a symbol parameter is required or describe the limit parameter. While the description outlines what data is returned, it fails to address how to specify which company's earnings to fetch, leaving a significant gap in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches earnings history (EPS actual vs estimate, surprise %) and upcoming earnings dates with consensus estimates, plus forward EPS estimates. It specifies the exact data returned and distinguishes itself from siblings like GetFinancials or GetTickerInfo by focusing exclusively on earnings performance metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an explicit 'Use this tool when:' section with three specific scenarios: analyzing EPS performance vs expectations, needing next earnings date with consensus, and analyzing earnings surprise trends or growth trajectory. This provides clear guidance on when this tool is appropriate versus other financial data tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetEdgarCompanyFilings — Get SEC EDGAR Filings by CIK
Read-only · Idempotent
Retrieve all SEC filings for a company or institution using its CIK
(Central Index Key). Returns every filing on record: form type, date,
accession number, and description. Useful for tracking all regulatory
disclosures from a specific institution over time.
Use this tool when:
- You have a CIK and want to see all filing activity for a company
- You want to track 13F, ADV, or ownership disclosure history
- You need accession numbers to pull specific filing documents
Find a CIK at: https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany
Source: SEC EDGAR data API. No API key required.
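The "need accession numbers to pull specific filing documents" case above amounts to filtering the returned filing list by form type, e.g. to feed 13F-HR accession numbers into Get13FHoldings. A hypothetical helper over assumed field names:

```python
# Hypothetical sketch: extract accession numbers for one form type from
# the filing records GetEdgarCompanyFilings returns. Field names assumed.
def accessions_for_form(filings, form_type="13F-HR"):
    """Accession numbers for every filing matching form_type."""
    return [f["accession_number"] for f in filings
            if f["form_type"] == form_type]
```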
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable context beyond this: it specifies 'No API key required' (auth information), mentions the data source ('SEC EDGAR data API'), and clarifies the return format ('form type, date, accession number, and description'), which enhances transparency without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose, usage guidelines, and additional notes. Each sentence adds value, such as explaining the return format and providing practical tips. It's slightly verbose but efficiently organized, with no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has annotations (readOnlyHint, etc.) and an output schema (implied by context signals), the description provides good context: it explains the purpose, usage, data source, and authentication. However, it doesn't detail behavioral aspects like rate limits or error handling, which could be useful given the external API dependency, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries the full burden for parameter meaning. It explains the CIK parameter ('Central Index Key') and provides a URL for finding it. Context signals indicate a single parameter (cik), which the description covers adequately, so a baseline of 3 is appropriate, though prose alone cannot fully compensate for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve all SEC filings'), resource ('company or institution using its CIK'), and scope ('Returns every filing on record'). It distinguishes from siblings like SearchEdgar13F by emphasizing comprehensive retrieval rather than specific form searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios with bullet points: when you have a CIK, want to track specific filing types (13F, ADV), or need accession numbers. It also includes a link to find CIKs and notes the data source, giving clear guidance on when to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetFinancials · Get Financial Statements (Read-only, Idempotent)
Fetch income statement, cash flow statement, or balance sheet for a stock.
Returns up to 4 years of annual data or 4 quarters of quarterly data,
transposed so each row is one reporting period.
Use this tool when:
- You need revenue, net income, EPS, or operating margins
- You want cash flow from operations, CapEx, or free cash flow
- You need total assets, debt, equity, or liquidity ratios
- You are doing fundamental analysis on a stock
statement options: 'income', 'cashflow', 'balance'.
freq options: 'yearly', 'quarterly', 'trailing' (TTM, income only).
Source: Yahoo Finance via yfinance. No API key required.
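A plausible sketch of how the `statement`/`freq` options could map onto yfinance `Ticker` attributes. The attribute names below exist in current yfinance; the `'trailing'` (TTM) mapping is deliberately omitted because its yfinance accessor varies by version, and the resolver function itself is our illustration, not the server's code.

```python
# (statement, freq) -> yfinance Ticker attribute name
ATTR_MAP = {
    ("income", "yearly"): "income_stmt",
    ("income", "quarterly"): "quarterly_income_stmt",
    ("cashflow", "yearly"): "cashflow",
    ("cashflow", "quarterly"): "quarterly_cashflow",
    ("balance", "yearly"): "balance_sheet",
    ("balance", "quarterly"): "quarterly_balance_sheet",
}

def resolve_attr(statement: str, freq: str) -> str:
    """Pick the yfinance accessor for a statement/frequency pair."""
    try:
        return ATTR_MAP[(statement, freq)]
    except KeyError:
        raise ValueError(f"unsupported combination: {statement!r}/{freq!r}") from None

print(resolve_attr("income", "quarterly"))  # quarterly_income_stmt
```

With yfinance installed, `getattr(yf.Ticker("AAPL"), resolve_attr("income", "yearly"))` returns a DataFrame with periods as columns; the tool then transposes it so each row is one reporting period, per the description.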
| Name | Required | Description | Default |
|---|---|---|---|
| params | No |  |  |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |  |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it specifies the data source ('Yahoo Finance via yfinance'), notes 'No API key required', describes the return format ('transposed so each row is one reporting period'), and indicates data limits ('up to 4 years of annual data or 4 quarters of quarterly data'). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose, usage guidelines, parameter options, and source information. It's appropriately sized, but could be slightly more concise by integrating the parameter options into the initial purpose statement. Every sentence adds value, with no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (fetching financial data with multiple statement types and frequencies), the description is mostly complete. It covers purpose, usage, key parameters, behavioral traits, and source. With annotations providing safety info and an output schema presumably detailing return values, the main gap is incomplete parameter coverage for non-core parameters, but overall it's sufficient for effective tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description compensates by explaining key parameters: it lists 'statement options: 'income', 'cashflow', 'balance'' and 'freq options: 'yearly', 'quarterly', 'trailing' (TTM, income only)'. However, it doesn't cover other parameters like 'symbol' or the many unrelated parameters (e.g., 'wholesaler_ids', 'exclude_fillers') that appear in the schema, leaving gaps in parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('fetch') and resources ('income statement, cash flow statement, or balance sheet for a stock'), distinguishing it from siblings like GetPriceHistory or GetDividendsAndSplits that focus on different financial data types. It explicitly lists the types of financial statements and data returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with a bulleted list of scenarios when to use this tool (e.g., 'when you need revenue, net income, EPS, or operating margins'), and implicitly distinguishes it from siblings by focusing on fundamental financial statements rather than holdings, ratings, or price data. The 'Use this tool when' section offers clear context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetFredSeriesData · Get FRED Series Data (Read-only, Idempotent)
Fetch time-series observation data from FRED for a specific economic
series. Returns date + value pairs with series metadata (title, units,
frequency). Use SearchFredSeries first if you don't know the series ID.
Use this tool when:
- You need historical macro data (rates, inflation, GDP, unemployment)
- You want to provide macro context alongside advisor or fund data
- You are comparing economic conditions across time periods
- You need the current value of a key economic indicator
Pass observation_start / observation_end to limit the date range.
Pass frequency to aggregate (e.g. 'm' for monthly, 'q' for quarterly).
Requires FRED_API_KEY environment variable (free at fred.stlouisfed.org).
Source: Federal Reserve Bank of St. Louis FRED API.
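To make the parameter guidance concrete, here is how the options the description names line up with FRED's `series/observations` endpoint. The endpoint and query-parameter names (`series_id`, `observation_start`, `observation_end`, `frequency`, `api_key`) are the real FRED API's; the builder function is just a sketch.

```python
import os

FRED_OBS_URL = "https://api.stlouisfed.org/fred/series/observations"

def fred_obs_params(series_id, observation_start=None,
                    observation_end=None, frequency=None):
    """Assemble the query string for FRED's observations endpoint.

    The API key comes from the FRED_API_KEY environment variable,
    exactly as the tool requires.
    """
    params = {
        "series_id": series_id,
        "api_key": os.environ.get("FRED_API_KEY", ""),
        "file_type": "json",
    }
    if observation_start:
        params["observation_start"] = observation_start  # YYYY-MM-DD
    if observation_end:
        params["observation_end"] = observation_end      # YYYY-MM-DD
    if frequency:
        params["frequency"] = frequency  # 'm' = monthly, 'q' = quarterly
    return params

p = fred_obs_params("CPIAUCSL", observation_start="2020-01-01", frequency="m")
print(p["series_id"], p["frequency"])  # CPIAUCSL m
```

Passing these params to `FRED_OBS_URL` returns the date/value pairs the tool description refers to.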
| Name | Required | Description | Default |
|---|---|---|---|
| params | No |  |  |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |  |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. While annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, the description adds important operational details: authentication requirements ('Requires FRED_API_KEY environment variable'), data source attribution ('Source: Federal Reserve Bank of St. Louis FRED API'), and return format information ('Returns date + value pairs with series metadata'). This significantly enhances the agent's understanding of how to properly invoke and interpret results from this tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and efficiently organized with clear sections: purpose statement, usage guidelines, parameter guidance, and operational requirements. Each sentence adds value, with no redundant information. The front-loaded purpose statement immediately communicates the tool's function, followed by progressively detailed information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (economic data fetching with multiple parameters), the description provides excellent contextual completeness. It covers purpose, usage scenarios, parameter guidance, authentication requirements, data source, and return format. With annotations covering safety aspects and an output schema presumably documenting the return structure, the description fills all necessary gaps to help an agent understand when and how to use this tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite having 0% schema description coverage (the schema has no top-level description), the description adds meaningful parameter context. It explains the purpose of observation_start/observation_end ('to limit the date range') and frequency ('to aggregate'), provides examples of frequency values ('m' for monthly, 'q' for quarterly), and gives context about series_id usage. While it doesn't cover all 11 parameters in the nested schema, it addresses the most critical ones for the core functionality.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Fetch time-series observation data') and resources ('from FRED for a specific economic series'), and distinguishes it from sibling tools by explicitly mentioning 'Use SearchFredSeries first if you don't know the series ID'. This provides clear differentiation from other data-fetching tools in the sibling list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with a dedicated 'Use this tool when:' section listing four specific scenarios (historical macro data, macro context, time period comparisons, current indicator values). It also mentions an alternative tool ('SearchFredSeries') for when users don't know the series ID, providing clear when/when-not guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetFundFees · Get Fund Expense Ratios — XBRL rr: Taxonomy (Read-only, Idempotent)
Retrieve expense ratios and fee breakdown for a mutual fund or ETF using
its SEC CIK. Reads structured XBRL data filed with prospectuses using the
SEC Risk/Return (rr:) taxonomy. Returns:
- net_expense_ratio — total annual cost to the investor (%)
- gross_expense_ratio — before waivers/reimbursements (%)
- management_fee — advisor/sub-advisor fee (%)
- distribution_12b1_fee — distribution and service fee (%)
- other_expenses — admin, custody, transfer agent fees (%)
- acquired_fund_fees — fees from underlying funds, if any (%)
All values are expressed as percentages (e.g. 0.03 = 0.03%).
PRIMARY USE: Step 2 of fee comparison. Accepts CIKs returned by
SearchFundsByCategory. Run for multiple funds then rank by net_expense_ratio
ascending to find the lowest-cost option in a category.
With include_all_classes=True (default), returns one row per share class
per period — useful for identifying the cheapest share class of a fund.
With include_all_classes=False, returns the single most recent value only.
Note: Not all funds file XBRL rr: data. If this tool returns an error,
use GetFundProfile (yfinance) as a fallback for expense ratio data.
Source: SEC EDGAR XBRL company facts API. No API key required.
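The "Step 2" ranking the description prescribes is a one-liner once results are collected. A minimal sketch, assuming each row carries the `net_expense_ratio` field the tool returns (the `cik` labels below are placeholders, not real CIKs):

```python
def rank_by_net_expense(fee_rows):
    """Rank GetFundFees results by net_expense_ratio, ascending.

    Rows without a net_expense_ratio (e.g. funds that did not file
    XBRL rr: data and need the GetFundProfile fallback) are dropped.
    """
    usable = [r for r in fee_rows if r.get("net_expense_ratio") is not None]
    return sorted(usable, key=lambda r: r["net_expense_ratio"])

rows = [
    {"cik": "FUND_A", "net_expense_ratio": 0.04},
    {"cik": "FUND_B", "net_expense_ratio": 0.03},
    {"cik": "FUND_C", "net_expense_ratio": None},  # fallback case
]
print(rank_by_net_expense(rows)[0]["cik"])  # FUND_B
```

With `include_all_classes=True`, the same ranking across all rows surfaces the cheapest share class, not just the cheapest fund.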
| Name | Required | Description | Default |
|---|---|---|---|
| params | No |  |  |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |  |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations. While annotations already indicate readOnlyHint=true and idempotentHint=true, the description discloses that 'Not all funds file XBRL rr: data' and may return errors, specifies the data source (SEC EDGAR XBRL company facts API), notes 'No API key required,' and explains the return format (percentages). This provides important operational context that annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It starts with the core purpose, lists return values, explains usage context, provides parameter-specific guidance, notes limitations, and ends with source information. Every sentence adds value without redundancy, and information is logically organized for easy scanning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial data retrieval with multiple parameters) and the presence of an output schema (which handles return value documentation), the description is complete enough. It covers purpose, usage workflow, behavioral constraints, data source, and key parameter effects. The output schema will document the return structure, so the description appropriately focuses on operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (the schema has no parameter descriptions), the description must compensate but only partially succeeds. It explains the include_all_classes parameter's effect on results, but doesn't address other parameters like cik, wholesaler_ids, or exclude_fillers. The description adds some value for one parameter but leaves most undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieve expense ratios and fee breakdown for a mutual fund or ETF using its SEC CIK.' It specifies the exact resource (mutual fund/ETF), verb (retrieve), and data source (SEC XBRL rr: taxonomy). It distinguishes from sibling tools like GetFundProfile by focusing on XBRL-based fee data rather than general profile data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'PRIMARY USE: Step 2 of fee comparison. Accepts CIKs returned by SearchFundsByCategory.' It specifies when to use this tool (after SearchFundsByCategory) and when to use alternatives ('If this tool returns an error, use GetFundProfile as a fallback'). It also explains parameter-specific usage scenarios for include_all_classes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetFundProfile · Get Fund Profile (ETF / Mutual Fund) (Read-only, Idempotent)
Fetch ETF or mutual fund specific data: top holdings with weight %,
sector allocations, expense ratio, bond credit quality ratings,
and equity style characteristics.
Use this tool when:
- You need the top 10 holdings and their weights for an ETF or fund
- You want sector allocation breakdown (tech %, financials %, etc.)
- You need bond rating distribution for a fixed-income fund
- You are comparing fund profiles for advisor recommendations
section options: 'overview', 'holdings', 'sectors', 'bond_ratings',
'equity_holdings', 'all'.
Only works for ETFs and mutual funds. For stocks, use GetTickerInfo.
Source: Yahoo Finance via yfinance. No API key required.
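A sketch of validating the call before invoking the tool. The `section` values are verbatim from the description; the `symbol` key and the flat params shape are assumptions, since the schema publishes no parameter descriptions.

```python
SECTIONS = {"overview", "holdings", "sectors",
            "bond_ratings", "equity_holdings", "all"}

def build_fund_profile_params(symbol: str, section: str = "all") -> dict:
    """Assemble a params payload for GetFundProfile, rejecting
    unknown section values before the call leaves the client."""
    if section not in SECTIONS:
        raise ValueError(f"section must be one of {sorted(SECTIONS)}")
    return {"symbol": symbol, "section": section}

print(build_fund_profile_params("VTI", "sectors"))
# {'symbol': 'VTI', 'section': 'sectors'}
```

Remember the description's scope note: this only works for ETF/fund tickers; a stock symbol belongs in GetTickerInfo instead.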
| Name | Required | Description | Default |
|---|---|---|---|
| params | No |  |  |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |  |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, and idempotent operations, the description discloses the data source ('Yahoo Finance via yfinance'), authentication requirements ('No API key required'), and functional limitations ('Only works for ETFs and mutual funds'). It doesn't contradict annotations and provides practical implementation details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose statement, usage guidelines, parameter options, limitations, and source information. Each sentence adds value, though the 'section options' line could be more integrated with the usage guidelines. It's appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has annotations covering safety aspects (read-only, non-destructive) and an output schema exists, the description provides good contextual completeness. It explains what data is returned, when to use the tool, limitations, and source information. The main gap is insufficient parameter documentation given the 0% schema coverage, but otherwise it's quite comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (the schema has no top-level description), the description carries the full burden of explaining parameters. It mentions 'section options' with specific values but doesn't explain the 'symbol' parameter or other schema parameters. The description provides some parameter context but doesn't fully compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch ETF or mutual fund specific data') and enumerates the exact data points retrieved (top holdings with weight %, sector allocations, expense ratio, bond credit quality ratings, equity style characteristics). It explicitly distinguishes this tool from sibling GetTickerInfo by stating 'Only works for ETFs and mutual funds. For stocks, use GetTickerInfo.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios in a bulleted list ('Use this tool when...') covering four specific use cases. It also clearly states exclusions ('Only works for ETFs and mutual funds') and names the alternative tool ('For stocks, use GetTickerInfo'). This gives comprehensive guidance on when to use this tool versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetHolders · Get Holders & Ownership Data (Read-only, Idempotent)
Fetch ownership data for a stock: top institutional holders, mutual fund
holders, and recent insider transactions (buys/sells by executives).
Use this tool when:
- You want to know which institutions or funds own a stock
- You are checking for insider buying or selling activity
- You need institutional ownership concentration data
holder_type options: 'institutional', 'mutualfund', 'insider', 'all'.
Source: Yahoo Finance via yfinance. No API key required.
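A small illustration of the insider-activity check the bullets describe. The transaction shape (a `type` field of `'buy'`/`'sell'`) is an assumption for the sketch; the tool's actual output fields are defined by its output schema.

```python
def summarize_insider_activity(transactions):
    """Tally insider buys vs sells from a holder_type='insider' result,
    returning a simple net figure for screening."""
    buys = sum(1 for t in transactions if t["type"] == "buy")
    sells = sum(1 for t in transactions if t["type"] == "sell")
    return {"buys": buys, "sells": sells, "net": buys - sells}

txns = [{"type": "buy"}, {"type": "sell"}, {"type": "sell"}]
print(summarize_insider_activity(txns))
# {'buys': 1, 'sells': 2, 'net': -1}
```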
| Name | Required | Description | Default |
|---|---|---|---|
| params | No |  |  |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |  |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, and idempotent operations, the description discloses the data source ('Yahoo Finance via yfinance'), authentication requirements ('No API key required'), and clarifies what specific data types are returned (institutional holders, mutual fund holders, insider transactions). This provides practical implementation details the agent needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized with four distinct sections: purpose statement, usage guidelines, parameter clarification, and implementation details. Each sentence earns its place by providing essential information without redundancy. The bulleted list for usage guidelines enhances readability while maintaining efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (fetching ownership data from external sources) and the presence of both annotations and an output schema, the description provides good contextual completeness. It explains what data is returned, when to use it, source details, and key parameter options. The main gap is the lack of explanation for most input parameters, but the output schema handles return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions 'holder_type options' with four specific values, which adds meaning beyond the schema. However, with 0% schema description coverage (the schema has no parameter descriptions), the description doesn't explain the other 8 parameters in the input schema. While it provides some parameter guidance, it doesn't fully compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('fetch ownership data') and resources ('stock', 'top institutional holders, mutual fund holders, and recent insider transactions'). It distinguishes itself from siblings like Get13FHoldings or GetFinancials by focusing specifically on ownership data rather than holdings, financials, or other market data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with a bulleted list of three specific scenarios: wanting to know which institutions/funds own a stock, checking insider activity, and needing institutional ownership concentration data. This gives clear context for when to use this tool versus alternatives like Get13FHoldings or GetFinancials.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetIAPDFirmDetail · Get SEC Form ADV Detail by CRD (Read-only, Idempotent)
Retrieve the full Form ADV filing detail for one RIA firm by its CRD number.
Returns all Form ADV Part 1 fields: client types, advisory activities, fee
arrangements, custody information, office locations, and affiliated entities.
Use this tool when:
- You have a firm CRD (from SearchIAPDFirm) and want complete ADV detail
- You need office locations, custodians, or affiliated BD information
- You are building a detailed profile for a prospect RIA firm
Source: SEC IAPD public API. No API key required.
| Name | Required | Description | Default |
|---|---|---|---|
| params | No |  |  |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |  |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations by specifying the source ('SEC IAPD public API'), authentication requirements ('No API key required'), and the scope of returned data ('all Form ADV Part 1 fields'). While annotations cover read-only and idempotent characteristics, the description provides practical implementation details that help the agent understand the tool's operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by usage guidelines and implementation details. Every sentence adds value without redundancy, and the bullet points make the guidelines easily scannable. The entire description is appropriately sized for its complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and existence of an output schema, the description provides complete contextual information. It covers purpose, usage guidelines, data source, authentication, and data scope, leaving return value details to the output schema. This is sufficient for the agent to understand when and how to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the single parameter (crd), the description partially compensates by mentioning 'CRD number' and referencing SearchIAPDFirm as the source. However, it doesn't explain the parameter format or constraints beyond what's implied. The baseline is 3 since schema coverage is low but the description adds some context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Retrieve') and resource ('full Form ADV filing detail for one RIA firm by its CRD number'). It distinguishes from sibling tools by specifying it's for ADV detail by CRD, unlike SearchIAPDFirm which finds firms, and GetIAPDIndividualDetail which focuses on individuals.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with three bullet points stating when to use this tool, including referencing a sibling tool (SearchIAPDFirm) as a prerequisite. It also specifies use cases like building detailed profiles, which helps differentiate from simpler search tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetIAPDIndividualDetail · Get SEC IAPD Individual Detail (Read-only, Idempotent)
Retrieve the full SEC IAPD profile for one individual investment advisor
representative using their CRD number. Returns complete registration history,
exam qualifications, employment history, and any disclosures.
Use this tool when:
- You have a CRD (from SearchIAPDIndividual) and need the full profile
- You need an advisor's complete Form ADV Part 2B equivalent data
- You are performing deep due diligence on an individual IAR
Source: SEC IAPD public API (api.adviserinfo.sec.gov). No API key required.
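For the due-diligence use case, a minimal triage check over the returned profile. The `disclosures` key is an assumption about the output shape, used purely for illustration.

```python
def needs_review(profile: dict) -> bool:
    """Flag any advisor profile that lists disclosures,
    so deep due diligence can be prioritized."""
    return bool(profile.get("disclosures"))

print(needs_review({"crd": "1111111", "disclosures": []}))
# False
print(needs_review({"crd": "2222222",
                    "disclosures": [{"type": "customer complaint"}]}))
# True
```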
| Name | Required | Description | Default |
|---|---|---|---|
| params | No |  |  |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |  |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies the data source ('SEC IAPD public API'), notes no API key requirement, and describes the comprehensive return content ('complete registration history, exam qualifications, employment history, and any disclosures'). While annotations cover read-only/idempotent safety, the description enriches understanding of scope and accessibility.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections: purpose statement, usage guidelines, and source information. Every sentence adds value without redundancy. The bulleted list for usage scenarios improves readability while maintaining brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, the description provides complete context: clear purpose, specific usage guidelines, source transparency, and parameter explanation. With annotations covering safety aspects and an output schema presumably detailing return values, the description focuses appropriately on operational guidance without needing to explain technical outputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the single parameter (crd), the description fully compensates by explaining the CRD parameter's purpose ('CRD number of the individual'), how to obtain it ('Obtain from SearchIAPDIndividual'), and providing an example ('6753609'). It adds essential meaning that the schema lacks.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve the full SEC IAPD profile'), target resource ('one individual investment advisor representative'), and key identifier ('using their CRD number'). It distinguishes from sibling tools by specifying this is for individual detail (vs. SearchIAPDIndividual for searching or GetIAPDFirmDetail for firm data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios in a bulleted list: when you have a CRD from SearchIAPDIndividual, need Form ADV Part 2B equivalent data, or are performing deep due diligence. It names the sibling tool (SearchIAPDIndividual) as the source for CRD numbers, creating clear workflow guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetLEIDetail: Get GLEIF LEI Detail [A, Read-only, Idempotent]
Retrieve the full GLEIF LEI record for one legal entity using its
20-character LEI code. Returns legal name, registration status, legal
address, headquarters address, managing LOU, and renewal dates.
Use this tool when:
- You have a LEI (from SearchLEI) and need full entity details
- You want to verify the registration status and renewal date
- You need the exact legal address and jurisdiction of an entity
Source: GLEIF API (api.gleif.org). No API key required.
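Since the tool keys on a 20-character LEI code, an agent can sanity-check the input before calling. A minimal sketch, assuming the standard ISO 17442 format with its ISO 7064 MOD 97-10 check digits; GLEIF itself remains authoritative on validity.

```python
import re

def is_plausible_lei(lei: str) -> bool:
    """Client-side sanity check for a 20-character LEI (ISO 17442).

    Verifies length/charset (18 alphanumerics + 2 check digits) and the
    ISO 7064 MOD 97-10 checksum: letters map to 10..35, and the resulting
    integer must be congruent to 1 mod 97.
    """
    if not re.fullmatch(r"[0-9A-Z]{18}[0-9]{2}", lei):
        return False
    digits = "".join(str(int(c, 36)) for c in lei)  # 'A' -> 10 ... 'Z' -> 35
    return int(digits) % 97 == 1
```

A failed check means the string cannot be a valid LEI; a passed check still only means the format is plausible, not that GLEIF has a record for it.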
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds valuable context beyond annotations: it specifies the data source ('GLEIF API (api.gleif.org)'), authentication requirements ('No API key required'), and details what fields are returned ('legal name, registration status, legal address, headquarters address, managing LOU, and renewal dates'). This provides practical implementation details that annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and front-loaded: first sentence states the core purpose, second sentence lists returned fields, then clear usage guidelines, and finally source/authentication details. Every sentence earns its place with no wasted words. The bulleted list enhances readability while maintaining conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single-purpose lookup with rich output), the description provides complete context: purpose, usage guidelines, returned data, source, and authentication. With annotations covering safety/idempotency and an output schema presumably detailing the return structure, the description fills all necessary gaps. It's particularly strong in distinguishing when to use this versus SearchLEI.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries full burden for parameter documentation. It clearly explains the single required parameter ('20-character LEI code') and provides usage context ('Obtain from SearchLEI'). While it doesn't detail all 8 parameters in the schema (most of which appear to be generic filler parameters unrelated to the core LEI lookup), it adequately documents the essential 'lei' parameter that drives the tool's functionality.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve the full GLEIF LEI record'), resource ('for one legal entity'), and scope ('using its 20-character LEI code'). It distinguishes from sibling tools by focusing on detailed retrieval for a single entity, unlike SearchLEI which presumably finds multiple entities. The verb+resource+scope combination is precise and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios in a bulleted list: 'Use this tool when: - You have a LEI (from SearchLEI) and need full entity details - You want to verify the registration status and renewal date - You need the exact legal address and jurisdiction of an entity'. This clearly defines when to use this tool versus alternatives, including referencing the sibling SearchLEI tool for obtaining LEIs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetMultiTickerHistory: Get Price History — Multiple Tickers [A, Read-only, Idempotent]
Fetch OHLCV price history for multiple tickers in a single call.
Returns a flattened table with columns like 'AAPL_Close', 'SPY_Volume', etc.
Use this tool when:
- You are comparing performance across multiple securities
- You need correlated price data for a portfolio or basket of tickers
- You want to compute relative performance or correlation matrices
Pass symbols as a space-separated or comma-separated string:
'AAPL MSFT GOOGL' or 'SPY,QQQ,IWM'.
Source: Yahoo Finance via yfinance. No API key required.
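The accepted symbols format ("space-separated or comma-separated string") can be normalized client-side before the call. A minimal sketch; the helper name is illustrative, not part of the tool.

```python
import re

def parse_symbols(raw: str) -> list[str]:
    """Split a space- or comma-separated ticker string into clean symbols.

    Accepts both documented forms ('AAPL MSFT GOOGL', 'SPY,QQQ,IWM'),
    tolerates mixed separators and stray whitespace, and upper-cases.
    """
    return [s.upper() for s in re.split(r"[,\s]+", raw.strip()) if s]
```

Normalizing first also makes it easy to predict the flattened column names the tool returns (e.g. `AAPL_Close`, `SPY_Volume`).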
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations, including the data source ('Yahoo Finance via yfinance'), authentication requirements ('No API key required'), and output format details ('flattened table with columns like AAPL_Close, SPY_Volume'). While annotations cover safety (readOnlyHint=true, destructiveHint=false), the description provides practical implementation details that help the agent understand what to expect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, usage guidelines, parameter format, source information), uses bullet points for readability, and every sentence adds value without redundancy. It's appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and presence of an output schema, the description provides good context about what the tool does, when to use it, and implementation details. The main gap is incomplete parameter coverage, but the output schema will handle return values, making this reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (the schema's 'description' field is empty), the description partially compensates by explaining the 'symbols' parameter format ('space-separated or comma-separated string') with examples. However, it doesn't cover other parameters like start/end dates, period, or interval, leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Fetch') and resource ('OHLCV price history for multiple tickers'), and distinguishes it from the sibling tool 'GetPriceHistory' by emphasizing the multi-ticker capability and flattened table output format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios ('when you are comparing performance across multiple securities', 'need correlated price data for a portfolio', 'want to compute relative performance or correlation matrices') and distinguishes this from single-ticker alternatives by emphasizing the multi-ticker capability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetOptionsChain: Get Options Chain (Calls & Puts) [A, Read-only, Idempotent]
Fetch the full options chain (calls and puts) for one expiry date.
Returns strike price, bid, ask, last price, implied volatility, open
interest, and volume for every contract.
Use this tool when:
- You are researching options strategies for a stock or ETF
- You need implied volatility across strikes for a specific expiry
- You want to see open interest to gauge market sentiment
Call GetOptionsExpirations first to get valid expiry dates.
If expiry_date is omitted, returns the nearest available expiry.
Source: Yahoo Finance via yfinance. No API key required.
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the data source ('Yahoo Finance via yfinance'), notes 'No API key required,' and describes the default behavior for missing expiry_date. Annotations cover safety (readOnlyHint=true, destructiveHint=false), so the bar is lower, but the description enhances understanding without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded, starting with the core purpose, followed by usage guidelines, prerequisites, and source details. Each sentence adds value without redundancy, and it uses bullet points for clarity, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial data fetching), annotations provide safety info, and an output schema exists (so return values need not be explained). The description covers purpose, usage, prerequisites, source, and parameter behavior, but does not address all parameters (e.g., 'symbol' specifics). It is mostly complete but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description compensates by explaining the 'expiry_date' parameter's format and behavior (e.g., 'YYYY-MM-DD format,' 'If omitted, returns the nearest available expiry'). However, it does not cover other parameters like 'symbol' or additional fields, leaving gaps. Had schema coverage been high, the baseline score would be 3; here, the description's partial compensation likewise justifies a 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('fetch') and resources ('full options chain for one expiry date'), distinguishing it from siblings like GetOptionsExpirations (which lists dates) and GetPriceHistory (which provides price data). It explicitly mentions what data is returned (strike price, bid, ask, etc.), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with bullet points for when to use it (e.g., 'researching options strategies,' 'need implied volatility across strikes'), includes a prerequisite ('Call GetOptionsExpirations first to get valid expiry dates'), and specifies behavior when a parameter is omitted ('If expiry_date is omitted, returns the nearest available expiry'). This gives clear context and alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetOptionsExpirations: Get Options Expiry Dates [A, Read-only, Idempotent]
List all available options expiry dates for a ticker. Use this before
calling GetOptionsChain to find a valid expiry date.
Use this tool when:
- You want to know which options contracts exist for a stock or ETF
- You need a specific expiry date to pass into GetOptionsChain
Source: Yahoo Finance via yfinance. No API key required.
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations by disclosing the data source ('Yahoo Finance via yfinance') and authentication requirements ('No API key required'). While annotations cover read-only, non-destructive, and idempotent characteristics, the description provides practical implementation details that help the agent understand the tool's operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with three distinct sections: purpose statement, usage guidelines with bullet points, and implementation details. Every sentence earns its place, there's zero waste, and the information is front-loaded with the core purpose first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has comprehensive annotations (read-only, non-destructive, idempotent), an output schema exists, and the description covers purpose, usage guidelines, and behavioral context, this description is complete for a simple lookup tool. It provides everything an agent needs to understand when and how to use this tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description doesn't provide any parameter information beyond what's implied by 'for a ticker' (which suggests the 'symbol' parameter). However, the schema itself has comprehensive descriptions for all parameters, so the baseline of 3 is appropriate: the structured schema carries the parameter documentation burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all available options expiry dates') and resource ('for a ticker'), and explicitly distinguishes it from its sibling tool GetOptionsChain by explaining its preparatory role ('Use this before calling GetOptionsChain to find a valid expiry date'). This provides precise differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios with bullet points ('when you want to know which options contracts exist' and 'when you need a specific expiry date'), and clearly names the alternative tool (GetOptionsChain) for the next step. This gives comprehensive guidance on when to use this tool versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetPriceHistory: Get Price History (OHLCV) [A, Read-only, Idempotent]
Fetch OHLCV (Open, High, Low, Close, Volume) price history for one ticker.
Returns daily, weekly, monthly, or intraday bars over any period.
Use this tool when:
- You need historical price or volume data for a stock, ETF, or crypto
- You want to analyze performance over a specific time range
- You need to compute returns, volatility, or trend analysis
Interval options: 1d (daily), 1wk (weekly), 1mo (monthly),
1h (hourly, max 730 days), 5m/15m/30m (intraday, max 60 days).
Period options: 1mo, 3mo, 6mo, 1y, 2y, 5y, 10y, ytd, max.
Source: Yahoo Finance via yfinance. No API key required.
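The interval and period constraints quoted above can be enforced as a pre-flight check before calling the tool. The limits (730 days for 1h, 60 days for 5m/15m/30m) and the period list come straight from the tool text; the helper itself is an illustrative sketch.

```python
# Pre-flight validation of GetPriceHistory interval/period constraints.
MAX_LOOKBACK_DAYS = {"1h": 730, "5m": 60, "15m": 60, "30m": 60}
VALID_PERIODS = {"1mo", "3mo", "6mo", "1y", "2y", "5y", "10y", "ytd", "max"}

def check_history_request(interval: str, lookback_days: int, period: str) -> None:
    """Raise ValueError if the request exceeds the documented limits."""
    if period not in VALID_PERIODS:
        raise ValueError(f"unknown period {period!r}")
    # 1d/1wk/1mo bars have no stated lookback cap, so only intraday/hourly
    # intervals are checked here.
    limit = MAX_LOOKBACK_DAYS.get(interval)
    if limit is not None and lookback_days > limit:
        raise ValueError(f"{interval} bars support at most {limit} days of history")
```

Failing fast on the client side saves a round trip for requests Yahoo would reject or silently truncate.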
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, covering basic safety. The description adds valuable context beyond annotations: it discloses the data source ('Yahoo Finance via yfinance'), authentication requirements ('No API key required'), and practical constraints ('max 730 days' for hourly, 'max 60 days' for intraday intervals). No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose, usage guidelines, parameter details, and source information. Each sentence adds value without redundancy, and it's front-loaded with the core functionality. The bullet points enhance readability while maintaining efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (fetching financial data with multiple parameters), the description is mostly complete: it covers purpose, usage, key parameters, and behavioral context. With an output schema present, it doesn't need to explain return values. However, it could better address the many schema parameters not mentioned, slightly reducing completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description compensates by explaining key parameters: it clarifies 'interval options' with specific values and constraints, and 'period options' with available lookback periods. However, it doesn't cover all 13 parameters from the schema (e.g., 'actions', 'auto_adjust', 'wholesaler_ids'), leaving gaps despite the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch OHLCV price history') and resource ('for one ticker'), distinguishing it from sibling tools like 'GetMultiTickerHistory' which handles multiple tickers. It explicitly mentions the data type (OHLCV) and the granularity options (daily, weekly, monthly, intraday).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit 'Use this tool when' guidelines with three specific scenarios (historical data needs, time range analysis, performance computations). It also distinguishes from alternatives by specifying 'for one ticker' versus the sibling 'GetMultiTickerHistory' for multiple tickers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetTerritoryWealthProfile: Get Territory Wealth Profile — Census ACS [A, Read-only, Idempotent]
Retrieve US Census American Community Survey (ACS) income and wealth proxy
data for a ZIP code or state. Returns median household income, median home
value, total household count, and the count and share of households earning
$100k or more — useful for scoring territory opportunity for financial advisors.
Key metrics returned:
- median_hh_income: Median household income (B19013)
- median_home_value: Median owner-occupied home value (B25077)
- total_households: Total household count (B11001)
- hh_100k_plus: Households earning $100k+ (derived)
- hh_100k_plus_pct: Share of households earning $100k+ (derived)
Use this tool when:
- You are scoring a territory for wealth potential by ZIP code
- You want to compare household income distribution across territories
- You need a demographic wealth proxy before overlaying advisor AUM data
Requires cenpy Python package and optionally a free Census API key
(api.census.gov/data/key_signup.html).
Source: US Census Bureau ACS 5-Year estimates. Free with optional API key.
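The relationship between the raw counts and the two derived metrics can be sketched directly. This assumes `hh_100k_plus_pct` is expressed as a percentage of `total_households`; the tool may return a fraction instead, so treat the rounding and scale here as illustrative.

```python
def derive_wealth_share(total_households: int, hh_100k_plus: int) -> float:
    """Compute hh_100k_plus_pct, the derived share of $100k+ households.

    Mirrors how the tool's derived metrics relate to the raw ACS counts
    (B11001 household totals and the summed $100k+ income brackets).
    """
    if total_households <= 0:
        raise ValueError("total_households must be positive")
    return round(100 * hh_100k_plus / total_households, 1)
```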
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true. The description adds valuable behavioral context beyond annotations: it discloses external dependencies (cenpy Python package, optional Census API key), rate-limiting implications ('falls back to unauthenticated access (rate-limited)'), and data source details (US Census Bureau ACS 5-Year estimates). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, key metrics, usage guidelines, requirements). It's appropriately sized for the tool's complexity, though some sentences about dependencies could be more concise. Most content earns its place by adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and existence of an output schema (implied by 'Has output schema: true'), the description is complete. It covers purpose, usage scenarios, behavioral context, dependencies, and data source, providing sufficient guidance without needing to explain return values (handled by output schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description provides no parameter-specific information beyond what's in the schema, but the schema itself already thoroughly documents all parameters (state, zip_code, census_api_key, etc.). The description adds value by explaining the tool's purpose and usage but doesn't enhance parameter understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve US Census American Community Survey income and wealth proxy data'), resource ('for a ZIP code or state'), and key metrics returned. It distinguishes this tool from siblings by focusing on census data for territory wealth profiling, unlike financial data tools in the sibling list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides three 'Use this tool when' scenarios with specific contexts (scoring territory wealth potential, comparing income distribution, demographic proxy before overlaying AUM data). It clearly guides when to use this tool versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetTickerInfo: Get Ticker Info & Profile [A, Read-only, Idempotent]
Fetch the full Yahoo Finance profile for a stock, ETF, mutual fund, crypto,
or index. Returns name, sector, industry, market cap, P/E ratio, 52-week
range, beta, dividend yield, description, and 60+ other metadata fields.
Use this tool when:
- You need a quick summary of what a company or fund is and its valuation
- You want sector/industry classification for a ticker
- You need current price metadata like market cap, float, or short ratio
Works for: stocks (AAPL), ETFs (SPY), mutual funds (VFINX),
crypto (BTC-USD), indices (^GSPC), forex (EURUSD=X).
Source: Yahoo Finance via yfinance. No API key required.
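The symbol conventions listed above (^ for indices, =X for forex, -USD for crypto pairs) can be turned into a rough client-side classifier. A heuristic sketch only; the dash rule in particular is ambiguous, and Yahoo's own metadata is authoritative.

```python
def guess_symbol_kind(symbol: str) -> str:
    """Heuristic classification of Yahoo Finance symbol conventions.

    Purely illustrative: stocks, ETFs, and mutual funds (AAPL, SPY, VFINX)
    are not distinguishable by syntax alone.
    """
    if symbol.startswith("^"):
        return "index"          # ^GSPC
    if symbol.endswith("=X"):
        return "forex"          # EURUSD=X
    if "-" in symbol:
        return "crypto"         # BTC-USD (heuristic; some listings also use dashes)
    return "equity_or_fund"     # AAPL, SPY, VFINX
```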
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the data source ('Yahoo Finance via yfinance'), notes no API key requirement, and lists the types of assets supported. Annotations cover safety (readOnlyHint=true, destructiveHint=false, idempotentHint=true), so the description appropriately supplements with operational details without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by usage guidelines, supported asset types, and source details. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and presence of an output schema, the description is largely complete. It covers purpose, usage, and behavioral context adequately. However, it could improve by explicitly mentioning the output schema's role or noting parameter details beyond the symbol.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the single parameter (a nested object with multiple sub-parameters), the description compensates by explaining the tool's core input ('symbol' examples like 'AAPL', 'SPY') and clarifying it works for various asset types. However, it doesn't detail other parameters in the schema, leaving some ambiguity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('fetch', 'returns') and resources ('Yahoo Finance profile for a stock, ETF, mutual fund, crypto, or index'), including detailed examples of what it returns. It effectively distinguishes itself from siblings by focusing on comprehensive profile data rather than specific financial metrics like earnings, dividends, or historical prices.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with bullet points detailing when to use this tool (e.g., for quick summaries, sector classification, price metadata) and lists specific asset types it supports. It implicitly distinguishes from siblings by not covering specialized data like holdings, earnings, or history, though it doesn't name alternatives directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
LookupTicker — Lookup Ticker Symbol by Name · A · Read-only · Idempotent
Search for a Yahoo Finance ticker symbol by company name, fund name,
or keyword. Returns matching symbols with exchange and asset type.
Use this when you have a name but need the ticker symbol.
Use this tool when:
- You know a company name but not its ticker symbol
- You want to find the ticker for a specific ETF or mutual fund
- You are disambiguating between similarly named securities
asset_type options: 'stock', 'etf', 'mutualfund', 'index',
'cryptocurrency', 'currency', 'future', 'all'.
Source: Yahoo Finance via yfinance. No API key required.
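As a rough illustration, an agent-side helper might validate the documented asset_type enum before issuing the call. The asset_type values come from the description above; the 'query' field name is an assumption, since the schema's sub-parameters are not documented here.

```python
# Hypothetical params builder for LookupTicker; 'query' is an assumed
# field name, asset_type values are quoted from the tool description.
ASSET_TYPES = {
    "stock", "etf", "mutualfund", "index",
    "cryptocurrency", "currency", "future", "all",
}

def build_lookup_params(query: str, asset_type: str = "all") -> dict:
    """Validate asset_type against the documented enum before calling."""
    if asset_type not in ASSET_TYPES:
        raise ValueError(f"unknown asset_type: {asset_type!r}")
    return {"query": query, "asset_type": asset_type}
```

Catching an invalid enum value client-side avoids a wasted round trip to the server.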
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations. While annotations already indicate read-only, non-destructive, and idempotent operation, the description discloses that the source is 'Yahoo Finance via yfinance' and that no API key is required, both important implementation details. It also specifies the return format ('matching symbols with exchange and asset type'), which isn't covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose statement, usage guidelines, parameter details, and source information. Each sentence adds value, though the asset_type options list could be more concise. The information is front-loaded with the core purpose stated first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (lookup operation with multiple parameters) and the presence of an output schema (which handles return value documentation), the description provides good contextual coverage. It explains the tool's purpose, when to use it, data source, and some parameter details. The main gap is incomplete parameter documentation given the 0% schema coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries full burden for parameter documentation. It only covers the 'asset_type' parameter by listing its enum values, but doesn't explain the 'query' parameter's semantics or mention other parameters like 'max_results', 'wholesaler_ids', etc. The description adds some value for one parameter but leaves most undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search for', 'Returns') and resources ('Yahoo Finance ticker symbol', 'company name, fund name, or keyword'). It explicitly distinguishes this as a lookup tool for converting names to symbols, differentiating it from sibling tools like GetTickerInfo or GetPriceHistory that work with existing tickers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios in a bulleted list ('Use this tool when...'), clearly stating when to use it (when you have a name but need the ticker symbol) and giving specific examples (company name, ETF/mutual fund, disambiguation). It effectively guides the agent away from using sibling tools that require ticker symbols as inputs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
MapInstrumentIds — Map Instrument IDs via OpenFIGI · A · Read-only · Idempotent
Map financial instrument identifiers between different ID systems using
Bloomberg's OpenFIGI service. Converts between ticker symbols, ISINs,
CUSIPs, and FIGIs in a single call.
Use this tool when:
- You have a ticker and need the ISIN or CUSIP (or vice versa)
- You are normalizing instrument IDs when combining data from EDGAR,
Yahoo Finance, and other sources that use different ID schemes
- You need to identify what exchange a security trades on
Supported idType values:
- 'TICKER': Stock ticker symbol (e.g. 'AAPL')
- 'ID_ISIN': ISIN (e.g. 'US0378331005')
- 'ID_CUSIP': CUSIP (e.g. '037833100')
- 'ID_FIGI': Bloomberg FIGI
Include 'exchCode': 'US' to target US exchanges for ticker lookups.
Source: Bloomberg OpenFIGI API. No API key required (optional key raises rate limits).
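The request shape can be sketched as a list of mapping jobs. The job fields (idType / idValue / exchCode) follow the public OpenFIGI v3 mapping API; whether this tool forwards such a payload verbatim is an assumption.

```python
# Sketch of assembling OpenFIGI-style mapping jobs from mixed identifiers.
VALID_ID_TYPES = {"TICKER", "ID_ISIN", "ID_CUSIP", "ID_FIGI"}

def build_mapping_jobs(ids, exch_code=None):
    """ids: iterable of (id_type, id_value) pairs -> list of job dicts."""
    jobs = []
    for id_type, id_value in ids:
        if id_type not in VALID_ID_TYPES:
            raise ValueError(f"unsupported idType: {id_type!r}")
        job = {"idType": id_type, "idValue": id_value}
        if exch_code and id_type == "TICKER":
            job["exchCode"] = exch_code  # e.g. 'US' for US exchanges
        jobs.append(job)
    return jobs
```

Attaching exchCode only to ticker jobs mirrors the guidance above, since ISINs and CUSIPs are already globally unique.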
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations. Annotations indicate read-only, non-destructive, and idempotent operations. The description adds that this uses 'Bloomberg's OpenFIGI service,' specifies 'No API key required (optional key raises rate limits),' and mentions exchange targeting with 'exchCode': 'US' for ticker lookups. This provides practical implementation details not covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. It uses bullet points for usage guidelines and supported idType values, making it scannable. Every sentence adds value: purpose, usage scenarios, parameter details, and source/rate limit information. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (ID mapping across financial systems) and the presence of an output schema, the description is mostly complete. It explains the tool's purpose, usage, key parameters, and behavioral context (source, rate limits). However, it doesn't fully address all schema parameters, and while annotations cover safety, the description could mention more about error handling or limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description compensates by explaining key parameter semantics. It lists supported idType values with examples (TICKER, ID_ISIN, ID_CUSIP, ID_FIGI) and mentions the 'exchCode' parameter for US exchanges. However, it doesn't cover all parameters from the schema (e.g., mcp_prompt_id, wholesaler_ids, exclude_fillers), leaving some undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Map financial instrument identifiers between different ID systems using Bloomberg's OpenFIGI service. Converts between ticker symbols, ISINs, CUSIPs, and FIGIs in a single call.' It specifies the exact action (map/convert), resources (instrument identifiers), and distinguishes from siblings by focusing on ID mapping rather than data retrieval or search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios in a bulleted list: 'Use this tool when: - You have a ticker and need the ISIN or CUSIP (or vice versa) - You are normalizing instrument IDs when combining data from EDGAR, Yahoo Finance, and other sources that use different ID schemes - You need to identify what exchange a security trades on.' This gives clear context for when to use this tool versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
SearchBrokerCheck — Search FINRA BrokerCheck — Individuals · A · Read-only · Idempotent
Search FINRA BrokerCheck for registered individual brokers and financial
representatives by name. Returns CRD number, current firm, registration
status, and whether the individual has any disclosures on record.
Use this tool when:
- You need to find the CRD number for a named advisor or rep
- You want to verify registration status for a specific individual
- You are enriching a rep record that is missing a CRD
Geographic workflow: if you don't know the rep's name, first use
SearchBrokersByPlace to discover firms in an area, then use
SearchBrokerCheckFirm to find the firm's CRD, then use this tool
to find individuals at that firm.
Narrow results with the optional 'state' parameter (2-letter code).
To get the full profile after finding a CRD, use GetBrokerCheckDetail.
Source: FINRA BrokerCheck public API. No API key required.
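The geographic workflow above can be sketched as three chained tool calls. The call_tool stub, its canned results, and the result field names are hypothetical; only the tool names and their ordering come from the description.

```python
# Hypothetical three-step chain: place -> firms -> firm CRD -> individuals.
CALL_LOG = []

def call_tool(name, params):
    """Stand-in for an MCP tool call; returns canned results."""
    CALL_LOG.append(name)
    canned = {
        "SearchBrokersByPlace": {"firms": ["Acme Securities"]},
        "SearchBrokerCheckFirm": {"crd": "12345"},
        "SearchBrokerCheck": {"individuals": [{"name": "J. Doe", "crd": "67890"}]},
    }
    return canned[name]

def find_reps_in_area(place):
    firms = call_tool("SearchBrokersByPlace", {"place": place})["firms"]
    firm_crd = call_tool("SearchBrokerCheckFirm", {"name": firms[0]})["crd"]
    return call_tool("SearchBrokerCheck", {"firm_crd": firm_crd})["individuals"]
```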
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable context beyond annotations: it discloses the data source (FINRA BrokerCheck public API), authentication requirements ('No API key required'), and how to narrow results (the optional 'state' parameter). It doesn't contradict annotations, but could mention rate limits or API constraints more explicitly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections: purpose statement, usage guidelines, workflow, parameter hint, and source info. Every sentence adds value—no fluff or repetition. Front-loaded with the core purpose, followed by practical guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (11 parameters, nested object) and rich annotations, the description does well on purpose, guidelines, and behavioral context. However, it lacks parameter explanations, which is a gap since schema coverage is 0%. The presence of an output schema means return values needn't be described, but parameter semantics are under-addressed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description only mentions one parameter ('state') briefly. It doesn't explain the 11 other parameters in the nested object (like name, rows, start, wholesaler_ids, etc.), leaving significant gaps. The description adds minimal value beyond what the schema structure implies, failing to compensate for the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search FINRA BrokerCheck'), target resource ('registered individual brokers and financial representatives'), and key return fields (CRD number, current firm, registration status, disclosures). It distinguishes from sibling tools like SearchBrokerCheckFirm (for firms) and GetBrokerCheckDetail (for full profiles).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'Use this tool when' section lists three specific scenarios (finding CRD number, verifying registration, enriching records). It provides a geographic workflow alternative (SearchBrokersByPlace → SearchBrokerCheckFirm → this tool) and names an alternative for full profiles (GetBrokerCheckDetail). Clear guidance on when to use and when to use alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
SearchBrokerCheckFirm — Search FINRA BrokerCheck — Firms · A · Read-only · Idempotent
Search FINRA BrokerCheck for broker-dealer firms by name. Returns firm
CRD, registration status, city, state, and disclosure flag.
Use this tool when:
- You need the CRD number for a broker-dealer firm (e.g. UBS, Raymond James)
- You want to distinguish between similarly named firms by location
- You are building a territory map of BD firms in a state
Source: FINRA BrokerCheck public API. No API key required.
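One stated use case, distinguishing similarly named firms by location, can be sketched as a client-side filter over the returned fields. The result fields (name, city, state, crd) mirror the described return values; the records themselves are invented.

```python
# Hypothetical disambiguation over firm search results by city/state.
def pick_firm(results, city=None, state=None):
    """Return the single matching firm record, or None if ambiguous."""
    matches = [
        r for r in results
        if (city is None or r["city"] == city)
        and (state is None or r["state"] == state)
    ]
    return matches[0] if len(matches) == 1 else None
```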
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true, which the description doesn't repeat. The description adds valuable context about the data source (FINRA BrokerCheck public API), authentication requirements (none needed), and the specific fields returned. It doesn't mention rate limits or pagination behavior beyond what's in the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with a clear purpose statement, bulleted usage guidelines, and source information in just four sentences. Every sentence adds value without repetition or fluff, and the information is front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has comprehensive annotations (read-only, non-destructive, idempotent) and an output schema exists (though not shown), the description provides strong contextual completeness. It covers purpose, usage scenarios, data source, and authentication requirements without needing to restate return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description would need to carry parameter documentation itself. It doesn't mention any parameters beyond implying 'name' searching, adding minimal value beyond what the schema structure implies. Given the low parameter complexity, the baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches FINRA BrokerCheck for broker-dealer firms by name and returns specific fields (CRD, registration status, city, state, disclosure flag). It distinguishes from sibling 'SearchBrokerCheck' (which appears to be a generic search) by specifying it's for firms only, and from 'GetBrokerCheckDetail' (likely for detailed records) by being a search function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit 'Use this tool when' guidance with three specific scenarios: getting CRD numbers, distinguishing similarly named firms by location, and building territory maps. It also mentions the source (FINRA BrokerCheck public API) and that no API key is required, which helps understand accessibility constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
SearchEdgar13F — Search SEC EDGAR — 13F Institutional Holdings · A · Read-only · Idempotent
Search SEC EDGAR for 13F-HR institutional holdings filings by institution
name. Returns filing date, entity name, period of report, and accession
number. Any institution managing more than $100M in equity must file
quarterly 13Fs — this reveals their fund strategies and product usage.
Use this tool when:
- You want to see what funds or ETFs a firm holds in their portfolios
- You are researching an institution's investment strategy from public filings
- You need a list of 13F filings for a specific manager over a date range
Supports start_date and end_date filtering (YYYY-MM-DD format).
Source: SEC EDGAR full-text search API. No API key required.
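The start_date/end_date filtering mentioned above can be sketched with client-side validation of the YYYY-MM-DD format. The date parameter names come from the description; the 'query' field name is an assumption, as the schema's sub-parameters are not documented here.

```python
# Hypothetical params builder that rejects malformed date filters early.
from datetime import date

def build_13f_params(institution, start_date=None, end_date=None):
    params = {"query": institution}  # 'query' field name is assumed
    for key, value in (("start_date", start_date), ("end_date", end_date)):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError if not YYYY-MM-DD
            params[key] = value
    return params
```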
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true. The description adds valuable context beyond this: it explains the $100M filing requirement, mentions the SEC EDGAR full-text search API source, and states 'No API key required' which is important operational context not covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose statement, usage guidelines, and technical details. Each sentence adds value, though the regulatory explanation of the $100M filing requirement could be considered slightly extraneous. Overall it is efficient, with good front-loading of core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has annotations covering safety (readOnly, non-destructive), idempotency, and world scope, plus an output schema exists, the description provides good contextual completeness. It explains when to use the tool, what it returns, and some technical constraints. The main gap is insufficient parameter guidance given the complex schema with 0% description coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description mentions 'Supports start_date and end_date filtering (YYYY-MM-DD format)' which provides some parameter guidance. However, it doesn't explain the complex nested parameters like wholesaler_ids, exclude_fillers, or source_resource_id that appear in the schema. The description adds minimal value beyond what's implied by the tool name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches SEC EDGAR for 13F-HR institutional holdings filings by institution name, specifying the exact resource (13F filings) and verb (search). It distinguishes from siblings like GetEdgarCompanyFilings by focusing specifically on 13F institutional holdings rather than general company filings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit 'Use this tool when' guidance with three specific scenarios: seeing what funds/ETFs a firm holds, researching investment strategy, and getting filings for a specific manager over a date range. This clearly tells the agent when this tool is appropriate versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
SearchFigiInstruments — Search Instruments via OpenFIGI · A · Read-only · Idempotent
Search Bloomberg OpenFIGI for financial instruments by name or keyword.
Returns FIGI, ticker, exchange, security type, and composite FIGI for
each matching instrument.
Use this tool when:
- You know the company name but not the ticker or FIGI
- You want to find all instruments (ETFs, options, futures) for a name
- You need to discover what securities are associated with a company
Use MapInstrumentIds instead if you already have a specific ID to convert.
Source: Bloomberg OpenFIGI API. No API key required (optional key raises rate limits).
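The "search by name vs. map an existing ID" guidance above can be sketched as a small routing helper: if the input already looks like a concrete identifier, prefer MapInstrumentIds. The regexes are illustrative heuristics, not part of either tool.

```python
# Hypothetical router between the two OpenFIGI-backed tools.
import re

ISIN_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{9}\d$")  # e.g. US0378331005
CUSIP_RE = re.compile(r"^[A-Z0-9]{9}$")           # e.g. 037833100
FIGI_RE = re.compile(r"^BBG[A-Z0-9]{9}$")         # Bloomberg FIGI

def choose_figi_tool(value: str) -> str:
    """Concrete identifier -> MapInstrumentIds; free text -> search."""
    if ISIN_RE.match(value) or CUSIP_RE.match(value) or FIGI_RE.match(value):
        return "MapInstrumentIds"
    return "SearchFigiInstruments"
```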
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a read-only, non-destructive, idempotent operation. The description adds valuable context beyond annotations by specifying the data source ('Bloomberg OpenFIGI API'), authentication details ('No API key required'), and rate limit information ('optional key raises rate limits'), which helps the agent understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by usage guidelines and source details. Every sentence adds value without redundancy, making it efficient and easy to parse for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple parameters with 0% schema coverage) and the presence of annotations and an output schema, the description does a good job explaining the tool's purpose, usage, and source. However, it lacks details on parameter meanings and interactions, which is a gap considering the schema's poor coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, but the description does not provide any parameter-specific information beyond the general 'by name or keyword' mention. It doesn't explain what parameters like 'wholesaler_ids' or 'exclude_fillers' mean or how they affect the search. Given the schema's lack of descriptions, the description fails to compensate adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search Bloomberg OpenFIGI for financial instruments by name or keyword') and resource ('financial instruments'), distinguishing it from sibling tools like MapInstrumentIds. It precisely defines what the tool does without being vague or tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (e.g., 'when you know the company name but not the ticker or FIGI') and when to use an alternative ('Use MapInstrumentIds instead if you already have a specific ID to convert'). This gives clear context and exclusions for proper tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
SearchFredSeries — Search FRED Economic Series · A · Read-only · Idempotent
Search the Federal Reserve Bank of St. Louis FRED database for economic
data series by keyword. Returns series ID, title, frequency, units,
seasonal adjustment, and date range.
Use this tool when:
- You need to find the right FRED series ID before fetching data
- You want to discover what macro data is available for a topic
- You are looking for interest rates, inflation, GDP, unemployment, or
money supply series to provide macro context for financial analysis
Common series IDs (use GetFredSeriesData after finding one):
- DGS10: 10-Year Treasury Yield
- CPIAUCSL: Consumer Price Index (CPI-U)
- UNRATE: Unemployment Rate
- GDP: Gross Domestic Product
- FEDFUNDS: Federal Funds Rate
- M2SL: M2 Money Supply
Requires FRED_API_KEY environment variable (free at fred.stlouisfed.org).
Source: Federal Reserve Bank of St. Louis FRED API.
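Assuming the tool wraps FRED's public REST endpoint, the underlying request can be sketched as below. The endpoint and parameter names follow FRED's documented series-search API; how this server actually calls it is an assumption.

```python
# Sketch of a FRED series-search URL, reading the key from FRED_API_KEY
# as the tool description requires.
import os
from urllib.parse import urlencode

def fred_search_url(search_text: str, limit: int = 25) -> str:
    api_key = os.environ.get("FRED_API_KEY", "YOUR_KEY")  # free at fred.stlouisfed.org
    query = urlencode({
        "search_text": search_text,
        "api_key": api_key,
        "file_type": "json",
        "limit": limit,
    })
    return f"https://api.stlouisfed.org/fred/series/search?{query}"
```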
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true, covering key behavioral traits. The description adds valuable context beyond this: it specifies the return format (series ID, title, frequency, etc.), mentions the FRED_API_KEY requirement (auth needs), and notes the source (FRED API). It does not contradict annotations, as searching is consistent with read-only behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, usage guidelines, common series IDs, requirements, source) and is front-loaded with key information. It is appropriately sized but includes some extraneous details (e.g., specific series examples) that, while helpful, slightly reduce conciseness. Every sentence contributes to understanding, but it could be more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search function with annotations and output schema), the description is complete. It explains the purpose, usage, return format, authentication requirements, and source. With annotations covering safety and idempotency, and an output schema likely detailing return values, no critical information is missing for effective tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, meaning the parameters are undocumented in the schema, and the description does not mention any parameters or their semantics to compensate. Since the sole parameter ('params') is a nested object defaulting to null, the baseline is adjusted to 3 for low parameter complexity, but the description adds no value to fill the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search the Federal Reserve Bank of St. Louis FRED database for economic data series by keyword') and resource ('FRED database'). It distinguishes from sibling tools by explicitly mentioning its role in finding series IDs before using GetFredSeriesData, which is a direct sibling tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with a bulleted list of when to use this tool (e.g., 'when you need to find the right FRED series ID before fetching data'), and it names the alternative tool ('use GetFredSeriesData after finding one'). This clearly differentiates it from other tools and provides actionable context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
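The search-then-fetch pairing described above can be sketched as two chained tool calls. This is a minimal illustration assuming a generic dict-shaped call format; only GetFredSeriesData is named by the server, so the search tool's name and parameter keys here are hypothetical.

```python
# Minimal sketch of the two-step FRED workflow. The call shape and the
# search tool's name/parameters are hypothetical assumptions; only
# GetFredSeriesData is named in the descriptions above.

def plan_fred_lookup(keyword: str):
    """Step 1: search for a series ID by keyword.
    Step 2: fetch observations for the ID found in step 1."""
    search_call = {
        "tool": "SearchFredSeries",      # hypothetical search-tool name
        "params": {"query": keyword},    # hypothetical parameter key
    }

    def fetch_call(series_id: str) -> dict:
        return {
            "tool": "GetFredSeriesData",  # named in the description
            "params": {"series_id": series_id},
        }

    return search_call, fetch_call

search_call, fetch_call = plan_fred_lookup("unemployment rate")
follow_up = fetch_call("UNRATE")  # e.g. the well-known unemployment series
```

The same search-then-detail shape recurs throughout this server's tool set: a search tool returns an identifier, and a Get* sibling consumes it.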
SearchFundsByCategory · Search Funds by Category — EDGAR Prospectus · Read-only · Idempotent
Search SEC EDGAR for mutual fund and ETF filers by investment category or
keyword. Queries N-1A and 485BPOS (and N-2 for closed-end) prospectus filings.
Returns entity name, CIK, form type, and filing date.
PRIMARY USE: Step 1 of fee comparison. Feed the returned CIKs directly into
GetFundFees to retrieve expense ratios for each fund.
Example queries:
- keywords='commodity', fund_type='etf' → commodity ETF universe
- keywords='emerging markets equity' → EM equity funds
- keywords='short duration bond' → short-term fixed income
- keywords='S&P 500 index', fund_type='etf' → S&P 500 index trackers
Results are de-duplicated by CIK (one record per fund filer).
Supports date filtering to restrict to recently updated prospectuses.
Source: SEC EDGAR full-text search API. No API key required.
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains result de-duplication by CIK, mentions date filtering capabilities, specifies the data source (SEC EDGAR full-text search API), and notes no API key is required. While annotations cover read-only and idempotent properties, the description provides practical implementation details that help the agent understand how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. Each sentence adds value: the search scope, primary use case, concrete examples, behavioral details (de-duplication, date filtering), and source information. There's no wasted text, and the example queries are directly relevant to understanding tool usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (searching SEC EDGAR with multiple parameters) and the presence of an output schema, the description provides strong context about the tool's purpose, workflow integration, and behavioral characteristics. It explains the relationship to GetFundFees and provides concrete usage examples. The main gap is incomplete parameter coverage, but otherwise the description gives the agent sufficient understanding to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries the full burden of parameter documentation but only mentions keywords, fund_type, and date filtering. It provides helpful example queries showing how to use keywords and fund_type together, but doesn't address many other parameters in the schema like max_results, wholesaler_ids, or exclude_fillers. The description adds some value but doesn't fully compensate for the schema coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches SEC EDGAR for mutual fund and ETF filers by investment category or keyword, specifying the exact forms queried (N-1A, 485BPOS, N-2) and what information is returned (entity name, CIK, form type, filing date). It distinguishes itself from sibling tools by focusing on fund prospectus searches rather than general EDGAR filings or other financial data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states the primary use case ('Step 1 of fee comparison') and directs users to feed returned CIKs into GetFundFees. It provides multiple concrete example queries showing when to use this tool, and distinguishes it from other EDGAR search tools by focusing specifically on fund prospectuses.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
SearchIAPDFirm · Search SEC IAPD — RIA Firms · Read-only · Idempotent
Search the SEC Investment Adviser Public Disclosure (IAPD) database for
registered investment advisor (RIA) firms by name. Returns firm CRD,
registration status, AUM, employee count, state, and office city.
Use this tool when:
- You need the CRD or AUM for a named RIA firm
- You are looking up Form ADV data for a firm
- You want to distinguish between RIA firms (use IAPD) vs BD firms (use BrokerCheck)
Geographic workflow: if you have a firm name from SearchBrokersByPlace and
the firm is an RIA (registered investment advisor), search here to get the
CRD, AUM, and regulatory status. Then use GetIAPDFirmDetail for full ADV data.
Note: IAPD covers RIAs registered with the SEC. For broker-dealers,
use SearchBrokerCheckFirm instead.
Source: SEC IAPD public API. No API key required.
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a read-only, non-destructive, idempotent operation. The description adds valuable behavioral context beyond annotations: it specifies the data source ('SEC IAPD public API'), authentication requirements ('No API key required'), and return format ('Returns firm CRD, registration status, AUM, employee count, state, and office city'). However, it doesn't mention rate limits or pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections for purpose, usage guidelines, workflow, and source information. It's appropriately sized for the tool's complexity, though some sentences could be more concise (e.g., the geographic workflow paragraph is somewhat verbose). Overall, it's front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and existence of an output schema, the description provides complete context. It covers purpose, usage scenarios, alternatives, workflow integration, data source, and authentication. The output schema will handle return value documentation, so the description appropriately focuses on operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, but the tool has only 1 parameter ('params' object). The description doesn't provide any information about parameters beyond what's implied by the search functionality. While the low parameter count reduces the need for detailed parameter semantics, the description doesn't compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search the SEC Investment Adviser Public Disclosure database'), resource ('registered investment advisor firms'), and scope ('by name'). It explicitly distinguishes this tool from sibling tools like SearchBrokerCheckFirm and SearchIAPDIndividual, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (e.g., 'when you need the CRD or AUM for a named RIA firm') and when not to use it ('For broker-dealers, use SearchBrokerCheckFirm instead'). It includes a detailed geographic workflow example and names specific alternatives, giving comprehensive context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
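The geographic workflow above (a firm name from SearchBrokersByPlace → SearchIAPDFirm → GetIAPDFirmDetail) reduces to another search-then-detail chain. A minimal sketch, assuming dict-shaped calls and `name`/`crd` parameter keys that the undocumented schema does not confirm:

```python
# Sketch of the RIA drill-down chain the description lays out. Only the
# tool names (SearchIAPDFirm, GetIAPDFirmDetail) come from the source;
# the parameter keys are illustrative assumptions.

def plan_ria_drilldown(firm_name: str):
    """Two-step chain: IAPD firm search by name, then full Form ADV
    detail for the CRD the search returns."""
    search_call = {"tool": "SearchIAPDFirm", "params": {"name": firm_name}}

    def detail_call(crd: str) -> dict:
        return {"tool": "GetIAPDFirmDetail", "params": {"crd": crd}}

    return search_call, detail_call

search_call, detail_call = plan_ria_drilldown("Example Advisors LLC")
```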
SearchIAPDIndividual · Search SEC IAPD — Individual Advisors · Read-only · Idempotent
Search SEC IAPD (Investment Adviser Public Disclosure) for individual
investment advisor representatives (IARs) by name. Returns CRD number,
current employer, registration states, and exam history.
Use this tool when:
- You need to look up an individual financial advisor (not a firm)
- You want to verify an advisor's IA registration status
- You are doing due diligence on a named investment advisor representative
For firm lookups, use SearchIAPDFirm instead.
For broker/dealer individuals, use SearchBrokerCheck instead.
Source: SEC IAPD public API (api.adviserinfo.sec.gov). No API key required.
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide: it discloses the data source ('SEC IAPD public API'), authentication requirements ('No API key required'), and clarifies this is for individual advisors only. While annotations cover read-only, non-destructive, and idempotent characteristics, the description provides practical implementation details that help the agent understand how the tool works in practice.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose statement, usage guidelines, alternative tools, and implementation details. Each sentence adds value without redundancy. The information is front-loaded with the core purpose, followed by practical guidance, making it efficient for an agent to parse and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search functionality with many parameters) and the presence of an output schema (which handles return values), the description covers purpose, usage, and behavioral context well. However, it falls short in explaining the parameter semantics, which is significant given the 10 parameters in the nested schema. The description provides excellent guidance on when to use the tool but insufficient guidance on how to use it effectively with all available parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (the schema has no descriptions for the top-level 'params' property), the description carries the full burden of explaining parameters. However, the description only mentions searching 'by name' without addressing the 10 parameters in the nested schema. While it establishes the core purpose, it doesn't provide meaningful guidance about the various filtering and configuration options available through the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search SEC IAPD'), target resource ('individual investment advisor representatives'), and key return fields ('CRD number, current employer, registration states, and exam history'). It explicitly distinguishes this tool from sibling tools SearchIAPDFirm and SearchBrokerCheck, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios ('when you need to look up an individual financial advisor', 'verify an advisor's IA registration status', 'due diligence on a named investment advisor representative') and clear alternatives ('For firm lookups, use SearchIAPDFirm instead. For broker/dealer individuals, use SearchBrokerCheck instead'). This gives the agent comprehensive guidance on when to use this tool versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
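The cross-references in the IAPD tool descriptions amount to a four-way routing table: individual vs. firm, RIA vs. broker-dealer. A sketch of that decision, using only the tool names the descriptions themselves give (the boolean flags are illustrative; an agent would infer them from context):

```python
# Routing table implied by the descriptions' "use X instead" guidance.
# Tool names come from the source; the flag-based interface is an
# assumption for illustration.

def pick_lookup_tool(is_individual: bool, is_ria: bool) -> str:
    """Route a registry lookup per the descriptions' cross-references."""
    if is_individual:
        return "SearchIAPDIndividual" if is_ria else "SearchBrokerCheck"
    return "SearchIAPDFirm" if is_ria else "SearchBrokerCheckFirm"

tool = pick_lookup_tool(is_individual=True, is_ria=True)
```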
SearchLEI · Search GLEIF for Legal Entity Identifier (LEI) · Read-only · Idempotent
Search the Global Legal Entity Identifier Foundation (GLEIF) database
for Legal Entity Identifiers (LEIs) by entity name. Returns the 20-character
LEI code, legal name, registration status, legal address, and jurisdiction.
Use this tool when:
- You need the LEI for a financial institution or fund company
- You want to verify the legal registration of a firm
- You are cross-referencing SEC EDGAR entities with their global LEI
- You need to look up parent/subsidiary relationships (use GetLEIDetail)
LEIs are required for regulatory reporting under MiFID II, EMIR, and Dodd-Frank.
The database covers 2M+ legal entities globally across 200+ jurisdictions.
Source: GLEIF API (api.gleif.org). No API key required.
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it discloses the data source (GLEIF API), that no API key is required, and coverage statistics (2M+ entities, 200+ jurisdictions). While annotations already indicate read-only, non-destructive, and idempotent operations, the description provides practical implementation details that help the agent understand the tool's scope and limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose, usage guidelines, regulatory context, coverage statistics, and source information. Every sentence adds value, and it's front-loaded with the most important information. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex schema with 10 parameters (0% documented) and the presence of an output schema, the description provides excellent context about what the tool does, when to use it, and practical implementation details. However, the complete lack of parameter guidance is a significant gap that prevents a perfect score, even with the output schema handling return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries full burden but provides no parameter information. It mentions searching 'by entity name' which hints at the 'name' parameter, but doesn't explain any of the 10 parameters in the schema. The baseline is 3 since the description doesn't compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches the GLEIF database for LEIs by entity name and specifies the exact data returned (20-character LEI code, legal name, registration status, legal address, jurisdiction). It distinguishes from sibling GetLEIDetail by noting this tool is for searching while GetLEIDetail is for parent/subsidiary relationships.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit 'Use this tool when' guidance with four specific scenarios, including when to use the alternative GetLEIDetail tool. It also includes regulatory context (MiFID II, EMIR, Dodd-Frank) and coverage statistics to help determine applicability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
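The description notes that LEIs are 20-character codes. Not stated there, but defined by ISO 17442, the last two characters are ISO 7064 MOD 97-10 check digits, which an agent can verify locally before spending a GLEIF query on a malformed code. A self-contained sketch (the sample base string is arbitrary, not a real LEI):

```python
# Local LEI checksum validation per ISO 17442 (ISO 7064 MOD 97-10):
# letters map to 10..35, the whole 20-character string read as a number
# must satisfy value mod 97 == 1.

def _to_digits(s: str) -> int:
    # 'A' -> 10 ... 'Z' -> 35; decimal digits pass through unchanged.
    return int("".join(str(int(c, 36)) for c in s))

def lei_check_digits(base18: str) -> str:
    """Compute the two check digits for an 18-character LEI base."""
    return f"{98 - _to_digits(base18.upper() + '00') % 97:02d}"

def lei_is_valid(lei: str) -> bool:
    """A well-formed 20-character LEI satisfies value mod 97 == 1."""
    lei = lei.upper()
    return len(lei) == 20 and lei.isalnum() and _to_digits(lei) % 97 == 1

base = "ABCDEF123456789012"          # arbitrary 18-char base, not a real LEI
lei = base + lei_check_digits(base)  # append computed check digits
```

This is the same MOD 97-10 scheme used by IBANs, minus the IBAN's character rearrangement, since LEI check digits already sit at the end.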
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.