Valuein — SEC EDGAR Fundamentals
Server Details
Point-in-time, survivorship-bias-free SEC EDGAR fundamentals for AI agents. 1994 to present.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: valuein/valuein
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 15 of 15 tools scored. Lowest: 3.7/5.
Tools have mostly distinct purposes (e.g., compare_periods vs. get_company_fundamentals vs. get_financial_ratios), but there is some overlap between get_financial_ratios and get_valuation_metrics (both return ratios). The naming and descriptions help distinguish, but the overlap is notable.
All tool names follow a consistent verb_noun pattern (e.g., search_companies, get_sec_filing_links, screen_universe). No mixing of styles or irregular naming.
15 tools is well-scoped for a comprehensive SEC EDGAR data server. Each tool serves a clear purpose in the data exploration and analysis workflow, from searching to retrieving fundamentals to bulk data access.
The tool set covers searching, fundamentals, ratios, filings, valuation, capital allocation, peer comparison, universe creation, schema discovery, and data lineage. Minor gaps: no tool for updating/inserting data (read-only server), and no tool for bulk filing text search (coming soon).
Available Tools (15 tools)
compare_periods · Compare Financial Periods · A · Read-only · Idempotent
Compare a company's core financial metrics across two fiscal periods side-by-side. Shows absolute and percentage changes with significance classification (minor < 5%, notable 5–15%, significant > 15%). The response includes a material_changes count: this is the number of metrics whose significance ∈ {notable, significant} (i.e. absolute percentage change > 5%). Use it as a quick scalar to triage filings — anything > ~3 typically signals a material event worth deeper review. Use period format: 'FY2024' for annual, 'Q1-2024' for quarterly. Pass period_a as the EARLIER period and period_b as the LATER one — if you invert them the server auto-swaps and sets swapped: true in the response so deltas always carry the correct sign (rather than silently flipping). Point-in-time safe via as_of_date. Available on all plans.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock ticker symbol, e.g. AAPL, MSFT, BRK.B | |
| period_a | Yes | Earlier fiscal period. Format: 'FY2023' for annual or 'Q1-2023' for quarterly. | |
| period_b | Yes | Later fiscal period. Format: 'FY2024' for annual or 'Q1-2024' for quarterly. | |
| as_of_date | No | Point-in-time date (YYYY-MM-DD). Only returns facts with accepted_at on or before this date — eliminates look-ahead bias for backtesting. |
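A minimal sketch of how an agent might drive this tool, assuming a hypothetical `call_tool(name, arguments)` helper that stands in for the agent's MCP `tools/call` request; the `swapped` and `material_changes` response keys come from the description above.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Hypothetical stand-in for the agent's MCP `tools/call` request."""
    raise NotImplementedError  # wire up to your MCP client here

# period_a is the earlier period; if inverted, the server auto-swaps
# and flags it via `swapped` so deltas keep the correct sign.
result = call_tool("compare_periods", {
    "ticker": "AAPL",
    "period_a": "FY2023",
    "period_b": "FY2024",
    "as_of_date": "2025-01-15",   # optional point-in-time cutoff
})

if result.get("swapped"):
    print("Periods were auto-swapped; deltas still run earlier -> later.")
# Per the description, more than ~3 notable/significant changes usually
# signals a material event worth a deeper review.
if result.get("material_changes", 0) > 3:
    print("Material event likely; review the underlying filing.")
```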
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Disclosure of significance classification thresholds (minor, notable, significant) adds behavioral detail beyond readOnlyHint and idempotentHint annotations. Could elaborate on default sort or metric scope but sufficient for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences packed with essential info: purpose, output details, usage format, safety, and availability. No filler. Front-loaded with action and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 simple parameters, no output schema needed (tool clearly returns comparison metrics), and strong annotations (readOnlyHint, idempotentHint), the description fully covers what an agent needs: what it does, how to use it, and behavioral constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description does not add new parameter info beyond schema, but the period format examples and point-in-time explanation are consistent with schema descriptions. No contradiction.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'Compare', specific resource 'core financial metrics across two fiscal periods', and unique differentiation: side-by-side with absolute/percentage changes and significance classification. Distinguishes from sibling tools like get_company_fundamentals or get_financial_ratios.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly describes when to use (comparing financial metrics across periods), provides period format examples, and implicitly distinguishes from sibling tools (e.g., get_financial_ratios doesn't compare periods). Also notes point-in-time safety and plan availability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
describe_schema · Describe Data Schema · A · Read-only · Idempotent
Returns the Parquet schema for all tables in the Valuein SEC data warehouse. Includes table descriptions, column names, types, primary keys, and foreign-key references. Use this tool to understand the data model before querying with other tools. No data reads required — schema is embedded in the manifest. Available on all plans.
| Name | Required | Description | Default |
|---|---|---|---|
| table | No | Filter to a single table name (e.g. 'fact', 'entity', 'references'). Omit to return the full schema for all tables. |
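A short sketch reusing the hypothetical `call_tool` helper from the compare_periods example; the `tables` and `columns` keys in the response are assumptions for illustration.

```python
# call_tool: hypothetical MCP tools/call wrapper (see the compare_periods sketch).
full_schema = call_tool("describe_schema", {})                  # every table
fact_schema = call_tool("describe_schema", {"table": "fact"})   # one table only

# Walk the (assumed) response shape to list columns per table.
for table in full_schema.get("tables", []):
    cols = ", ".join(col["name"] for col in table.get("columns", []))
    print(f"{table['name']}: {cols}")
```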
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds that the schema is embedded in the manifest and requires no data reads, which is consistent. No contradictions. Could further mention that it's fast or returns a large result for all tables, but it's sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences: purpose, usage guidance, and availability. No wasted words. Key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (one optional parameter, no output schema, rich annotations), the description covers purpose, usage, and behavioral aspects completely. No gaps identified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has one optional parameter 'table' with description covering 100% coverage. The description adds context that omitting the parameter returns the full schema for all tables, which is not in the schema description. This adds value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns the Parquet schema for all tables in the Valuein SEC data warehouse, listing the included details (table descriptions, column names, types, primary keys, foreign-key references). This distinguishes it from sibling tools that perform data queries or analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this tool to understand the data model before querying with other tools', providing clear guidance on when to use it. It also notes that no data reads are required and it's available on all plans, implying no restrictions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_capital_allocation_profile · Capital Allocation Profile · A · Read-only · Idempotent
Get a multi-year capital allocation breakdown for a US public company. Shows how management deploys free cash flow across dividends, share repurchases, and debt management — the key inputs for quality assessment and owner earnings analysis. Data sourced from annual 10-K cash flow statements (SEC EDGAR). Includes: operating cash flow, capex, FCF, dividends paid, buybacks, debt issuance/repayment, net debt change, dividend payout ratio, FCF-to-revenue. NOT yet included (roadmap, see _meta.data_quality.notes): R&D + M&A line items (concepts ResearchAndDevelopment + CFIAcquisitions exist in fact.parquet but aren't surfaced here yet); buyback_yield_implied requires a price × shares market-cap series that is not yet wired into the export. Point-in-time safe via as_of_date. Available on all plans.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock ticker symbol, e.g. AAPL, MSFT | |
| as_of_date | No | Point-in-time date (YYYY-MM-DD). Only returns facts with accepted_at on or before this date — eliminates look-ahead bias. | |
| lookback_years | No | Number of fiscal years to look back from the most recent filing (1–20). Defaults to 5 years for a full capital allocation cycle. |
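A sketch of how the returned line items might be combined into a simple FCF payout check, assuming per-year keys named after the metrics listed in the description (the real response keys may differ).

```python
# call_tool: hypothetical MCP tools/call wrapper (see the compare_periods sketch).
profile = call_tool("get_capital_allocation_profile",
                    {"ticker": "MSFT", "lookback_years": 5})

for year in profile.get("data", []):
    # FCF = operating cash flow minus capex; payout measures how much of it
    # went back to shareholders via dividends and buybacks.
    fcf = year["operating_cash_flow"] - year["capex"]
    returned = year["dividends_paid"] + year["buybacks"]
    payout = returned / fcf if fcf else float("nan")
    print(year["fiscal_year"], f"FCF payout: {payout:.0%}")
```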
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, openWorldHint, idempotentHint, and destructiveHint=false. The description adds value by clarifying point-in-time safety via as_of_date and specifying the data source (SEC EDGAR), which are not in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured, with a clear first sentence stating the core purpose, followed by details on data source and included metrics, ending with key usage notes. No unnecessary sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lists the specific metrics included (operating cash flow, capex, FCF, dividends, etc.), making it complete for an analyst to understand what data is returned. The tool is not complex, and the description covers all necessary context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented. The description adds context for lookback_years (default 5 years for a full cycle) and as_of_date (point-in-time, eliminates look-ahead bias), but these are already covered in the schema's descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool retrieves a multi-year capital allocation breakdown for US public companies, specifying the resource (cash flow deployment data) and action (get), and distinguishes it from sibling tools like get_company_fundamentals and get_financial_ratios by focusing on capital allocation inputs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly states the tool is for US public companies and includes data sourced from 10-K filings, providing context for when to use it. However, it does not explicitly state when not to use it or mention alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_company_fundamentals · Company Fundamentals · A · Read-only · Idempotent
Retrieve standardized SEC EDGAR fundamental financial metrics for a US public company. Returns revenue, gross profit, operating income, net income, EPS (diluted), total assets, total liabilities, stockholders' equity, cash & equivalents, total debt, operating cash flow, and capital expenditures for one or more fiscal periods. Data sourced from 10-K (annual) and 10-Q (quarterly) filings. Point-in-time: no look-ahead bias.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of periods to return (1–40). Defaults to 5. | |
| period | No | Filing period granularity. Annual uses 10-K; quarterly uses 10-Q. | annual |
| strict | No | When true, fail with PLAN_LIMIT_EXCEEDED if the plan cannot satisfy the requested limit. Default false: return what's available and explain the gap in _meta.truncation. | |
| ticker | Yes | Stock ticker symbol, e.g. AAPL, MSFT, BRK.B | |
| as_of_date | No | Point-in-time date (YYYY-MM-DD). Only returns facts with accepted_at on or before this date — eliminates look-ahead bias for backtesting. Omit for the full dataset. | |
| fiscal_year | No | Fiscal year (YYYY). Omit to return the most recent available years. | |
| lineage_detail | No | Per-period provenance envelope. 'compact' (default) returns source_filing + source_url + restated flag for one-click SEC verification. 'full' adds first_filed_at + accepted_at for Bloomberg-Option-C restatement reasoning. 'off' omits lineage entirely. | compact |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | |
| _meta | Yes | Provenance envelope — data lineage for every MCP response |
| period | Yes | |
| ticker | Yes | |
| as_of_date | Yes | |
| company_name | Yes | |
| years_returned | Yes |
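A point-in-time request sketch using the same hypothetical `call_tool` helper; the top-level keys match the output schema above, while the per-period row keys (fiscal_year, revenue, source_url) are assumptions based on the lineage_detail description.

```python
# call_tool: hypothetical MCP tools/call wrapper (see the compare_periods sketch).
resp = call_tool("get_company_fundamentals", {
    "ticker": "AAPL",
    "period": "annual",
    "limit": 5,
    "as_of_date": "2020-12-31",    # only facts accepted on/before this date
    "lineage_detail": "compact",   # source_filing + source_url per period
})

print(resp["company_name"], resp["years_returned"])
for row in resp["data"]:
    # Compact lineage gives a one-click path back to the SEC filing.
    print(row.get("fiscal_year"), row.get("revenue"), row.get("source_url"))
```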
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. Description adds useful context: point-in-time behavior, no look-ahead bias, and data source. Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adding value: first states purpose and metrics, second specifies source, third clarifies point-in-time property. No fluff, front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with 100% schema coverage, output schema exists, and annotations present, description covers key aspects. Could mention return value shape or how to handle missing data, but not necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds some context beyond schema (e.g., 'point-in-time date eliminates look-ahead bias') but doesn't detail each parameter's interaction. Adequate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it retrieves standardized SEC EDGAR fundamental metrics for US public companies, listing specific metrics (revenue, net income, EPS, etc.) and source filings. This distinguishes it from siblings like get_financial_ratios or get_valuation_metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description mentions data source (10-K/10-Q), point-in-time capability, and lack of look-ahead bias. However, it doesn't explicitly state when not to use this tool versus siblings like get_financial_ratios or get_earnings_signals.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_compute_ready_stream · Compute-Ready Stream · A · Read-only · Idempotent
STATUS: pending — direct R2 Parquet access is in private beta (ETA 2026-Q3). Calls return 501 FEATURE_NOT_AVAILABLE today. When live: returns a pre-signed Cloudflare R2 URL for bulk Parquet access that can be piped into Python/DuckDB/Polars for high-throughput computation that exceeds the MCP context window. Datasets: fact (per-entity partition — requires ticker), ratio (all computed ratios), valuation (DCF inputs), filing (SEC filing metadata), references (company universe), index_membership (historical index composition). URL would expire in 15 minutes. TODAY use the Python SDK (pip install valuein-sdk) for the same data via DuckDB.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | No | Required when dataset_type is 'fact'. Resolves to the per-entity fact/{CIK}.parquet partition for that company. | |
| dataset_type | Yes | Dataset to access. 'fact' requires ticker (per-entity partition). All others are full-universe tables. |
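Because the tool currently returns 501, the sketch below shows what consuming the pre-signed URL could look like once the feature ships. The `url` response key is an assumption; the DuckDB portion works against any readable Parquet URL.

```python
import duckdb

# call_tool: hypothetical MCP tools/call wrapper (see the compare_periods sketch).
# Today this path returns 501 FEATURE_NOT_AVAILABLE; shown for when it's live.
resp = call_tool("get_compute_ready_stream",
                 {"dataset_type": "fact", "ticker": "AAPL"})
url = resp["url"]  # assumed key: pre-signed R2 URL, ~15-minute expiry

con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")
# Pull the per-entity fact partition straight into a DataFrame.
df = con.execute(f"SELECT * FROM read_parquet('{url}') LIMIT 20").df()
print(df.head())
```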
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint=false, covering safety. The description adds value by disclosing the pending status (501 FEATURE_NOT_AVAILABLE today), the 15-minute URL expiry, and the interim SDK path, none of which are in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise (3 sentences) and front-loaded with the core purpose. It covers use case, datasets, expiry, and example without redundancy. Could be slightly more structured (e.g., bullet points for datasets) but currently efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 params, no output schema), the description is complete: it explains the return value (pre-signed URL), use case, datasets, ticker requirement, expiry, and plan info. No output schema exists, so description must cover return; it does adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add parameter details beyond what schema provides (e.g., ticker required for 'fact', dataset_type enum). It lists dataset types and briefly explains 'fact' requiring ticker, but this largely mirrors the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a pre-signed Cloudflare R2 URL for direct Parquet file access, with specific verb ('Get') and resource ('pre-signed Cloudflare R2 URL'). It distinguishes from siblings by explaining the use case (bulk data exceeding MCP context window) and listing datasets, which no sibling tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use this tool ('when the query requires bulk data that exceeds the MCP context window') and provides an example usage pattern. It does not explicitly state when not to use or name alternatives, but the sibling tools list and context imply alternatives exist for smaller data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_earnings_signals · Earnings Signals · A · Read-only · Idempotent
Trend-based earnings expectations and surprise metrics. EPS trend is the trailing 4-quarter average; surprise_pct shows how actual EPS deviated from trend. PIT-safe: use as_of_date to filter by accepted_at for backtesting. Available on all plans.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of periods to return (1–40), most recent first. Defaults to 8 — covers 2 years of quarterly signals plus their TTM equivalents. earnings_signals.parquet currently emits one row per (entity, period_end); older rows surface here when the pipeline emits historical trend snapshots. | |
| ticker | Yes | Stock ticker symbol, e.g. AAPL, MSFT | |
| as_of_date | No | Point-in-time filter: only return signals with accepted_at on or before this date. Use for backtesting to avoid look-ahead bias. |
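The arithmetic behind the two headline fields, restated as an illustrative sketch; whether surprise_pct is reported as a fraction or a percentage is an assumption here, and the numbers are made up.

```python
# EPS trend = trailing 4-quarter average; surprise = deviation of the newly
# reported EPS from that trend.
quarterly_eps = [1.20, 1.31, 1.05, 1.42]   # four most recent quarters
actual_eps = 1.55                           # newly reported quarter

eps_trend = sum(quarterly_eps) / len(quarterly_eps)
surprise_pct = (actual_eps - eps_trend) / abs(eps_trend) * 100
print(f"trend={eps_trend:.2f}, surprise={surprise_pct:+.1f}%")
```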
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, idempotentHint, and non-destructive behavior. The description adds context about trend calculation (trailing 4-quarter average) and plan availability, which is helpful but not extensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences with no fluff; each sentence adds distinct value (definition, usage guidance, availability).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the key metrics (EPS trend, surprise_pct). However, it could mention expected output format or additional fields. Still adequate for a simple data retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents parameters well. The description adds brief context about as_of_date for backtesting but does not add significant new meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Trend-based earnings expectations and surprise metrics' and explains EPS trend and surprise_pct, which distinguishes it from sibling tools like get_company_fundamentals or get_financial_ratios that focus on different metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions point-in-time safety for backtesting with as_of_date, but does not explicitly say when NOT to use it or compare to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_financial_ratios · Financial Ratios · A · Read-only · Idempotent
Get pipeline-computed financial ratios from ratio.parquet. Covers 7 categories: profitability (margins, ROE, ROA, ROIC), liquidity (current ratio, quick ratio), leverage (D/E, interest coverage, net debt/EBITDA), efficiency (asset turnover, inventory days), per_share (EPS, BVPS, FCF/share), owner_earnings (Buffett FCF, owner yield), and valuation (EV/EBITDA, P/E, P/FCF). Includes TTM (trailing twelve months) rows alongside annual periods. Each row carries is_calendar_aligned (boolean) — TRUE when period_end is actually on the entity's fiscal year boundary (±7 days), FALSE when the pipeline emitted a calendar-quarter row tagged fiscal_period='FY' for a non-December fiscal-year filer. Filter to is_calendar_aligned=TRUE if you're joining ratios with fact-table fundamentals on (entity, fiscal_year). Ratios are pipeline-derived — they're recomputed on each pipeline run with ON CONFLICT DO UPDATE. There is NO accepted_at column; for historical cuts use period_end_before (filters by ratio.period_end), not as_of_date. _meta.pit_safe is structurally false for this tool — for true PIT analysis, use get_company_fundamentals (which carries fact.accepted_at). Use this instead of get_valuation_metrics when you only need ratios (no DCF wiring); use get_valuation_metrics when you also need DCF/DDM. Available on all plans.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of distinct period_end dates to return (1–20). Defaults to 5. Within each period, all matching ratio_names are included. | |
| ticker | Yes | Stock ticker symbol, e.g. AAPL, MSFT | |
| categories | No | Ratio categories to include. Omit to return all categories. Options: profitability, liquidity, leverage, efficiency, per_share, owner_earnings, valuation. | |
| fiscal_period | No | Filter to a specific fiscal period type. Use 'TTM' for trailing twelve months. Omit to return both annual (FY) and TTM rows. | |
| period_end_before | No | Historical cutoff: only return ratios with period_end on or before this date. Use this instead of as_of_date — ratio data has no PIT accepted_at. |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | |
| note | Yes | |
| plan | Yes | |
| _meta | Yes | Provenance envelope — data lineage for every MCP response |
| ticker | Yes | |
| periods_returned | Yes |
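A sketch of the calendar-alignment filter the description recommends before joining ratios to fact-table fundamentals; the `is_calendar_aligned` key is taken from the description, the rest of the row shape is assumed.

```python
# call_tool: hypothetical MCP tools/call wrapper (see the compare_periods sketch).
resp = call_tool("get_financial_ratios", {
    "ticker": "ORCL",                   # non-December fiscal-year filer
    "categories": ["profitability"],
    "period_end_before": "2023-12-31",  # NOT as_of_date; ratio rows have no accepted_at
})

# Keep only rows whose period_end sits on the fiscal-year boundary before
# joining with fact-table fundamentals on (entity, fiscal_year).
aligned = [row for row in resp["data"] if row.get("is_calendar_aligned")]
print(f"{len(aligned)} of {len(resp['data'])} rows are calendar-aligned")
```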
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, so the tool's non-destructive, read-only nature is clear. Description adds value by noting ratios are pipeline-derived, recomputed on each run, and carry no accepted_at column (so PIT filtering does not apply), but no additional behavioral traits beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise, front-loading the main action and then listing categories. The note about period_end vs knowledge_at is important but could be slightly shorter. No wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich input schema and annotations, the description covers all necessary context: data source, categories, special behavior (TTM, period_end usage), and plan availability. Output schema exists so return values need not be explained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so each parameter is already well-documented in the schema. The description adds no further parameter details, but baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves pipeline-computed financial ratios from the ratio table, covering 7 specific categories. It distinguishes itself from siblings like get_valuation_metrics by focusing on comprehensive ratio categories including TTM rows.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises filtering with period_end_before for historical cuts rather than as_of_date (ratio data has no accepted_at), clarifying correct usage. Also points to get_company_fundamentals for strict PIT analysis and explains when to omit categories or fiscal_period.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_peer_comparables · Peer Comparables · A · Read-only · Idempotent
Get ratio-based peer comparison for a company and its closest competitors. Peers are selected by matching 2-digit SIC industry code. Returns pipeline-computed ratios from up to 10 peers alongside the subject company for direct benchmarking. Ratio categories: profitability, liquidity, leverage, efficiency, per_share, owner_earnings, valuation. TTM (trailing twelve months) ratios are used when available for the most current view. Use period_end_before to compare peers at a specific historical date. Available on every plan — sample returns the subset covered by the sample bucket.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of peers to return alongside the subject company (1–10). Defaults to 5. | |
| ticker | Yes | Subject company ticker, e.g. AAPL. Peers are auto-selected by SIC code. | |
| categories | No | Ratio categories to include in the comparison. Defaults to profitability, valuation, and leverage. | |
| period_end_before | No | Historical cutoff: only include ratios with period_end on or before this date. Use this for historical peer benchmarking snapshots. |
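A brief call sketch; the grouping of subject versus peer rows in the response, and the per-row ratio keys, are assumptions.

```python
# call_tool: hypothetical MCP tools/call wrapper (see the compare_periods sketch).
resp = call_tool("get_peer_comparables", {
    "ticker": "NVDA",
    "categories": ["profitability", "valuation"],
    "limit": 5,
    "period_end_before": "2024-12-31",   # optional historical snapshot
})
for row in resp.get("peers", []):        # assumed key
    print(row.get("ticker"), row.get("gross_margin"), row.get("ev_ebitda"))
```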
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, openWorldHint, idempotentHint, and non-destructive behavior. The description adds context about data sources (pipeline-computed ratios, TTM ratios) and plan availability. It does not contradict annotations; annotation_contradiction is false.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise at 4 sentences, covering purpose, peer selection, ratio categories, TTM usage, historical comparison, and plan availability. It is front-loaded with the primary purpose. Minor redundancy could be trimmed, but it is well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains what is returned (ratios from up to 10 peers alongside subject company) and mentions categories. It also covers plan availability and sampling behavior. It could add details about return format (e.g., data structure), but completeness is high for a tool with strong annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining that peers are auto-selected by SIC code and that TTM ratios are used, but it does not add new meaning beyond the schema for each parameter. For example, 'limit' and 'categories' are well-described in schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves ratio-based peer comparisons for a company and its closest competitors, selecting peers by 2-digit SIC code. It distinguishes from sibling tools by mentioning peer selection and specific ratio categories, making it unique among sibling tools like get_financial_ratios or get_valuation_metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions when to use it (for peer benchmarking) and provides guidance on historical comparisons via period_end_before. However, it does not explicitly state when not to use it or mention alternatives for peer selection (e.g., if user wants custom peers), which would elevate it to a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pit_universe · Point-in-Time Universe · A · Read-only · Idempotent
Get a survivorship-free universe of companies valid on a specific historical date. Critical for bias-free quantitative research: returns only companies that existed and were index members on the exact as_of_date — no hindsight contamination. Supports SP500, RUSSELL1000, RUSSELL2000, RUSSELL3000 via index_membership.parquet (accurate join/leave dates with [) interval semantics). Returns CIK, ticker, name, sector, industry, SIC code, plus per-row membership confidence (high/medium/low) sourced from index_membership. The _meta.pit_safe flag aggregates these: true only when every matched row is high-confidence; medium/low rows downgrade the response. NOTE: sector is SIC-derived (GICS-aligned labels via configs/sic_to_sector.csv), not licensed GICS — industrial conglomerates may map differently. Treat as a screening bucket, not an authoritative GICS label. Use as the first step of a quantitative backtest before calling get_compute_ready_stream to pull Parquet data for the universe. Available on every plan — sample returns the subset covered by the sample bucket.
| Name | Required | Description | Default |
|---|---|---|---|
| index | No | Index filter. 'sp500' (~500 large caps), 'russell1000' (~1000 large/mid), 'russell2000' (~2000 small caps), 'russell3000' (~3000 broad market). Omit for no index filter (sector-only or full universe queries). | |
| limit | No | Maximum number of companies to return (1–3500). Defaults to 100. Sized to fit Russell 3000 (~3000 members) plus headroom for delisted/historical members. Universe is deduped to one row per CIK by default, so set near the index size (SP500 ~505, Russell 1000 ~1010, Russell 3000 ~3050) — no need for share-class headroom. | |
| sector | No | Sector filter (case-insensitive substring match) over the SIC-derived, GICS-aligned sector label. E.g. 'Technology', 'Health Care', 'Energy'. Not licensed GICS — see tool description for caveat. | |
| is_active | No | Filter to active (currently trading) companies only. Omit to include all. WARNING: setting this to true on a HISTORICAL query reintroduces survivorship bias — companies that were active on as_of_date but later went bankrupt or got acquired will be filtered out. Leave unset for true PIT backtests. | |
| as_of_date | No | Historical date (YYYY-MM-DD) for survivorship-free universe construction. For an index: uses index_membership join/leave dates — companies that entered after this date are excluded; companies later removed are kept. For sector: uses security valid_from/valid_to. Omit for the current universe. | |
| as_of_basis | No | Which date column to filter on for historical universe construction. 'effective' (default) — match against effective_date / removal_date (first trading day as a member; passive index replication). 'announcement' — match against announcement_date / removal_announcement_date (the day S&P publicly announced the change; use for inclusion-arb backtests). 'announcement' rows with NULL announcement_date are silently skipped — pre-2015 changes generally lack press-release coverage. | effective |
| include_share_classes | No | When false (default), the universe is collapsed to one row per CIK — matches how index providers count constituents (BRK is one S&P 500 member, not two for BRK-A + BRK-B). When true, every share-class row is returned (GOOG and GOOGL emit separate rows, etc.). Use only for security-level (not company-level) analysis. |
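A backtest-oriented sketch following the workflow in the description; the `data` row keys and the `_meta.pit_safe` placement follow the text above but are assumptions.

```python
# call_tool: hypothetical MCP tools/call wrapper (see the compare_periods sketch).
# Leave is_active unset: filtering to currently-active names on a historical
# date would reintroduce the survivorship bias this tool exists to avoid.
universe = call_tool("get_pit_universe", {
    "index": "sp500",
    "as_of_date": "2015-06-30",
    "as_of_basis": "effective",   # first trading day as a member
    "limit": 550,
})

if not universe.get("_meta", {}).get("pit_safe", False):
    print("Some membership rows are medium/low confidence; review them first.")

tickers = [row["ticker"] for row in universe.get("data", [])]
# Next step per the description: pull bulk Parquet for these tickers via
# get_compute_ready_stream (once live) or the Python SDK.
```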
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, destructiveHint=false, so the safety profile is clear. Description adds valuable context: explains how historical data is constructed (using index_membership.parquet with accurate join/leave dates), which is beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise yet comprehensive: 6 sentences covering purpose, usage, key implementation detail, output fields, recommended workflow, and availability. Front-loaded with the most critical information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 parameters with 100% schema coverage, no output schema, and annotations present, the description fully addresses what the agent needs: purpose, when to use, how it works internally, output fields, and relationship to sibling tools. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description adds meaning by explaining the purpose of 'as_of_date' (historical date for survivorship-free universe) and 'index' (returns S&P 500 constituents). However, it doesn't detail all parameter behaviors (e.g., sector substring match). Slight additional value over schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get a survivorship-free universe of companies valid on a specific historical date.' It specifies the verb (get), resource (universe of companies), and key property (survivorship-free, point-in-time). Differentiates from siblings like 'screen_universe' and 'get_compute_ready_stream' by highlighting its role as the first step in a backtest and its historical validity focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use: 'Use as the first step of a quantitative backtest before calling get_compute_ready_stream.' Also states what not to expect: no survivorship bias, uses historical data, not current snapshots. Provides context for bias-free research and mentions plan availability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sec_filing_links · SEC Filing Links · A · Read-only · Idempotent
Get direct links to original SEC EDGAR filings for any US public company. Returns two per-filing deep links: sec_url (the EDGAR filing-index page listing every document) and viewer_url (the SEC iXBRL inline-viewer for the specific accession). Supported form_types (enum): 10-K, 10-Q, 8-K, 20-F, 10-K/A, 10-Q/A. Other forms (6-K, DEF 14A, Form 4, 13F) are NOT yet exposed by this tool — use describe_schema to confirm the parquet has them, then read raw via the SDK. 8-K item codes are filterable via event_types (e.g. ['2.02'] for earnings, ['1.01'] for material agreements, ['5.02'] for officer changes). PIT-safe — filings are filtered by accepted_at, never by report_date alone. Use this instead of verify_fact_lineage when you want a list of filings; use verify_fact_lineage when you want one specific fact-to-filing trace. Available on all plans.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of filings to return (1–50). Defaults to 10. | |
| ticker | Yes | Stock ticker symbol, e.g. AAPL, MSFT | |
| end_date | No | Inclusive upper bound on filing_date (YYYY-MM-DD). E.g. '2023-12-31'. | |
| form_types | No | Filing form types to include. Defaults to 10-K and 10-Q. | |
| start_date | No | Inclusive lower bound on filing_date (YYYY-MM-DD). E.g. '2023-01-01'. | |
| event_types | No | 8-K item codes to filter by. E.g. ['1.01'] for material agreements, ['2.01'] for asset acquisitions, ['5.02'] for director/officer changes. Only relevant when form_types includes '8-K'. |
Output Schema
| Name | Required | Description |
|---|---|---|
| _meta | Yes | Provenance envelope — data lineage for every MCP response |
| ticker | Yes | |
| filings | Yes | |
| filings_returned | Yes |
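A sketch for pulling 8-K earnings releases over a date window; arguments follow the parameter table, and the per-filing keys (sec_url, viewer_url) come from the description.

```python
# call_tool: hypothetical MCP tools/call wrapper (see the compare_periods sketch).
resp = call_tool("get_sec_filing_links", {
    "ticker": "AAPL",
    "form_types": ["8-K"],
    "event_types": ["2.02"],      # earnings results item code
    "start_date": "2023-01-01",
    "end_date": "2023-12-31",
    "limit": 20,
})
for filing in resp["filings"]:
    print(filing.get("sec_url"), filing.get("viewer_url"))
```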
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and non-destructive nature. The description adds transparency by specifying the exact return content (EDGAR interactive viewer URL and raw filing index URL) and availability on all plans, complementing the annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at three sentences, front-loaded with the core purpose, and every sentence adds value: purpose, output details, and availability. No waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, return values need no further explanation. The description covers the tool's scope, constraints (US public companies, EDGAR links), and availability. However, it could briefly mention that the tool is limited to EDGAR filings, which is already implied, making it largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description adds some context (e.g., form types listed, 8-K event codes explained) but does not significantly enhance beyond schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns direct links to original SEC EDGAR filings for US public companies, specifying form types (10-K, 10-Q, etc.) and output details (interactive viewer URL and raw index URL). This differentiates it from siblings like search_filing_text (which searches content) and get_company_fundamentals (which provides metrics).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates when to use this tool (when needing links to filings) and mentions availability on all plans, but does not explicitly state when not to use it or compare to alternatives. For example, it doesn't contrast with search_filing_text for content search or get_company_fundamentals for financial data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_valuation_metrics · Valuation Metrics · A · Read-only · Idempotent
Get comprehensive valuation and profitability metrics for a US public company. Returns per-period data combining computed ratios (gross_margin, operating_margin, net_margin, ROE, ROA, ROIC, debt_to_equity, FCF, FCF margin) with optional pre-computed DCF model inputs (WACC, fcf_base, stage1_growth_rate, terminal_growth_rate, dcf_value_per_share, ddm_value_per_share). Profitability + cash flow + leverage fields are sourced from fact.parquet (PIT-safe via accepted_at). DCF/DDM fields are sourced from valuation.parquet (pipeline-computed; recomputed each pipeline run, so NOT strictly PIT-safe). DCF/DDM fields are commonly null — for newer tickers, transition periods, or before the valuation pipeline has run. Each null carries a null_reasons[field] entry with one of: VALUATION_NOT_COMPUTED, INPUT_MISSING, INPUT_NEGATIVE, PIPELINE_PENDING. ALWAYS check null_reasons before assuming a field is zero — null != 0. For agents needing strict PIT DCF reasoning, use the SDK or compute DCF inputs from get_company_fundamentals directly. Use this instead of get_financial_ratios when DCF/intrinsic value matters; use get_financial_ratios when you only need the ratio table without DCF wiring. Available on all plans.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of periods to return (1–40). Defaults to 5. | |
| period | No | Filing period granularity. Annual uses 10-K; quarterly uses 10-Q. | annual |
| ticker | Yes | Stock ticker symbol, e.g. AAPL, MSFT | |
| as_of_date | No | Point-in-time date (YYYY-MM-DD). Only returns data with accepted_at on or before this date. Eliminates look-ahead bias for backtesting. | |
| fiscal_year | No | Fiscal year (YYYY). Omit to return most recent periods. |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | |
| _meta | Yes | Provenance envelope — data lineage for every MCP response |
| period | Yes | |
| ticker | Yes | |
| as_of_date | Yes | |
| periods_returned | Yes |
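A sketch of the null-handling the description insists on; the per-period field names mirror the description, but the exact response layout is assumed.

```python
# call_tool: hypothetical MCP tools/call wrapper (see the compare_periods sketch).
resp = call_tool("get_valuation_metrics", {"ticker": "SNOW", "limit": 3})

for period in resp["data"]:
    dcf = period.get("dcf_value_per_share")
    if dcf is None:
        # null != 0; always read null_reasons before interpreting a gap.
        reason = period.get("null_reasons", {}).get("dcf_value_per_share")
        print(f"{period.get('period')}: DCF unavailable ({reason})")
    else:
        print(f"{period.get('period')}: DCF/share = {dcf}")
```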
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, so the description's behavioral burden is lower. The description adds value by disclosing point-in-time safety with as_of_date and that data is 'per-period' and includes both computed ratios and pre-computed DCF inputs. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured paragraph. The first sentence front-loads the core purpose. Every sentence adds unique information: metrics list, as_of_date behavior, plan availability. No redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 params, output schema present), the description is complete. It explains what data is returned (computed ratios + DCF inputs), the point-in-time feature, and plan availability. With an output schema, return value details are not needed. The description covers all necessary aspects for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the purpose of as_of_date ('eliminates look-ahead bias for backtesting') and clarifying that limit controls 'maximum number of periods'. However, it does not elaborate on all parameters beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'comprehensive valuation and profitability metrics for a US public company' and lists specific metrics (gross margin, ROE, DCF, etc.). The verb 'Get' plus the resource 'valuation metrics' precisely defines purpose. It distinguishes itself from siblings like get_company_fundamentals and get_financial_ratios by being comprehensive and including DCF/DDM values.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions that as_of_date enables 'historical analysis and backtesting without look-ahead bias', guiding when to use that parameter. It also says 'Available on all plans', clarifying no plan restrictions. However, it does not explicitly contrast with sibling tools or state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
screen_universe · Screen Universe by Factor Scores · A · Read-only · Idempotent
Rank companies by cross-sectional factor scores from factor_scores.parquet. Returns the 10 underlying factors (roe, gross_margin, operating_margin, net_profit_margin, revenue_growth_yoy, fcf_to_assets, debt_to_equity, asset_turnover, current_ratio, piotroski_f_score) plus their percentile ranks (1.0 = best in universe; 0.0 = worst). composite_rank is the equal-weight average of all 10 non-null ranks — a one-number screening shortcut. For single-factor sorts, use the specific rank column (e.g. sort_by=roe_rank); for a balanced multi-factor view, use sort_by=composite_rank (default). Two modes: full-universe (omit ticker) or single-entity (ticker set — useful for spot-checking ONE company's factor profile without scrolling the whole leaderboard). Sector filter is SIC-derived (GICS-aligned labels, not licensed GICS — see get_pit_universe description for the caveat). Use this instead of get_financial_ratios when you want CROSS-SECTIONAL comparison (rank vs peers); use get_financial_ratios when you want one company's ratios over time. Available on every plan — sample returns the subset covered by the sample bucket.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1-100). Defaults to 25. | |
| sector | No | Filter to a specific sector (case-insensitive partial match). E.g. 'Technology', 'Healthcare'. | |
| ticker | No | If provided, show only this ticker's factor scores (single-entity mode). Omit to screen the full universe. | |
| sort_by | No | Which factor rank to sort by. Options: roe_rank, gross_margin_rank, operating_margin_rank, net_profit_margin_rank, revenue_growth_yoy_rank, fcf_to_assets_rank, debt_to_equity_rank, asset_turnover_rank, current_ratio_rank, piotroski_f_score_rank, composite_rank. Defaults to composite_rank. | composite_rank |
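A screening sketch plus the composite_rank arithmetic as defined in the description (equal-weight average of the non-null percentile ranks); the per-row rank keys are assumptions.

```python
# call_tool: hypothetical MCP tools/call wrapper (see the compare_periods sketch).
resp = call_tool("screen_universe", {
    "sector": "Technology",
    "sort_by": "composite_rank",
    "limit": 10,
})

def composite_rank(ranks):
    """Equal-weight average of the non-null factor ranks (1.0 = best)."""
    vals = [v for v in ranks.values() if v is not None]
    return sum(vals) / len(vals) if vals else None

# Spot-check the published composite against a local recomputation.
for row in resp.get("data", []):
    local = composite_rank({k: v for k, v in row.items()
                            if k.endswith("_rank") and k != "composite_rank"})
    print(row.get("ticker"), row.get("composite_rank"), local)
```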
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint false, so the agent knows it's a safe, idempotent read operation. The description adds value by stating the return format ('percentile ranks where 1.0 = best') and sample bucket behavior, which goes beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the core purpose, then adding key details. Every sentence adds value: first sentence defines the tool, second sentence adds key features and availability. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description compensates by specifying return values (percentile ranks) and key features (filtering, sorting, single-entity mode). With 4 parameters all fully described in the schema, the description is complete for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema documents all parameters thoroughly. The description adds context about 'single-entity mode' for ticker and mentions sorting by 'any factor rank column,' which aligns with the sort_by parameter's listed options. However, it doesn't elaborate on each parameter beyond what the schema provides, but this is acceptable given full coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool ranks companies by cross-sectional factor scores, with a specific verb ('rank') and resource ('companies by factor scores'). It distinguishes itself from siblings like 'get_pit_universe' or 'get_company_fundamentals' by focusing on factor-based screening and percentile ranks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions filtering by sector, sorting by factor rank, and single-entity mode via ticker, and it contrasts the tool with get_financial_ratios (cross-sectional ranking versus one company's ratios over time). It also states 'Available on every plan — sample returns the subset covered by the sample bucket,' which provides context on availability. It does not, however, discuss other alternatives such as search_companies.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_companiesSearch CompaniesARead-onlyIdempotentInspect
Search for US public companies by name, ticker symbol, CIK (SEC identifier), or SIC industry code. Returns ticker, company name, sector, industry, exchange, and current S&P 500 membership status. Use this tool to resolve a company name to ticker/CIK before calling get_company_fundamentals, get_valuation_metrics, or other tools that require a ticker — they do not fuzzy-match company names.
Current S&P 500 listings: To list current S&P 500 members, call this tool with is_sp500=true (each result also carries an sp500_member flag if you prefer to filter client-side). This returns the snapshot as of query time.
Historical index universes: Use get_pit_universe instead when you need a survivorship-free historical universe (S&P 500 / Russell 1000/2000/3000 as of a specific past date), not just the current snapshot.
Data details: sic_code is the 4-digit SIC; industry is the human-readable label. sector is SIC-derived with GICS-style labels — NOT licensed GICS, so industrial conglomerates may map differently from official GICS (e.g. 3M → 'Health Care' by SIC vs Industrials by GICS). S&P 500 membership is sourced from index_membership.parquet (current SP500 = index_name='SP500' AND removal_date IS NULL). Available on all plans.
| Name | Required | Description | Default |
|---|---|---|---|
| cik | No | SEC CIK identifier (exact match). E.g. '0000320193' for Apple. | |
| limit | No | Maximum number of results to return (1–50). Defaults to 25. | |
| query | No | Free-text search over company name and ticker. Case-insensitive. E.g. 'Apple', 'AAPL', 'Microsoft', 'semiconductor'. | |
| is_sp500 | No | Filter to current S&P 500 members only. | |
| sic_code | No | 4-digit SIC industry code. E.g. '7372' for Prepackaged Software. | |
| is_active | No | Filter to active (currently trading) companies only. |
Output Schema
| Name | Required | Description |
|---|---|---|
| _meta | Yes | Provenance envelope — data lineage for every MCP response |
| query | Yes | |
| companies | Yes | |
| results_returned | Yes |
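For concreteness, here is a minimal sketch of the request bodies an agent might send, assuming a JSON-RPC 2.0 tools/call envelope (an MCP client normally builds this envelope for you); parameter names are taken from the table above.

```python
import json

def tools_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Resolve a free-text company name to ticker/CIK before calling ticker-based tools.
resolve = tools_call("search_companies", {"query": "Apple", "limit": 5})

# Snapshot of current S&P 500 members, filtered server-side.
sp500 = tools_call("search_companies", {"is_sp500": True, "limit": 50}, request_id=2)

print(json.dumps(resolve, indent=2))
print(json.dumps(sp500, indent=2))
```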
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, openWorldHint, idempotentHint, non-destructive. The description adds context about plan availability ('Available on all plans') and search capabilities, but doesn't need to repeat annotations. Minor gap: no mention of search behavior (e.g., partial matching) beyond case-insensitivity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short paragraphs, all substantive: what it does and returns, how to get the current S&P 500 snapshot, when to reach for get_pit_universe instead, and data caveats plus availability. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (search with filters, no nested objects), the description fully covers the purpose, usage, and outputs. Output schema exists, so return details need not be in description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and each parameter already has a description. The tool description adds no additional parameter meaning beyond what's in the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches US public companies by multiple identifiers (name, ticker, CIK, SIC) and lists returned fields (ticker, company name, sector, etc.). It distinguishes itself from sibling tools by being a prerequisite lookup tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly directs agents to use it to resolve a company name to ticker/CIK before calling tools that require a ticker, and points to get_pit_universe as the alternative for historical index universes. The purpose is well-scoped.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_filing_textSearch Filing TextARead-onlyIdempotentInspect
Semantic search over SEC 10-K and 10-Q narrative sections — Risk Factors, MD&A, Business, Legal Proceedings, and Controls & Procedures. Returns the top passages most similar in meaning to the query, with links back to the source filing. STATUS: pending — Vectorize backfill in progress. Calls return a structured FEATURE_NOT_AVAILABLE error with eta=2026-09-01.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max passages to return (1–20). Defaults to 5. | |
| query | Yes | Natural-language query. Phrase it as a statement rather than a question for best recall. | |
| ticker | No | Restrict search to one company. Omit for cross-company search. | |
| section | No | Restrict to a single section label (e.g., 'risk_factors'). Omit for all sections. | |
| form_type | No | Restrict to one form type. |
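Because calls currently return a structured FEATURE_NOT_AVAILABLE error, an agent should degrade gracefully. The sketch below assumes the error surfaces as a dict with code and eta fields and that a successful response will carry a passages list; both shapes are inferred from the description above, not confirmed.

```python
# Example arguments; the query is phrased as a statement per the schema hint.
arguments = {
    "query": "supply chain concentration risk in Asia",
    "ticker": "AAPL",
    "section": "risk_factors",
    "limit": 5,
}

def handle_filing_search(result: dict) -> list:
    """Return passages on success; fall back while the Vectorize backfill is pending."""
    error = result.get("error") or {}
    if error.get("code") == "FEATURE_NOT_AVAILABLE":
        # Assumed error shape: {"code": "FEATURE_NOT_AVAILABLE", "eta": "2026-09-01"}.
        print(f"search_filing_text unavailable until {error.get('eta')}; "
              "falling back to get_sec_filing_links for the raw documents.")
        return []
    return result.get("passages", [])  # assumed key for the returned passages
```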
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds context: it returns top passages by semantic similarity, includes links to source filings, and notes that the feature launches in Q3 2026 (currently unavailable). This goes beyond annotations. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: it states purpose and scope, the return format (top passages with links back to the source filing), and the availability caveat with a structured error and ETA. No fluff. Could be slightly more structured by separating usage guidance, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters (100% schema coverage), good annotations, and no output schema, the description covers the essential: what it searches, what it returns, and a critical caveat (launch date). It omits details like pagination or error behavior, but these are less critical with a read-only tool. A dedicated user might want more, but it's reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds no extra parameter details beyond what the schema provides (e.g., query phrasing hint is in schema). It does mention returning 'top passages' and 'links', which is output behavior, not parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs semantic search over specific SEC filing sections (Risk Factors, MD&A, etc.) and returns top passages with links. It differentiates from siblings like 'get_sec_filing_links' (which retrieves links, not text) and 'screen_universe' (which screens, not searches). However, it doesn't explicitly distinguish from 'search_companies', though the latter searches companies, not filing text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when a user wants to find passages in SEC filings relevant to a query. It does not explicitly say when to use this tool vs. alternatives like 'get_sec_filing_links' or 'search_companies'. No exclusion criteria or prerequisites are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_fact_lineageVerify Fact LineageARead-onlyIdempotentInspect
Trace a specific financial data point back to its source SEC EDGAR filing with full provenance. Use this when you need to verify WHERE a number came from — the exact filing, form type, accession ID, and filing date. Returns the complete lineage: entity, concept, period, value, accession ID, filing URL, and form type.
Two lookup modes: (1) by fact_id (SHA-256 hash of entity_id|accession_id|concept|period_end|unit) for deterministic identity; or (2) by concept name (e.g., TotalRevenue, NetIncome, EPS_Diluted, TotalAssets, OperatingCashFlow) to retrieve the most recently reported fact for a ticker. Optionally pin a point-in-time cutoff via as_of_date (YYYY-MM-DD format; the deprecated period_end parameter is still accepted for one release) — returns the latest filing accepted by SEC on or before that date, with no look-ahead bias.
Use this INSTEAD OF get_company_fundamentals when the user explicitly asks which filing reported a number, wants to verify the source document, or needs the accession ID and form type. get_company_fundamentals returns raw metrics across periods; verify_fact_lineage returns the filing metadata that backs a single fact.
Provide either fact_id or concept (required). Point-in-time safe via accepted_at semantics. Available on all plans.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock ticker symbol, e.g. AAPL, MSFT, BRK.B | |
| concept | No | Standard concept name to look up the most recently known fact for. Use this when you don't have a fact_id. Allowed: TotalRevenue, NetIncome, EPS_Diluted, TotalAssets, TotalLiabilities, OperatingCashFlow, CAPEX. Provide either concept OR fact_id. | |
| fact_id | No | Deterministic fact identity hash: SHA-256(entity_id|accession_id|concept|period_end|unit). 64-char lowercase hex. Use this when you already have the hash from a previous query. Provide either fact_id OR concept (not both required, but at least one must be set). | |
| as_of_date | No | Point-in-time cutoff (YYYY-MM-DD) used with `concept`. Returns the most recently known fact whose 10-K / 10-Q filing was accepted by SEC on or before this date — true PIT, no lookahead bias. Any calendar date works (not limited to fiscal-period closes); the latest preceding filing is returned. This is the canonical name shared by every other PIT tool in the suite; supersedes the legacy `period_end` parameter on this tool. | |
| period_end | No | [DEPRECATED — pass `as_of_date` instead.] Filing-acceptance cutoff (YYYY-MM-DD) used with `concept`. The name is misleading — it actually filters on filing accepted_at, not on the period_end of the returned fact (the returned fact's period_end is whatever the SEC filed, e.g. a Q3 cumulative figure). Kept for one release for backwards compat; new callers MUST use `as_of_date`. |
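To make the fact_id identity concrete, the sketch below reproduces the stated hash recipe and builds a concept-mode call with a PIT cutoff. The field values are illustrative placeholders (the accession ID is made up, and entity_id is assumed here to be the zero-padded CIK), not real filing data.

```python
import hashlib

def fact_id(entity_id: str, accession_id: str, concept: str,
            period_end: str, unit: str) -> str:
    """SHA-256 over 'entity_id|accession_id|concept|period_end|unit', 64-char lowercase hex."""
    key = "|".join([entity_id, accession_id, concept, period_end, unit])
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

# Illustrative only: placeholder accession ID, entity_id assumed to be the padded CIK.
print(fact_id("0000320193", "0000320193-24-000123", "TotalRevenue", "2024-09-28", "USD"))

# Concept-mode lookup pinned to a point-in-time cutoff (no look-ahead bias).
arguments = {
    "ticker": "AAPL",
    "concept": "TotalRevenue",
    "as_of_date": "2020-03-31",  # latest fact accepted by the SEC on or before this date
}
```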
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark the tool as read-only, idempotent, and non-destructive, providing a strong safety profile. The description adds that the fact_id is deterministic, gives the hashing formula, and spells out the point-in-time semantics via accepted_at. It builds on the annotations rather than repeating them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description runs to several short paragraphs, each adding value: the first defines the action and the returned lineage fields, the second explains the two lookup modes and the hash formula, the third gives explicit guidance versus get_company_fundamentals, and the fourth states the input requirement and availability. It is front-loaded with the purpose. The only minor issue is that the availability note ('Available on all plans') is not essential for tool invocation, but it does no harm.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has a clear deterministic operation, no output schema, and sibling tools that are all different (no overlap), the description covers the key aspects: what it does, what it returns, and requirements. It could potentially mention that it does not modify data (but annotations already handle that). Overall, it is complete for a read-only verification tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes all five parameters (ticker, concept, fact_id, as_of_date, and the deprecated period_end) with formats, allowed values, and constraints. The description restates the either/or requirement and the hash formula, which the schema also covers, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool verifies provenance of a financial data point using a deterministic fact_id hash. It specifies the return values (lineage chain with entity, concept, period, value, SEC filing accession ID, and URL), which distinguishes it from sibling tools that operate on financial data but serve different purposes like searching or comparing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool: when the user needs to confirm which SEC EDGAR filing reported a number, and it explicitly says to use it INSTEAD OF get_company_fundamentals in that case. It also states the input requirement (ticker plus either fact_id or concept) and mentions availability on all plans, so the usage guidance is strong for this tool's narrow scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
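Before waiting on detection, you can sanity-check the file yourself. A minimal sketch, assuming the file is already served over HTTPS at your domain (replace the hostname; Glama still performs its own verification):

```python
import json
import urllib.request

URL = "https://example.com/.well-known/glama.json"  # replace with your domain

# Fetch and parse the well-known file exactly as a verifier would.
with urllib.request.urlopen(URL, timeout=10) as resp:
    doc = json.load(resp)

emails = [m.get("email") for m in doc.get("maintainers", [])]
assert doc.get("$schema") == "https://glama.ai/mcp/schemas/connector.json", "unexpected $schema"
assert emails and all(emails), "maintainers[].email is required"
print("glama.json parses; maintainer emails:", emails)
```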
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.