Shibui Finance
Server Details
64 years of US stock market data — fundamentals, technicals, and backtests. Ask in plain English.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
7 tools

get_database_schema
A · Read-only · Idempotent
REQUIRED for US stock/financial queries; do NOT use web search for market data.
Use this tool when the user asks about stock prices, revenue, earnings, earnings surprises (EPS estimates vs actuals), margins, P/E ratios, valuations, dividends, balance sheets, cash flow, technical indicators (RSI, MACD, SMA), stock screening, company comparisons, sector analysis, or any analysis of US public companies.
Covers 5,200+ NYSE and NASDAQ companies with 64 years of daily prices, quarterly financials, and 56 technical indicators.
Must be called once per session before using stock_data_query or any workflow tool.
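The "once per session" prerequisite can be made concrete with a small client-side sketch. The `ShibuiSession` class and its `call` method are invented for illustration; only the tool names come from this listing.

```python
# Hypothetical client-side guard for the sequencing rule above:
# get_database_schema must run before stock_data_query or any workflow tool.
# The class and its call() method are illustrative, not a real MCP client.

class ShibuiSession:
    def __init__(self):
        self._schema_loaded = False

    def call(self, tool: str, **args):
        if tool == "get_database_schema":
            self._schema_loaded = True
        elif not self._schema_loaded:
            raise RuntimeError(
                f"{tool} requires get_database_schema to be called first"
            )
        return {"tool": tool, "args": args}  # stand-in for the real call

session = ShibuiSession()
session.call("get_database_schema")          # satisfies the prerequisite
result = session.call("load_earnings_workflow")
```

The guard only models the ordering constraint; a real client would forward each call over the Streamable HTTP transport.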
| Name | Required | Description | Default |
|---|---|---|---|
| _No parameters_ | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnly, idempotent, non-destructive). Description adds valuable behavioral context: dataset scope (5,200+ companies, 64 years of data, 56 technical indicators) and session lifecycle constraint ('once per session'). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear paragraph breaks. Front-loaded with critical requirement ('REQUIRED'). Long list of financial terms is justified as it helps the LLM recognize relevant queries. Minor verbosity in the indicators list prevents a 5.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a schema discovery tool: covers data coverage, prerequisites, sibling relationships, and domain scope. Output schema exists per context signals, so description need not explain return values. Zero parameter count means no parameter documentation burden.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, setting baseline 4. Description correctly omits parameter discussion since none exist, and does not invent nonexistent constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly indicates this is a schema retrieval tool for US financial data, though it never explicitly states 'Returns the database schema' or similar verb phrase. The purpose is inferred from 'REQUIRED for... queries' and the tool name, with strong differentiation from siblings via the prerequisite instruction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Exemplary guidance: explicitly states when to use ('US stock/financial queries'), when NOT to use alternatives ('do NOT use web search'), and provides precise sequencing ('Must be called once per session before using stock_data_query or any workflow tool'). Lists specific financial use cases (EPS, P/E ratios, technical indicators) to trigger selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
load_comparison_workflow
A · Read-only · Idempotent
Load comparative analysis workflow with advanced query patterns. REQUIRES get_database_schema to be called first — this tool has no schema. Call BEFORE writing SQL when the user asks to compare companies, "X vs Y", peer benchmarking, sector ranking, or relative valuation. Can be combined with other workflow tools.
| Name | Required | Description | Default |
|---|---|---|---|
| _content | No | ## Comparative Analysis Workflow ### Persona You are an analyst producing peer comparison reports. You focus on apples-to-apples comparisons — same sector, similar size, comparable business models. You highlight where a company stands out (positively or negatively) relative to peers. ### Workflow 1. **Validate comparability**: Check that companies are in the same or adjacent gic_sector/gic_sub_industry. If not, note that the comparison is cross-sector and metrics may not be directly comparable. 2. **Size context**: Note market cap differences. A $10B company vs a $500B company will naturally differ in growth rates, margins, and multiples. 3. **Snapshot comparison**: Pull highlights + valuation for all symbols in one query. Compare P/E, EV/EBITDA, margins, ROE. 4. **Trend comparison**: Compare quarterly revenue and earnings trajectories. Are they converging or diverging? 5. **Price performance**: Use P2 pattern for returns over same period. 6. **Relative strengths**: Identify what each company does better. Avoid declaring a "winner" — different investors value different attributes. ### Output Format - **Comparison Overview**: Brief context on why these companies are being compared (same industry, competitors, etc.) - **Snapshot Table**: Side-by-side metrics (inline markdown) - **Key Differentiators**: 2-3 sentences on what distinguishes each - **Trend Context**: Are the gaps widening or narrowing? 
- **Caveats**: Size differences, data staleness, sector mismatches ### Advanced Query Patterns #### C1: Side-by-side snapshot (N companies) ```sql SELECT g.symbol, g.name, g.gic_sub_industry, h.market_capitalization_mln, ROUND(h.pe_ratio, 2) AS pe, ROUND(v.forward_pe, 2) AS fwd_pe, ROUND(v.enterprise_value_ebitda, 2) AS ev_ebitda, ROUND(h.profit_margin * 100, 1) AS margin_pct, ROUND(h.return_on_equity_ttm * 100, 1) AS roe_pct, ROUND(h.quarterly_revenue_growth_yoy * 100, 1) AS rev_growth_pct, ROUND(h.dividend_yield * 100, 2) AS div_yield_pct, ss.percent_insiders, ss.short_percent_float FROM shibui.general_info g INNER JOIN shibui.highlights h ON g.symbol = h.symbol LEFT JOIN shibui.valuation v ON g.symbol = v.symbol LEFT JOIN shibui.share_stats ss ON g.symbol = ss.symbol WHERE g.symbol IN ('AAPL.NASDAQ', 'MSFT.NASDAQ', 'GOOGL.NASDAQ') ORDER BY h.market_capitalization_mln DESC LIMIT 10 ``` #### C2: Revenue and margin trend comparison (last 8 quarters) ```sql SELECT i.symbol, i.date, i.total_revenue, ROUND(i.gross_profit / NULLIF(i.total_revenue, 0) * 100, 1) AS gross_margin_pct, ROUND(i.net_income / NULLIF(i.total_revenue, 0) * 100, 1) AS net_margin_pct, ROUND(i.operating_income / NULLIF(i.total_revenue, 0) * 100, 1) AS op_margin_pct FROM shibui.income_statement_quarterly i WHERE i.symbol IN ('AAPL.NASDAQ', 'MSFT.NASDAQ') AND i.date >= CURRENT_DATE - INTERVAL '2 years' AND i.total_revenue IS NOT NULL ORDER BY i.symbol, i.date DESC LIMIT 20 ``` #### C3: Price performance comparison (multiple timeframes) ```sql WITH prices AS ( SELECT symbol, date, close, ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY date DESC) AS rn, FIRST_VALUE(close) OVER (PARTITION BY symbol ORDER BY date ASC) AS start_90d FROM shibui.stock_quotes WHERE symbol IN ('AAPL.NASDAQ', 'MSFT.NASDAQ', 'GOOGL.NASDAQ') AND date >= CURRENT_DATE - INTERVAL '90 days' ), prices_1y AS ( SELECT symbol, FIRST_VALUE(close) OVER (PARTITION BY symbol ORDER BY date ASC) AS start_1y, ROW_NUMBER() OVER (PARTITION BY 
symbol ORDER BY date DESC) AS rn FROM shibui.stock_quotes WHERE symbol IN ('AAPL.NASDAQ', 'MSFT.NASDAQ', 'GOOGL.NASDAQ') AND date >= CURRENT_DATE - INTERVAL '1 year' ) SELECT p.symbol, ROUND(p.close, 2) AS current_price, ROUND((p.close - p.start_90d) / NULLIF(p.start_90d, 0) * 100, 1) AS return_90d_pct, ROUND((p.close - y.start_1y) / NULLIF(y.start_1y, 0) * 100, 1) AS return_1y_pct FROM prices p LEFT JOIN prices_1y y ON p.symbol = y.symbol AND y.rn = 1 WHERE p.rn = 1 ORDER BY return_90d_pct DESC LIMIT 10 ``` |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent safety profile; description adds critical behavioral context that this 'has no schema' (requires external schema loading) and exposes composability with other workflows. Captures dependency graph information not present in annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four efficient sentences with zero redundancy: (1) purpose definition, (2) hard prerequisite, (3) usage conditions and timing, (4) composability note. Front-loaded with action verb and appropriately sized for the tool's complexity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema (return values need not be described) and rich annotations, the description adequately covers the tool's role in the workflow chain. The only gap is the lack of explicit parameter documentation given 0% schema coverage, though the default value compensates in practice.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage for the _content parameter. Description mentions 'advanced query patterns' which semantically maps to the SQL examples in the parameter's default value, providing minimal grounding. However, it fails to explicitly document that _content contains the workflow definition/instructions or explain the parameter's role in the tool's operation.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Load' with resource 'comparative analysis workflow' and clearly distinguishes from siblings by enumerating specific use cases: 'compare companies', 'X vs Y', 'peer benchmarking', 'sector ranking', or 'relative valuation'. Immediately contrasts with load_earnings_workflow, load_fundamental_workflow, etc.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite chain ('REQUIRES get_database_schema to be called first'), temporal ordering ('Call BEFORE writing SQL'), specific trigger conditions (comparison queries), and sibling interaction pattern ('Can be combined with other workflow tools'). Covers when-to-use and prerequisites comprehensively.
load_earnings_workflow
A · Read-only · Idempotent
Load earnings analysis workflow with advanced query patterns. REQUIRES get_database_schema to be called first — this tool has no schema. Call BEFORE writing SQL when the user asks about earnings results, EPS surprises, beat/miss history, earnings estimates, quarterly reporting, or earnings season analysis. Can be combined with other workflow tools.
| Name | Required | Description | Default |
|---|---|---|---|
| _content | No | ## Earnings Analysis Workflow ### Persona You are an earnings analyst who tracks quarterly results and identifies patterns in surprises. You contextualize EPS numbers with revenue trends and margin shifts — an EPS beat from cost cuts is different from one driven by revenue growth. ### Key Data Notes - `earnings_quarterly`: has eps_actual, eps_estimate, eps_difference, surprise_percent, before_after_market ('AfterMarket'|'BeforeMarket') - `earnings_yearly`: has eps_actual ONLY (no estimates) - Dates in earnings_quarterly are period-end dates (month-end), not announcement dates. Use `report_date` for actual announcement date. - Pair earnings data with income_statement_quarterly for full context (revenue, margins, one-time items) ### Workflow 1. **Recent result**: Latest quarter EPS actual vs estimate + surprise % 2. **Trend**: Last 4-8 quarters of beats/misses (consistency matters) 3. **Revenue context**: Did revenue also beat? Revenue miss + EPS beat = cost-cutting, not organic growth 4. **Margin trajectory**: Are margins expanding or compressing? 5. **Forward estimates**: eps_estimate_current_quarter and next_quarter from highlights table (if available) 6. **Sector context**: How did peers perform in the same quarter? 
### Output Format - **Latest Quarter**: EPS actual vs estimate, surprise %, revenue - **Track Record**: Table of last 4-8 quarters with beat/miss - **Quality Assessment**: Revenue-driven vs cost-driven performance - **Forward Look**: Current estimates (if available) and trend context ### Advanced Query Patterns #### E1: Earnings history with revenue context ```sql WITH eps AS ( SELECT symbol, date, report_date, before_after_market, eps_actual, eps_estimate, surprise_percent, ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY date DESC) AS rn FROM shibui.earnings_quarterly WHERE symbol = 'AAPL.NASDAQ' AND date >= CURRENT_DATE - INTERVAL '2 years' AND eps_actual IS NOT NULL ) SELECT e.date, e.report_date, e.before_after_market, ROUND(e.eps_actual, 2) AS actual, ROUND(e.eps_estimate, 2) AS estimate, ROUND(e.surprise_percent, 1) AS surprise_pct, CASE WHEN e.surprise_percent > 0 THEN 'Beat' ELSE 'Miss' END AS result, i.total_revenue, ROUND(i.net_income / NULLIF(i.total_revenue, 0) * 100, 1) AS net_margin_pct FROM eps e LEFT JOIN shibui.income_statement_quarterly i ON e.symbol = i.symbol AND e.date = i.date WHERE e.rn <= 8 ORDER BY e.date DESC LIMIT 8 ``` #### E2: Sector earnings season summary (latest quarter) ```sql WITH latest_eps AS ( SELECT e.symbol, e.date, e.eps_actual, e.eps_estimate, e.surprise_percent, ROW_NUMBER() OVER (PARTITION BY e.symbol ORDER BY e.date DESC) AS rn FROM shibui.earnings_quarterly e WHERE e.date >= CURRENT_DATE - INTERVAL '4 months' AND e.eps_actual IS NOT NULL AND e.eps_estimate IS NOT NULL ) SELECT g.gic_sector, COUNT(*) AS reported, COUNT(*) FILTER (WHERE le.surprise_percent > 0) AS beats, COUNT(*) FILTER (WHERE le.surprise_percent < 0) AS misses, ROUND(AVG(le.surprise_percent), 1) AS avg_surprise_pct, ROUND(MEDIAN(le.surprise_percent), 1) AS median_surprise_pct FROM latest_eps le INNER JOIN shibui.general_info g ON le.symbol = g.symbol WHERE le.rn = 1 AND g.type = 'Common Stock' GROUP BY g.gic_sector ORDER BY avg_surprise_pct DESC LIMIT 15 
``` #### E3: Consecutive beat streak leaders ```sql WITH recent_eps AS ( SELECT symbol, date, surprise_percent, ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY date DESC) AS rn FROM shibui.earnings_quarterly WHERE date >= CURRENT_DATE - INTERVAL '2 years' AND eps_actual IS NOT NULL AND eps_estimate IS NOT NULL ), streaks AS ( SELECT symbol, MIN(CASE WHEN surprise_percent <= 0 THEN rn END) AS first_miss_rn FROM recent_eps WHERE rn <= 8 GROUP BY symbol ) SELECT g.symbol, g.name, g.gic_sector, COALESCE(s.first_miss_rn - 1, 8) AS consecutive_beats, h.market_capitalization_mln FROM streaks s INNER JOIN shibui.general_info g ON s.symbol = g.symbol INNER JOIN shibui.highlights h ON g.symbol = h.symbol WHERE COALESCE(s.first_miss_rn - 1, 8) >= 4 AND g.type = 'Common Stock' AND h.market_capitalization_mln > 1000 ORDER BY consecutive_beats DESC, h.market_capitalization_mln DESC LIMIT 20 ``` |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructiveHint=false. The description adds critical behavioral context: 'this tool has no schema' (explains external dependency), 'Can be combined with other workflow tools' (interaction pattern), and implies it provides methodology rather than raw data. Could better clarify that it returns/guides the SQL workflow content.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste. Front-loaded with purpose, followed by the hard requirement (REQUIRES), usage timing (BEFORE), trigger examples, and combinability. Strategic capitalization draws attention to critical sequencing constraints without verbose explanation.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex workflow nature and presence of output schema, the description appropriately focuses on usage prerequisites and domain scope (earnings-specific). It correctly notes the schema dependency gap. Could improve by stating that the tool populates the analytical context with SQL patterns/workflows, but the content is largely complete for agent selection.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (the _content parameter lacks description). The description compensates partially by explaining what the workflow contains ('advanced query patterns', SQL examples implicitly), but never mentions the _content parameter or explains that the workflow definition is passed/loaded via this parameter. Baseline 3 for high schema coverage not met; description explains concept but not parameter mechanics.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Load[s] earnings analysis workflow with advanced query patterns' — specific verb (load) and resource (earnings workflow). It distinguishes from siblings (load_comparison_workflow, load_technical_workflow, etc.) by specifying 'earnings' and listing earnings-specific use cases (EPS surprises, beat/miss history). Minor deduction because 'load' could ambiguously mean 'execute' vs 'retrieve instructions', though context clarifies.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent guidance: explicit prerequisite ('REQUIRES get_database_schema to be called first'), clear timing ('Call BEFORE writing SQL'), specific trigger conditions ('when the user asks about earnings results, EPS surprises...'), and combinability note ('Can be combined with other workflow tools'). Provides complete decision framework for the agent.
load_fundamental_workflow
A · Read-only · Idempotent
Load fundamental analysis workflow with advanced query patterns. REQUIRES get_database_schema to be called first — this tool has no schema. Call BEFORE writing SQL when the user asks about company valuation, financial health, investment quality, earnings trends, profitability, "is X a good buy", or any deep-dive company analysis. Can be combined with other workflow tools.
| Name | Required | Description | Default |
|---|---|---|---|
| _content | No | ## Fundamental Analysis Workflow ### Persona You are a senior equity research analyst. Your analysis should be structured, evidence-based, and balanced. Lead with conclusions, support with data, and always note limitations and risks. ### Workflow (follow in order) 1. **Identify**: Look up symbol via general_info. Note gic_sector, gic_sub_industry, full_time_employees, ipo_date. 2. **Current Snapshot**: Pull highlights + valuation + share_stats for the symbol. Note pe_ratio, profit_margin, roe, market_cap. 3. **Financial Trends** (last 8 quarters): - Revenue + margin trajectory (income_statement_quarterly) - Balance sheet health: current ratio, debt/equity, cash position - Cash flow: operating CF, FCF, capex intensity, SBC as % of revenue 4. **Earnings Quality**: - EPS surprise history (earnings_quarterly, last 4-8 quarters) - Revenue growth vs earnings growth (divergence = red flag) - SBC relative to net income (>50% = dilution concern) 5. **Peer Comparison**: - Find 3-5 peers in same gic_sub_industry with similar market cap - Compare: P/E, profit margin, ROE, revenue growth, FCF yield - Use sector benchmark pattern (P9) for context 6. **Valuation Assessment**: - P/E vs peers and sector average - EV/EBITDA vs peers (from valuation table) - PEG ratio if available - Note: this data cannot produce a DCF — no forward estimates beyond 1-year EPS. Be honest about this limitation. 
### Output Format Structure your response as: - **Summary** (2-3 sentences: bull case, bear case, overall lean) - **Key Metrics** (inline markdown table) - **Financial Trends** (what direction are revenues, margins, cash flow heading) - **Peer Context** (where does this company sit vs competitors) - **Risks & Limitations** (data gaps, staleness, what you can't assess) ### Advanced Query Patterns #### F1: Quarterly financial trend with YoY and QoQ growth ```sql WITH quarterly AS ( SELECT symbol, date, total_revenue, gross_profit, net_income, operating_income, research_development, ebitda, LAG(total_revenue, 1) OVER (PARTITION BY symbol ORDER BY date) AS prev_q_rev, LAG(total_revenue, 4) OVER (PARTITION BY symbol ORDER BY date) AS yoy_rev FROM shibui.income_statement_quarterly WHERE symbol = 'AAPL.NASDAQ' AND date >= CURRENT_DATE - INTERVAL '3 years' AND total_revenue IS NOT NULL ) SELECT date, total_revenue, gross_profit, net_income, ROUND(gross_profit / NULLIF(total_revenue, 0) * 100, 1) AS gross_margin_pct, ROUND(net_income / NULLIF(total_revenue, 0) * 100, 1) AS net_margin_pct, ROUND(research_development / NULLIF(total_revenue, 0) * 100, 1) AS rd_pct, ROUND((total_revenue - prev_q_rev) / NULLIF(prev_q_rev, 0) * 100, 1) AS qoq_pct, ROUND((total_revenue - yoy_rev) / NULLIF(yoy_rev, 0) * 100, 1) AS yoy_pct FROM quarterly WHERE prev_q_rev IS NOT NULL ORDER BY date DESC LIMIT 12 ``` #### F2: Cash flow quality assessment ```sql WITH cf AS ( SELECT symbol, date, total_cash_from_operating_activities AS op_cf, capital_expenditures AS capex, free_cash_flow AS fcf, stock_based_compensation AS sbc, dividends_paid, net_borrowings, ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY date DESC) AS rn FROM shibui.cash_flow_quarterly WHERE symbol = 'AAPL.NASDAQ' AND date >= CURRENT_DATE - INTERVAL '3 years' ), inc AS ( SELECT symbol, date, net_income, total_revenue FROM shibui.income_statement_quarterly WHERE symbol = 'AAPL.NASDAQ' AND date >= CURRENT_DATE - INTERVAL '3 years' ) 
SELECT cf.date, cf.op_cf, cf.fcf, cf.capex, cf.sbc, inc.net_income, ROUND(cf.op_cf / NULLIF(inc.net_income, 0), 2) AS cf_to_earnings_ratio, ROUND(cf.sbc / NULLIF(inc.total_revenue, 0) * 100, 1) AS sbc_pct_of_revenue, ROUND(ABS(cf.capex) / NULLIF(cf.op_cf, 0) * 100, 1) AS capex_intensity_pct FROM cf INNER JOIN inc ON cf.symbol = inc.symbol AND cf.date = inc.date WHERE cf.rn <= 8 ORDER BY cf.date DESC LIMIT 8 ``` #### F3: Peer comparison (same sub-industry, similar size) ```sql WITH target AS ( SELECT g.gic_sub_industry, h.market_capitalization_mln FROM shibui.general_info g INNER JOIN shibui.highlights h ON g.symbol = h.symbol WHERE g.symbol = 'AAPL.NASDAQ' ) SELECT g.symbol, g.name, g.gic_sub_industry, h.market_capitalization_mln, ROUND(h.pe_ratio, 2) AS pe, ROUND(h.profit_margin * 100, 1) AS margin_pct, ROUND(h.return_on_equity_ttm * 100, 1) AS roe_pct, ROUND(h.quarterly_revenue_growth_yoy * 100, 1) AS rev_growth_pct, v.enterprise_value_ebitda AS ev_ebitda FROM shibui.general_info g INNER JOIN shibui.highlights h ON g.symbol = h.symbol LEFT JOIN shibui.valuation v ON g.symbol = v.symbol CROSS JOIN target t WHERE g.gic_sub_industry = t.gic_sub_industry AND g.type = 'Common Stock' AND h.market_capitalization_mln > t.market_capitalization_mln * 0.1 ORDER BY h.market_capitalization_mln DESC LIMIT 15 ``` #### F4: Balance sheet health over time ```sql WITH bs AS ( SELECT symbol, date, total_current_assets, total_current_liabilities, cash_and_short_term_investments, total_liab, total_stockholder_equity, long_term_debt, net_debt, ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY date DESC) AS rn FROM shibui.balance_sheet_quarterly WHERE symbol = 'AAPL.NASDAQ' AND date >= CURRENT_DATE - INTERVAL '3 years' ) SELECT date, ROUND(total_current_assets / NULLIF(total_current_liabilities, 0), 2) AS current_ratio, ROUND(total_liab / NULLIF(total_stockholder_equity, 0), 2) AS debt_to_equity, cash_and_short_term_investments AS cash_stinv, long_term_debt, net_debt, 
total_stockholder_equity FROM bs WHERE rn <= 8 ORDER BY date DESC LIMIT 8 ``` |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical dependency context beyond annotations: discloses that the tool lacks database schema ('this tool has no schema'), explaining the hard prerequisite. Annotations cover safety (readOnly/idempotent), while description adds functional context about query patterns and workflow structure.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured and front-loaded: 4 sentences covering purpose, prerequisite, trigger conditions, and combinability. No redundancy, no tautology, every clause earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a workflow tool: covers prerequisites, usage timing, and sibling relationships. Output schema exists (per context signals), so return values need not be explained. Could slightly improve by explicitly stating it returns workflow instructions, but adequate given complexity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage for the single `_content` parameter. Description partially compensates by characterizing the content ('advanced query patterns', 'workflow'), but does not explicitly explain that `_content` contains the workflow template or how the parameter should be used. Baseline 3 appropriate given some semantic hints provided.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Load' + resource 'fundamental analysis workflow' + distinguishes from siblings (technical, earnings, comparison, screening workflows via explicit 'fundamental' scope). Mentions 'advanced query patterns' to differentiate content type.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Exceptional guidance: explicit prerequisite ('REQUIRES get_database_schema to be called first'), explicit trigger conditions ('when the user asks about company valuation...'), ordering constraint ('Call BEFORE writing SQL'), and combination guidance ('Can be combined with other workflow tools').
load_screening_workflow
A · Read-only · Idempotent
Load quantitative screening workflow with advanced query patterns. REQUIRES get_database_schema to be called first — this tool has no schema. Call BEFORE writing SQL when the user asks to find, screen, rank, or filter stocks by multiple criteria — "find stocks that...", "top N by...", sector-wide ranking, or any multi-factor selection. Can be combined with other workflow tools.
| Name | Required | Description | Default |
|---|---|---|---|
| _content | No | Workflow document loaded into context | Full screening guide (reproduced below) |

Default value of `_content`:

## Quantitative Screening Workflow

### Persona
You are a quantitative analyst building stock screens. You think carefully about universe definition, filter interactions, survivorship bias, and result quality. You always explain what the screen found AND what it excluded.

### Workflow
1. **Define universe**: Start with base filters:
   - `g.type = 'Common Stock'` (excludes ADRs, CEFs, REITs, MLPs)
   - Optionally add: `g.country_iso = 'US'` for US-only
   - Optionally add: `h.market_capitalization_mln > X` for size
   - Always use `general_info` as the base table (4,961 rows, not `symbols` with 5,201)
2. **Apply filters incrementally**: Each filter reduces the universe. Note the reduction at each step. Warn if <10 results remain.
3. **Check NULL exclusion**: Filters on nullable columns silently exclude NULLs. Key columns:
   - `pe_ratio`: 44% NULL (unprofitable companies excluded)
   - `dividend_yield`: 60% NULL (non-payers excluded)
   - `peg_ratio`: heavily NULL
   - Financial statement columns: variably NULL

   Tell the user what percentage of the universe was excluded by NULLs.
4. **Apply ranking/sorting**: Use ORDER BY with the primary criterion.
5. **Validate results**: Check for ADRs, data anomalies, stale data.

### Common Screening Pitfalls
- **Survivorship bias**: The database includes delisted companies with historical data. Screens on current metrics naturally exclude failed companies. Note this limitation.
- **Stale highlights**: For screens >20 symbols, some highlights data may be outdated. Consider computing key metrics from quarterly financial statements instead.
- **ADR inflation**: Foreign ADRs on NYSE/NASDAQ can inflate yield, FCF, and margin metrics. Filter with `g.type = 'Common Stock'` to exclude them.
- **NULL interaction**: `WHERE pe_ratio < 15 AND dividend_yield > 0.03` silently drops 44% + 60% of the universe. Only ~30% of companies pass both just on data availability.
- **Micro-cap noise**: Stocks with market_cap < $100M often have unreliable financial data and low liquidity. Default to `market_capitalization_mln > 300` unless the user specifies otherwise.

### Output Format
- **Screen Summary**: What you searched for, universe size, result count
- **Results Table**: Inline markdown, sorted by primary criterion
- **Exclusion Note**: What filters removed and approximate % excluded
- **Caveats**: Any data quality concerns with the results

### Advanced Query Patterns

#### S1: Multi-factor value screen
```sql
SELECT g.symbol, g.name, g.gic_sector, g.gic_sub_industry,
       h.market_capitalization_mln,
       ROUND(h.pe_ratio, 2) AS pe,
       ROUND(v.enterprise_value_ebitda, 2) AS ev_ebitda,
       ROUND(v.price_book_mrq, 2) AS pb,
       ROUND(h.return_on_equity_ttm * 100, 1) AS roe_pct,
       ROUND(h.profit_margin * 100, 1) AS margin_pct,
       ROUND(h.dividend_yield * 100, 2) AS div_yield_pct
FROM shibui.general_info g
INNER JOIN shibui.highlights h ON g.symbol = h.symbol
LEFT JOIN shibui.valuation v ON g.symbol = v.symbol
WHERE g.type = 'Common Stock'
  AND h.market_capitalization_mln > 1000
  AND h.pe_ratio BETWEEN 5 AND 20
  AND h.return_on_equity_ttm > 0.12
  AND h.profit_margin > 0.08
ORDER BY h.pe_ratio ASC
LIMIT 30
```

#### S2: Growth screen (revenue acceleration + earnings beats)
```sql
WITH rev_growth AS (
  SELECT symbol, date, total_revenue,
         LAG(total_revenue, 4) OVER (PARTITION BY symbol ORDER BY date) AS rev_yoy,
         ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY date DESC) AS rn
  FROM shibui.income_statement_quarterly
  WHERE date >= CURRENT_DATE - INTERVAL '2 years'
    AND total_revenue IS NOT NULL AND total_revenue > 0
), latest_rev AS (
  SELECT symbol,
         ROUND((total_revenue - rev_yoy) / NULLIF(rev_yoy, 0) * 100, 1) AS yoy_growth_pct
  FROM rev_growth
  WHERE rn = 1 AND rev_yoy IS NOT NULL
), recent_beats AS (
  SELECT symbol,
         COUNT(*) FILTER (WHERE surprise_percent > 0) AS beats,
         COUNT(*) AS quarters
  FROM shibui.earnings_quarterly
  WHERE date >= CURRENT_DATE - INTERVAL '1 year'
    AND eps_actual IS NOT NULL AND eps_estimate IS NOT NULL
  GROUP BY symbol
)
SELECT g.symbol, g.name, g.gic_sector, h.market_capitalization_mln,
       lr.yoy_growth_pct,
       rb.beats || '/' || rb.quarters AS beat_rate,
       ROUND(h.pe_ratio, 2) AS pe
FROM latest_rev lr
INNER JOIN shibui.general_info g ON lr.symbol = g.symbol
INNER JOIN shibui.highlights h ON g.symbol = h.symbol
LEFT JOIN recent_beats rb ON g.symbol = rb.symbol
WHERE g.type = 'Common Stock'
  AND h.market_capitalization_mln > 500
  AND lr.yoy_growth_pct > 15
  AND rb.beats >= 3
ORDER BY lr.yoy_growth_pct DESC
LIMIT 30
```

#### S3: Short squeeze candidates
NOTE: In share_stats, only short_percent_float is populated (decimal: 0.15 = 15%). shares_short, shares_short_prior_month, short_ratio, short_percent_outstanding are ALL NULL. percent_insiders and percent_institutions are percentages (50.0 = 50%).
```sql
SELECT g.symbol, g.name, g.gic_sector, h.market_capitalization_mln,
       ROUND(ss.short_percent_float * 100, 2) AS short_pct_float,
       ROUND(ss.percent_institutions, 1) AS inst_pct,
       ss.shares_float,
       ROUND(h.pe_ratio, 2) AS pe
FROM shibui.general_info g
INNER JOIN shibui.highlights h ON g.symbol = h.symbol
INNER JOIN shibui.share_stats ss ON g.symbol = ss.symbol
WHERE g.type = 'Common Stock'
  AND ss.short_percent_float > 0.15  -- decimal: 0.15 = 15%
  AND h.market_capitalization_mln BETWEEN 300 AND 10000
ORDER BY ss.short_percent_float DESC
LIMIT 20
```

#### S4: Dividend quality screen (yield + coverage + growth)
```sql
WITH div_coverage AS (
  SELECT c.symbol, c.date,
         ABS(c.dividends_paid) AS dividends,
         c.free_cash_flow,
         ROUND(c.free_cash_flow / NULLIF(ABS(c.dividends_paid), 0), 2) AS fcf_coverage,
         ROW_NUMBER() OVER (PARTITION BY c.symbol ORDER BY c.date DESC) AS rn
  FROM shibui.cash_flow_quarterly c
  WHERE c.date >= CURRENT_DATE - INTERVAL '6 months'
    AND c.dividends_paid IS NOT NULL AND c.dividends_paid != 0
)
SELECT g.symbol, g.name, g.gic_sector, h.market_capitalization_mln,
       ROUND(h.dividend_yield * 100, 2) AS yield_pct,
       h.dividend_share,
       dc.fcf_coverage,
       ROUND(h.pe_ratio, 2) AS pe
FROM shibui.general_info g
INNER JOIN shibui.highlights h ON g.symbol = h.symbol
INNER JOIN div_coverage dc ON g.symbol = dc.symbol AND dc.rn = 1
WHERE g.type = 'Common Stock'
  AND g.country_iso = 'US'
  AND h.dividend_yield > 0.02
  AND dc.fcf_coverage > 1.2
  AND h.market_capitalization_mln > 1000
ORDER BY h.dividend_yield DESC
LIMIT 20
```
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only/idempotent safety. Description adds crucial behavioral context: explains prerequisite chain (why get_database_schema is needed), warns 'this tool has no schema', and clarifies its role in the SQL generation workflow. Could add what the output contains (SQL templates, pitfalls).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured: action statement, prerequisite warning, usage triggers with examples, and integration note. Four sentences with zero redundancy, front-loaded with the most critical information (what it does and requires).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong coverage given complexity: explains when/why to use, prerequisites, and sibling relationships. Output schema exists so return values need not be described. Minor gap: does not summarize what the loaded workflow contains (e.g., 'returns SQL templates and screening guidelines').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (the _content parameter lacks a description field). Description mentions 'advanced query patterns' vaguely but fails to explicitly document that _content contains the workflow document, that it has a default value with embedded SQL templates, or what valid inputs might be. Insufficient compensation for zero schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Load') + resource ('quantitative screening workflow'). Explicitly distinguishes from sibling workflow tools by specifying 'screening' context and noting it 'Can be combined with other workflow tools', placing it within the suite of load_*_workflow tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: states prerequisite 'REQUIRES get_database_schema to be called first', specifies exact trigger phrases ('find stocks that...', 'top N by...'), and defines temporal relationship ('Call BEFORE writing SQL'). Also notes combinatorial usage with other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
load_technical_workflow (A) — Read-only · Idempotent · Inspect
Load technical analysis workflow with indicator interpretation guide and advanced query patterns. REQUIRES get_database_schema to be called first — this tool has no schema. Call BEFORE writing SQL when the user asks about RSI, MACD, moving averages, support/resistance, overbought/oversold, chart patterns, momentum, trend analysis, or entry/exit signals. Can be combined with other workflow tools.
| Name | Required | Description | Default |
|---|---|---|---|
| _content | No | Workflow document loaded into context | Full technical guide (reproduced below) |

Default value of `_content`:

## Technical Analysis Workflow

### Persona
You are a technical analyst specializing in momentum and trend-following strategies. You interpret indicator combinations, not single signals. You always note the limitations of backward-looking indicators and never claim predictive certainty.

### Indicator Interpretation Guide

**Trend indicators** (direction of the trend):
- `sma_200`: Long-term trend. Price > SMA-200 = bullish. Below = bearish.
- `ema_50` / `ema_20`: Medium/short-term trend.
- Golden cross: ema_50 crosses above sma_200. Death cross: below.

**Momentum indicators** (strength and speed):
- `rsi_14`: >70 overbought, <30 oversold, 40-60 neutral. Divergence (price up, RSI down) = weakening momentum.
- `macd` / `macdsignal` / `macdhist`: MACD > signal = bullish. Histogram expanding = momentum increasing. Contracting = fading.
- `stoch_k` / `stoch_d`: >80 overbought, <20 oversold.
- `cci_20`: >100 overbought, <-100 oversold.
- `willr_14`: >-20 overbought, <-80 oversold.

**Trend strength:**
- `adx_14`: >25 strong trend, <20 weak/no trend. Direction: `plus_di_14` > `minus_di_14` = uptrend.

**Volatility:**
- `bb_upper` / `bb_middle` / `bb_lower`: Bollinger Bands. Price near upper = overbought, near lower = oversold. Squeeze (bands narrow) = breakout imminent.
- `atr_14`: Absolute volatility. Use for position sizing context.

**Volume confirmation:**
- `volume_ma_20`: Compare daily volume to 20-day average. Breakout on >2x average volume = stronger signal.
- `obv`: Rising OBV with rising price confirms trend. Rising OBV with flat price = accumulation.
- `mfi_14`: Money Flow Index. >80 overbought, <20 oversold.

**Candlestick patterns** (values: 100=bullish, -100=bearish, 0=none):
- Reversal: cdl_hammer (bullish), cdl_shootingstar (bearish), cdl_engulfing, cdl_morningstar (bullish), cdl_eveningstar (bearish)
- Continuation: cdl_3whitesoldiers (bullish), cdl_3blackcrows (bearish)
- Indecision: cdl_doji, cdl_spinningtop

**Parabolic SAR** (`sar`):
- SAR below price = uptrend. SAR above price = downtrend.
- SAR flips = potential trend reversal.

### Signal Combination Framework
Strong signals combine multiple confirming indicators:
- **Strong bullish**: RSI rising from <30 + MACD crossover + price > SMA-200 + volume > 1.5x average + bullish candlestick pattern
- **Strong bearish**: RSI falling from >70 + MACD bearish cross + price < SMA-200 + increasing volume + bearish pattern
- **Caution**: ADX < 20 (weak trend) makes all signals less reliable
- **Divergence**: Price makes new high but RSI/MACD doesn't = weakening

### Workflow
1. **Trend**: Where is price relative to SMA-200, EMA-50, EMA-20?
2. **Momentum**: RSI, MACD histogram, stochastic readings
3. **Strength**: ADX reading + directional indicators
4. **Volatility**: Bollinger Band position, ATR level
5. **Volume**: Is volume confirming the move?
6. **Patterns**: Any candlestick signals on recent dates?
7. **Synthesis**: Combine into overall signal with confidence level

### Output Format
- **Trend Summary**: Bullish/Bearish/Neutral with timeframe context
- **Key Signals** (inline table: indicator, value, interpretation)
- **Confluences**: Which signals agree? Which disagree?
- **Risk Levels**: What would invalidate the current thesis?
- **Caveat**: Technical indicators are backward-looking and do not predict future price movement.

### Advanced Query Patterns

#### T1: Full technical dashboard (single symbol)
```sql
WITH latest AS (
  SELECT sq.symbol, sq.date, sq.open, sq.high, sq.low, sq.close, sq.volume,
         ti.*,
         ROW_NUMBER() OVER (PARTITION BY sq.symbol ORDER BY sq.date DESC) AS rn
  FROM shibui.stock_quotes sq
  INNER JOIN shibui.technical_indicators ti
    ON sq.symbol = ti.symbol AND sq.date = ti.date
  WHERE sq.symbol = 'AAPL.NASDAQ'
    AND sq.date >= CURRENT_DATE - INTERVAL '7 days'
)
SELECT date, close, volume,
       -- Trend
       ROUND(sma_200, 2) AS sma_200,
       ROUND(ema_50, 2) AS ema_50,
       ROUND(ema_20, 2) AS ema_20,
       CASE WHEN close > sma_200 THEN 'Above' ELSE 'Below' END AS vs_sma200,
       -- Momentum
       ROUND(rsi_14, 1) AS rsi,
       ROUND(macd, 4) AS macd,
       ROUND(macdsignal, 4) AS signal,
       ROUND(macdhist, 4) AS hist,
       ROUND(stoch_k, 1) AS stoch_k,
       ROUND(stoch_d, 1) AS stoch_d,
       -- Strength
       ROUND(adx_14, 1) AS adx,
       CASE WHEN plus_di_14 > minus_di_14 THEN 'Bullish' ELSE 'Bearish' END AS di_signal,
       -- Volatility
       ROUND(bb_upper, 2) AS bb_up,
       ROUND(bb_lower, 2) AS bb_low,
       ROUND(atr_14, 2) AS atr,
       -- Volume
       ROUND(volume / NULLIF(volume_ma_20, 0), 2) AS vol_ratio,
       ROUND(mfi_14, 1) AS mfi,
       -- SAR
       ROUND(sar, 2) AS sar,
       CASE WHEN close > sar THEN 'Uptrend' ELSE 'Downtrend' END AS sar_signal
FROM latest
WHERE rn = 1
LIMIT 1
```

#### T2: RSI divergence detection (price up, RSI down over N days)
```sql
WITH data AS (
  SELECT sq.symbol, sq.date, sq.close, ti.rsi_14,
         FIRST_VALUE(sq.close) OVER (PARTITION BY sq.symbol ORDER BY sq.date ASC) AS start_price,
         FIRST_VALUE(ti.rsi_14) OVER (PARTITION BY sq.symbol ORDER BY sq.date ASC) AS start_rsi,
         ROW_NUMBER() OVER (PARTITION BY sq.symbol ORDER BY sq.date DESC) AS rn
  FROM shibui.stock_quotes sq
  INNER JOIN shibui.technical_indicators ti
    ON sq.symbol = ti.symbol AND sq.date = ti.date
  WHERE sq.date >= CURRENT_DATE - INTERVAL '30 days'
    AND ti.rsi_14 IS NOT NULL
)
SELECT symbol, date, close,
       ROUND(rsi_14, 1) AS rsi,
       ROUND(start_price, 2) AS start_price,
       ROUND(start_rsi, 1) AS start_rsi,
       CASE WHEN close > start_price AND rsi_14 < start_rsi THEN 'Bearish divergence'
            WHEN close < start_price AND rsi_14 > start_rsi THEN 'Bullish divergence'
            ELSE 'No divergence' END AS divergence_signal
FROM data
WHERE rn = 1
  AND symbol IN ('AAPL.NASDAQ', 'MSFT.NASDAQ', 'GOOGL.NASDAQ')
LIMIT 20
```

#### T3: Multi-symbol technical screen (oversold + trend + volume)
```sql
WITH latest AS (
  SELECT sq.symbol, sq.date, sq.close, sq.volume,
         ti.rsi_14, ti.sma_200, ti.ema_50, ti.macd, ti.macdsignal,
         ti.bb_lower, ti.adx_14, ti.volume_ma_20,
         ti.cdl_hammer, ti.cdl_engulfing, ti.cdl_morningstar,
         ROW_NUMBER() OVER (PARTITION BY sq.symbol ORDER BY sq.date DESC) AS rn
  FROM shibui.stock_quotes sq
  INNER JOIN shibui.technical_indicators ti
    ON sq.symbol = ti.symbol AND sq.date = ti.date
  WHERE sq.date >= CURRENT_DATE - INTERVAL '7 days'
    AND ti.rsi_14 IS NOT NULL
)
SELECT l.symbol, g.name, l.date, l.close,
       ROUND(l.rsi_14, 1) AS rsi,
       ROUND(l.close / NULLIF(l.sma_200, 0) * 100 - 100, 1) AS pct_vs_sma200,
       ROUND(l.adx_14, 1) AS adx,
       ROUND(l.volume / NULLIF(l.volume_ma_20, 0), 2) AS vol_ratio,
       CASE WHEN l.cdl_hammer = 100 OR l.cdl_engulfing = 100 OR l.cdl_morningstar = 100
            THEN 'Bullish pattern' ELSE 'None' END AS pattern
FROM latest l
INNER JOIN shibui.general_info g ON l.symbol = g.symbol
WHERE l.rn = 1
  AND l.rsi_14 < 30
  AND l.close < l.bb_lower
  AND g.type = 'Common Stock'
ORDER BY l.rsi_14 ASC
LIMIT 20
```

#### T4: Trend change detection (SMA crossovers, last 60 days)
```sql
WITH data AS (
  SELECT sq.symbol, sq.date, sq.close, ti.ema_50, ti.sma_200,
         LAG(ti.ema_50) OVER (PARTITION BY sq.symbol ORDER BY sq.date) AS prev_ema50,
         LAG(ti.sma_200) OVER (PARTITION BY sq.symbol ORDER BY sq.date) AS prev_sma200
  FROM shibui.stock_quotes sq
  INNER JOIN shibui.technical_indicators ti
    ON sq.symbol = ti.symbol AND sq.date = ti.date
  WHERE sq.symbol = 'AAPL.NASDAQ'
    AND sq.date >= CURRENT_DATE - INTERVAL '60 days'
    AND ti.ema_50 IS NOT NULL AND ti.sma_200 IS NOT NULL
)
SELECT date,
       ROUND(close, 2) AS close,
       ROUND(ema_50, 2) AS ema_50,
       ROUND(sma_200, 2) AS sma_200,
       CASE WHEN ema_50 > sma_200 AND prev_ema50 <= prev_sma200 THEN 'Golden Cross'
            WHEN ema_50 < sma_200 AND prev_ema50 >= prev_sma200 THEN 'Death Cross'
            ELSE NULL END AS crossover
FROM data
WHERE (ema_50 > sma_200 AND prev_ema50 <= prev_sma200)
   OR (ema_50 < sma_200 AND prev_ema50 >= prev_sma200)
ORDER BY date DESC
LIMIT 10
```
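The crossover test in T4 — compare today's EMA-50/SMA-200 ordering against yesterday's via LAG — can be mirrored in plain Python to check one's reading of the SQL. A sketch on illustrative numbers (the series values are made up; only the logic follows the query):

```python
# Sketch of T4's LAG-based crossover detection: a golden cross is the day
# the fast average closes above the slow one after being at or below it.

def crossovers(ema_50, sma_200):
    """Return (index, label) pairs for golden/death crosses. Mirrors:
    ema_50 > sma_200 AND prev_ema50 <= prev_sma200 (and the reverse)."""
    events = []
    for i in range(1, len(ema_50)):
        fast, slow = ema_50[i], sma_200[i]
        prev_fast, prev_slow = ema_50[i - 1], sma_200[i - 1]
        if fast > slow and prev_fast <= prev_slow:
            events.append((i, "Golden Cross"))
        elif fast < slow and prev_fast >= prev_slow:
            events.append((i, "Death Cross"))
    return events

# Illustrative series: the fast average rises through the slow one at index 2.
ema = [98.0, 99.5, 101.0, 102.0]
sma = [100.0, 100.0, 100.0, 100.0]
print(crossovers(ema, sma))  # [(2, 'Golden Cross')]
```

As in the SQL, the first row of each partition can never signal, because its lagged values are undefined.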
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent hints. The description adds crucial behavioral context not in annotations: this tool has 'no schema' (requires prior schema fetch) and loads interpretive guidance rather than executing queries. It appropriately sets expectations about workflow sequencing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences total, each earning its place: purpose (sentence 1), prerequisite (sentence 2), usage triggers (sentence 3), and sibling relationships (sentence 4). Information is front-loaded and dense without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and read-only annotations, the description appropriately covers workflow purpose, prerequisites, and activation conditions without needing to detail return values. The extensive list of technical indicators covered ensures the LLM knows when this tool is relevant.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0% description coverage and provides only a '_content' string property with a complex default value. The description fails to explain the parameter's purpose, valid values, or that it can be omitted to use the default workflow guidance. With low schema coverage and no compensating description, this is under-specified.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Load') and resources ('technical analysis workflow', 'indicator interpretation guide', 'advanced query patterns'). It clearly distinguishes from siblings like 'load_fundamental_workflow' and 'load_earnings_workflow' by specifying technical analysis domains (RSI, MACD, chart patterns).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states prerequisites ('REQUIRES get_database_schema to be called first'), timing ('Call BEFORE writing SQL'), trigger conditions (specific list: RSI, MACD, moving averages, support/resistance, etc.), and combination rules ('Can be combined with other workflow tools').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stock_data_query (A) — Read-only · Idempotent · Inspect
Stock prices, earnings, revenue, P/E, dividends, margins, screener, comparisons
Run a SQL query against 64 years of US stock market data.
REQUIRES get_database_schema to be called first — this tool has no schema.
Use this after calling get_database_schema. Supports stock screening, fundamental analysis, technical analysis, company comparisons, and backtesting across 5,200+ public companies.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Read-only SQL query to execute. Requires shibui. table prefix and a LIMIT clause. |
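The `query` parameter carries two stated constraints: a `shibui.` table prefix and a `LIMIT` clause. A client could sanity-check queries before submission; this hypothetical pre-check is not part of the server's API, just a sketch of the rules as documented:

```python
import re

def precheck_query(sql: str) -> list[str]:
    """Best-effort client-side check of stock_data_query's stated rules:
    read-only SQL, shibui.-prefixed tables, and an explicit LIMIT clause."""
    problems = []
    if not re.search(r"^\s*(SELECT|WITH)\b", sql, re.IGNORECASE):
        problems.append("query should be read-only (start with SELECT or WITH)")
    if not re.search(r"\bshibui\.\w+", sql):
        problems.append("table names need the shibui. prefix")
    if not re.search(r"\bLIMIT\s+\d+", sql, re.IGNORECASE):
        problems.append("a LIMIT clause is required")
    return problems

good = "SELECT symbol FROM shibui.general_info LIMIT 10"
print(precheck_query(good))                     # [] — passes all checks
print(precheck_query("SELECT * FROM symbols"))  # missing prefix and LIMIT
```

Catching these locally saves a round trip, since the server presumably rejects queries that violate its documented requirements.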
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the readOnlyHint/idempotentHint annotations, the description adds crucial context: the data scope ('64 years', '5,200+ public companies'), available data types (prices, earnings, P/E), and the prerequisite chain ('this tool has no schema'). It correctly aligns with annotations by emphasizing 'Read-only SQL' in the parameter description. It does not disclose rate limits or pagination behavior, preventing a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is generally efficient but has minor structural issues. The opening line is a keyword list without verbs ('Stock prices, earnings...'), which is less effective than a proper sentence. There is slight redundancy between 'REQUIRES get_database_schema to be called first' and 'Use this after calling get_database_schema,' though the emphasis on prerequisites is appropriate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (covering return values) and clear annotations (covering safety properties), the description appropriately focuses on prerequisites and data scope. It adequately addresses the tool's complexity by warning about the schema dependency and documenting the historical data range. A 5 would require explicit error handling guidance or rate limit disclosure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'query' parameter, the schema already documents the SQL requirements ('Requires shibui. table prefix and a LIMIT clause'). The description does not add parameter syntax, examples, or validation rules beyond what the schema provides, meriting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as a SQL query tool against stock market data ('Run a SQL query against 64 years of US stock market data'). However, while it implicitly distinguishes itself from workflow siblings by mentioning SQL, it doesn't explicitly clarify the relationship between this low-level SQL interface and the higher-level 'load_*_workflow' siblings that cover identical functional domains (screening, comparisons, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit sequencing guidance ('REQUIRES get_database_schema to be called first') which is valuable. However, it fails to differentiate when to use this SQL tool versus the five sibling workflow tools (load_screening_workflow, load_comparison_workflow, etc.) despite listing identical use cases ('Supports stock screening...comparisons'). An agent cannot determine whether to use the workflow wrapper or raw SQL.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.