Server Quality Checklist
- Disambiguation 2/5
Severe overlap exists between the catch-all 'veroq_ask' (which claims to handle 41 intents and be the default for any question) and numerous specific tools like 'veroq_ticker_analysis', 'veroq_technicals', 'veroq_full', and 'veroq_research'. Agents will struggle to determine when to use the general AI tool versus specialized endpoints, and boundaries between analysis tools (e.g., 'veroq_full' vs 'veroq_ticker_analysis') are poorly differentiated.
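One low-cost fix is to make the catch-all's description defer explicitly to the specialists. A minimal sketch — the tool names come from this report, but the wording and routing list are assumptions, not the server's actual text:

```typescript
// Hypothetical rewrite of the catch-all's description so agents know
// when NOT to use it. Routing advice is invented for illustration.
const askDescription: string = [
  "Answer free-form financial questions with AI.",
  "PREFER A SPECIFIC TOOL when one matches:",
  "- veroq_ticker_analysis for single-ticker analysis",
  "- veroq_technicals for indicator calculations",
  "- veroq_research for deep multi-source research",
  "Use veroq_ask only when no specialized tool applies.",
].join("\n");
```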
- Naming Consistency 3/5
All tools share the 'veroq_' prefix, but suffixes follow inconsistent patterns: most use nouns ('alerts', 'candles'), while others use verbs ('ask', 'verify', 'crawl', 'extract', 'compare') or verb-noun phrases ('generate_report', 'run_agent'). Though readable, the mixing of grammatical patterns reduces predictability.
- Tool Count 2/5
With 52 tools, the server is severely bloated. Many endpoints could be consolidated (e.g., 'veroq_crypto'/'veroq_crypto_chart', 'veroq_defi'/'veroq_defi_protocol', 'veroq_economy'/'veroq_economy_indicator', 'veroq_generate_report'/'veroq_get_report'). This granularity forces agents to make arbitrary distinctions between highly similar operations.
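The paired endpoints could collapse into one tool with a discriminating parameter. A sketch of the idea, assuming an MCP-style tool object; the `view` parameter and its values are invented for illustration:

```typescript
// Hypothetical merge of veroq_crypto and veroq_crypto_chart into one tool.
// Only the tool names come from the report; everything else is an assumption.
const cryptoTool = {
  name: "veroq_crypto",
  description: "Get crypto asset data. Set view to 'chart' for price-history candles.",
  inputSchema: {
    type: "object",
    properties: {
      symbol: { type: "string", description: "Asset symbol, e.g. BTC" },
      view: {
        type: "string",
        enum: ["summary", "chart"], // 'chart' absorbs the separate chart endpoint
        description: "Which slice of data to return (default: summary).",
      },
    },
    required: ["symbol"],
  },
};
```

The same pattern (one noun-named tool plus an enum selector) would also fold the defi, economy, and report pairs.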
- Completeness 4/5
The surface comprehensively covers the financial intelligence domain including equities, crypto, DeFi, macroeconomics, SEC filings, insider/congressional trading, sentiment analysis, news/briefs, screening, and alerts. While individual tools overlap, the collective capability supports full research workflows from data retrieval to verification with minimal gaps.
Average score: 4.5/5 across all 52 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
- This server provides 52 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully reveals the cost ('1 credit') and return structure ('Array of entities with name, type, ticker...'), which are critical for agent decision-making. It lacks details on rate limits or data freshness beyond the 24-hour window, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses clear structural headers (WHEN TO USE, RETURNS, COST, EXAMPLE) that front-load critical information. Every line serves a distinct purpose—purpose, usage context, output format, cost, and example—without redundancy or verbose fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
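The header convention the report rewards can be sketched as a single description string. The field names echo the report's quotes; the exact wording is an assumption:

```typescript
// Illustrative description using the WHEN TO USE / RETURNS / COST / EXAMPLE
// headers praised above. Not the server's actual text.
const trendingDescription: string = [
  "Get trending entities (people, orgs, topics) from the last 24 hours.",
  "WHEN TO USE: To see what's dominating the news cycle right now; a good starting point for research.",
  "RETURNS: Array of entities with name, type, ticker, and mention count.",
  "COST: 1 credit",
  'EXAMPLE: { "limit": 10 }',
].join("\n");
```

Because each header starts a line, an agent can scan for the one fact it needs (cost, return shape) without parsing the whole paragraph.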
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter, simple array output) and lack of output schema/annotations, the description compensates effectively. It documents the return value structure, cost implications, and temporal scope (24h), providing sufficient information for an agent to invoke the tool correctly without external documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with the 'limit' parameter fully documented as 'Max entities to return.' The description provides a helpful JSON example ('{ "limit": 10 }'), but this primarily illustrates syntax rather than adding semantic meaning beyond what the schema already provides. With high schema coverage, the baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
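To move a Parameters score past the baseline, the description has to say something the schema cannot. A sketch of the split; the range, default, and ordering rule here are invented for illustration, not documented behavior:

```typescript
// Hypothetical 'limit' parameter: the schema states structure, the tool
// description adds semantics (valid range, clamping, interaction with ordering).
const limitParam = {
  type: "number",
  description: "Max entities to return.", // schema: structure only
};
const limitSemantics: string =
  "limit accepts 1-100 (default 10); values above 100 are clamped, " +
  "and results are ordered by mention count before truncation.";
```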
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'trending entities' (people, orgs, topics) with mention counts from the 'last 24 hours.' It specifies the verb (Get) and resource effectively. However, it does not explicitly differentiate from the sibling tool 'veroq_social_trending,' which could cause confusion about which trending data source to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'WHEN TO USE' section explicitly states this is for discovering 'what's dominating the news cycle right now' and identifies it as a 'good starting point for research.' This provides clear contextual guidance, though it lacks explicit exclusions or named alternatives (e.g., when to use veroq_search instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
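A 5/5 usage section would name the siblings outright. A hedged sketch — the sibling tool names are from this report, but the routing advice itself is invented:

```typescript
// Hypothetical WHEN TO USE block with explicit exclusions and alternatives.
const whenToUse: string =
  "WHEN TO USE: To discover what's dominating coverage in the last 24 hours. " +
  "Use veroq_search instead to look up a specific entity by name, and " +
  "veroq_social_trending for social-media (rather than news) trends.";
```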
- Behavior 4/5
With no annotations provided, the description carries the full burden. It discloses the COST (2 credits) and the RETURN structure (member name, party, state, chamber, ticker, etc.), compensating for the missing output schema. It omits time bounds for 'recent' trades and pagination behavior, but covers financial cost and data shape.
- Conciseness 5/5
Excellent structured formatting with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). No wasted words; every sentence delivers distinct value (purpose, usage context, return payload, pricing, invocation pattern).
- Completeness 4/5
For a single-parameter tool with no output schema, the description adequately compensates by detailing return fields and cost. However, 'recent' is undefined (the temporal scope is unclear), and the lack of any mention of data latency or volume limits prevents a 5.
- Parameters 3/5
Schema coverage is 100% with complete parameter documentation ('Ticker symbol to filter by...Omit for all recent congressional trades'). The description reinforces the filtering concept and provides a JSON example, but adds minimal semantic value beyond the well-documented schema. The baseline of 3 is appropriate for high schema coverage.
- Purpose 5/5
A specific verb ('Get'), a clear resource ('stock trades by members of U.S. Congress'), and a named data source ('public disclosure filings'). The tool unambiguously distinguishes itself from the sibling veroq_insider (corporate insiders) and generic filing tools by specifying congressional scope.
- Usage Guidelines 4/5
The explicit 'WHEN TO USE' section defines the context ('track congressional trading activity — politically-informed trading signals') and filtering capability. It lacks explicit comparisons to siblings (e.g., 'use veroq_insider for corporate insiders') or exclusion criteria, but the usage context is clearly demarcated.
- Behavior 4/5
With no annotations provided, the description carries the full burden. It effectively discloses behavioral traits by stating 'COST: 2 credits' (rate limiting) and 'RETURNS: Next earnings date...' (output structure). This compensates well for the lack of output schema and annotations, though it could mention data freshness or error conditions.
- Conciseness 5/5
The description uses clear structural headers (WHEN TO USE, RETURNS, COST, EXAMPLE), making it highly scannable. Every sentence earns its place with zero redundancy. The main purpose is front-loaded in the first sentence, followed by contextual metadata.
- Completeness 4/5
Given that the tool has only 1 parameter and no output schema or annotations, the description compensates adequately by explicitly documenting the return values and credit cost. For a simple lookup tool, this is complete enough to be invoked correctly, though edge-case handling isn't documented.
- Parameters 3/5
The input schema has 100% description coverage ('Ticker symbol (e.g. AAPL, NVDA, GOOGL)'), establishing a baseline of 3. The description adds an example ('EXAMPLE: { "symbol": "NVDA" }') which reinforces usage but doesn't add semantic meaning beyond what the schema already provides.
- Purpose 5/5
The opening sentence uses a specific verb ('Get') and resources ('next earnings date, EPS estimates, and revenue estimates') for a stock ticker. Among siblings like veroq_ticker_news or veroq_ticker_price, this clearly distinguishes itself as the specific tool for earnings calendar and estimate data.
- Usage Guidelines 4/5
The explicit 'WHEN TO USE' section provides clear positive guidance ('To check when a company reports earnings and what the Street expects. Useful before earnings season'). While it doesn't explicitly name alternatives or exclusions, the contextual guidance effectively signals when this tool is appropriate versus other market data tools.
- Behavior 4/5
With no annotations provided, the description carries the full disclosure burden. It successfully adds behavioral context by specifying 'COST: 1 credit' and detailing the return structure ('Each index with price, change, and change percent'). However, it omits other behavioral traits like data freshness (real-time vs delayed), rate limits, or caching behavior.
- Conciseness 5/5
The description uses a structured format with clear sections (purpose, WHEN TO USE, RETURNS, COST, EXAMPLE). Every sentence delivers value: the specific indices listed define scope, the cost disclosure is critical for credit-based APIs, and the return structure compensates for the missing output schema. No filler text is present.
- Completeness 5/5
Given the tool's simplicity (zero parameters) and lack of annotations or output schema, the description provides complete coverage: it explains inputs (none), outputs (field structure), usage context, and operational cost. For a parameter-less market data fetch, no additional information is required for correct invocation.
- Parameters 4/5
The input schema contains zero parameters. Per scoring rules, the baseline for 0 params is 4. The description confirms this with 'No parameters needed' and provides an example of '{}', aligning perfectly with the schema without adding unnecessary elaboration.
- Purpose 4/5
The description clearly states the tool 'Get[s] current values for major market indices' and specifically lists S&P 500, Nasdaq, Dow Jones, and VIX. This provides a specific verb and resource. However, it does not explicitly distinguish from siblings like veroq_ticker_price or veroq_market_movers, which could overlap conceptually.
- Usage Guidelines 4/5
The 'WHEN TO USE' section explicitly states 'For a quick check on how the overall market is doing' and notes 'No parameters needed.' This provides clear context for invocation. It lacks explicit 'when not to use' guidance or named alternatives (e.g., 'use veroq_ticker_price for individual stocks'), preventing a score of 5.
- Behavior 4/5
No annotations are provided, but the description carries the burden well by disclosing the COST ('2 credits') and detailing RETURNS ('Consensus rating, mean/high/low price targets...'). It is missing only minor traits like rate limits or error conditions.
- Conciseness 5/5
Excellent structure with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Information is front-loaded with the core purpose first. No wasted words; every sentence provides actionable metadata.
- Completeness 5/5
For a single-parameter tool, the description is comprehensive. It compensates for the missing output schema with a detailed RETURNS section, includes cost information critical for credit-based APIs, and provides a concrete usage example.
- Parameters 3/5
Schema coverage is 100%, with the ticker already well-documented ('Ticker symbol (e.g. AAPL, NVDA, TSLA)'). The description includes an EXAMPLE block, but this largely mirrors the schema's examples without adding significant semantic depth or format constraints beyond the schema.
- Purpose 5/5
The description opens with a specific verb ('Get') and clear resource ('Wall Street analyst ratings and price targets'), precisely defining the tool's scope. It effectively distinguishes itself from siblings like veroq_technicals or veroq_earnings by specifying 'analyst' data.
- Usage Guidelines 4/5
Contains an explicit 'WHEN TO USE' section stating the exact scenario ('To see consensus analyst opinion and price target range'). It lacks explicit 'when not to use' guidance or named sibling alternatives (e.g., contrasting with veroq_research), preventing a 5.
- Behavior 4/5
No annotations are provided, so the description carries the full burden. It discloses 'COST: 2 credits' and details the return structure ('Version range, confidence change, field-level changes...') despite the absence of an output schema. It does not explicitly declare read-only/safety status, though this is implied by 'Get'.
- Conciseness 5/5
Excellent structure with clear headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Information is front-loaded with the core purpose statement. Every line provides distinct value; no redundant or filler text.
- Completeness 5/5
Given the simple 2-parameter schema and lack of output schema, the description is complete. It compensates for the missing structured output by detailing return values, documents the operational cost, and provides a concrete usage example. No significant gaps remain.
- Parameters 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description provides an example call, but this largely duplicates the schema's own examples ('PR-2026-0305-001', '2026-03-18T00:00:00Z') without adding new semantic context about parameter usage.
- Purpose 5/5
The opening sentence provides a specific verb ('Get a diff'), resource ('living brief'), and scope ('since a given time', 'additions, removals, and modifications between versions'). It distinguishes itself from siblings like veroq_compare (likely entity-to-entity comparison) by emphasizing temporal version comparison.
- Usage Guidelines 4/5
Contains an explicit 'WHEN TO USE' section stating 'To see exactly what changed in a brief since a specific timestamp' and prerequisites ('Requires a brief ID'). It lacks explicit mention of alternatives (e.g., 'use veroq_brief for current state rather than a diff'), preventing a 5.
- Behavior 4/5
No annotations are provided, so the description carries the full disclosure burden. It successfully adds cost information ('2 credits'), data source provenance ('Links directly to SEC EDGAR'), and return value structure ('Filing list with form type, title...'), which are critical for agent decision-making.
- Conciseness 5/5
Excellent structure with clearly labeled sections (WHEN TO USE, RETURNS, COST, EXAMPLE). Every sentence earns its place; no filler text. Information is front-loaded with the core action in the first sentence.
- Completeness 5/5
For a single-parameter tool with no output schema, the description is complete. It compensates for the missing output schema by explicitly documenting the return structure (fields included in the filing list) and provides usage context, cost, and examples.
- Parameters 3/5
Schema description coverage is 100% (ticker symbol with examples), establishing a baseline of 3. The description adds an EXAMPLE section showing '{ "ticker": "TSLA" }', which reinforces usage but does not add substantial semantic meaning beyond the well-documented schema.
- Purpose 5/5
The description uses a specific verb ('Get') and clearly identifies the resource ('SEC filings' with specific examples: 10-K, 10-Q, 8-K). It distinguishes itself from siblings like veroq_ticker_news or veroq_earnings by explicitly mentioning regulatory filing types and SEC EDGAR source links.
- Usage Guidelines 4/5
Includes an explicit 'WHEN TO USE' section stating 'For regulatory filing history and due diligence,' which provides clear context. However, it lacks explicit 'when not to use' guidance or named sibling alternatives (e.g., contrasting with veroq_ticker_news for current news).
- Behavior 4/5
With no annotations provided, the description carries the full burden and discloses critical operational details: cost ('2 credits'), return structure ('Transaction list with insider name...Plus summary stats'), and data provenance ('SEC Form 4'). Missing only rate limits or caching behavior.
- Conciseness 5/5
Excellent structure with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Information is front-loaded with the core purpose in the first sentence. No redundant or wasted text; every line provides distinct operational value.
- Completeness 5/5
For a single-parameter read tool, the description is complete. It compensates for the missing output schema by detailing return values, and compensates for missing annotations by disclosing credit cost. No additional context is needed for correct invocation.
- Parameters 3/5
Schema coverage is 100%, with the ticker parameter fully documented in the schema ('Ticker symbol (e.g. AAPL...)'). The description provides a JSON example, but with complete schema coverage the baseline is 3 and the example offers only marginal additive value.
- Purpose 5/5
The description opens with a specific verb ('Get') and resource ('insider trading activity'), explicitly identifying the data source ('SEC Form 4 filings') and distinguishing itself from siblings like veroq_congress (congressional trading) and veroq_institutions (institutional holdings).
- Usage Guidelines 4/5
Includes an explicit 'WHEN TO USE' section ('To check if insiders are buying or selling a stock') with context about the use case ('institutional-grade analysis'). However, it does not explicitly name alternatives like veroq_congress or veroq_filings for different trading data sources.
- Behavior 4/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully adds critical context not found in the schema: 'COST: 1 credit' (rate limiting) and a detailed 'RETURNS' section describing the per-sector data structure (ticker count, brief count, average sentiment). It lacks details on caching or data freshness, but covers the essential behavioral traits.
- Conciseness 5/5
The description is excellently structured with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE) that front-load critical information. Every sentence and section earns its place; there is no fluff or redundancy. The formatting makes it easy for an agent to quickly parse intent and constraints.
- Completeness 5/5
Given the tool's low complexity (1 optional parameter) and lack of output schema, the description compensates effectively by detailing the return structure in the 'RETURNS' section. It covers purpose, usage context, cost, input example, and output format, providing complete information for correct invocation despite missing structured output metadata.
- Parameters 3/5
The input schema has 100% description coverage for the single 'days' parameter ('Lookback period in days (default 7)'). The description provides an example usage '{ "days": 7 }' which reinforces the schema but does not add additional semantic meaning, syntax constraints, or value ranges beyond what the schema already documents. A baseline of 3 is appropriate for high schema coverage.
- Purpose 5/5
The description explicitly states the tool 'Get[s] a sector overview with aggregate sentiment scores and brief counts across all market sectors.' This provides a specific verb (Get), resource (sector overview), and scope (across all market sectors, aggregate level), clearly distinguishing it from ticker-specific siblings like veroq_ticker_analysis.
- Usage Guidelines 4/5
Includes an explicit 'WHEN TO USE' section stating it's for 'a macro view of which sectors have the most bullish or bearish news coverage' and 'sector rotation analysis.' While this provides clear context for appropriate use, it does not explicitly name alternative tools (e.g., veroq_ticker_analysis) for when users need non-aggregate data.
- Behavior5/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden and excels by specifying COST ('3 credits'), operational CONSTRAINTS ('Max depth 3, max 10 pages'), and RETURN format ('Page content, metadata, and discovered links'). This provides critical behavioral context beyond the input schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structured formatting with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE, CONSTRAINTS). Information is front-loaded with the core purpose, and every line delivers distinct operational metadata without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
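The sectioned layout praised here can be sketched as a template. The description text below is invented except for the purpose line, section headers, and the crawl limits quoted in the review ('3 credits', 'Max depth 3, max 10 pages'); the EXAMPLE payload is an assumption.

```python
# Hypothetical tool description following the WHEN TO USE / RETURNS / COST /
# EXAMPLE / CONSTRAINTS layout the review praises. Wording is illustrative.

description = """Crawl a URL and extract structured content with optional link following.

WHEN TO USE: To extract and analyze content from a specific webpage.
RETURNS: Page content, metadata, and discovered links.
COST: 3 credits.
EXAMPLE: { "url": "https://example.com", "depth": 1 }
CONSTRAINTS: Max depth 3, max 10 pages per crawl."""

required_sections = ["WHEN TO USE", "RETURNS", "COST", "EXAMPLE", "CONSTRAINTS"]
assert all(section in description for section in required_sections)
# Front-loading check: the core purpose appears before any section header.
assert description.index("Crawl") < description.index("WHEN TO USE")
```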
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description comprehensively covers operational constraints, credit costs, and return value structure. The constraints section clarifies tool limits (depth 3, 10 pages) that would otherwise be unknown, making it complete for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value through the EXAMPLE block, which demonstrates parameter usage patterns and JSON structure, helping agents understand how to construct valid inputs beyond the raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5: Does the description clearly state what the tool does and how it differs from similar tools?
The opening sentence clearly states the core action ('Crawl a URL and extract structured content') and key feature ('optional link following'), providing specific verbs and resources. However, it does not explicitly distinguish from siblings like `veroq_extract` or `veroq_web_search` that may overlap in functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'WHEN TO USE' section provides explicit guidance on scenarios ('extract and analyze content from a specific webpage, or crawl a site's link structure'). It lacks explicit naming of alternative tools to use instead, but the context is clear enough for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses 'COST: 2 credits' and details return values in the 'RETURNS' section (total TVL, changes, categories). Does not explicitly confirm read-only status or error behaviors, but cost disclosure adds significant value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structural organization with clear headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Every sentence serves a distinct purpose. No redundancy or fluff despite containing multiple informational sections.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a single-parameter tool lacking output schema. Compensates for missing output schema by detailing return structures in description. Covers cost, usage context, sibling differentiation, and parameter behavior. Nothing critical appears missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds practical context with the example JSON '{ "protocol": "aave" }' and explains the dual-mode behavior (overview vs. single protocol) in prose, adding value beyond the schema's technical definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5: Does the description clearly state what the tool does and how it differs from similar tools?
The opening phrase 'Get DeFi data' is broad, but the description immediately clarifies with 'No arguments returns TVL overview... pass a slug for one protocol.' It effectively distinguishes itself from sibling veroq_defi_protocol by specifying that this returns overview/basic protocol data versus the sibling's 'deep dive' capability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'WHEN TO USE' section stating 'For DeFi TVL data across protocols and chains.' Clearly names the alternative tool: 'Use veroq_defi_protocol for a single protocol deep dive,' providing clear guidance on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses resource cost ('2 credits'), return value structure ('Outlook, confidence, time horizon...'), and provides a concrete example. It does not mention side effects, idempotency, or rate limits, but covers the essential behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely well-structured with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Every line delivers unique information. No redundant or filler text. Highly scannable and front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive given the complexity and lack of output schema. The RETURNS section compensates for missing output schema by detailing the forecast components. Covers cost and example usage. Minor gap: does not specify time horizon constraints or confidence level interpretations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds value through the EXAMPLE block, which demonstrates realistic parameter values ('US inflation trajectory', 'deep'), providing concrete semantic context for the 'depth' enum and topic formatting beyond the schema's generic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('generate', 'forecast') and clearly identifies the resource (topic) and methodology (intelligence trends, momentum, historical patterns). It distinguishes itself from sibling tools like veroq_ask or veroq_brief by emphasizing forward-looking predictive analysis rather than current data retrieval or Q&A.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an explicit 'WHEN TO USE' section defining the specific use case (predictive analysis, likely outcomes, risk factors). However, it does not explicitly name sibling alternatives (e.g., veroq_research vs. veroq_forecast) or provide negative constraints (when not to use).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Adds critical behavioral context missing from schema: COST (5-100 credits), RETURNS structure (execution steps with status/summary), and concrete EXAMPLE. Missing timeout/async behavior or error handling details, but covers financial and output semantics well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with labeled sections (WHEN TO USE, RETURNS, COST, EXAMPLE). Every line delivers unique information. Front-loaded with core action, followed by usage context, then operational details. No redundancy or filler despite covering complex multi-faceted behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by detailing return values (agent name, execution steps, final output, credits). Addresses complexity of nested 'inputs' object through example. Given the tool orchestrates arbitrary multi-step workflows, the description provides sufficient guardrails for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds significant value via concrete JSON example showing nested 'inputs' structure ({ ticker: 'AAPL' }), clarifying how to populate the free-form object parameter. Also notes that inputs vary by agent type, adding semantic context beyond schema constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
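The nested 'inputs' object discussed above can be sketched as a payload builder. Only the { ticker: 'AAPL' } example comes from the review; the agent name 'due_diligence' and the overall request shape are hypothetical, since inputs are said to vary by agent type.

```python
# Hypothetical payload construction for a veroq_run_agent call with a
# free-form nested 'inputs' object. Field names beyond 'inputs' are assumptions.

def build_agent_request(agent: str, **inputs) -> dict:
    """Wrap free-form keyword arguments into the nested 'inputs' object."""
    return {"agent": agent, "inputs": dict(inputs)}

request = build_agent_request("due_diligence", ticker="AAPL")
assert request == {"agent": "due_diligence", "inputs": {"ticker": "AAPL"}}
```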
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb+resource ('Run a VEROQ AI agent') with specific scope ('pre-built workflows combining multiple data sources'). Explicitly distinguishes from 50+ siblings by stating agents 'automate what would take many individual tool calls,' positioning it as the orchestration layer versus single-purpose data tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit WHEN TO USE section lists specific scenarios (portfolio reviews, due diligence, market scans). Implicitly defines boundaries by contrasting with individual tool calls. Lacks explicit 'when not to use' (e.g., simple queries), but provides clear positive guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses critical behavioral traits: cost ('2 credits'), return structure ('sentiment score, mention count, per-platform breakdown...'), and data sources. Could mention rate limiting or caching behavior to achieve a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Uses a clear section-based structure (WHEN TO USE, RETURNS, COST, EXAMPLE) that front-loads the core purpose. Highly information-dense with no wasted sentences, though the all-caps headers give it a slightly mechanical feel compared to flowing prose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single required parameter) and absence of an output schema, the description comprehensively documents the return values and cost structure. No gaps remain for an agent to successfully invoke and interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% coverage with a clear parameter description, the description adds practical value by providing a concrete JSON example ('EXAMPLE: { "symbol": "TSLA" }'), which helps the agent understand the expected input format beyond the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get social media sentiment'), target resource ('stock or crypto ticker'), and data sources ('Reddit, Twitter/X, and other platforms'), effectively distinguishing it from sibling tools like veroq_ticker_news (news articles) and veroq_social_trending (general trends).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes an explicit 'WHEN TO USE' section stating the tool is for gauging 'retail investor sentiment and social buzz around a specific ticker,' providing clear selection criteria. Lacks explicit 'when not to use' guidance or named alternative tools (e.g., vs. veroq_social_trending) that would earn a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and succeeds by documenting the credit cost ('3 credits'), explaining return structures for each action (since no output schema exists), and noting the six valid alert type constraints. It does not mention auth requirements or alert expiration policies, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
The description is excellently structured with clear headers (WHEN TO USE, RETURNS, COST, EXAMPLE, CONSTRAINTS). Every sentence delivers unique information; there is no redundancy with the structured schema data, and the most critical information (the three actions) appears first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and an output schema, the description comprehensively covers the tool's behavior by detailing return values for all three actions, providing a valid usage example, and documenting operational costs. It adequately supports the complexity of a multi-modal tool (create/list/triggered) with conditional parameter requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although the schema has 100% description coverage (baseline 3), the description adds value through a concrete JSON example showing parameter interaction and the 'CONSTRAINTS' section which reinforces the valid enum values for alert_type. The 'RETURNS' section also clarifies that 'threshold' accepts different semantic types (price level vs. sentiment delta) depending on context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
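The conditional parameter requirements described above (three actions, with extra fields required only for 'create') can be sketched as a client-side check. Field names follow the review's text; treating 'symbol', 'alert_type', and 'threshold' as the create-mode requirements is an assumption about the unseen schema.

```python
# Hypothetical validation of the multi-action veroq_alerts argument object:
# 'create' needs symbol, alert_type, and threshold; 'list' and 'triggered' do not.

def validate_alert_args(args: dict) -> list[str]:
    """Return the names of fields missing for the requested action."""
    action = args.get("action")
    if action == "create":
        return [f for f in ("symbol", "alert_type", "threshold") if f not in args]
    if action in ("list", "triggered"):
        return []
    return ["action"]  # unknown or missing action

assert validate_alert_args({"action": "list"}) == []
assert validate_alert_args({"action": "create", "symbol": "TSLA"}) == ["alert_type", "threshold"]
```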
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a precise summary ('Create, list, or check triggered price/sentiment alerts') using specific verbs and clearly defining the resource domain. It effectively distinguishes this from sibling data-retrieval tools (e.g., veroq_ticker_price, veroq_social_sentiment) by emphasizing the alert/monitoring functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'WHEN TO USE' section explicitly states the tool is for 'automated monitoring' and details the three distinct actions ('create', 'list', 'triggered'). While it provides clear context for each mode, it lacks explicit exclusions or pointers to alternative tools for simple data lookups.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations establish readOnlyHint and openWorldHint, the description adds substantial behavioral context: credit costs (1-5 credits), caching durations (60s/30s), fallback behavior (web search for non-financial queries), auto-discovery constraints (1,061+ tickers), and detailed return structure (composite trade signals, confidence levels, follow-up suggestions). It does not mention rate limits or error handling patterns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5: Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLES, CONSTRAINTS) that front-load critical decision-making information. Despite substantial length, every sentence delivers unique value (cost structure, timing constraints, fallback behavior). The 'most important tool' claim at the start is slightly subjective but immediately establishes priority.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given high complexity (NLP routing, 41 intents) and absence of an output schema, the description comprehensively compensates by detailing return values (structured data + LLM summary + composite signals + confidence levels), cost model, caching behavior, and coverage constraints. It could be improved by mentioning error handling or rate limit specifics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds quantitative performance semantics for 'fast' parameter ('~500ms vs ~3s', 'save ~2 seconds') and rich contextual examples for 'question' parameter ('NVDA full analysis', 'oversold tech stocks') that clarify expected input patterns beyond the schema's basic type definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
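The latency trade-off the 'fast' parameter encodes (~500ms vs ~3s per the review) can be sketched as a request builder. The request shape and default are assumptions; only the flag name, the example question, and the timings come from the text above.

```python
# Hypothetical veroq_ask argument builder showing the 'fast' latency trade-off.

def build_ask_request(question: str, fast: bool = False) -> dict:
    """fast=True trades depth for a ~500ms response instead of ~3s (per the review)."""
    return {"question": question, "fast": fast}

request = build_ask_request("NVDA full analysis", fast=True)
assert request == {"question": "NVDA full analysis", "fast": True}
```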
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool as a natural language interface that routes to 41 specific intents (price, technicals, earnings, etc.), using active verbs ('ask', 'get', 'routes'). It explicitly distinguishes itself from the 40+ specialized sibling tools by stating 'Use this FIRST before reaching for specialized tools,' establishing its role as the default entry point.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'WHEN TO USE' guidance stating it is the 'DEFAULT tool for any financial, market, or economic question' and mandates using it 'FIRST before reaching for specialized tools.' This creates a clear hierarchy versus siblings like veroq_candles or veroq_screener, leaving no ambiguity about selection order.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Provides a 'RETURNS' section detailing response structure (price, change, change percent, unit), a 'COST' section disclosing the credit cost (2 credits), and clarifies the read-only nature through the 'Get' verb. Could mention data latency (real-time vs delayed) but covers primary behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with clear headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Front-loaded with core action. No redundant text; every line provides distinct value (usage context, return schema, cost constraints, syntax example). Excellent information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description is comprehensive. Compensates for missing output schema by documenting return values in text. Parameter semantics, cost structure, and usage patterns are fully explained. No gaps remain for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline of 3. Description adds value by providing concrete JSON example ('{ 'symbol': 'gold' }') and explicitly clarifying the optional behavior ('No arguments returns all'), which goes beyond the schema's basic parameter definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Get' and resource 'commodity prices', lists concrete examples (gold, silver, oil, natural gas), and clearly distinguishes from siblings like veroq_crypto and veroq_forex by specifying commodity asset classes (precious metals, energy, industrial commodities).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes explicit 'WHEN TO USE' section defining scope (commodity market data, precious metals, energy, industrial). Explains parameter behavior (omit for all, pass slug for one). Lacks explicit 'when not to use' or named sibling alternatives, but domain is distinct enough from the 40+ siblings to prevent confusion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully provides COST ('2 credits') and RETURNS structure ('Array of contradictions with severity, topic, summary...'), compensating for the lack of output schema. However, it omits explicit statements about permissions, idempotency, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses clear structural headers (WHEN TO USE, RETURNS, COST, EXAMPLE) that front-load critical information. Every sentence serves a distinct purpose—defining scope, usage context, output format, billing, and example usage—with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 optional parameter) and lack of output schema, the description is appropriately complete. It compensates for the missing output schema by detailing the return structure, includes cost information critical for an API tool, and provides sufficient context for successful invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by providing a concrete EXAMPLE ('{ "severity": "high" }') that demonstrates parameter usage syntax beyond the schema's type description, though it could further clarify whether severity values are case-sensitive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Find contradictions across intelligence briefs' with specific focus on 'stories where sources disagree on facts, framing, or conclusions.' This specific verb+resource combination clearly distinguishes it from sibling tools like veroq_search or veroq_compare.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'WHEN TO USE' section explicitly identifies the context: 'To identify conflicting narratives and disputed claims in the news' and specific use cases like 'risk assessment and due diligence.' While clear about when to use it, it does not explicitly name alternative sibling tools for non-contradiction searches (e.g., veroq_search for general queries).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the cost ('2 credits'), return data structure ('Protocol TVL, 1d/7d/30d change percentages, category, and deployed chains'), and implies read-only behavior via 'Get.' However, it lacks explicit mention of rate limits, error conditions, or caching behavior that would constitute exhaustive behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses a structured format with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE) that front-load critical information. Every sentence serves a distinct purpose—defining scope, usage conditions, output, cost, and input format—without redundancy or verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single required parameter, no nested objects) and absence of an output schema, the description adequately compensates by detailing the return values in the RETURNS section. The cost disclosure and example JSON complete the necessary context for an AI agent to invoke this tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the 'protocol' parameter fully documented including examples (aave, uniswap, etc.). The description adds an EXAMPLE section showing JSON structure, but per the rubric, with high schema coverage the baseline is 3. The description doesn't add semantic meaning beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and clearly identifies the resource ('detailed DeFi protocol data') along with specific data points (TVL, chain deployment, performance changes). It explicitly distinguishes itself from sibling tool 'veroq_defi' by stating this is for 'a deep dive into a single DeFi protocol' versus the 'full DeFi market overview'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines (5/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'WHEN TO USE' section provides explicit guidance: 'For a deep dive into a single DeFi protocol' and explicitly names the alternative tool 'veroq_defi (no args) for the full DeFi market overview.' This clearly delineates when to use this tool versus its sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
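The 'veroq_defi_protocol' invocation pattern discussed above can be sketched as a small payload builder. This is a minimal sketch assuming the standard MCP `tools/call` envelope; only the tool name and the `protocol` argument (with example slugs like `aave` and `uniswap`) come from the review itself.

```python
import json

def build_defi_protocol_call(protocol: str) -> str:
    """Build a JSON-RPC-style MCP tools/call payload for veroq_defi_protocol.

    The 'protocol' argument name and example values come from the schema as
    quoted in the review; the envelope shape is the generic MCP tools/call
    request and is an assumption here.
    """
    if not protocol or not protocol.strip():
        raise ValueError("protocol slug is required, e.g. 'aave' or 'uniswap'")
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "veroq_defi_protocol",
            "arguments": {"protocol": protocol.strip().lower()},
        },
    }
    return json.dumps(payload)

print(build_defi_protocol_call("aave"))
```

Normalizing the slug to lowercase is a defensive choice, not documented server behavior.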
- Behavior (4/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses cost ('1 credit') and return structure ('Array of briefs... with headline, confidence, category, and summary') which compensates for missing output schema. Could mention rate limits or caching, but covers essential operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Every sentence earns its place; no redundant or filler content. Front-loaded purpose followed by operational metadata.
Completeness (5/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with no output schema, the description is comprehensive. It compensates for missing output schema by detailing return fields (headline, confidence, category, summary) and includes cost information critical for API tool selection.
Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with 'Entity name to look up', establishing baseline 3. Description adds value by specifying expected entity types (person, company, location) and providing concrete JSON example '{ "name": "Elon Musk" }' that clarifies input format beyond raw schema.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Get' + resource 'intelligence briefs' + scope 'mentioning a specific entity', and clarifies entity types (person, company, location). Distinguishes from sibling tools like veroq_search or veroq_brief by emphasizing entity-specific tracking across 'all briefs'.
Usage Guidelines (4/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'WHEN TO USE' section provides clear context: 'When tracking coverage of a specific person, organization, or place across all briefs.' This effectively signals the tool's sweet spot vs alternatives like general search, though it doesn't explicitly name sibling alternatives to avoid.
- Behavior (4/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses 'COST: 2 credits' and details the return structure ('Array of events with type, subject, headline...') which compensates for lack of output schema. Does not mention pagination, rate limits, or error conditions.
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Front-loaded with purpose. Every line provides unique information; no repetition of schema details or tautology.
Completeness (5/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter query tool with no annotations and no structured output schema, the description is complete. It compensates for missing output schema by detailing return fields, provides cost information critical for credit-aware agents, and includes a valid usage example.
Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with basic descriptions ('Event type to filter by'), so baseline is 3. The EXAMPLE section adds significant value by showing concrete parameter values ('earnings', 'NVDA'), helping the agent understand expected formats and that both filters can be used together.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb+resource ('Get notable events detected across intelligence briefs') and clarifies scope with examples ('significant developments, announcements, and inflection points'). The 'WHEN TO USE' section further distinguishes it from siblings by focusing on 'major events' like product launches and policy changes.
Usage Guidelines (4/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'WHEN TO USE' section states the tool is for discovering 'major events like product launches, policy changes, or market-moving announcements' and notes filtering capabilities. Lacks explicit named alternatives from the 40+ sibling tools, but provides clear contextual guidance.
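The events tool's two combinable filters can be sketched as an arguments builder. The parameter names `type` and `symbol` are placeholders: the review quotes only the example values ('earnings', 'NVDA'), not the actual schema keys, so treat both names as assumptions.

```python
import json

def build_events_arguments(event_type=None, symbol=None):
    """Assemble an arguments object for the events tool, omitting unused filters.

    Key names 'type' and 'symbol' are hypothetical; only the example values
    ('earnings', 'NVDA') appear in the review's EXAMPLE section.
    """
    args = {}
    if event_type is not None:
        args["type"] = event_type        # e.g. "earnings"
    if symbol is not None:
        args["symbol"] = symbol.upper()  # e.g. "NVDA"
    return json.dumps(args, sort_keys=True)

print(build_events_arguments("earnings", "nvda"))
# → {"symbol": "NVDA", "type": "earnings"}
```

Both filters are optional, so an empty object is a valid call that returns unfiltered events.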
- Behavior (4/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses critical 'COST: 2 credits' and details return structure ('Rate, change, change percent') since no output schema exists. Missing rate limits or auth requirements, but covers cost and return behavior well.
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structured format with labeled sections (WHEN TO USE, RETURNS, COST, EXAMPLE). Every sentence serves distinct purpose; no redundancy. Front-loaded core purpose in first sentence.
Completeness (5/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool without output schema or annotations, description is comprehensive. Compensates for missing output schema by detailing return fields and timestamp behavior. Cost transparency completes the operational context.
Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description elevates this by adding concrete example '{ "pair": "EURUSD" }' and clarifying the omit-for-all-pairs usage pattern beyond the schema's basic description.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb-resource combination ('Get current foreign exchange rates') that precisely defines scope. Clearly distinguishes from siblings like veroq_crypto and veroq_commodities by focusing exclusively on FX market data.
Usage Guidelines (4/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'WHEN TO USE' section clarifies domain (currency exchange rates and FX market data). Explains the two invocation patterns (no args for all pairs, specific code for single pair), though it doesn't explicitly contrast with similar data tools like veroq_crypto.
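The FX tool's two invocation patterns described above (omit `pair` for all rates, pass a code for one pair) can be sketched as follows. The `pair` key and the `EURUSD` example come from the review; the six-letter validation is a client-side assumption, not documented server behavior.

```python
import json

def build_forex_arguments(pair=None):
    """Return the arguments object for the FX rates tool.

    Per the description quoted in the review: omit 'pair' to get all rates,
    or pass a currency-pair code like 'EURUSD' for a single pair.
    """
    if pair is None:
        return json.dumps({})  # no args: all pairs
    pair = pair.strip().upper()
    if len(pair) != 6 or not pair.isalpha():
        raise ValueError("expected a six-letter code such as 'EURUSD'")
    return json.dumps({"pair": pair})

print(build_forex_arguments())          # → {}
print(build_forex_arguments("eurusd"))  # → {"pair": "EURUSD"}
```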
- Behavior (4/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses critical behavioral traits: the 'COST: 2 credits' (rate limiting), data source (13F filings), and return structure. It could improve by mentioning data freshness or caching behavior, but covers the essential operational constraints.
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clear semantic headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Information is front-loaded with the core action first, followed by operational metadata. Zero wasted words; every sentence earns its place.
Completeness (5/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description comprehensively documents the return values ('Total institutional ownership %, summary stats, and top holders with shares, value, percent held, change, and filing date'). For a single-parameter read-only data tool, this provides complete contextual coverage.
Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the ticker parameter fully documented in the schema. The description adds value beyond the schema by providing the concrete 'EXAMPLE: { "ticker": "AAPL" }', which clarifies the expected JSON structure and format for the agent.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] institutional ownership data for a stock' with specific details about 'top holders and ownership changes from 13F filings.' This specific verb-resource combination distinguishes it from sibling tools like veroq_insider (individual insider trades) or veroq_ticker_analysis (general analysis).
Usage Guidelines (4/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
The explicit 'WHEN TO USE' section clearly states the use case: 'To see which institutions own a stock and whether they're increasing or decreasing positions.' While it doesn't explicitly name alternatives to avoid (e.g., veroq_insider for individual executives), the guidance is clear enough to differentiate from the 49 siblings.
- Behavior (4/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses cost ('2 credits'), data source ('SEC EDGAR'), return fields (company name, ticker, filing date, form type, location), and hard constraints (max 90 days, max 100 results). Does not mention caching, rate limits, or error behavior.
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structured format with clear headers (WHEN TO USE, RETURNS, COST, EXAMPLE, CONSTRAINTS). Zero wasted words; information density is high with front-loaded purpose. Example is appropriately sized.
Completeness (5/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with no output schema, description is comprehensive. Manually documents return fields to compensate for missing output schema, includes cost and constraints. No gaps remain for agent invocation decision-making.
Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description adds value through concrete JSON example ({ 'days': 30, 'limit': 20 }) and constraint reinforcement (max 90/100), clarifying typical usage patterns beyond raw schema definitions.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') and resource ('upcoming and recent IPOs from SEC EDGAR S-1 filings'). Distinct from sibling veroq_filings by specifying IPO-specific S-1 filings rather than general SEC filings.
Usage Guidelines (4/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'WHEN TO USE' section states purpose ('To track the IPO pipeline and recent public offerings'). Lacks explicit comparison to alternatives (e.g., veroq_filings for general filings) or exclusion criteria.
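The IPO tool's hard constraints (max 90 days, max 100 results) invite client-side validation before spending credits. This sketch uses the parameter names from the quoted EXAMPLE ('{ "days": 30, "limit": 20 }'); the caps come from the review's CONSTRAINTS discussion, and enforcing them locally is an assumption about good client behavior, not server API.

```python
import json

MAX_DAYS, MAX_LIMIT = 90, 100  # hard caps quoted in the tool description

def build_ipo_arguments(days=30, limit=20):
    """Validate and serialize arguments for the IPO-pipeline tool.

    Rejecting out-of-range values locally avoids wasting a credit on a
    call the server would refuse anyway.
    """
    if not 1 <= days <= MAX_DAYS:
        raise ValueError(f"days must be between 1 and {MAX_DAYS}")
    if not 1 <= limit <= MAX_LIMIT:
        raise ValueError(f"limit must be between 1 and {MAX_LIMIT}")
    return json.dumps({"days": days, "limit": limit})

print(build_ipo_arguments())  # → {"days": 30, "limit": 20}
```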
- Behavior (4/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses 'COST: 1 credit' (rate limit info) and describes return structure 'Top gainers (symbol, price, change%)...' since no output schema exists. Could explicitly state idempotency/cache behavior, but covers cost and return format well.
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). First sentence establishes purpose immediately. No redundant text; every line adds distinct value beyond the structured fields.
Completeness (5/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description adequately compensates by detailing return values (symbols, prices, change percentages) and cost structure. Sufficient for a simple read-only data retrieval tool of low complexity.
Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters. Description confirms this with 'No parameters needed' and provides 'EXAMPLE: {}' to reinforce empty input expectation. Baseline 4 met with explicit confirmation.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Get' and clearly identifies the resource as 'today's top market movers,' explicitly listing the three categories (gainers, losers, most actively traded). This distinguishes it from siblings like veroq_market_summary or veroq_trending by focusing specifically on intraday price movement magnitude.
Usage Guidelines (4/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes explicit 'WHEN TO USE' section stating 'For a quick snapshot of what's moving the market today' and notes 'No parameters needed.' While it doesn't explicitly name alternative tools for deeper analysis, the zero-parameter guidance and contextual trigger provide clear selection criteria.
- Behavior (4/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses the cost ('2 credits') and return structure ('Holdings summary... portfolio-relevant briefs ranked by relevance score'), adding crucial behavioral context. However, it omits explicit safety classification (read-only) or error handling behavior.
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clearly demarcated sections (WHEN TO USE, RETURNS, COST, EXAMPLE). Every sentence serves a distinct purpose; the example is appropriately placed at the end and the core purpose is front-loaded. No redundant or filler text.
Completeness (5/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates effectively with a detailed RETURNS section explaining the holdings summary and briefs structure. With full schema coverage and cost disclosure included, the description is complete for invoking this credit-based portfolio tool.
Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds value by providing a concrete JSON example showing the nested holdings array structure and reinforcing the 'ticker/weight pairs' concept, which aids in correct parameter construction beyond the schema definitions.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('intelligence briefs'), and clearly distinguishes this tool from siblings by emphasizing the portfolio-holding relevance ranking and ticker/weight pair inputs—critical differentiation given the 40+ sibling tools.
Usage Guidelines (4/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes an explicit 'WHEN TO USE' section stating the tool is for monitoring news for a specific portfolio. While this provides clear context, it does not explicitly name alternative tools (e.g., veroq_ticker_news for single-ticker monitoring) to complete the versus-alternatives guidance.
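The nested holdings structure this portfolio tool expects can be sketched as a serializer. The `holdings` array of ticker/weight objects follows the review's wording; the exact key names (`ticker`, `weight`) are assumptions, and the weight normalization is a client-side convenience, not documented server behavior.

```python
import json

def build_portfolio_arguments(holdings):
    """Serialize (ticker, weight) tuples into the assumed nested structure.

    Key names 'holdings', 'ticker', and 'weight' are inferred from the
    review's 'ticker/weight pairs' phrasing and are hypothetical.
    Weights are normalized so they sum to 1.0.
    """
    total = sum(weight for _, weight in holdings)
    if total <= 0:
        raise ValueError("weights must sum to a positive number")
    payload = {
        "holdings": [
            {"ticker": ticker.upper(), "weight": round(weight / total, 4)}
            for ticker, weight in holdings
        ]
    }
    return json.dumps(payload)

print(build_portfolio_arguments([("aapl", 60), ("msft", 40)]))
```

Normalizing up front means callers can pass raw dollar amounts or percentages interchangeably.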
- Behavior (4/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully documents the cost ('1 credit'), return structures for both modes (list vs. run), and provides a concrete JSON example. However, it omits explicit statements about rate limits, error conditions, or the read-only nature of the operation (implied but not stated).
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
The description employs a highly structured format with clear headers (WHEN TO USE, RETURNS, COST, EXAMPLE) that front-load critical information. Every line delivers distinct value—purpose, usage context, cost, return payload structure, and example—without redundancy or filler text.
Completeness (5/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single optional parameter, dual-mode behavior) and lack of output schema, the description is comprehensive. It compensates for the missing output schema by detailing exactly what fields are returned in both list mode (names, descriptions, filters) and run mode (price, RSI, sentiment, volume), and discloses the credit cost. No critical gaps remain for an agent to invoke this tool effectively.
Parameters (3/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the 'preset_id' parameter fully documented in the schema itself. The description adds marginal value by specifying there are exactly '12 presets' and providing a concrete example value ('oversold_tech'), but does not expand significantly on format constraints, validation rules, or semantic meaning beyond the schema's scope.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with specific verbs ('List' and 'run') and clearly identifies the resource ('pre-built screening strategies' or 'presets'). It implicitly distinguishes from sibling 'veroq_screener' by emphasizing 'without building filters manually,' clarifying this is for pre-built strategies versus custom screening.
Usage Guidelines (5/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'WHEN TO USE' section explicitly states the optimal scenario ('quick screening without building filters manually'), effectively contrasting with the manual filter-building alternative (likely 'veroq_screener'). It also clearly documents the dual-mode behavior—omitting preset_id to list all 12 presets versus passing one to execute it—providing unambiguous guidance on invocation patterns.
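The dual-mode dispatch praised above (omit `preset_id` to list all 12 presets, pass one to run it) can be sketched as a small helper. The `preset_id` parameter name and the `oversold_tech` example come from the review; the "list"/"run" mode labels are illustrative only.

```python
import json

def build_presets_call(preset_id=None):
    """Return an (illustrative mode label, arguments JSON) pair.

    Dual-mode behavior as described in the review: no preset_id lists all
    12 presets; a preset_id such as 'oversold_tech' runs that preset.
    """
    if preset_id is None:
        return "list", json.dumps({})
    return "run", json.dumps({"preset_id": preset_id})

print(build_presets_call())                  # → ('list', '{}')
print(build_presets_call("oversold_tech"))
```

Collapsing list-and-run into one tool keeps the agent's decision to a single optional argument, which is exactly the invocation pattern the Usage Guidelines score rewards.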
- Behavior (4/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden effectively. It reveals COST ('2 credits'), data sources ('Reddit, Twitter/X'), temporal scope ('currently trending', '1-hour change'), and return structure ('name, mention count, sentiment score'). Missing rate limits or caching details, but cost transparency is critical for API tools.
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structured format with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Every line delivers value. Front-loaded with core purpose, followed by operational metadata. No wasted words despite covering multiple dimensions of tool behavior.
Completeness (5/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description compensates with a detailed RETURNS section documenting specific fields (mention count, sentiment score, 1-hour change). Combined with cost and usage context, this provides complete information needed for an agent to invoke the tool correctly and handle its output.
Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, establishing a baseline of 4. The description confirms this with 'No parameters needed' and provides 'EXAMPLE: {}', which validates the empty schema structure and prevents confusion about missing parameters.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') with clear resources ('tickers and topics') and scope ('trending on social media across Reddit, Twitter/X'). It effectively distinguishes from siblings like 'veroq_trending' (likely market data) and 'veroq_social_sentiment' (likely requires specific ticker input) by emphasizing discovery of trending content across platforms.
Usage Guidelines (4/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'WHEN TO USE' guidance ('To discover what retail investors are buzzing about right now') and notes 'No parameters needed.' Lacks explicit 'when-not' guidance or named alternatives (e.g., contrast with veroq_social_sentiment for specific ticker analysis), but the contextual use case is clearly defined.
- Behavior (4/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return structure ('Array of briefs with headline, confidence, category, and summary'), cost ('1 credit'), and includes working example. Deducting one point for missing rate limits, caching behavior, or error handling details.
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Front-loaded with purpose statement. No filler text; every line provides actionable information for tool selection and invocation.
Completeness (5/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description fully compensates by detailing return format, fields, and total count. Covers cost implications. With only 2 well-documented parameters and clear sibling differentiation, description is complete for this tool's complexity level.
Parameters (3/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both 'symbol' and 'limit'. Description provides a JSON example showing valid inputs, but does not add semantic meaning beyond what the schema already documents. Baseline 3 is appropriate given schema completeness.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb ('Get') + resource ('news headlines and briefs') + scope ('specific stock or crypto ticker'). Clearly distinguishes from siblings by contrasting with veroq_search (topic-based) and veroq_feed (general news).
Usage Guidelines (5/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'WHEN TO USE' section states exact use case ('For ticker-specific news') and names two specific alternatives with their distinct scopes ('veroq_search for topic-based search', 'veroq_feed for general news').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses cost ('2 credits') and return value structure ('Composite score, signal...component breakdown'). It could be improved by mentioning idempotency or caching behavior, but it covers the critical economic and output contract details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Every sentence earns its place. The main purpose is front-loaded in the first sentence, followed by contextual usage guidance and technical details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description is complete. It compensates for the missing output schema by detailing the return structure in the RETURNS section, mentions cost implications, and provides sibling context. No gaps remain for correct invocation.
Parameters 3/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the 'symbol' parameter fully documented as 'Ticker symbol (e.g. AAPL, NVDA, TSLA)'. The description provides a JSON example that reinforces usage but does not add semantic meaning beyond what the schema already provides. A baseline of 3 is appropriate when schema documentation is complete.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('composite trading signal score'), naming the four components used (sentiment, momentum, coverage volume, event proximity). It explicitly distinguishes itself from the sibling tool 'veroq_ticker_analysis' by contrasting 'quick bull/bear signal' with 'deeper context'.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section stating the specific scenario ('quick bull/bear signal on a ticker') and naming the exact alternative tool ('veroq_ticker_analysis') for deeper analysis. This gives clear guidance on tool selection versus siblings.
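As a hedged illustration of what a four-component composite like this might look like, the sketch below combines the named inputs (sentiment, momentum, coverage volume, event proximity) with invented weights and thresholds. VEROQ's actual formula is not documented in the review:

```python
# Toy composite bull/bear score. The 0.4/0.3/0.2/0.1 weighting and the
# +/-0.15 thresholds are assumptions, not the server's real formula.
def composite_signal(sentiment, momentum, coverage, event_proximity):
    """Each component in [-1, 1]; returns (score, label)."""
    weights = (0.4, 0.3, 0.2, 0.1)  # assumed weighting, sums to 1.0
    parts = (sentiment, momentum, coverage, event_proximity)
    score = sum(w * p for w, p in zip(weights, parts))
    label = "bull" if score > 0.15 else "bear" if score < -0.15 else "neutral"
    return round(score, 3), label
```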
Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses COST (1 credit), RETURN structure (sources with trust levels, provenance details), and content scope (counter-argument, AI/human split). Minor gap: it does not mention idempotency, caching, or error states.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Cleanly structured with labeled sections (WHEN TO USE, RETURNS, COST, EXAMPLE). The first sentence states the core purpose immediately. Zero redundancy; every line adds unique information not present in structured fields.
Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description compensates by detailing return contents (body, sources, entities, provenance) and covers cost and usage context. For a simple retrieval tool with 2 parameters this is sufficient; a minor gap remains on error handling and rate limit details.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds significant value via an EXAMPLE showing the ID format ('PR-2026-0305-001'), revealing the semantic pattern of brief IDs beyond the schema's generic 'string' type.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Opens with a specific verb ('Get') + resource ('intelligence brief') + scope ('by its ID'). Clearly distinguishes itself from the sibling search/feed tools by emphasizing 'full details' retrieval versus discovery.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
An explicit WHEN TO USE section states the workflow dependency ('After finding a brief via search/feed') and names the sibling alternatives directly ('search/feed'), providing clear selection criteria.
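Since the only attested brief ID is 'PR-2026-0305-001', a client could pre-validate IDs with a pattern inferred from that single example. The real format may well be broader than this regex assumes:

```python
import re

# Pattern inferred from the one example ID in the reviewed description
# ('PR-2026-0305-001'); treat it as a guess, not the server's contract.
BRIEF_ID = re.compile(r"^PR-\d{4}-\d{4}-\d{3}$")

def looks_like_brief_id(value: str) -> bool:
    """Cheap client-side check before calling the get-brief tool."""
    return bool(BRIEF_ID.match(value))
```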
Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It documents the return structure ('Array of candles with date, open, high, low, close, volume'), cost ('2 credits'), and a specific behavioral trait ('Latest candle highlighted'). Minor deduction for the absence of rate limiting or data freshness details.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured headers (WHEN TO USE, RETURNS, COST, EXAMPLE) for scannability. Every line conveys unique information, with no tautology or filler. The example is compact yet illustrative, and the formatting cleanly separates functional purpose from operational constraints.
Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by detailing the return array structure and fields. For a 3-parameter read-only data retrieval tool, the inclusion of cost and return format provides sufficient context, though data freshness or pagination details would elevate it further.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage and provides baseline documentation, the description adds value through a concrete JSON example with realistic parameter values (symbol: 'AAPL', interval: '1d', range: '3mo'), clarifying expected input formats beyond the schema's type definitions.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and clearly identifies the resource (OHLCV candlestick data for a stock ticker), explicitly enumerating the data fields (open, high, low, close, volume). It effectively distinguishes itself from the sibling tool veroq_technicals by contrasting raw candlestick data with 'pre-computed indicators'.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'WHEN TO USE' with concrete scenarios (price chart analysis, pattern recognition, feeding technical analysis). Critically, it names the specific sibling alternative 'veroq_technicals' for pre-computed indicators, providing clear guidance on tool selection boundaries.
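The example invocation cited above can be mirrored in a small argument builder. The allowed interval and range sets below are guesses, since only '1d' and '3mo' appear in the source:

```python
# Hypothetical validation for the candles call from the review's example
# ({"symbol": "AAPL", "interval": "1d", "range": "3mo"}). The value sets
# are assumptions; the real server may accept more (or different) options.
ASSUMED_INTERVALS = {"1d", "1wk", "1mo"}
ASSUMED_RANGES = {"1mo", "3mo", "6mo", "1y"}

def candles_args(symbol: str, interval: str = "1d", range_: str = "3mo") -> dict:
    if interval not in ASSUMED_INTERVALS:
        raise ValueError(f"unsupported interval: {interval}")
    if range_ not in ASSUMED_RANGES:
        raise ValueError(f"unsupported range: {range_}")
    return {"symbol": symbol.upper(), "interval": interval, "range": range_}
```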
Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It documents the cost (2 credits) and return structure (headline, scores, synthesis), and provides an input example. However, it omits details about data freshness, source coverage scope, and whether results are cached or real-time.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structured formatting with clear headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Every sentence serves a distinct purpose. The information is front-loaded with the core function, followed by usage conditions, costs, and examples.
Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter analysis tool without a structured output schema, the description adequately covers the tool's purpose, inputs, costs, and return format. Minor gaps remain regarding rate limits and source attribution details, but it is sufficient for correct invocation.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage for the single 'topic' parameter, the description adds valuable semantic context through the EXAMPLE block ('Federal Reserve rate decision'), clarifying the expected topic format and granularity beyond the schema's generic 'Topic to compare coverage on'.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific action (compare) targeting a specific resource (news sources) with a clear scope (per-source bias analysis and synthesis). The mention of 'bias analysis' effectively differentiates it from sibling tools like veroq_ticker_news or veroq_feed, which likely provide raw news without comparative analysis.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section: 'When you need to understand media bias or see how coverage of an event differs across outlets.' This provides clear triggering conditions and distinguishes it from general news retrieval tools.
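A toy version of the per-source synthesis this tool is described as returning can make the idea concrete. The bias scale (-1 to +1) and the outlet scores are invented for illustration; the review does not document the real scoring scheme:

```python
# Given assumed per-outlet bias scores on a -1..+1 scale, summarize the
# mean lean and the spread across outlets (illustrative only).
def summarize_bias(scores: dict) -> dict:
    values = list(scores.values())
    mean = sum(values) / len(values)
    spread = max(values) - min(values)
    return {"mean_bias": round(mean, 2), "spread": round(spread, 2)}
```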
Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses cost ('2 credits'), return structure ('Series info, latest value, and historical observations array'), and operational constraints ('Max 100 observations'). It lacks error handling and rate limit details.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured section headers (WHEN TO USE, RETURNS, COST, EXAMPLE, CONSTRAINTS) for scannability. Every line provides distinct information (purpose, sibling differentiation, cost, example, limits). Zero redundant text.
Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for the missing output schema by detailing the return structure (series metadata, latest value, observations array) and establishes the FRED data source context. Mentioning data frequency limitations or lag times would enhance it further, but it is adequate for the tool's complexity.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value via a concrete JSON example showing the expected indicator slug format ('fed_funds') and reinforces the 'limit' parameter constraint (max 100). It also contextualizes the 'indicator' parameter as FRED-specific.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Opens with a specific verb ('Get') + resource ('FRED economic indicator') + scope ('historical observations'). Explicitly distinguishes itself from the sibling tool 'veroq_economy' by contrasting detailed single-indicator history with a summary of all indicators.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section stating 'For detailed history of one indicator' and names the specific alternative 'veroq_economy (no args)' for summary use cases. Clear differentiation between detailed and broad economic data retrieval.
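The 'Max 100 observations' constraint can be enforced client-side before spending credits. The clamping behavior here is an assumption; the server might instead reject oversized limits:

```python
# Clamp 'limit' to the documented maximum before calling
# veroq_economy_indicator; 'fed_funds' is the slug from the review's example.
MAX_OBSERVATIONS = 100

def indicator_args(indicator: str, limit: int) -> dict:
    return {"indicator": indicator, "limit": max(1, min(limit, MAX_OBSERVATIONS))}
```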
Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It documents the credit cost ('2 credits') and return structure ('Array of timeline entries with...'). Read-only safety is implied by 'Get', though an explicit declaration would strengthen it further.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structured formatting with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Every sentence delivers distinct value, with no redundancy with the schema and no fluff. The purpose is front-loaded, followed by operational metadata.
Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The absence of an output schema is compensated by a detailed return description listing all fields (version, timestamp, summary, etc.). The tool's single-parameter simplicity makes this adequate, though an explicit mention of pagination or maximum timeline depth would achieve perfect completeness.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by providing a concrete EXAMPLE ('PR-2026-0305-001') showing the expected format, plus the semantic context that the ID comes 'from search/feed', helping the agent understand data lineage.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Opens with the specific verb 'Get' and resource 'story evolution timeline', immediately clarifying scope. The appended qualifier ('versioned updates, confidence changes, and new sources over time') precisely distinguishes this from sibling tools like veroq_brief (current state) and veroq_feed (stream).
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
An explicit 'WHEN TO USE' section states the temporal analysis scenario ('how a story developed over time'). Critically, it specifies the prerequisite 'Requires a brief ID from search/feed', directing the agent to use veroq_search or veroq_feed first and preventing invocation errors.
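The prerequisite chain noted above (find a brief via search/feed, then fetch its timeline) can be sketched against a stubbed transport. `call_tool` and the response shapes are stand-ins invented for the example, not a real MCP client API:

```python
# Stubbed transport standing in for a real MCP client; the response
# shapes ('briefs', 'entries') are invented for illustration.
def call_tool(name: str, args: dict) -> dict:
    stub = {
        "veroq_search": {"briefs": [{"id": "PR-2026-0305-001"}]},
        "veroq_timeline": {"entries": [{"version": 1}, {"version": 2}]},
    }
    return stub[name]

def story_timeline(topic: str) -> list:
    """Search first, then trace the top brief's evolution."""
    briefs = call_tool("veroq_search", {"query": topic})["briefs"]
    if not briefs:
        return []  # nothing to trace; avoids an invalid-ID call
    return call_tool("veroq_timeline", {"brief_id": briefs[0]["id"]})["entries"]
```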
Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses 'COST: 3 credits' and the return value structure ('titles, URLs, snippets, relevance scores'), which compensates for the lack of annotations. However, it omits details about error handling, rate limits, and behavior when no results are found, preventing a perfect score.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Employs structured headers (WHEN TO USE, RETURNS, COST, EXAMPLE), creating highly scannable sections with zero redundancy. Every line serves a distinct purpose, efficiently organizing operational guidance without wasted text.
Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Thoroughly compensates for the missing output schema by detailing return values, and effectively positions the tool against siblings like veroq_brief. The cost disclosure and example provide complete operational context, though minor gaps regarding error states and pagination remain for this moderately complex tool.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although the schema has 100% description coverage, the description adds significant value through the EXAMPLE section showing practical JSON syntax and valid values such as 'week' for freshness and true for verify. This concrete illustration reinforces parameter semantics beyond the schema definitions.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The opening sentence, 'Search the live web with optional VEROQ trust scoring on results', provides a specific verb (Search), resource (live web), and distinguishing feature (trust scoring). This clearly differentiates it from siblings like veroq_search and veroq_brief by emphasizing external web data over internal intelligence sources.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'WHEN TO USE: When intelligence briefs don't cover a topic and you need live web results', providing clear guidance on choosing this tool over alternatives. It additionally advises 'Enable verify=true for trust scoring', giving specific parameter guidance for optimal usage.
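The example values cited above ('week' freshness, verify=true) suggest an argument builder like the following. The full freshness vocabulary is an assumption, as only 'week' is attested in the source:

```python
# Hypothetical arguments for the live-web-search tool; 'day' and 'month'
# are guessed freshness values, only 'week' appears in the review.
ASSUMED_FRESHNESS = {"day", "week", "month"}

def web_search_args(query: str, freshness: str = "week", verify: bool = True) -> dict:
    if freshness not in ASSUMED_FRESHNESS:
        raise ValueError(f"unsupported freshness: {freshness}")
    return {"query": query, "freshness": freshness, "verify": verify}
```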
Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses 'COST: 2 credits' and a 'RETURNS' section detailing output structure (market cap, BTC dominance, and volume for the overview; price, changes, and ATH for tokens), and it explains the behavioral difference between calling with no arguments and calling with a symbol. It lacks rate limit and data freshness details but covers cost and return format comprehensively.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Front-loaded with the core purpose. Every line delivers distinct value, with no repetition of the schema and no fluff. Highly scannable and appropriately sized.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool without an output schema, the description is complete. The 'RETURNS' section compensates for the missing output schema by documenting return fields, and the description covers cost, usage context, examples, and sibling differentiation. Nothing material is missing given the tool's simplicity.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the 'symbol' parameter fully documented, so the baseline is 3. The description adds value by providing a concrete JSON 'EXAMPLE' and reinforcing the parameter's optional nature ('No arguments returns market overview; pass a symbol for detailed token data'), which aids agent invocation beyond the schema specification.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Get' and resource 'cryptocurrency data'. It clearly distinguishes itself from the sibling tool 'veroq_crypto_chart' by directing agents to that tool 'for price history' while this tool covers 'market cap overview or individual token data'. The dual-mode behavior (overview vs. token detail) is explicitly defined.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section specifying exact use cases: 'crypto market cap overview or individual token data (price, supply, ATH)'. It explicitly names the alternative tool 'veroq_crypto_chart' for price history, providing clear guidance on tool selection between siblings.
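The dual-mode contract described above (no arguments for a market overview, a symbol for token detail) can be made explicit in a thin wrapper. The mode labels are illustrative, not server-defined:

```python
# Thin wrapper around veroq_crypto's documented dual-mode behavior;
# 'market_overview' and 'token_detail' are invented labels.
def crypto_args(symbol=None) -> dict:
    return {} if symbol is None else {"symbol": symbol.upper()}

def expected_mode(args: dict) -> str:
    return "token_detail" if "symbol" in args else "market_overview"
```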
Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description discloses cost ('2 credits'), return data structure ('timestamp, price, volume, and market cap'), and operational constraints ('Max 365 days'). It lacks only edge-case behaviors (rate limits, invalid-symbol handling).
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Structured with clear headers (WHEN TO USE, RETURNS, COST, EXAMPLE, CONSTRAINTS). Every sentence serves a distinct purpose with no redundancy, and the information is front-loaded with the core action statement.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a 2-parameter tool: it compensates for the missing output schema by detailing return values, includes cost transparency, and documents constraints. No gaps given the tool's simplicity.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage, establishing a baseline of 3. The description adds value via a concrete JSON example and reinforces the 365-day constraint, providing usage context beyond the raw schema definitions.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Opens with a specific verb ('Get') + resource ('historical price chart data') + scope ('crypto token') and explicitly distinguishes itself from the sibling 'veroq_crypto', which covers current snapshots rather than historical trends.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section naming the use case ('crypto price history and charting') and directly contrasts with the sibling alternative: 'Use veroq_crypto for current snapshot, this for historical trend.'
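The 'Max 365 days' constraint can likewise be checked before the call. Rejecting out-of-range values here is a design choice for the sketch, since the review does not say how the server handles them:

```python
# Guard against the documented 365-day ceiling before calling
# veroq_crypto_chart; raising (rather than clamping) is an assumption.
MAX_DAYS = 365

def chart_args(symbol: str, days: int) -> dict:
    if not 1 <= days <= MAX_DAYS:
        raise ValueError(f"days must be 1..{MAX_DAYS}, got {days}")
    return {"symbol": symbol.upper(), "days": days}
```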
Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It documents cost ('2 credits'), the return differences between summary and detail modes, and the argument-dependent execution paths (no args vs. slug). It lacks an explicit read-only safety declaration, but 'Get' and the return descriptions imply non-destructive behavior.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structured formatting with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Information is front-loaded with the core function, followed by contextual sections. Zero wasted words; every sentence provides actionable guidance or constraints.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with no output schema, the description comprehensively covers return values (summary vs. detail modes), cost constraints, sibling differentiation, and usage examples. It addresses all gaps left by the missing structured metadata, providing complete context for correct invocation.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds value by explaining the behavioral semantics: omitting arguments triggers summary mode, while passing a slug triggers detailed history. It includes a concrete JSON example showing parameter usage patterns not evident from the schema alone.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Get' and resource 'macroeconomic indicators from FRED', clearly defining the tool's function. It distinguishes itself from the sibling 'veroq_economy_indicator' by explicitly stating when to use the alternative, clarifying the tool's scope within the suite.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section listing specific data types (GDP, CPI, unemployment, fed funds rate). It explicitly directs users to the sibling tool 'veroq_economy_indicator' for single-indicator history, providing clear alternative routing and preventing tool selection errors.
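The argument-dependent behavior described above (no args for a summary, a slug for detailed history) can be sketched as follows. The 'slug' key name and the 'fed_funds' value are assumptions borrowed from the sibling tool's example, not a confirmed contract:

```python
# Sketch of the economy tool's documented dual mode; key name and slug
# value are assumptions for illustration.
def economy_request(slug=None):
    args = {} if slug is None else {"slug": slug}
    mode = "detail" if slug else "summary"
    return args, mode
```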
- Behavior5/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and excels: discloses COST (3 credits), RETURN structure (per-URL with title/domain/word count/text), truncation limits (2000 chars), and paywall handling behavior—critical operational details beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured headers (WHEN TO USE, RETURNS, COST, EXAMPLE, CONSTRAINTS) that front-load key information. Every section earns its place with zero redundancy; the formatting makes the multi-faceted constraints scannable despite the single-parameter simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema and annotations, the description comprehensively specifies return values (title, domain, word count, truncated text), cost constraints, and rate limits. For a single-parameter extraction tool, this provides complete operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% ('Comma-separated URLs to extract (max 5)'), establishing baseline 3. The description adds value via the EXAMPLE section showing actual comma-separated URL formatting and CONSTRAINTS reiterating the limit, providing practical context beyond the schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
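The structured pattern these scores keep rewarding can be sketched as a description string plus a client-side guard for the comma-separated URL limit. The wording below is a hypothetical reconstruction assembled from details quoted in this review (3-credit cost, 2000-character truncation, max 5 URLs), not the server's actual text:

```python
# Hypothetical reconstruction of the structured description pattern the
# rubric rewards; contents are assembled from details quoted in this
# review, not copied from the actual server.
EXTRACT_DESCRIPTION = """\
Extract article content from one or more URLs as clean text.
WHEN TO USE: When you need the full text of a news article or web page for analysis.
RETURNS: Per-URL title, domain, word count, and text truncated to 2000 chars.
COST: 3 credits.
EXAMPLE: { "urls": "https://news.example.com/a,https://blog.example.com/b" }
CONSTRAINTS: Comma-separated URLs to extract (max 5).
"""

def present_sections(description: str) -> list[str]:
    """List which of the rubric's section headers appear in a description."""
    headers = ("WHEN TO USE", "RETURNS", "COST", "EXAMPLE", "CONSTRAINTS")
    return [h for h in headers if h + ":" in description]

def split_urls(urls: str, max_urls: int = 5) -> list[str]:
    """Parse the comma-separated `urls` parameter, enforcing the max-5 limit."""
    parts = [u.strip() for u in urls.split(",") if u.strip()]
    if len(parts) > max_urls:
        raise ValueError(f"max {max_urls} URLs per call, got {len(parts)}")
    return parts
```

A guard like `split_urls` lets an agent reject a six-URL request locally, before any credits are spent on a call the server would refuse.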
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Extract') + resource ('article content from one or more URLs') + output format ('clean text'), clearly distinguishing it from siblings like veroq_web_search (which searches) or veroq_crawl (which likely spiders sites).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines4/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'WHEN TO USE' section stating the exact scenario ('need the full text of a news article or web page for analysis') and notes paywall handling capabilities. Does not explicitly name sibling alternatives (e.g., veroq_web_search) to contrast against, which would be required for a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior4/5Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description discloses critical behavioral traits: cost ('1 credit'), return structure ('Array of briefs with headline, confidence score...'), and implicit read-only nature via 'Get'. Could improve by mentioning rate limits or pagination, but covers essential operational costs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Front-loaded purpose statement with zero filler. Every sentence serves a distinct function (purpose, usage guidance, output contract, cost disclosure, input example).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a listing tool despite missing output schema: explains return values in description, covers cost, provides usage context against 45+ siblings, and includes working example. No gaps given the tool's complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% providing baseline 3. Description adds value via concrete JSON example showing realistic values ('Markets', 10) that clarify expected input formats beyond the schema's basic descriptions, particularly helpful for the 'category' parameter which lacks enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Get' + resource 'verified intelligence briefs' + ordering constraint 'reverse-chronological'. Explicitly distinguishes from sibling tool veroq_search by contrasting 'browsing without a specific search query' vs search functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines5/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'WHEN TO USE' section stating exact context ('browsing recent news without a specific search query') and names the specific alternative tool ('Use veroq_search when you have a topic in mind'), providing clear decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior4/5Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses critical cost behavior ('COST: 2 credits') and implementation detail ('9 sources in parallel'). Documents return structure since no output schema exists. Does not disclose error handling or cache behavior, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). First sentence establishes purpose; subsequent sentences earn their place with specific operational metadata. No filler text despite covering multiple dimensions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a single-parameter tool without output schema. Compensates for missing output schema by detailing return categories. Covers cost (critical for credit-based APIs), differentiates from 40+ siblings, and provides usage example. Nothing essential is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (ticker symbol described), establishing baseline 3. The description adds value by providing a concrete JSON example ('EXAMPLE: { "ticker": "NVDA" }') that demonstrates exact invocation format beyond what the raw schema conveys.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb+resource ('Get the full profile for a ticker') and enumerates exact data categories returned (price, fundamentals, technicals, sentiment, news). Explicitly distinguishes from sibling veroq_ticker_analysis by contrasting 'complete data dump' vs 'interpreted analysis'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines5/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'WHEN TO USE' section stating 'For a complete data dump on a single ticker.' Directly names alternative tool with 'Use veroq_ticker_analysis for an interpreted analysis instead,' providing clear decision criteria between the two options.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior4/5Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses critical behavioral traits: async nature ('Kicks off async generation'), credit cost ('5 credits'), tier requirements ('Deep tier requires paid plan'), and return value structure ('Report ID, ticker, tier, and status'). Lacks only error handling or validation behavior details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clear headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Front-loaded with purpose statement, followed by workflow context, then technical details. No filler text; every sentence provides actionable information for tool selection and invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description fully compensates by documenting return fields and the critical next-step instruction to use veroq_get_report. Includes cost transparency essential for a credit-based API. Complete for an async job-initiation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds value via concrete JSON example showing valid tier values and format. Also clarifies cost implications of tier parameter ('Deep tier requires paid plan') which schema doesn't convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
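The async workflow credited here (kick off generation, then fetch the completed report by ID) can be sketched as a client-side polling loop. `call_tool` is a stand-in for whatever MCP client invocation the agent uses, and the stubbed return shapes follow the fields quoted in this review (report ID, ticker, tier, status); all of it is an illustrative assumption, not the server's documented API:

```python
import time

def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an MCP client call; replace with a real client session.
    Stubbed so the sketch runs: generation returns a pending job,
    retrieval returns completed markdown content."""
    if name == "veroq_generate_report":
        return {"report_id": "rpt_abc123", "ticker": args["ticker"],
                "tier": args.get("tier", "standard"), "status": "pending"}
    if name == "veroq_get_report":
        return {"report_id": args["report_id"], "status": "completed",
                "content": "# Research Report\n..."}
    raise ValueError(f"unknown tool: {name}")

def generate_and_fetch(ticker: str, tier: str = "standard",
                       poll_seconds: float = 0.0, max_polls: int = 10) -> dict:
    """Kick off async report generation, then poll veroq_get_report by ID."""
    job = call_tool("veroq_generate_report", {"ticker": ticker, "tier": tier})
    for _ in range(max_polls):
        report = call_tool("veroq_get_report", {"report_id": job["report_id"]})
        if report["status"] == "completed":
            return report
        time.sleep(poll_seconds)
    raise TimeoutError(f"report {job['report_id']} not ready after {max_polls} polls")
```

The `tier` default of `"standard"` is an assumption; per the description quoted above, the Deep tier requires a paid plan, so an agent should check plan status before requesting it.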
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb ('Generate') + resource ('AI-powered research report') + scope ('for a ticker'). Explicitly distinguishes from sibling tool veroq_ticker_analysis by contrasting 'polished, shareable research report' vs 'instant inline analysis', and references veroq_get_report for retrieval flow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines5/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'WHEN TO USE' section stating the specific use case ('polished, shareable research report') and directly names the alternative tool for different needs (veroq_ticker_analysis). Also clarifies the async workflow requiring veroq_get_report to retrieve results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior4/5Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and effectively communicates the COST ('1 credit') and detailed RETURN structure ('title, ticker, tier, creation date, and markdown content'). It misses error handling details (e.g., behavior if ID doesn't exist), but covers cost and output format which are essential for agent planning.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Uses a highly structured format with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE) that makes the description scannable. Every sentence serves a distinct purpose; there is no redundant or filler content. The core action is front-loaded in the first sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter retrieval tool without an output schema, the description is complete. It compensates for the missing output schema by detailing the full return structure and includes cost information necessary for agent decision-making. Given the tool's simplicity, no additional context is required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value beyond the schema by providing a concrete JSON EXAMPLE showing the expected 'report_id' format ('rpt_abc123'), which helps the agent understand the parameter structure better than the generic schema description alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Retrieve') and resource ('previously generated report'), clearly defining the tool's scope. It explicitly distinguishes itself from the sibling 'veroq_generate_report' by stating it fetches 'completed report content,' establishing the critical workflow relationship between these two tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines5/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'WHEN TO USE' guidance stating it should be used 'After calling veroq_generate_report.' This clearly defines the prerequisite and sequencing logic, preventing the agent from attempting to retrieve reports that haven't been generated yet.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior4/5Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully communicates cost ('3 credits'), return structure (summary, confidence assessment, co-occurrences), and scope (multi-source). Minor gap: does not explicitly confirm read-only nature or idempotency, though implied by 'research' versus mutation verbs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structural organization with clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE). Information is front-loaded with the core purpose in the first sentence. Zero redundant text; every line conveys distinct operational information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description compensates with a detailed RETURNS section enumerating all output components (confidence assessment, entity map with co-occurrences, information gaps). For a 3-parameter tool with 100% schema coverage, this provides complete contextual information for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (baseline 3). The description adds value by providing a concrete JSON example showing realistic parameter values ('query': 'impact of AI regulation...', 'max_sources': 20), which clarifies expected input format and usage patterns beyond the basic schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Run[s] deep multi-source research on a topic' with specific outputs (structured report, entity map, information gaps). It clearly distinguishes itself from the sibling tool veroq_search by contrasting 'deep' research with 'quick lookups', establishing precise scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines5/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section stating 'For thorough investigation of a topic requiring analysis across many sources' and directly names the alternative: 'Use veroq_search for quick lookups instead.' This provides clear positive guidance, negative constraints, and sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior4/5Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description discloses critical behavioral traits: COST (5 credits) for resource management and RETURNS structure (symbol, price, change%, RSI, etc.) despite lack of output schema. Does not disclose rate limits or caching behavior, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured headers (WHEN TO USE, RETURNS, COST, EXAMPLE) for scannability. Zero filler text; every line provides actionable metadata. Example JSON is compact and relevant. Excellent information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 12-parameter screening tool with no annotations and no output schema, the description achieves completeness by documenting return values, credit cost, sibling relationships, and providing a representative usage example. No significant gaps remain for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage establishing baseline 3. Description adds value via EXAMPLE block showing realistic parameter composition (sector + rsi_below + sentiment + limit), demonstrating how filters combine logically. This contextualizes individual parameters beyond isolated schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
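The filter composition the EXAMPLE block is credited with (sector + rsi_below + sentiment + limit) can be illustrated with a hypothetical payload and a local demonstration of how such filters combine conjunctively. Parameter names mirror the ones quoted in this review, but the defaults and matching semantics are assumptions:

```python
# Hypothetical screener payload combining technical, sentiment, and
# fundamental filters, mirroring the composed EXAMPLE the review credits.
payload = {
    "asset_class": "stocks",   # assumed parameter; review says stocks or crypto
    "sector": "Technology",
    "rsi_below": 30,           # technical filter: oversold threshold
    "sentiment": "bullish",
    "limit": 10,
}

def matches(row: dict, filters: dict) -> bool:
    """Apply the composed filters to one candidate row (local illustration)."""
    if "sector" in filters and row["sector"] != filters["sector"]:
        return False
    if "rsi_below" in filters and row["rsi"] >= filters["rsi_below"]:
        return False
    if "sentiment" in filters and row["sentiment"] != filters["sentiment"]:
        return False
    return True

universe = [
    {"symbol": "AAA", "sector": "Technology", "rsi": 24.5, "sentiment": "bullish"},
    {"symbol": "BBB", "sector": "Technology", "rsi": 61.0, "sentiment": "bullish"},
    {"symbol": "CCC", "sector": "Energy",     "rsi": 22.0, "sentiment": "bullish"},
]
hits = [r["symbol"] for r in universe if matches(r, payload)][: payload["limit"]]
```

All filters must pass for a row to match, which is the "filters combine logically" behavior the Parameters rationale describes.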
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Screen' and clear resource scope 'stocks or crypto', followed by mechanism 'combining technical indicators, sentiment, and fundamental filters'. Explicitly differentiates from sibling 'veroq_screener_presets' by positioning it for custom criteria vs pre-built strategies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines5/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'WHEN TO USE' section states exact use case ('find assets matching specific criteria') with concrete examples. Explicitly names sibling alternative 'veroq_screener_presets' for pre-built strategies, creating clear decision boundary between the two tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior4/5Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses cost behavior ('COST: 1 credit') and return structure ('Array of briefs with headline, confidence score...'). It lacks explicit read-only safety declaration, but 'Search' implies non-destructive behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured headers (WHEN TO USE, RETURNS, COST, EXAMPLE) to front-load critical information. Every sentence serves a distinct purpose—purpose statement, usage guidance, output contract, pricing, and usage example—with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates by fully detailing the return structure. It also addresses the 50+ sibling tool context by differentiating from 'veroq_ask', and covers the 6-parameter complexity with a concrete example.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100% (baseline 3), the description adds value through the EXAMPLE block showing realistic parameter usage patterns (query string format, category values, limit usage) that complement the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb (Search), resource (verified intelligence briefs), and mechanism (by keyword or topic). It explicitly distinguishes itself from sibling tool 'veroq_ask' by specifying this is for keyword/topic searches versus natural-language questions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines5/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section that defines the specific scenario (looking for specific news/events/coverage) and directly names the alternative tool 'veroq_ask' for natural-language questions, providing clear guidance on tool selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior4/5Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses cost ('1 credit'), return structure ('Headline suggestions... and entity suggestions'), and input constraints ('Minimum 2 characters'). Does not mention idempotency or rate limits, but covers primary behavioral traits for a read-only suggestion tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structured format with clear headers (WHEN TO USE, RETURNS, COST, EXAMPLE, CONSTRAINTS). Front-loaded purpose statement with zero wasted words. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool without output schema, description is comprehensive. Compensates for missing output schema by detailing return values. Cost and constraint information complete the picture. No gaps given tool complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds value via concrete JSON example ('{ "query": "fed rate" }') and reinforces the minimum length constraint, aiding agent understanding of valid inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
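The sequencing credited here (use veroq_suggest to find terms, then run veroq_search) can be sketched as a two-step helper that also enforces the 2-character minimum quoted above. The `call_tool` stub and the return shapes are illustrative assumptions, not the server's actual responses:

```python
def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an MCP client call; stubbed with illustrative data."""
    if name == "veroq_suggest":
        return {"headlines": ["Fed rate decision looms"],
                "entities": ["Federal Reserve"]}
    if name == "veroq_search":
        return {"briefs": [{"headline": "Fed rate decision looms",
                            "confidence": 0.9}]}
    raise ValueError(f"unknown tool: {name}")

def suggest_then_search(query: str) -> dict:
    """Find search terms via veroq_suggest, then search with the top entity."""
    if len(query) < 2:  # constraint quoted in the review: minimum 2 characters
        raise ValueError("veroq_suggest requires a query of at least 2 characters")
    suggestions = call_tool("veroq_suggest", {"query": query})
    term = (suggestions["entities"] or [query])[0]
    return call_tool("veroq_search", {"query": term})
```

Validating the minimum length client-side mirrors the sequential guidance the review praises: resolve terms first, spend search credits second.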
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Get') + resource ('search autocomplete suggestions') + scope ('headlines and entities'). Explicitly distinguishes from sibling 'veroq_search' in the WHEN TO USE section, clarifying this is for discovery/preliminary term finding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines5/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('To find the right search terms before running veroq_search') and names the alternative tool directly. Clear sequential guidance for the agent's workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior4/5Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description discloses COST (2 credits) and detailed RETURNS structure (signal counts + full indicator values). Could explicitly state read-only nature, though 'Get' verb implies this.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structured format with labeled sections (WHEN TO USE, RETURNS, COST, EXAMPLE). No filler text; every line provides actionable information for tool selection and invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description compensates with detailed RETURNS section explaining signal summaries and indicator values. Includes cost information critical for API resource management. Complete for a 2-parameter read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete parameter descriptions. Description adds value through EXAMPLE block showing JSON usage pattern {'symbol': 'TSLA', 'range': '6mo'}, which illustrates practical invocation beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Get all major technical indicators' with specific examples (RSI, MACD, Bollinger Bands, moving averages) and distinguishes from sibling tool veroq_candles by stating 'Use veroq_candles for raw price data instead.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines5/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'WHEN TO USE' section specifying 'For pre-computed technical analysis' and explicitly names alternative tool (veroq_candles) for raw price data, providing clear selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior4/5Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the 'COST: 2 credits' (critical for API usage) and comprehensively describes the return structure ('Outlook (bullish/bearish/neutral), summary...') in lieu of an output schema. Lacks explicit mention of caching, rate limits, or idempotency, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Uses a clear section-based structure (WHEN TO USE, RETURNS, COST, EXAMPLE) with zero filler text. Every section earns its place by providing actionable metadata. The core purpose is front-loaded in the first sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness5/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description is complete. It compensates for the missing output schema by detailing all return fields, mentions the credit cost (important for this API), and clarifies the relationship to sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by providing a concrete JSON invocation example ('EXAMPLE: { "symbol": "NVDA" }'), which aids the agent in constructing the correct request format beyond the schema's type definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('comprehensive analysis for a ticker'), clearly defining the scope. It explicitly distinguishes itself from the sibling tool 'veroq_full' by contrasting 'interpreted analysis' vs 'raw data', satisfying the sibling differentiation requirement.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section stating the specific use case ('detailed single-ticker analysis with outlook, catalysts, and risks'). It explicitly names the alternative tool ('Use veroq_full for raw data, this for interpreted analysis'), providing clear guidance on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the cost ('Free (0 credits)') and provides a RETURNS section detailing the output fields (price, change, volume, market state). It could be improved by mentioning rate limits or data freshness, but it covers the essential behavioral traits well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses clear section headers (WHEN TO USE, RETURNS, COST, EXAMPLE) making it highly scannable. Every line provides distinct information. Front-loaded with core purpose. No filler text—excellent information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool without output schema, the description is comprehensive. It compensates for missing output schema by documenting return fields. Includes cost information critical for API tools. No gaps given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with parameter descriptions, setting baseline at 3. The description adds value by providing a complete JSON 'EXAMPLE' showing exact invocation format, and reinforces valid inputs by mentioning both 'stock or crypto ticker' in the main description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description states exactly what the tool does: 'Get the live market price for a stock or crypto ticker' using a specific verb and resource. It clearly distinguishes from sibling tool veroq_full by contrasting 'quick price check' with 'full analysis'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section stating 'For a quick price check on any ticker.' Explicitly names the alternative: 'For full analysis, use veroq_full instead.' Also includes cost guidance ('This is free — use it freely'), which informs usage decisions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 5/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the readOnlyHint/openWorldHint annotations, the description discloses cost ('3 credits'), caching behavior ('1 hour/15 min'), methodology ('Checks 200+ verified sources first, falls back to live web'), and detailed return structure (verdict types, 4-factor confidence breakdown, evidence_chain schema, receipt hashing). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses section headers (WHEN TO USE, RETURNS, COST, EXAMPLES, CONSTRAINTS) to organize dense information efficiently. Every section earns its place: RETURNS compensates for missing output schema, EXAMPLES illustrates parameter semantics. Front-loaded with core purpose. No wasted text despite length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description fully documents return values (verdict enum, confidence scoring factors, evidence_chain structure, receipt format) and operational constraints. For a complex verification tool with fallbacks and cost implications, this is comprehensive without being overwhelming.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by repeating the character constraints ('10-1000 characters') and providing qualitative guidance ('Be specific — include names, numbers, dates') in the CONSTRAINTS section, plus concrete EXAMPLES showing optimal claim formulation. Could have elaborated on 'context' parameter values, but otherwise excellent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with specific verbs ('Fact-check') and resources ('claim with full evidence chain, confidence breakdown'), clearly positioning this as the verification-specific tool among the 40+ sibling data tools. The 'TRUST tool' branding further distinguishes it from general search or data retrieval tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section specifying triggers ('After any agent... makes a factual claim'), domains ('earnings, revenue, market movements'), and proactive use cases ('verify assumptions before making recommendations'). It implicitly contrasts with veroq_search or veroq_ask by emphasizing evidence-proving rather than information retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
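The weighting described above can be sketched as a small calculation. This is an illustrative sketch only, assuming the published weights combine as plain weighted averages; the function and dimension names are hypothetical, and Glama's actual implementation may differ:

```python
# Per-tool dimension weights from the published rubric (sum to 1.0).
DIM_WEIGHTS = {
    "purpose": 0.25,       # Purpose Clarity
    "usage": 0.20,         # Usage Guidelines
    "behavior": 0.20,      # Behavioral Transparency
    "parameters": 0.15,    # Parameter Semantics
    "conciseness": 0.10,   # Conciseness & Structure
    "completeness": 0.10,  # Contextual Completeness
}

def tool_tdqs(scores: dict[str, float]) -> float:
    """Weighted 1-5 definition-quality score for a single tool."""
    return sum(DIM_WEIGHTS[dim] * scores[dim] for dim in DIM_WEIGHTS)

def overall_score(tool_scores: list[float], coherence_scores: list[float]) -> float:
    """Overall = 70% definition quality + 30% coherence.

    Definition quality mixes 60% mean with 40% minimum per-tool TDQS,
    so one poorly described tool drags the whole server down.
    Coherence is an equal-weight mean of its four dimensions.
    """
    tdq = 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)
    coherence = sum(coherence_scores) / len(coherence_scores)
    return 0.7 * tdq + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

For example, a server whose tools all score 5.0 but whose coherence dimensions score [2, 3, 2, 4] (as in this report's Disambiguation, Naming Consistency, Tool Count, and Completeness) would get `0.7 * 5.0 + 0.3 * 2.75 = 4.325`, still tier A — which shows why the 40% minimum-TDQS term, not coherence, is usually the sharper lever on the final grade.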