Glama

zwldarren-akshare-one-mcp

Server Details

Provide access to Chinese stock market data including historical prices, real-time data, news, and…

Status: Healthy
Transport: Streamable HTTP
Repository: zwldarren/akshare-one-mcp
GitHub Stars: 145
Server Listing
akshare-one-mcp


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

9 tools
get_balance_sheet (Grade: C)

Get company balance sheet data.

Parameters (JSON Schema)

Name       Required  Description                              Default
symbol     Yes       Stock symbol/ticker (e.g. '000001')      -
recent_n   No        Number of most recent records to return  -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to deliver. It does not clarify whether this retrieves real-time or historical data, what time periods 'recent_n' refers to (quarters/years), authentication requirements, rate limits, or error handling behavior for invalid symbols.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at only five words with zero redundancy or filler. Every word earns its place. However, while efficient, the brevity may be excessive given the complete absence of annotations and output schema, though this is primarily a completeness issue rather than a conciseness flaw.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial data retrieval tool with no output schema and no annotations, the description is insufficient. It fails to specify what balance sheet components are returned (assets, liabilities, equity), the temporal nature of the data, or how it relates to the broader set of financial statement tools available.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for both parameters ('symbol' and 'recent_n'), establishing a baseline score of 3. The description adds no additional semantic context (e.g., explaining that 'recent_n' refers to reporting periods, or symbol format conventions), but is not required to compensate for schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('company balance sheet data'), making the basic purpose unambiguous. However, it lacks explicit differentiation from siblings like 'get_financial_metrics' which might also provide balance sheet-related data, forcing the agent to infer distinct use cases solely from the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'get_cash_flow' or 'get_financial_metrics'. There are no stated prerequisites (e.g., whether the symbol must be pre-validated) or exclusions to help the agent select appropriately among the nine available financial data tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
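Taken together, these critiques point to what a stronger definition would contain. A hedged sketch of a rewritten tool definition follows; the wording, and the claim that 'recent_n' counts reporting periods, are illustrative assumptions, not the server's actual metadata:

```python
import json

# Hypothetical rewrite of the get_balance_sheet definition, folding in
# the behavioral disclosure, completeness, and usage guidance the review
# finds missing. Illustrative only; not taken from the server.
improved_tool = {
    "name": "get_balance_sheet",
    "description": (
        "Get historical balance sheet data (assets, liabilities, equity) "
        "for a Chinese-listed company. Read-only; no side effects. "
        "'recent_n' counts reporting periods. Use get_financial_metrics "
        "instead when you need computed ratios rather than raw statement "
        "line items."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "symbol": {
                "type": "string",
                "description": "Stock symbol/ticker (e.g. '000001')",
            },
            "recent_n": {
                "type": "integer",
                "description": "Number of most recent records to return",
            },
        },
        "required": ["symbol"],
    },
}

print(json.dumps(improved_tool["inputSchema"]["required"]))
```

A definition in this shape would answer the Behavior, Completeness, and Usage Guidelines objections above without materially lengthening the description.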

get_cash_flow (Grade: C)

Get company cash flow statement data.

Parameters (JSON Schema)

Name       Required  Description                              Default
source     No        Data source                              sina
symbol     Yes       Stock symbol/ticker (e.g. '000001')      -
recent_n   No        Number of most recent records to return  -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, yet the description fails to disclose behavioral traits like whether data is quarterly or annual, what 'recent_n' represents (periods vs. entries), rate limits, or the implications of the 'sina' data source. It only implies a read-only operation through the verb 'Get'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundant words. However, it is arguably too minimal to be 'appropriately sized' for the tool's complexity, lacking front-loaded context about the data returned.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should explain what constitutes a cash flow statement (operating, investing, financing activities) and the temporal nature of the data. The current one-sentence description leaves significant gaps in context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents all parameters (symbol, source, recent_n). The description adds no additional parameter context, but the baseline score of 3 applies since the schema carries the full semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb ('Get') and specific resource ('company cash flow statement data'), distinguishing it from siblings like get_balance_sheet and get_income_statement. However, it lacks differentiation from get_financial_metrics and doesn't specify the scope (e.g., historical periods vs. current).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like get_financial_metrics or get_hist_data, nor does it mention prerequisites such as valid ticker formats or data availability.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_financial_metrics (Grade: C)

Get key financial metrics from the three major financial statements.

Parameters (JSON Schema)

Name       Required  Description                              Default
symbol     Yes       Stock symbol/ticker (e.g. '000001')      -
recent_n   No        Number of most recent records to return  -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but fails to specify what 'key metrics' entails (e.g., ratios like ROE/ROA, margins, growth rates), data freshness, source, or error handling for invalid symbols. It mentions 'three major financial statements' implying aggregation but doesn't explain the calculation methodology.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundant or wasteful language. However, extreme brevity contributes to under-specification given the lack of annotations and output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema or annotations, the description inadequately prepares the agent for the response structure. It fails to enumerate the specific financial metrics returned or explain how this consolidated view differs from fetching individual statements separately, which is critical context given the sibling tool set.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both 'symbol' and 'recent_n' adequately documented in the JSON schema. The description adds no additional parameter context (e.g., what constitutes a 'record' for recent_n—quarters, years, or trailing twelve-month periods), warranting the baseline score for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Get) and resource (key financial metrics from three major financial statements). However, it does not explicitly differentiate from sibling tools like get_balance_sheet, get_cash_flow, and get_income_statement, leaving ambiguity about whether this returns calculated ratios/aggregates or raw statement data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the individual statement tools (get_income_statement, get_balance_sheet, get_cash_flow) or other financial data siblings. There are no prerequisites, constraints, or alternative suggestions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_hist_data (Grade: B)

Get historical stock market data. 'eastmoney_direct' support all A,B,H shares

Parameters (JSON Schema)

Name                 Required  Description                              Default
adjust               No        Adjustment type                          none
source               No        Data source                              eastmoney
symbol               Yes       Stock symbol/ticker (e.g. '000001')      -
end_date             No        End date in YYYY-MM-DD format            2030-12-31
interval             No        Time interval                            day
recent_n             No        Number of most recent records to return  -
start_date           No        Start date in YYYY-MM-DD format          1970-01-01
indicators_list      No        Technical indicators to add              -
interval_multiplier  No        Interval multiplier                      -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but only offers minimal information about data source capabilities. It does not explain error handling for invalid symbols, rate limits, data availability periods, or whether the technical indicators are computed server-side or require specific parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief with two concise statements and no extraneous information. The purpose is front-loaded in the first sentence. The second sentence suffers from minor grammatical awkwardness ('support' instead of 'supports') but remains intelligible.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 9 parameters—including technical indicators, multiple data sources, and date ranges—the description is insufficient. With no output schema or annotations to rely on, it should explain the return format, available data fields, and key parameter interactions, which it does not provide.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 100% schema coverage, the baseline is adequate, but the description adds meaningful value by clarifying that eastmoney_direct supports all A/B/H shares, which helps users select the appropriate source parameter. However, it misses opportunities to explain technical concepts like qfq/hfq adjustments or the practical use of interval_multiplier.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the core action ('Get') and resource ('historical stock market data'), distinguishing it from siblings like get_realtime_data and financial statement tools. The mention of A/B/H share support adds valuable scope context. However, it lacks specificity about what data points are actually returned (OHLCV prices, volume, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like get_realtime_data or the financial metrics tools. While it hints that eastmoney_direct supports all A/B/H shares, it fails to explain source selection criteria or how to choose between date ranges versus the recent_n parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
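For reference, a minimal JSON-RPC tools/call request against get_hist_data, using the parameter names from the schema table above, might look like the following sketch. The argument values are made up for demonstration:

```python
import json

# Illustrative MCP tools/call request for get_hist_data. Parameter
# names come from the documented schema; the values are examples only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_hist_data",
        "arguments": {
            "symbol": "000001",            # required
            "source": "eastmoney_direct",  # noted as covering A, B, and H shares
            "interval": "day",
            "start_date": "2024-01-01",    # YYYY-MM-DD per the schema
            "end_date": "2024-06-30",
            "adjust": "none",
        },
    },
}

payload = json.dumps(request)
print(payload)
```

Nothing in the description tells an agent how 'recent_n' interacts with an explicit start_date/end_date range, which is exactly the kind of parameter-interaction gap the review flags.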

get_income_statement (Grade: C)

Get company income statement data.

Parameters (JSON Schema)

Name       Required  Description                              Default
symbol     Yes       Stock symbol/ticker (e.g. '000001')      -
recent_n   No        Number of most recent records to return  -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but fails to specify data frequency (quarterly/annual), time range limitations, data source, or what specific line items (revenue, net income) are included in the response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely brief (5 words) and front-loaded, but undersized given the lack of annotations and output schema. The single sentence conveys core purpose without waste, though additional sentences are needed for completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks critical context for a financial data tool: no mention of reporting periods, currency, response format, or how it differs from get_financial_metrics. With no output schema to reference, the description should explain what data structure is returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the schema fully documenting both 'symbol' (with example) and 'recent_n' (with default). The description adds no parameter-specific context, warranting the baseline score for well-documented schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the basic action (Get) and resource (company income statement data), but provides minimal detail. It distinguishes itself from siblings like get_balance_sheet by naming the specific financial statement type, though it barely meets the threshold for adequacy.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains no guidance on when to use this tool versus siblings such as get_financial_metrics or get_balance_sheet, nor any prerequisites like requiring a valid stock symbol format. Offers no 'when-not-to-use' constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_inner_trade_data (Grade: C)

Get company insider trading data.

Parameters (JSON Schema)

Name       Required  Description                              Default
symbol     Yes       Stock symbol/ticker (e.g. '000001')      -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides minimal information. It doesn't indicate whether this is real-time or historical data, what the return format looks like, rate limits, or whether the data requires specific permissions to access.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is not verbose and avoids repetition, but it is arguably underspecified rather than elegantly concise. It front-loads the verb and resource clearly with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with complete schema coverage, the description is minimally adequate. However, given the lack of annotations and output schema, it misses the opportunity to describe what 'insider trading data' actually includes (transactions, filings, summaries) or behavioral characteristics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameter 'symbol' is fully documented in the schema itself. The description adds no additional parameter context, but the baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the operation ('Get') and resource ('company insider trading data'), distinguishing it from sibling financial tools like get_balance_sheet or get_news_data. However, it doesn't clarify if 'inner' trade data differs from standard insider trading data or specify the data scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like get_hist_data or get_realtime_data, or what prerequisites might be needed. The description states what it does but not when to choose it over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_news_data (Grade: C)

Get stock-related news data.

Parameters (JSON Schema)

Name       Required  Description                              Default
symbol     Yes       Stock symbol/ticker (e.g. '000001')      -
recent_n   No        Number of most recent records to return  -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails entirely. It does not indicate whether news is real-time or delayed, the time window covered, what sources are included, rate limits, or the structure/format of returned news items. The parameter 'recent_n' implies recency filtering, but no context is given for how old 'recent' data might be.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely terse at four words. While it avoids verbosity, it is under-specified rather than appropriately concise. Given the lack of annotations and output schema, additional sentences explaining data scope and return format would have earned their place; as written, the single sentence provides minimal value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema provided, the description should explain what data structure is returned (e.g., headlines, timestamps, sources, article text). It completely omits this critical context. Additionally, it fails to clarify the temporal scope of available news data or any data freshness guarantees, leaving significant gaps for a data retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both 'symbol' and 'recent_n' adequately documented in the JSON schema. The description adds no semantic meaning beyond the schema (e.g., no guidance on valid symbol formats or typical values for recent_n), so it meets the baseline expectation when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic function (get news data) and domain (stock-related), but 'stock-related' is vague given all siblings are financial/stock tools. It minimally distinguishes this as news content versus the financial statements and price data offered by siblings, but lacks specificity about whether this returns headlines, articles, or sentiment data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like get_hist_data or get_realtime_data. It does not mention prerequisites, though the schema indicates a symbol is required. There is no discussion of when news data is more appropriate than historical price data or financial metrics.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
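As a concrete illustration of the recency filtering the review flags as under-specified, a call capping results via 'recent_n' might be issued as follows. The argument values are illustrative; how far back "recent" reaches is not documented by the server:

```python
import json

# Illustrative tools/call for get_news_data. 'recent_n' caps the number
# of returned items, but the description does not say what age window
# those items may span or which news sources feed them.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "get_news_data",
        "arguments": {"symbol": "000001", "recent_n": 10},
    },
}

print(json.dumps(request["params"]["arguments"], sort_keys=True))
```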

get_realtime_data (Grade: A)

Get real-time stock market data. 'eastmoney_direct' support all A,B,H shares

Parameters (JSON Schema)

Name       Required  Description                              Default
source     No        Data source                              eastmoney_direct
symbol     No        Stock symbol/ticker (e.g. '000001')      -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It fails to indicate the safety profile (read-only vs. destructive), rate limits, authentication requirements, or response format. The only behavioral hint is the A/B/H share coverage note.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise two-sentence structure. Front-loaded with the core purpose ('Get real-time stock market data'), followed immediately by implementation-specific detail. Zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 2-parameter tool with full schema coverage, but gaps remain due to absent annotations and output schema. Lacks description of return structure (price, volume, bid/ask?) and safety characteristics that annotations would normally provide.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3), but description adds valuable domain context: clarifying that 'eastmoney_direct' supports all A, B, and H shares helps users select appropriate source values for different market segments.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Get') and resource ('real-time stock market data'). The term 'real-time' effectively distinguishes this tool from sibling 'get_hist_data' (historical) and fundamental data tools like 'get_balance_sheet'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit when-to-use guidance or alternative recommendations. While it hints at source selection by noting 'eastmoney_direct' supports A/B/H shares, it fails to contrast with 'get_hist_data' for temporal data needs or clarify when to use 'xueqiu' vs 'eastmoney'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
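The "use X instead of Y when Z" guidance the review repeatedly asks for can be made concrete. A hypothetical routing helper follows; the tool names exist on this server, but the selection rule itself is an assumption an agent framework might encode, not documented behavior:

```python
def pick_price_tool(need_history: bool) -> str:
    """Choose between the server's two price-data tools.

    Assumed split: get_realtime_data returns a current snapshot,
    while get_hist_data returns a date-ranged series.
    """
    return "get_hist_data" if need_history else "get_realtime_data"

print(pick_price_tool(True))   # get_hist_data
print(pick_price_tool(False))  # get_realtime_data
```

Guidance of this kind, stated in the descriptions themselves, would spare agents from inferring the split purely from the tool names.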

get_time_info (Grade: B)

Get current time with ISO format, timestamp, and the last trading day.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions return components (ISO format, timestamp, last trading day) but fails to disclose idempotency, timezone handling, rate limits, or whether 'last trading day' refers to a specific exchange/market.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with no waste. Information is front-loaded with specific formats (ISO, timestamp) and the key trading calendar feature mentioned immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description appropriately mentions the three return components (ISO time, timestamp, last trading day). However, given the financial context implied by siblings, it should specify which market's trading calendar is used or if it requires symbol-specific context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, establishing baseline 4 per evaluation rules. No parameter description is necessary or expected given the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves current time information in ISO format and timestamp, plus trading calendar data (last trading day). This effectively distinguishes it from financial data siblings like get_balance_sheet. However, it lacks specificity about which market's trading calendar or timezone context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives, or prerequisites for invocation. While the financial data focus of sibling tools makes this tool's utility role somewhat implied, there is no discussion of when time/trading day data is needed versus other operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
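Because get_time_info takes no parameters, its call payload is the degenerate case. The response shape sketched below is a guess from the description (ISO time, timestamp, last trading day); the field names are assumptions, since the server publishes no output schema:

```python
import json

# tools/call with an empty arguments object (the tool takes none).
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_time_info", "arguments": {}},
}

# Hypothetical response fields inferred from the description;
# the actual key names and timezone handling are undocumented.
example_response = {
    "iso_format": "2025-01-06T09:30:00+08:00",  # assumed field name
    "timestamp": 1736127000,                    # assumed field name
    "last_trading_day": "2025-01-03",           # assumed field name
}

print(json.dumps(request["params"]["arguments"]))  # {}
```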
