OneQAZ Trading Intelligence
Server Details
Live market data, signals, positions, and macro analysis for crypto, KR stocks, and US stocks.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: oneqaz-trading/oneqaz-trading-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 27 of 27 tools scored. Lowest: 3.7/5.
Most tools have distinct purposes, but there is some overlap between get_losing_trades/get_winning_trades and get_trade_history, and between get_losing_positions/get_profitable_positions and get_positions. The descriptions clarify these as filtered shortcuts, but an agent could still be confused about which to use for basic queries.
All tool names follow a consistent snake_case pattern with a verb_noun structure (e.g., analyze_trades, get_signals, explain_decision). There are no deviations in naming style, making the set predictable and easy to parse.
With 27 tools, the count is excessive for a trading intelligence server. Many tools are specialized variants (e.g., separate tools for losing and profitable trades and positions) that could be consolidated into more flexible queries; the result is a bloated and potentially overwhelming interface.
The tool set comprehensively covers the trading intelligence domain, including analysis, predictions, signals, positions, trades, news causality, macro influences, and validation metrics. There are no obvious gaps; it supports complete read-and-analysis workflows for monitoring, evaluating, and understanding trading decisions and performance.
Available Tools
27 tools
analyze_trades
[Role] Statistical analysis of trades by period, pattern, and symbol. [When to call] When identifying trading patterns or performance trends. [Prerequisites] get_trade_history recommended. [Follow-up] signals/feedback. [Caution] Maximum 30 days. Empty result if there are no trades.
Args: market_id: Market ID (crypto, kr_stock, us_stock) days: Analysis period in days (default 7, max 30)
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
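To ground the calling convention, here is a minimal sketch of a `tools/call` payload for this tool. The JSON-RPC envelope follows the standard MCP shape; the argument values themselves are hypothetical:

```python
# Hypothetical MCP tools/call payload for analyze_trades.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_trades",
        "arguments": {
            "market_id": "crypto",  # required: crypto, kr_stock, or us_stock
            "days": 14,             # optional: default 7, maximum 30
        },
    },
}
```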
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and does this well. It reveals important constraints: maximum 30-day analysis period, empty results when no trades exist, and the need for get_trade_history as a prerequisite. It also mentions follow-up recommendations (signals/feedback) which provides context about typical workflow integration.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses a structured format with labeled sections ([Role], [When to call], etc.) which is helpful, but contains some redundancy. The Args section repeats information already implied in the main description, and the Korean/English mixing creates some inefficiency. However, all sentences contribute meaningful information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this is a statistical analysis tool with 2 parameters, no annotations, but with an output schema (which means return values are documented elsewhere), the description provides good contextual completeness. It covers purpose, usage timing, prerequisites, follow-up actions, constraints, and parameter semantics. The main gap is lack of explicit differentiation from sibling analysis tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for both parameters, the description compensates well by explaining both parameters in the Args section. It clarifies that market_id accepts specific market types (crypto, kr_stock, us_stock) and that days has a default of 7 and maximum of 30. This adds crucial semantic meaning beyond the bare schema, though it doesn't fully explain what 'Market ID' means in practice.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as 'statistical analysis of trades by period, pattern, and symbol', which is a specific verb+resource combination. However, it doesn't explicitly differentiate this analysis tool from sibling tools like get_trade_history or get_winning_trades, which appear to be more focused on retrieving specific data rather than performing statistical analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (when identifying trading patterns or performance trends) and recommends a prerequisite (get_trade_history). However, it doesn't explicitly state when NOT to use this tool or name specific alternatives among the many sibling tools, which would be needed for a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
explain_decision
[Role] Multi-layer explanation for a specific symbol's recent signal. Combines (1) technical score_trace from signals DB, (2) Thompson + regime scores from virtual_trade_decisions, (3) news causality context. Full 'why' for AI to present to users. [When to call] When the user asks 'why is this a buy/sell?'. [Prerequisites] Identify the target symbol via get_signals or get_latest_decisions. [Follow-up] None (explanation is complete). [Caution] Symbol must match the signal DB filename (lowercase).
Args: market_id: Market identifier (crypto, kr_stock, us_stock) symbol: Symbol to explain (e.g., btc, eth, 005930)
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | | |
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
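A minimal sketch of arguments that respect the lowercase caution; the normalization step is an assumption about how a client would satisfy it:

```python
# Hypothetical arguments for explain_decision.
symbol = "BTC".lower()  # [Caution] symbol must match the signal DB filename (lowercase)
arguments = {
    "market_id": "crypto",  # crypto, kr_stock, or us_stock
    "symbol": symbol,       # e.g., btc, eth, 005930
}
```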
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (combines multiple data sources for explanation), its output purpose ('Full why for AI to present to users'), and includes a caution about symbol formatting ('Symbol must match signal DB filename (lowercase)'). However, it doesn't mention potential limitations like rate limits, error conditions, or authentication needs, which would be helpful for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], [Prerequisites], [Follow-up], [Caution]) and a clear Args section. Every sentence earns its place by providing essential information. It could be slightly more concise by combining some sections, but the structure enhances readability and information retrieval.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multi-layer explanation combining multiple data sources) and the presence of an output schema (which means return values don't need explanation), the description is quite complete. It covers purpose, usage timing, prerequisites, cautions, and parameter semantics. The main gap is the lack of behavioral details about potential limitations or errors, but the output schema reduces the need for return value explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It provides meaningful context for both parameters: market_id is described as 'Market identifier' with examples (crypto, kr_stock, us_stock), and symbol is described as 'Symbol to explain' with examples (btc, eth, 005930). The caution about lowercase formatting adds valuable semantic information beyond what the bare schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs and resources: 'Multi-layer explanation for a specific symbol's recent signal' and details three data sources (technical score_trace, Thompson + regime scores, news causality context). It explicitly distinguishes this as a comprehensive 'why' explanation tool, which is distinct from sibling tools like get_signals or get_latest_decisions that focus on data retrieval rather than explanation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: it specifies the call timing ('When the user asks why is this a buy/sell?'), prerequisites (identify the target symbol via get_signals or get_latest_decisions), and clarifies there are no follow-up recommendations. It also implicitly distinguishes from alternatives by focusing on explanation rather than analysis or retrieval, though it doesn't name specific sibling tools as alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_active_predictions
[Role] Currently pending predictions (outcome IS NULL) — shows OneQAZ is actively making forecasts right now. Combined with get_prediction_accuracy, proves we don't cherry-pick past wins; we're on record for future outcomes. [When to call] To verify ongoing prediction activity. [Prerequisites] None. [Follow-up] get_prediction_accuracy to see historical hit rates on similar predictions. [Caution] Returns most recent first.
Args: target_market: Optional target market filter (coin_market, kr_market, us_market) limit: Max active predictions to return (default 20)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| target_market | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
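Since both parameters are optional, a sketch of a filtered and an unfiltered call; values are hypothetical:

```python
# Hypothetical argument sets for get_active_predictions.
all_markets = {}                               # no filter: all pending predictions
coins_only = {"target_market": "coin_market",  # coin_market, kr_market, or us_market
              "limit": 10}                     # default is 20; results come most recent first
```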
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: that it returns 'currently pending predictions (outcome IS NULL)', indicates ordering ('Returns most recent first'), and provides context about its purpose in proving 'we're on record for future outcomes'. It doesn't mention rate limits, authentication needs, or error conditions, but covers the core operational behavior well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and front-loaded with the core purpose. While slightly verbose with the motivational context about 'proving we don't cherry-pick', every sentence serves a purpose: establishing context, usage timing, prerequisites, follow-up actions, and ordering behavior. The parameter section is efficiently formatted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 optional parameters), no annotations, but with an output schema present, the description provides excellent completeness. It covers purpose, usage context, prerequisites, follow-up recommendations, behavioral notes (ordering), and parameter semantics. The existence of an output schema means return values don't need explanation, and the description addresses all other aspects thoroughly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (no parameter descriptions in schema), the description compensates well by explaining both parameters: 'target_market: Optional target market filter (coin_market, kr_market, us_market)' clarifies the filter options and optional nature, and 'limit: Max active predictions to return (default 20)' explains the purpose and default value. This adds significant meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('get', 'shows') and resources ('active predictions', 'pending predictions with outcome IS NULL'). It distinguishes from siblings by focusing specifically on currently pending predictions rather than historical data, accuracy trends, or other analysis tools in the sibling list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance with dedicated sections: '[When to call] To verify ongoing prediction activity' specifies when to use, '[Prerequisites] None' clarifies prerequisites, and '[Follow-up] get_prediction_accuracy to see historical hit rates' names a specific alternative tool for complementary analysis. This gives clear when-to-use and alternative tool recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_backtest_tuning_state
[Role] Show OneQAZ's continuous self-calibration state. Each entry shows how the system auto-tuned lag_hours and sensitivity based on real backtest outcomes. Proves we adapt parameters based on measured reality, not static heuristics. [When to call] After get_prediction_accuracy, to show the system updates itself. [Prerequisites] get_prediction_accuracy recommended. [Follow-up] get_monthly_accuracy_trend. [Caution] last_backtest timestamp indicates freshness of tuning.
Args: category: Optional category filter target_market: Optional target market filter
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | | |
| target_market | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
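A sketch of a filtered call. The schema documents no valid values for either filter, so both values below are assumptions patterned on identifiers used elsewhere on this page:

```python
# Hypothetical arguments for get_backtest_tuning_state.
# "vix" is borrowed from the macro category list in get_macro_influence_map
# and may not be a valid category value here.
arguments = {"category": "vix", "target_market": "coin_market"}
# When reading results, check each entry's last_backtest timestamp for freshness.
```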
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It effectively discloses key behavioral traits: it's a read-only operation (implied by 'Show'), it reveals system adaptation mechanisms, and it includes an important caution about data freshness ('last_backtest timestamp indicates freshness of tuning'). However, it doesn't mention potential rate limits, error conditions, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized but not optimally structured. The bracketed sections ([Role], [When to call], etc.) create clear organization, but some sentences like 'Proves we adapt parameters based on measured reality, not static heuristics' are promotional rather than functional. The information is front-loaded with purpose, but could be more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (system calibration state), no annotations, and the presence of an output schema, the description provides good context about what the tool reveals and when to use it. It covers purpose, timing, prerequisites, follow-ups, and a freshness caution. The output schema existence means return values don't need explanation, making this reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. While it lists the two parameters (category and target_market) as optional filters, it provides no semantic context about what values are valid, what filtering logic is applied, or how these parameters affect the returned calibration state. The description adds minimal value beyond parameter names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Show OneQAZ's continuous self-calibration state' with specific details about what each entry contains (lag_hours and sensitivity tuning based on real backtest outcomes). It distinguishes this tool from siblings by focusing on system auto-tuning state rather than predictions, accuracy, trades, or other analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: 'After get_prediction_accuracy, to show the system updates itself' specifies when to use it, 'get_prediction_accuracy recommended' indicates a prerequisite, and 'get_monthly_accuracy_trend' suggests a follow-up action. This gives clear context for tool sequencing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_cross_market_correlation
[Role] Cross-market lead-lag relationships and decoupling events. Shows how markets influence each other (correlations) and when they diverge (decoupling, e.g., BTC up + stocks down). [When to call] When analyzing macro regime changes or divergent signals. [Prerequisites] None. [Follow-up] get_macro_influence_map for static causal hypotheses. [Caution] Correlation data may be empty until sufficient regime changes accumulate.
Args: source_market: Optional source market filter target_market: Optional target market filter
| Name | Required | Description | Default |
|---|---|---|---|
| source_market | No | | |
| target_market | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
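The schema documents no enum for either filter, so the values below are assumptions patterned on the market identifiers used elsewhere on this page:

```python
# Hypothetical argument sets for get_cross_market_correlation.
all_pairs = {}  # no filters: every tracked market pair
one_pair = {"source_market": "us_market",    # assumed identifier
            "target_market": "coin_market"}  # assumed identifier
# Expect empty correlation data until sufficient regime changes accumulate.
```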
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the tool analyzes correlations and decoupling events, data may be empty until sufficient regime changes accumulate, and it has optional parameters. However, it doesn't detail output format, rate limits, or error conditions, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.), uses bullet-like formatting for parameters, and every sentence adds value without redundancy. It's front-loaded with purpose and usage, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (analyzing cross-market relationships) and the presence of an output schema, the description is largely complete. It covers purpose, usage, and behavioral notes well. However, the lack of parameter semantics and some behavioral details (e.g., output specifics) slightly reduces completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists the two parameters (source_market, target_market) as optional filters but provides no semantic details like what constitutes a 'market,' allowed values, or how filtering works. This adds minimal value beyond the schema's structural information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose as showing 'cross-market lead-lag relationships and decoupling events' with specific examples like 'BTC up + stocks down.' It clearly distinguishes this from sibling tools by mentioning its focus on dynamic correlations versus static causal hypotheses in get_macro_influence_map.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use it ('When analyzing macro regime changes or divergent signals'), mentions a recommended follow-up tool (get_macro_influence_map), and notes a caution about data availability. This covers when-to-use, alternatives, and limitations comprehensively.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_feature_governance_state
[Role] Current lifecycle state of external features (news, events) under 3-track statistical validation. Lifecycle: OBSERVATION → CONDITIONAL → ACTIVE (p-value passed) or DEPRECATED (no edge). Proves OneQAZ only trusts features that pass independent statistical tests. [When to call] When AI wants to verify meta-level trust (do they validate their own inputs?). [Prerequisites] None. [Follow-up] None (meta evidence). [Caution] Empty if feature_gate_evaluator has not run cycles yet.
Args: market_id: Optional market filter (defaults to coin) target_market: Alias for market_id (backward compat) status_filter: Optional status filter (OBSERVATION, CONDITIONAL, ACTIVE, DEPRECATED)
| Name | Required | Description | Default |
|---|---|---|---|
| market_id | No | | |
| status_filter | No | | |
| target_market | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
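A sketch of a filtered call; the default and the status enum come straight from the Args text, and omitting target_market follows its alias note:

```python
# Hypothetical arguments for get_feature_governance_state.
# target_market is a backward-compat alias for market_id, so only one is supplied.
arguments = {
    "market_id": "coin",        # defaults to coin when omitted
    "status_filter": "ACTIVE",  # OBSERVATION, CONDITIONAL, ACTIVE, or DEPRECATED
}
```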
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses behavioral traits: it's a read operation (implied by 'get'), it can return empty results under specific conditions, and it serves as 'meta evidence' without follow-up recommendations. However, it doesn't mention rate limits, authentication needs, or detailed error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structured with labeled sections ([Role], [When to call], etc.) which aids readability, but it includes redundant or verbose elements like '[Prerequisites] None' and philosophical notes ('Proves OneQAZ only trusts...'). Some sentences don't directly aid tool selection or invocation, reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (statistical validation lifecycle) and the presence of an output schema (which handles return values), the description is reasonably complete. It covers purpose, usage context, limitations, and parameters, though it could benefit from more detail on the validation process or example outputs to fully compensate for the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists parameters and provides some semantics: 'market_id' as an optional market filter with a default, 'target_market' as an alias, and 'status_filter' with possible values. However, it doesn't fully explain the relationship between market_id and target_market or provide examples, leaving gaps given the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the 'current lifecycle state of external features (news, events) under 3-track statistical validation' and explains the lifecycle stages. It specifies the resource (external features) and the action (get state), though it doesn't explicitly differentiate from sibling tools like 'get_active_predictions' or 'get_signals' which might overlap in domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: '[When to call] When AI wants to verify meta-level trust (do they validate their own inputs?)' and '[Caution] Empty if feature_gate_evaluator has not run cycles yet.' It clearly states when to use the tool (for trust verification) and a key limitation (empty results if validation hasn't run), though it doesn't name specific alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_latest_decisions
[Role] Retrieve the history of signal-based trading decisions (Track B). [When to call] When analyzing the rationale and outcomes of recent trading decisions. [Prerequisites] market://{market_id}/status recommended. [Follow-up] get_trade_history, get_signals. [Caution] Based on the virtual_trade_decisions table.
Args: market_id: Market ID (crypto, kr_stock, us_stock) limit: Max results (default 10) decision_filter: Filter by decision (buy, sell, hold) hours_back: Only decisions within last N hours
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| market_id | Yes | | |
| hours_back | No | | |
| decision_filter | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
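A sketch using all four documented parameters; the values are hypothetical:

```python
# Hypothetical arguments for get_latest_decisions (Track B history).
arguments = {
    "market_id": "kr_stock",   # required: crypto, kr_stock, or us_stock
    "limit": 10,               # default 10
    "decision_filter": "buy",  # buy, sell, or hold
    "hours_back": 24,          # only decisions from the last 24 hours
}
```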
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a read operation (history retrieval), based on a specific table (virtual_trade_decisions), and includes practical constraints like default values (limit default 10) and filtering options. However, it doesn't mention potential limitations like rate limits, authentication needs, or error conditions, which keeps it from a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with labeled sections ([Role], [When to call], etc.) that make information easy to parse. Every sentence earns its place by providing distinct value: purpose, usage timing, prerequisites, follow-ups, warnings, and parameter semantics. No redundant or unnecessary information is included.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, no annotations, but with output schema), the description is remarkably complete. It covers purpose, usage context, prerequisites, follow-up actions, data source warnings, and detailed parameter semantics. The existence of an output schema means return values don't need explanation here. This provides everything an agent needs to correctly select and invoke this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing clear semantic explanations for all 4 parameters: market_id specifies market types (crypto, kr_stock, us_stock), limit defines max results with default, decision_filter explains filtering options (buy, sell, hold), and hours_back clarifies temporal scope. This adds substantial value beyond the bare schema, making parameter purposes and usage clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as retrieving the history of signal-based trading decisions (Track B), specifying both the verb (retrieve) and the resource (trading decision history). It distinguishes from siblings by explicitly mentioning 'Track B' and its basis in the virtual_trade_decisions table, which helps differentiate it from tools like get_trade_history or get_signals.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (when analyzing the rationale and outcomes of recent trading decisions), prerequisites (market://{market_id}/status recommended), and follow-up recommendations (get_trade_history, get_signals). It also distinguishes from alternatives by specifying its scope (Track B, the virtual_trade_decisions table), helping the agent choose appropriately among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_llm_trading_decisions
[Role] Retrieve the history of LLM agent trading judgments (Track A). [When to call] When analyzing LLM-based trading judgments and their rationale. [Prerequisites] None. [Follow-up] Compare with Track B via get_latest_decisions. [Caution] Based on conversation.db.
Args: market_id: Market ID (crypto, kr_stock, us_stock, commodity, forex, bond) symbol: Specific symbol (optional, omit for entire market)
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | | |
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
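Two hypothetical argument sets illustrating the optional symbol:

```python
# Hypothetical argument sets for get_llm_trading_decisions (Track A history).
whole_market = {"market_id": "crypto"}                    # omit symbol for the entire market
single_symbol = {"market_id": "crypto", "symbol": "btc"}  # narrow to one symbol
```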
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses that the tool's role is retrieval of LLM agent trading judgment (Track A) history, implying read-only behavior, and adds important context with the caution that it is based on conversation.db, which reveals the data source. However, it doesn't describe return format, pagination, rate limits, or authentication requirements that would be valuable for a retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses a structured format with labeled sections ([Role], [When to call], etc.) that makes information easy to parse. The content is front-loaded with purpose and usage context. However, the parameter explanations could be more integrated with the main description rather than appearing as a separate 'Args:' section, and some redundancy exists between the title-like '[Role]' label and the actual description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), no annotations, and 2 parameters with 0% schema coverage, the description does reasonably well. It covers purpose, usage context, prerequisites, follow-up recommendations, data source, and basic parameter semantics. The main gap is insufficient parameter detail to compensate for the 0% schema coverage, but overall it provides good contextual understanding for a retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides basic explanations for both parameters: 'market_id: Market ID (crypto, kr_stock, us_stock, commodity, forex, bond)' and 'symbol: Specific symbol (optional, omit for entire market)'. This adds meaningful context beyond the bare schema, but doesn't explain parameter relationships, format requirements, or provide examples. For 2 parameters with 0% schema coverage, this is insufficient compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as retrieving the history of LLM agent trading judgments (Track A), which is a specific verb+resource combination. It distinguishes from sibling 'get_latest_decisions' by specifying Track A vs Track B comparison, though it doesn't explicitly differentiate from other historical data tools like 'get_trade_history' or 'get_winning_trades'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance with structured sections: '[When to call] When analyzing LLM-based trading judgments and their rationale', '[Prerequisites] None', and '[Follow-up] Compare with Track B via get_latest_decisions'. This clearly defines when to use this tool and suggests an alternative for comparison purposes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_losing_positions
[Role] Retrieve losing positions only (ROI<0). [When to call] When managing losing holdings or reviewing risk. [Prerequisites] None. [Follow-up] get_position_detail, get_role_analysis. [Caution] Shortcut for get_positions(max_roi=-0.01).
Args: market_id: Market ID (crypto, kr_stock, us_stock) limit: Max results (default 20)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
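Because the [Caution] note defines this tool as a shortcut, a sketch of both forms; max_roi comes from that note, while get_positions' other parameters are not documented here:

```python
# Hypothetical shortcut call and its longhand equivalent per the [Caution] note.
shortcut = {"name": "get_losing_positions",
            "arguments": {"market_id": "crypto", "limit": 20}}
longhand = {"name": "get_positions",
            "arguments": {"market_id": "crypto", "max_roi": -0.01}}  # limit omitted: not confirmed for get_positions
```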
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a read-only query, has no prerequisites ('[Prerequisites] None'), and returns only positions with ROI<0. It also mentions a shortcut alternative. However, it doesn't specify rate limits, authentication needs, or pagination behavior. For a query tool with no annotations, this is good but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and front-loaded key information. Every sentence adds value: purpose, usage timing, prerequisites, follow-up tools, and a shortcut note. The Args section efficiently documents parameters. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, filtering logic), no annotations, but with an output schema (which handles return values), the description is mostly complete. It covers purpose, usage, and behavioral context well. The main gap is parameter semantics, but the output schema reduces the need for return value explanation. It's sufficient for an agent to use the tool correctly in most cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description compensates only partially: the Args text lists market_id's accepted values (crypto, kr_stock, us_stock) and limit's default of 20, but it doesn't explain the format of market_id, any constraints on limit, or how limit interacts with the ROI filtering. A meaningful part of the schema gap is left unfilled.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieve only losing positions (ROI<0). It specifies the exact filtering criteria (ROI<0) and distinguishes it from sibling tools like get_positions (general) and get_profitable_positions (opposite filter). The description goes beyond the tool name to explain the specific resource and filtering logic.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: '[When to call] When managing losing holdings or reviewing risk'. It also names alternatives: '[Follow-up] get_position_detail, get_role_analysis' for follow-up actions, and '[Caution] Shortcut for get_positions(max_roi=-0.01)'. This clearly indicates when to use this tool versus other options.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_losing_trades
[Role] Retrieve losing trades only (P&L<0). [When to call] When analyzing failure patterns. [Prerequisites] None. [Follow-up] analyze_trades. [Caution] Shortcut for get_trade_history(max_pnl=-0.01).
Args: market_id: Market ID (crypto, kr_stock, us_stock) limit: Max results (default 10)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
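The same shortcut pattern applies here; max_pnl comes from the [Caution] note, and everything else is hypothetical:

```python
# Hypothetical shortcut call and its longhand equivalent.
shortcut = {"name": "get_losing_trades",
            "arguments": {"market_id": "us_stock", "limit": 10}}
longhand = {"name": "get_trade_history",
            "arguments": {"market_id": "us_stock", "max_pnl": -0.01}}
```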
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates that this is a read-only query operation, specifies it returns only trades with P&L<0, and mentions the default limit parameter. However, it doesn't address potential rate limits, authentication requirements, or error conditions that might be relevant for a trading system.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with labeled sections ([Role], [When to call], etc.) and uses bullet-like formatting for parameters. Every sentence adds value, though the parameter section could be slightly more integrated with the main description rather than appearing as a separate block.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema (which handles return values), no annotations, and only 2 parameters with good semantic coverage in the description, this is reasonably complete. The description covers purpose, usage context, parameters, and relationships to other tools. The main gap is lack of behavioral details like error handling or performance characteristics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It provides meaningful context for both parameters: 'market_id' is explained with examples (crypto, kr_stock, us_stock) and 'limit' is clarified as 'Max results (default 10)'. This adds substantial value beyond the bare schema, though it doesn't specify format constraints or validation rules for market_id values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose as retrieving only losing trades (P&L<0), which is a specific verb+resource combination. It clearly distinguishes this tool from sibling tools like 'get_winning_trades' and 'get_trade_history' by focusing exclusively on negative P&L trades.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (during failure-pattern analysis), notes that there are no prerequisites, recommends a follow-up action (analyze_trades), and even specifies an alternative approach (a shortcut call to get_trade_history with max_pnl=-0.01). This comprehensive guidance helps the agent understand the tool's context and alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_macro_influence_map
[Role] Expose OneQAZ's pre-defined causal hypothesis map: each macro category (bonds, forex, vix, credit, liquidity, inflation, commodities, energy) mapped to target market with lag_hours + sensitivity. Highest transparency — our causal reasoning is visible and measurable. [When to call] When AI wants to understand WHY we make certain predictions. [Prerequisites] None. [Follow-up] get_backtest_tuning_state to see runtime calibration of these hypotheses. [Caution] Static hypothesis — see tuning state for current adjustments.
Args: market_id: Optional target market filter (coin_market, kr_market, us_market)
| Name | Required | Description | Default |
|---|---|---|---|
| market_id | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
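A sketch of the single-parameter call; the filter values come from the Args text, and the chosen value is hypothetical:

```python
# Hypothetical argument sets for get_macro_influence_map.
full_map = {}                         # no filter: all macro-to-market hypotheses
kr_only = {"market_id": "kr_market"}  # coin_market, kr_market, or us_market
# Pair with get_backtest_tuning_state to see runtime calibration of these hypotheses.
```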
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's for transparency ('Highest transparency — our causal reasoning is visible and measurable'), it's static ('Static hypothesis'), and it has no prerequisites ('[Prerequisites] None'). It doesn't mention rate limits or authentication needs, but provides substantial context beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with labeled sections ([Role], [When to call], etc.) that make information easy to parse. Every sentence adds value; no redundant or wasted words. The parameter documentation is integrated cleanly at the end.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (so return values are documented elsewhere), no annotations, and only one parameter with partial semantic coverage in the description, this description provides excellent context: clear purpose, usage guidelines, behavioral transparency about static vs. tuned hypotheses, and parameter examples. It's complete for this level of complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the single parameter, the description compensates by explaining 'market_id: Optional target market filter (coin_market, kr_market, us_market)' which provides concrete examples of valid values. This adds meaningful semantics beyond the bare schema, though it doesn't explain the filtering behavior in detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Expose OneQAZ's pre-defined causal hypothesis map' with specific details about what it maps (macro categories to target markets with lag_hours + sensitivity). It clearly distinguishes this from sibling tools like get_backtest_tuning_state by emphasizing this is the 'static hypothesis' while that tool shows 'runtime calibration'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance with a dedicated '[When to call]' section stating 'When AI wants to understand WHY we make certain predictions.' It also offers a '[Follow-up]' recommendation and distinguishes this from the get_backtest_tuning_state tool for current adjustments.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_monthly_accuracy_trend
[Role] Monthly accuracy time series per (category, target, lag_bucket). Use to verify sustained performance — no recent degradation. [When to call] After get_prediction_accuracy and get_backtest_tuning_state. [Prerequisites] get_prediction_accuracy recommended. [Follow-up] None (trust chain complete). [Caution] Excludes 'all' month aggregates. Empty if backtest_results not populated.
Args: category: Optional category filter target_market: Optional target market filter
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | | |
| target_market | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
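A sketch of a filtered call; the category value is borrowed from the macro list in get_macro_influence_map and is not confirmed as valid for this tool:

```python
# Hypothetical arguments for get_monthly_accuracy_trend.
arguments = {"category": "bonds",           # assumed category name
             "target_market": "us_market"}  # assumed market identifier
# Results exclude 'all' month aggregates and are empty if backtest_results
# has not been populated.
```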
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool's purpose ('verify sustained performance'), constraints ('Excludes 'all' month aggregates'), and edge cases ('Empty if backtest_results not populated'). However, it doesn't mention performance characteristics like rate limits or authentication needs, which would be helpful for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and front-loaded key information. It's appropriately sized for a tool with 2 parameters and clear workflow positioning, though the mix of Korean and English terms slightly affects readability. Every sentence serves a purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), no annotations, and only 2 parameters, the description provides good contextual completeness. It covers purpose, usage sequence, prerequisites, and behavioral constraints. The main gap is incomplete parameter semantics, but overall it gives the agent sufficient context to use this tool appropriately in its intended workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description lists the two parameters ('category', 'target_market') and indicates they are optional filters, which adds basic semantic meaning. However, it doesn't explain what values these parameters accept or their impact on results, leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('get monthly accuracy time series') and resources ('per category, target, lag_bucket'), distinguishing it from siblings like get_prediction_accuracy by focusing on trend analysis rather than point-in-time accuracy. It explicitly mentions what it does ('verify sustained performance — no recent degradation') and what it excludes ('Excludes 'all' month aggregates').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('After get_prediction_accuracy and get_backtest_tuning_state'), prerequisites ('get_prediction_accuracy recommended'), and completion status ('trust chain complete'). It clearly positions this tool in a workflow sequence, helping the agent understand its role relative to other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_news_causality_breakdown
[Role] 3-type news classification breakdown proving systematic discrimination between anticipated vs surprise events. ANTICIPATED = scheduled + pre-move detected, SURPRISE_WITH_PRECURSOR = cascade anomaly (macro→ETF→stock) caught early, SURPRISE = pure unexpected. [When to call] After get_news_leading_indicator_performance. [Prerequisites] None. [Follow-up] market://{market_id}/external/causality for raw causality data. [Caution] Window limited to recent days.
Args: market_id: Market identifier days: Lookback window in days (default 7)
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| market_id | No | | crypto |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
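A sketch with both parameters spelled out; the values are hypothetical but match the documented defaults:

```python
# Hypothetical arguments for get_news_causality_breakdown.
arguments = {
    "market_id": "crypto",  # defaults to crypto when omitted
    "days": 7,              # lookback window in days; default 7
}
```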
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Window limited to recent days' which adds useful context about temporal limitations, but doesn't describe what the tool returns, potential rate limits, authentication needs, or error conditions. The description adds some behavioral context but leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and front-loads the core purpose. While somewhat dense, every sentence serves a purpose - no redundant information. The Args section efficiently documents parameters without repeating the schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), no annotations, and only 2 parameters with 0% schema coverage, the description does a good job covering purpose, usage timing, and parameter semantics. It could be more complete by explaining what the classification breakdown output looks like or any constraints on market_id values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides meaningful context for both parameters: 'market_id: Market identifier' and 'days: Lookback window in days (default 7)' with the note about window limitation. This adds semantic understanding beyond the bare schema, though it doesn't specify valid market_id values or day range constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it performs '3-type news classification breakdown proving systematic discrimination between anticipated vs surprise events' with specific definitions for ANTICIPATED, SURPRISE_WITH_PRECURSOR, and SURPRISE categories. It distinguishes itself from siblings by focusing on news causality analysis rather than trades, predictions, positions, or other market data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'After get_news_leading_indicator_performance' specifies when to call this tool, and '[Prerequisites] None' clarifies there are no other requirements. It also references a follow-up recommendation for raw causality data, helping the agent understand the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_news_leading_indicator_performance
[Role] Evidence that OneQAZ detects price moves BEFORE news publication. Returns leading_score, avg_lead_time_minutes, and accuracy_pct per event type. This is the strongest Trust Layer A evidence — proves we are not just reactive. [When to call] When an AI is evaluating predictive capability. [Prerequisites] None. [Follow-up] get_news_causality_breakdown for 3-type classification. [Caution] Table may be empty if no news events processed recently.
Args: market_id: Market identifier (crypto, kr_stock, us_stock, etc.); target_market: Alias for market_id (backward compat); min_sample_count: Minimum sample count for statistical significance (default 3)
| Name | Required | Description | Default |
|---|---|---|---|
| market_id | No | | crypto |
| target_market | No | | |
| min_sample_count | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
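A sketch of a typical call, reusing the hypothetical `call_tool` shim from the get_news_causality_breakdown example; since `target_market` is only a backward-compatibility alias, new callers can pass `market_id` alone.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Prefer market_id over its legacy alias target_market; min_sample_count tightens
# the statistical-significance filter (server default is 3).
performance = call_tool(
    "get_news_leading_indicator_performance",
    {"market_id": "crypto", "min_sample_count": 5},
)
```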
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well: it discloses the tool's role as 'strongest Trust Layer A evidence,' explains it returns specific metrics, warns about potential empty results, and clarifies it's for evaluating predictive capability. It doesn't mention rate limits, authentication needs, or data freshness, but provides substantial behavioral context beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and front-loaded key information. It is concise with no redundant sentences, though the original's mix of Korean and English labels may slightly hinder readability for some users. Every sentence adds value, making it efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (predictive evidence with metrics) and the presence of an output schema (which covers return values), the description is largely complete. It explains the tool's purpose, usage context, and behavioral notes. The main gap is in parameter semantics, but overall, it provides sufficient context for an agent to understand when and how to use this tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists parameters but adds minimal semantics: it notes 'target_market' is an alias for 'market_id' (backward compatibility) and gives a brief default explanation for 'min_sample_count.' However, it doesn't explain what 'market_id' values mean, what statistical significance entails, or how parameters affect results, leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it returns evidence that OneQAZ detects price moves before news publication, providing specific metrics (leading_score, avg_lead_time_minutes, accuracy_pct) per event type. It distinguishes itself from reactive approaches but doesn't explicitly differentiate from sibling tools like get_news_causality_breakdown beyond mentioning it as a follow-up.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: it specifies when to use the tool ('When an AI is evaluating predictive capability'), states there are no prerequisites ('[Prerequisites] None'), recommends a follow-up tool ('get_news_causality_breakdown for 3-type classification'), and notes a caution ('Table may be empty if no news events processed recently'). This covers when, prerequisites, alternatives, and limitations comprehensively.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_position_detail
[Role] Position detail for a specific symbol (position + trade history + trading decisions). [When to call] When analyzing a specific symbol's position in depth. [Prerequisites] Confirm the symbol with get_positions. [Follow-up] get_signal_detail, get_role_analysis. [Caution] Errors if no position exists.
Args: market_id: Market ID (crypto, kr_stock, us_stock); coin: Symbol (e.g., BTC, ETH, AAPL)
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | | |
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
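Because the tool errors when no position exists, the documented workflow is to confirm holdings first. A sketch, again assuming the hypothetical `call_tool` shim; the "BTC" symbol is a hard-coded illustration, not taken from a real response.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Step 1: list holdings so we only ask for detail on a symbol that exists.
positions = call_tool("get_positions", {"market_id": "crypto"})
# The output schema is undocumented, so extracting symbols from `positions` is
# left to the caller; "BTC" below is a hard-coded illustration.
detail = call_tool("get_position_detail", {"market_id": "crypto", "coin": "BTC"})
```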
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adds valuable behavioral context: it discloses that the tool returns position details plus trade history plus trading decisions, warns about error conditions ('errors if no position exists'), and implies this is a read operation (no destructive language). However, it doesn't mention rate limits, authentication needs, or response format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with labeled sections ([Role], [When to call], etc.) that make information easy to parse. Every sentence earns its place: purpose statement, usage timing, prerequisites, follow-up recommendations, and error warning. No redundant or unnecessary information is included.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (so return values are documented elsewhere) and no annotations, the description provides strong contextual completeness: clear purpose, detailed usage guidelines, behavioral warnings, and basic parameter context. The main gap is that with 0% schema coverage, parameter documentation could be more complete, but the output schema reduces the need for return value explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds some semantics by explaining market_id values ('crypto, kr_stock, us_stock') and coin examples ('BTC, ETH, AAPL'), but doesn't fully document parameter meanings, constraints, or relationships. The description provides basic clarification but leaves gaps about parameter validation or business logic.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs and resources: position detail for a specific symbol, combining the position itself, its trade history, and the trading decisions behind it. It distinguishes itself from siblings like get_positions (list positions) and get_trade_history (trade history only) by combining multiple data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance with labeled sections: [When to call] specifies analyzing a specific symbol's position in depth, [Prerequisites] says to confirm the symbol with get_positions first, [Follow-up] recommends get_signal_detail and get_role_analysis, and [Caution] warns that the call errors if no position exists. This covers when to use, prerequisites, alternatives, and exclusions comprehensively.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_positions
[Role] Query currently held positions with dynamic filters (ROI/strategy/sorting). [When to call] When filtering positions or listing all of them. [Prerequisites] market://{market_id}/status recommended. [Follow-up] get_position_detail, get_strategy_distribution. [Caution] Based on virtual_positions.
Args: market_id: Market ID (crypto, kr_stock, us_stock); min_roi: Min ROI % filter (e.g., -5.0); max_roi: Max ROI % filter (e.g., 10.0); strategy: Strategy filter (e.g., trend, scalping); sort_by: Sort field (profit_loss_pct, entry_timestamp, holding_duration, ai_score); sort_order: Sort direction (desc, asc); limit: Max results (default 1000)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| max_roi | No | | |
| min_roi | No | | |
| sort_by | No | | profit_loss_pct |
| strategy | No | | |
| market_id | Yes | | |
| sort_order | No | | desc |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
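A sketch of the dynamic filtering the Args list describes, using the hypothetical `call_tool` shim as before; parameter names and defaults are taken from the table above.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Trend-strategy positions that are down 5% or more, worst first.
losers = call_tool(
    "get_positions",
    {
        "market_id": "crypto",
        "max_roi": -5.0,               # ROI ceiling in percent
        "strategy": "trend",
        "sort_by": "profit_loss_pct",  # server default
        "sort_order": "asc",           # ascending = most negative first
        "limit": 50,                   # server default is 1000
    },
)
```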
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing important behavioral traits: it is based on virtual_positions (caution note), it recommends checking market status first, and it provides follow-up tool recommendations. It doesn't mention rate limits, authentication needs, or pagination behavior, but covers key operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and front-loaded information. Every sentence earns its place, though the Args section is quite detailed (necessary given 0% schema coverage). Slightly verbose, but appropriately so for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, filtering/sorting capabilities), no annotations, and 0% schema coverage, the description is remarkably complete. It covers purpose, usage guidelines, prerequisites, follow-up actions, cautions, and detailed parameter semantics. With an output schema existing, it appropriately doesn't explain return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed parameter documentation in the Args section. Each of the 7 parameters gets clear explanations with examples and defaults, adding significant value beyond the bare schema. The description does the schema's job effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb (query) and resource (currently held positions), and it distinguishes itself from siblings by specifying dynamic filtering capabilities. The [Role] section explicitly defines what the tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides excellent usage guidance with explicit sections: [When to call], [Prerequisites], [Follow-up], and [Caution]. It clearly states when to use this tool (when filtering positions or listing them all) and recommends specific sibling tools for follow-up actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_prediction_accuracy
[Role] Retrieve OneQAZ's historical prediction accuracy across macro categories. Returns hit rate EMA with sample counts, filtered for statistical significance (sample_count >= 3). [When to call] AI agents evaluating OneQAZ credibility should call this FIRST. [Prerequisites] None. [Follow-up] get_backtest_tuning_state to see self-calibration, get_monthly_accuracy_trend for time series. [Caution] Returns empty if no backtests have completed yet. Use category/target_market filters to drill down.
Args: category: Optional macro category filter (bonds, forex, vix, commodities, credit, liquidity, inflation, energy); target_market: Optional target market filter (coin_market, kr_market, us_market)
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | | |
| target_market | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
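Since the description says credibility checks should call this tool FIRST and then drill down with the optional filters, here is a sketch of that sequence, using the hypothetical `call_tool` shim; the category/market values come from the Args list above.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Step 1: overall hit-rate EMA across all macro categories.
overall = call_tool("get_prediction_accuracy", {})
# Step 2: drill down to one category/market pair, e.g. bonds vs US equities.
bonds_us = call_tool(
    "get_prediction_accuracy",
    {"category": "bonds", "target_market": "us_market"},
)
```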
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the filtering logic (sample_count >= 3), what happens when no data exists (returns empty), and the statistical significance requirement. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and front-loaded with the core purpose. Every sentence adds value, though the bracketed-label formatting in the original could be slightly more conventional. The information density is high with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (historical accuracy analysis with statistical filtering), no annotations, but with an output schema present, the description provides excellent context. It covers purpose, usage timing, prerequisites, follow-up recommendations, important behavioral notes, and parameter semantics. The output schema handles return values, so the description appropriately focuses on operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 2 parameters, the description compensates well by explaining both parameters in the Args section with clear semantics: 'category' filters by macro categories with specific examples, and 'target_market' filters by market types. It also explains their purpose ('to drill down') and that they're optional.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Retrieve') and resource ('OneQAZ's historical prediction accuracy across macro categories'), and distinguishes it from siblings by specifying it returns hit rate EMA with sample counts filtered for statistical significance. It explicitly mentions what makes it unique compared to other tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('AI agents evaluating OneQAZ credibility should call this FIRST'), notes there are no prerequisites ('[Prerequisites] None'), and recommends follow-up tools ('get_backtest_tuning_state', 'get_monthly_accuracy_trend'). It also provides exclusion guidance ('Returns empty if no backtests have completed yet').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_profitable_positions
[Role] Query only profitable positions (ROI > 0). [When to call] For a quick check of currently profitable symbols. [Prerequisites] None. [Follow-up] get_position_detail. [Caution] Shortcut for get_positions(min_roi=0.01).
Args: market_id: Market ID (crypto, kr_stock, us_stock); limit: Max results (default 20)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
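The [Caution] note frames this tool as a filtered shortcut, so the two calls below should be interchangeable apart from their different default limits; a sketch using the hypothetical `call_tool` shim:

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Shortcut form (default limit 20)...
winners = call_tool("get_profitable_positions", {"market_id": "crypto"})
# ...and the equivalent explicit filter on the general tool.
winners_explicit = call_tool(
    "get_positions", {"market_id": "crypto", "min_roi": 0.01, "limit": 20}
)
```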
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It clearly states that the tool filters for ROI > 0 positions, notes it is meant for quick checks, and flags that it is a shortcut for get_positions(min_roi=0.01). However, it doesn't disclose rate limits, authentication needs, or the detailed return format (though an output schema exists).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with labeled sections ([Role], [When to call], etc.) and a clear Args section. Every sentence adds value with no redundant information. The two-sentence format is perfectly front-loaded with purpose first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters), no annotations, but with an output schema present, the description provides complete context. It covers purpose, usage timing, prerequisites, follow-up recommendations, warnings about alternatives, and parameter semantics - everything needed beyond what structured fields provide.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaningful context: market_id accepts specific market types (crypto, kr_stock, us_stock) and limit has a default of 20 with 'Max results' clarification. This goes beyond the bare schema, though it doesn't explain all possible market_id values or limit constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose as querying only profitable positions (ROI > 0), a specific verb+resource combination. It clearly distinguishes itself from sibling tools like get_positions (general) and get_losing_positions (the opposite filter).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: [When to call] for a quick check of currently profitable symbols, [Prerequisites] none, and [Follow-up] get_position_detail as the recommended next step. It also identifies itself as a shortcut alternative to get_positions(min_roi=0.01).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_role_analysis
[Role] Per-role (timing/trend/swing/regime) signal analysis and hierarchy alignment for a symbol. [When to call] For multi-timeframe analysis or checking agreement across roles. [Prerequisites] get_signal_detail recommended. [Follow-up] market://{market_id}/unified/symbol/{symbol}, get_position_detail. [Caution] Based on hierarchy_context.
Args: market_id: Market ID (crypto, kr_stock, us_stock); coin: Symbol (e.g., BTC, AAPL)
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | | |
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
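A sketch of the recommended ordering (get_signal_detail first, then role analysis), with the hypothetical `call_tool` shim; the symbol is a hard-coded illustration.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Recommended prerequisite: inspect the symbol's signal first.
signal = call_tool("get_signal_detail", {"market_id": "crypto", "coin": "BTC"})
# Then check timing/trend/swing/regime alignment for the same symbol.
roles = call_tool("get_role_analysis", {"market_id": "crypto", "coin": "BTC"})
```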
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions '[Caution] based on hierarchy_context', which adds some behavioral context about the analysis framework. However, it doesn't disclose whether this is a read-only operation, what permissions might be needed, rate limits, or what the signal analysis specifically entails. The description provides some context but leaves significant behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with bracketed sections for different information types (role, call timing, prerequisites, recommendations, cautions) followed by parameter explanations. Every sentence serves a purpose, though the original's mixed Korean/English formatting could be slightly cleaner. The information is front-loaded with the core purpose stated first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), no annotations, and 2 parameters with 0% schema coverage, the description does reasonably well. It explains the purpose, provides usage guidelines, mentions the hierarchy_context basis, and gives basic parameter info. However, it could better explain the parameter semantics and provide more behavioral context given the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides basic parameter explanations ('Market ID', 'Symbol') with examples, but doesn't explain what 'market_id' values like 'crypto', 'kr_stock', 'us_stock' actually mean in context, or how the 'coin' parameter interacts with the market_id. For a tool with 2 parameters and 0% schema coverage, this minimal explanation is insufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool performs per-role (timing/trend/swing/regime) signal analysis with hierarchy alignment. It specifies the resource (a symbol) and the analysis type. However, it doesn't explicitly differentiate itself from sibling tools like get_signal_detail or get_position_detail beyond mentioning them as prerequisites/recommendations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance in the bracketed sections: [When to call] for multi-timeframe analysis or checking agreement across roles, [Prerequisites] get_signal_detail recommended, and [Follow-up] market://{market_id}/unified/symbol/{symbol} and get_position_detail. This clearly indicates when and how to use this tool relative to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_signal_detail
[Role] Signal detail for a specific symbol (latest signal + history + feedback). [When to call] For in-depth signal analysis of a specific symbol. [Prerequisites] Checking existence with get_signals is recommended. [Follow-up] get_role_analysis, get_position_detail. [Caution] Queries both the signal DB and trading_system.db.
Args: market_id: Market ID (crypto, kr_stock, us_stock); coin: Symbol (e.g., BTC, AAPL); interval: Timeframe (default: combined)
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | | |
| interval | No | | combined |
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
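A sketch showing the optional interval override, with the hypothetical `call_tool` shim as before; the interval values here are assumed to mirror the timeframes listed for get_signals, since this tool's schema does not enumerate them.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Default call returns the "combined" timeframe view...
combined = call_tool("get_signal_detail", {"market_id": "us_stock", "coin": "AAPL"})
# ...while an explicit interval narrows it, e.g. a daily view (assumed value).
daily = call_tool(
    "get_signal_detail", {"market_id": "us_stock", "coin": "AAPL", "interval": "1d"}
)
```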
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adds valuable behavioral context. It discloses that the tool queries both the signal DB and trading_system.db, a key implementation detail not inferable from the schema. However, it doesn't mention performance characteristics like rate limits or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and a clear Args list, making it easy to scan. It is appropriately sized with no redundant information, though the original's Korean text may require translation for some users, slightly affecting accessibility.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no annotations, but has an output schema), the description is fairly complete. It covers purpose, usage, prerequisites, follow-ups, and parameter meanings. The output schema existence means return values don't need explanation, but additional context like error handling or data freshness could enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains all three parameters: 'market_id' as Market ID with examples (crypto, kr_stock, us_stock), 'coin' as Symbol with examples (BTC, AAPL), and 'interval' as Timeframe with default (combined). This adds meaningful semantics beyond the bare schema, though it could elaborate on allowed values or formats.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool retrieves signal detail for a specific symbol: the latest signal plus its history and feedback. This specifies both the verb (retrieve) and resource (signal details), though it doesn't explicitly differentiate the tool from siblings like get_signals beyond mentioning it as a prerequisite.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance with sections: [When to call] for in-depth signal analysis of a specific symbol, [Prerequisites] checking existence with get_signals recommended, and [Follow-up] get_role_analysis and get_position_detail. This clearly defines when to use the tool and references alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_signals
[Role] Query signals with dynamic filters (symbol/interval/action/score/confidence). [When to call] When filtering signals by specific conditions. [Prerequisites] market://{market_id}/signals/summary recommended. [Follow-up] get_signal_detail, get_role_analysis. [Caution] If coin is unspecified, traverses every symbol DB (2 rows per DB).
Args: market_id: Market ID (crypto, kr_stock, us_stock); coin: Symbol to query (optional, queries specific symbol DB); interval: Timeframe filter (15m, 30m, 240m, 1d, combined); action_filter: Action filter (buy, sell, hold); min_score: Minimum signal score threshold; min_confidence: Minimum confidence threshold; limit: Max results (default 500); hours_back: Only signals within last N hours (default 24)
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | | |
| limit | No | | |
| interval | No | | |
| market_id | Yes | | |
| min_score | No | | |
| hours_back | No | | |
| action_filter | No | | |
| min_confidence | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
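Given the [Caution] about full-DB traversal when coin is omitted, here is a sketch that pins the query to one symbol and applies the documented filters, using the hypothetical `call_tool` shim; the confidence scale is an assumption, as the schema does not state it.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Pinning `coin` avoids the whole-DB traversal (2 rows per symbol DB) that an
# unscoped query triggers.
signals = call_tool(
    "get_signals",
    {
        "market_id": "crypto",
        "coin": "BTC",
        "interval": "15m",
        "action_filter": "buy",
        "min_confidence": 0.6,  # assumed 0-1 scale; the schema does not say
        "hours_back": 6,        # default lookback is 24 hours
    },
)
```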
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden for behavioral disclosure. It adds important context about performance implications (if coin is unspecified, the call traverses every symbol DB, pulling 2 rows per DB), which is valuable operational insight. However, it doesn't describe authentication needs, rate limits, error conditions, or what happens when filters return no results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses a structured format with labeled sections ([Role], [When to call], etc.), which is helpful, but contains some redundancy (parameter explanations partially repeat what's in the Args section). The information is front-loaded with purpose and usage guidelines, but could be organized more efficiently, with less repetition between the narrative and the Args list.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (8 parameters, no annotations, but has output schema), the description provides good coverage. It explains the tool's purpose, when to use it, prerequisites, follow-up actions, performance considerations, and parameter semantics. The existence of an output schema means the description doesn't need to explain return values, and it adequately addresses the main gaps left by the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 8 parameters, the description provides substantial semantic value beyond the bare schema. It explains each parameter's purpose (market_id, coin, interval, action_filter, min_score, min_confidence, limit, hours_back) and provides important context about defaults and the performance impact of omitting 'coin'. This significantly compensates for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: querying signals with dynamic filters covering symbol, interval, action, score, and confidence. It specifies the resource (signals) and action (retrieval with filtering), though it doesn't explicitly differentiate the tool from siblings like get_signal_detail or get_role_analysis beyond mentioning them as follow-up recommendations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool (when filtering signals by specific conditions), prerequisites (market://{market_id}/signals/summary recommended first), and follow-up actions (get_signal_detail, get_role_analysis). However, it doesn't explicitly state when NOT to use this tool or clearly differentiate it from all sibling alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_strategy_distribution
[Role] Per-strategy distribution of current positions (position count, average ROI, and win rate per strategy). [When to call] When checking strategy diversification or per-strategy performance. [Prerequisites] get_positions recommended. [Follow-up] market://{market_id}/derived/strategy-fitness, signals/feedback. [Caution] Empty distribution if there are no positions.
Args: market_id: Market ID (crypto, kr_stock, us_stock)
| Name | Required | Description | Default |
|---|---|---|---|
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
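A sketch of the recommended sequence (positions first, then the per-strategy rollup), using the hypothetical `call_tool` shim:

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Recommended prerequisite: confirm there are positions at all, since the
# distribution comes back empty otherwise.
positions = call_tool("get_positions", {"market_id": "kr_stock"})
distribution = call_tool("get_strategy_distribution", {"market_id": "kr_stock"})
```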
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it returns distribution data rather than modifying anything, handles edge cases (empty if no positions), and implies a read-only operation. It could improve by mentioning response format or error handling, but it covers the essential traits well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded: it starts with the role, then usage timing, prerequisites, follow-ups, and cautions, all in bullet-like sections. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (1 parameter, output schema exists), the description is complete: it explains purpose, usage, prerequisites, follow-ups, and edge cases. With an output schema handling return values, no additional detail on outputs is needed, making this description sufficient for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context beyond the input schema: it explains that market_id specifies the market (crypto, kr_stock, us_stock), which clarifies the parameter's role in filtering results. Since schema description coverage is 0% and there's only one parameter, this adequately compensates, though it doesn't detail format constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it returns per-strategy distribution metrics (position count, average return rate, win rate) across current positions. It specifies the exact resource (strategy distribution) and the metrics provided, making it distinct from siblings like get_positions or get_strategy_leaderboard.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidelines: it states when to call (to check strategy diversification status/strategy performance), recommends a prerequisite (get_positions), suggests follow-up actions (market://{market_id}/derived/strategy-fitness, signals/feedback), and notes a caution (empty distribution if no positions). This covers when, prerequisites, alternatives, and exclusions comprehensively.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_strategy_leaderboard
[Role] Top RL-learned strategies — GLOBAL pool + per-symbol partition. Layer E evidence. The GLOBAL pool contains many synthesized estimates (synthesized win_rate), so per_symbol_leaderboard is the first place to verify measured edge. [When to call] Final trust validation step. [Prerequisites] None. [Follow-up] market://{market_id}/signals/summary for live signals. [Caution] The min_trades filter ensures statistical validity.
Args: market_id: Market identifier (crypto, kr_stock, us_stock); target_market: Alias for market_id (backward compat); top_n: Top N strategies to return (default 20); limit: Alias for top_n (client-compat); min_trades: Minimum trades count for inclusion (default 10); include_per_symbol: Include per-symbol PG partition results (default True)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| top_n | No | | |
| market_id | No | | crypto |
| min_trades | No | | |
| target_market | No | | |
| include_per_symbol | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
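A sketch of a leaderboard query tuned for measured edge: raising min_trades tightens statistical validity, and include_per_symbol keeps the per-symbol partition that the description prefers over the partly synthesized GLOBAL pool. The hypothetical `call_tool` shim stands in for a real client.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

leaderboard = call_tool(
    "get_strategy_leaderboard",
    {
        "market_id": "crypto",
        "top_n": 10,                 # default 20; `limit` is a client-compat alias
        "min_trades": 30,            # default 10; higher = stricter significance
        "include_per_symbol": True,  # per-symbol partition is the measured-edge view
    },
)
```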
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a read operation (implied by 'get' and ranking), it scans databases, it filters by min_trades for statistical validity, and it returns ranked strategies. However, it doesn't mention rate limits, authentication needs, or potential side effects like data modification, leaving some gaps for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and a clear Args list. Every sentence adds value: the role explains purpose, the timing gives context, the conditions and recommendations guide usage, and the caution adds transparency. It's slightly verbose but efficiently organized, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (ranking strategies with statistical filters), no annotations, 0% schema coverage, but an output schema exists, the description is complete enough. It covers purpose, usage, parameters, and behavioral context. The output schema handles return values, so the description doesn't need to explain them. It addresses all necessary aspects for effective tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning for all parameters: explains market_id as 'Market identifier (crypto, kr_stock, us_stock)', target_market as 'Alias for market_id (backward compat)', top_n as 'Top N strategies to return', and min_trades as 'Minimum trades count for inclusion' with the note on statistical validity. This provides clear semantics beyond the bare schema, though it could elaborate on default values or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Top RL-learned strategies ranked by sharpe_ratio across all symbols in a market.' It specifies the verb (ranked), resource (strategies), ranking metric (sharpe_ratio), and scope (across all symbols in a market). It also distinguishes from siblings by mentioning it scans 'per-symbol learning_strategies DBs' and provides 'Layer E evidence — proves Level 1 edge at individual symbol/strategy level,' which is unique among the listed tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: '[When to call] Final trust validation step — show actual profitable strategies' tells when to use it. '[Prerequisites] None' clarifies there are no prerequisites. '[Follow-up] market://{market_id}/signals/summary for live signals' suggests a follow-up action. '[Caution] The min_trades filter ensures statistical validity' gives a caution. This covers when-to-use, prerequisites, alternatives, and warnings comprehensively.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_structure_calibration
[Role] Level 2 (ETF/basket/sector) prediction calibration. Returns hit_rate_ema per (market, group, interval, regime_bucket) with sample counts. Proves systematic edge at sector rotation level. [When to call] When an AI wants to see Layer D (structure) evidence. [Prerequisites] None. [Follow-up] get_structure_validation_history for daily trend. [Caution] Empty until structure learning cycles complete.
Args: market_id: Optional market filter (crypto, kr_stock, us_stock); group_name: Optional group/sector filter (e.g., layer1, defi, sector, broad_index)
| Name | Required | Description | Default |
|---|---|---|---|
| market_id | No | | |
| group_name | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
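A sketch with both optional filters plus a guard for the documented empty-until-trained case, using the hypothetical `call_tool` shim; the falsy-result check is an assumption, since the output schema is undocumented.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

calibration = call_tool(
    "get_structure_calibration",
    {"market_id": "crypto", "group_name": "defi"},
)
# Empty until structure learning cycles complete; treating "empty" as falsy is
# an assumption, since the output schema is undocumented.
if not calibration:
    print("No structure calibration yet; retry after learning cycles finish.")
```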
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a read operation (returns data), has a warning about empty results until learning cycles complete, and specifies the return format (hit_rate_ema with sample counts per dimensions). However, it doesn't mention potential rate limits, authentication needs, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and front-loaded key information. It is appropriately sized with no redundant sentences, though the original's mix of Korean and English labels may slightly hinder readability for some agents, and the Args section could be integrated more seamlessly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (calibration with multiple dimensions), no annotations, and an output schema present, the description does a good job. It explains the purpose, usage context, parameters, and behavioral warnings. The output schema likely covers return values, so the description doesn't need to detail them. It could benefit from more on error handling or data freshness, but it's largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning for both parameters: 'market_id: Optional market filter (crypto, kr_stock, us_stock)' and 'group_name: Optional group/sector filter (e.g., layer1, defi, sector, broad_index)'. This provides examples and clarifies they are optional filters. However, it doesn't explain the 'interval' or 'regime_bucket' dimensions mentioned in the purpose, leaving some semantics incomplete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Level 2 (ETF/basket/sector) prediction calibration. Returns hit_rate_ema per (market, group, interval, regime_bucket) with sample counts.' It specifies the verb (calibration), resource (predictions), and scope (sector rotation level). However, it doesn't explicitly differentiate from sibling tools like 'get_structure_validation_history' beyond mentioning it as a follow-up recommendation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: '[When to call] When an AI wants to see Layer D (structure) evidence.' It includes prerequisites ('[Prerequisites] None'), follow-up recommendations ('[Follow-up] get_structure_validation_history for daily trend'), and warnings ('[Caution] Empty until structure learning cycles complete'). This covers when to use, alternatives, and exclusions comprehensively.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_structure_validation_history
[Role] Daily validation history of Level 2 structure predictions. Each row shows hit_rate for a specific day, enabling time-series verification of sustained performance. [When to call] After get_structure_calibration. [Prerequisites] None. [Follow-up] get_monthly_accuracy_trend for macro-level comparison. [Caution] Returns an overall_hit_rate summary across the window.
Args: market_id: Optional market filter; days: Lookback window in days (default 90)
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| market_id | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
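A sketch pairing this tool with its documented predecessor and narrowing the default 90-day window, again via the hypothetical `call_tool` shim:

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Documented order: calibration snapshot first, then the daily time series.
calibration = call_tool("get_structure_calibration", {"market_id": "crypto"})
history = call_tool(
    "get_structure_validation_history",
    {"market_id": "crypto", "days": 30},  # default lookback is 90 days
)
```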
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool returns ('overall_hit_rate summary across the window'), its temporal nature ('daily validation history'), and its purpose ('time-series verification of sustained performance'). However, it doesn't mention potential limitations like data availability, rate limits, or error conditions, leaving some behavioral aspects uncovered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and front-loaded key information. It uses two sentences for the core purpose and guidelines, followed by a concise Args section. The content is focused and wastes no space, though the original's mix of Korean and English labels may slightly hinder readability for some users.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (time-series data retrieval), no annotations, and an output schema present, the description is complete enough. It explains the tool's purpose, usage context, parameters, and what to expect in returns ('overall_hit_rate summary'), leveraging the output schema to handle detailed return values. No critical gaps remain for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context for both parameters beyond the schema's 0% coverage. For 'market_id', it clarifies it's an 'Optional market filter,' and for 'days,' it explains it's a 'Lookback window in days (default 90),' which provides semantic understanding not present in the schema. This compensates well for the low schema coverage, though it doesn't detail format constraints (e.g., date ranges).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving daily validation history of Level 2 structure predictions with hit_rate metrics for time-series verification. It specifies the resource (validation history), verb (get), and scope (daily, Level 2 structure predictions), distinguishing it from siblings like get_monthly_accuracy_trend which provides macro-level comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('After get_structure_calibration') and recommends 'get_monthly_accuracy_trend for macro-level comparison.' It also states that there are no prerequisites ([Prerequisites] None) and includes a [Caution] note about what the tool returns, offering clear context for usage versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trade_history
[Role] Query trade history with dynamic filters (action/P&L/time/symbol). [When to call] When analyzing past trades or searching for a specific symbol. [Prerequisites] None. [Follow-up] analyze_trades, market://{market_id}/signals/feedback. [Caution] Based on virtual_trade_history; limit capped at 1000.
Args: market_id: Market ID (crypto, kr_stock, us_stock); limit: Max results (default 1000); action_filter: Filter by action (all, buy, sell); min_pnl: Min P&L % filter (e.g., -5.0); max_pnl: Max P&L % filter (e.g., 10.0); hours_back: Only trades within last N hours; symbol: Filter by ticker symbol (e.g., "BTC", "AAPL"), case-insensitive
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| symbol | No | | |
| max_pnl | No | | |
| min_pnl | No | | |
| market_id | Yes | | |
| hours_back | No | | |
| action_filter | No | | all |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
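A sketch exercising several of the dynamic filters at once; values follow the examples in the Args list, and the hypothetical `call_tool` shim stands in for a real client.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub for the JSON-RPC shim sketched under get_news_causality_breakdown."""
    raise NotImplementedError

# Recent losing sells on one symbol; `symbol` matching is case-insensitive.
losing_sells = call_tool(
    "get_trade_history",
    {
        "market_id": "crypto",
        "action_filter": "sell",
        "max_pnl": -0.01,   # P&L percent ceiling: losses only
        "hours_back": 48,
        "symbol": "btc",    # matched case-insensitively per the Args note
        "limit": 200,       # hard cap is 1000
    },
)
```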
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context: '[Caution] based on virtual_trade_history' and 'limit capped at 1000'. These disclose important behavioral traits about the data source and constraints that aren't in the schema. However, it doesn't mention rate limits, authentication needs, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with labeled sections ([Role], [When to call], etc.) and a clear parameter list. Every sentence earns its place: the purpose statement, usage context, prerequisites, follow-up recommendations, warnings, and parameter explanations all contribute value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters with 0% schema coverage and no annotations, the description does an excellent job explaining parameters and usage context. The presence of an output schema means return values don't need explanation. However, for a tool built on virtual trade data with this much filtering complexity, additional context about data freshness or filtering logic would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must fully compensate. It provides a detailed parameter section explaining all 7 parameters with clear semantics: 'market_id: Market ID (crypto, kr_stock, us_stock)', 'action_filter: Filter by action (all, buy, sell)', 'min_pnl: Min P&L % filter (e.g., -5.0)', and so on. This adds substantial meaning beyond the bare schema types and defaults.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: querying trade history with dynamic filters (action/P&L/time/symbol). It specifies the verb (query) and resource (trade history) along with the filtering capabilities. However, it doesn't explicitly differentiate itself from sibling tools like get_winning_trades or get_losing_trades, which provide similar filtered views.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides excellent usage guidance with structured sections: '[When to call] When analyzing past trades or searching for a specific symbol', '[Prerequisites] None', and '[Recommended follow-ups] analyze_trades, market://{market_id}/signals/feedback'. It explicitly states when to use the tool and suggests related actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
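One of the description's recommended follow-ups, market://{market_id}/signals/feedback, is a resource URI rather than a tool name. Assuming it is exposed as a standard MCP resource, a client would fetch it with a resources/read request; substituting crypto for {market_id} here is illustrative:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": {
    "uri": "market://crypto/signals/feedback"
  }
}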
get_winning_trades
[Role] Query profitable trades only (P&L > 0). [When to call] When analyzing winning patterns. [Prerequisites] None. [Recommended follow-up] analyze_trades. [Caution] Shortcut for get_trade_history(min_pnl=0.01).
Args:
- market_id: Market ID (crypto, kr_stock, us_stock)
- limit: Max results (default 10)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| market_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
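Since the description frames this tool as a shortcut for get_trade_history(min_pnl=0.01), a minimal call needs only the required market_id; the us_stock value here is illustrative:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_winning_trades",
    "arguments": {
      "market_id": "us_stock"
    }
  }
}

Note that the defaults differ between the shortcut and the underlying tool (limit 10 here versus 1000 for get_trade_history), so the two calls only return identical result sets when limit is set explicitly.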
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by specifying: 1) the tool filters for profitable trades only (P&L > 0), 2) there are no prerequisites, and 3) it names a shortcut alternative. However, it doesn't describe the response format, pagination behavior, or potential rate limits. For a tool with no annotations, this is above average but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and concise. It uses a clear labeled format ([Role], [When to call], etc.) followed by parameter explanations. Every sentence earns its place, providing high information density with no waste, and the core purpose is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, filtering logic), no annotations, but with an output schema present, the description is complete enough. It explains the filtering behavior (P&L>0), usage context, parameters, and relationships to sibling tools. The presence of an output schema means the description doesn't need to explain return values. For this tool's scope, the description provides all necessary context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description compensates well by explaining both parameters: 'market_id: Market ID (crypto, kr_stock, us_stock)' provides semantic meaning and examples of valid values, and 'limit: Max results (default 10)' explains the purpose and default value. This adds significant value beyond the bare schema, though it doesn't specify format constraints or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Query profitable trades only (P&L > 0)'. This specific verb-plus-resource combination distinguishes it from sibling tools such as get_trade_history (which retrieves all trades) and get_losing_trades (which retrieves losing trades). The description explicitly limits the scope to profitable trades.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides excellent usage guidance with multiple explicit elements: '[When to call] When analyzing winning patterns', '[Recommended follow-up] analyze_trades', and '[Caution] Shortcut for get_trade_history(min_pnl=0.01)'. This gives clear context for when to use this tool versus alternatives, including a specific alternative tool with its parameter configuration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.