# get_backtest_status
Check the current status of a running backtest in MetaTrader 4 to monitor progress and results.
## Instructions
Get the current status of a running backtest
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
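Because the tool takes no arguments, the MCP `tools/call` request for it is minimal. A sketch of the JSON-RPC payload an agent would send (illustrative; the request `id` is arbitrary):

```python
import json

# MCP tools/call request for a zero-argument tool (JSON-RPC 2.0).
# The method name and params shape follow the MCP specification;
# the id value is arbitrary and chosen for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_backtest_status",
        "arguments": {},  # input schema defines no arguments
    },
}

print(json.dumps(request, indent=2))
```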
### Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden, yet it only states that the tool retrieves status, with no behavioral details. It doesn't say whether the call is read-only (implied but never stated), whether authentication is required, what rate limits apply, how errors surface, or what happens when no backtest is running. The description is minimal and lacks critical operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
### Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence that front-loads the core purpose ('Get the current status'). Every word earns its place; for a tool this simple, the length is well judged.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
### Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and a simple zero-parameter tool, the description is incomplete. It doesn't explain what 'status' returns (e.g., a string, object, or progress indicator), error conditions, or how it interacts with siblings like 'run_backtest'. For a tool in a backtesting context, more operational detail would help the agent use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
### Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter details, which is appropriate here. Baseline is 4 for zero parameters, as the schema fully covers the absence of inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
### Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and target ('current status of a running backtest'), making the purpose evident. It distinguishes from siblings like 'get_backtest_results' by focusing on status rather than results. However, it doesn't specify what 'status' entails (e.g., progress percentage, error state), leaving some ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
### Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a backtest is running, but provides no explicit guidance on when to use this versus alternatives like 'get_backtest_results' or 'run_backtest'. No prerequisites (e.g., needing a backtest ID) or exclusions are mentioned, leaving the agent to infer context from tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
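Because the description documents neither the response shape nor the polling semantics, an agent is left to guess at a pattern like the one below. This is a hypothetical sketch: the `call_tool` callable, the `"running"` flag, and the `"progress"` field are all assumptions, none of them specified by the server.

```python
import time

def poll_backtest_status(call_tool, interval=1.0, max_polls=10):
    """Poll get_backtest_status until it reports completion.

    `call_tool` stands in for a hypothetical MCP client callable;
    the "running" key in the response is an assumed shape, not
    something the tool's description documents.
    """
    for _ in range(max_polls):
        status = call_tool("get_backtest_status", {})
        if not status.get("running", False):
            return status
        time.sleep(interval)
    raise TimeoutError("backtest still running after max_polls")

# Usage with a stub standing in for a real MCP session:
responses = iter([{"running": True}, {"running": False, "progress": 100}])
stub = lambda name, args: next(responses)
final = poll_backtest_status(stub, interval=0.0)
```

A better-documented description would make the defensive `.get("running", False)` unnecessary, since the agent would know exactly which fields the status payload contains.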
We provide all the information about MCP servers via our MCP API.

```
curl -X GET 'https://glama.ai/api/mcp/v1/servers/8nite/metatrader-4-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.