NeuroTrade Signal API
Server Details
AI-powered crypto trading signals: direction, confidence, TP/SL, thesis, technicals. 8 strategies.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 2 of 2 tools scored.
The two tools have completely distinct purposes with no overlap: generate_signal produces trading signals, while get_account retrieves account/quota information. An agent would never confuse these tools as they operate in separate domains (trading analysis vs. account management).
Both tools follow a perfect verb_noun pattern (generate_signal, get_account) with consistent snake_case formatting. The naming is predictable and follows the same grammatical structure throughout the tool set.
With only 2 tools, this server feels severely under-scoped for a trading signal API. A complete trading workflow would typically require additional tools for managing positions, historical data, portfolio analysis, or signal validation. The current set provides only signal generation and account status.
For a trading signal API domain, there are significant gaps in coverage. While signal generation and account status are present, there's no ability to backtest signals, manage trading positions, access historical market data, or configure trading parameters. The surface feels incomplete for practical trading workflows.
Available Tools
2 tools

generate_signal (B)
Generate an AI-powered crypto trading signal for a given pair and timeframe. Returns: action (OPEN_LONG | OPEN_SHORT | CLOSE), confidence (0.0–1.0), entry_price, take_profit (array of price levels), stop_loss, risk_reward ratio, indicators (rsi, macd, ema_20, atr), risk_flags (overbought_rsi | oversold_rsi | low_volume | high_spread | near_resistance | near_support), generated_at (ISO 8601), expires_at (ISO 8601), and quota_remaining. The thesis field contains LLM reasoning and is only present when include_thesis=true. On quota exhaustion returns error_code=QUOTA_EXCEEDED with Retry-After header. Requires Authorization: Bearer nt_<api_key>.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading pair in BASE/QUOTE format, e.g. BTC/USDT, ETH/USDT, SOL/USDT. | |
| strategy | No | Signal strategy to apply. | trend_rider |
| timeframe | No | Candlestick timeframe for signal analysis. | 15m |
| personality | No | Risk personality shaping confidence weighting and TP/SL aggressiveness. | scalper |
| include_thesis | No | When true, includes the LLM-generated reasoning in the `thesis` field of the response. Adds ~200ms latency. | |
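Based on the fields enumerated in the tool description above, a successful response might look like the following. This is an illustrative sketch only: the values are invented, the exact key spellings are assumptions, and the `thesis` field appears only when `include_thesis=true`.

```json
{
  "action": "OPEN_LONG",
  "confidence": 0.78,
  "entry_price": 64250.0,
  "take_profit": [64900.0, 65600.0],
  "stop_loss": 63700.0,
  "risk_reward": 2.1,
  "indicators": { "rsi": 58.2, "macd": 112.4, "ema_20": 64010.5, "atr": 310.7 },
  "risk_flags": ["near_resistance"],
  "thesis": "(LLM reasoning, present only when include_thesis=true)",
  "generated_at": "2025-01-15T09:30:00Z",
  "expires_at": "2025-01-15T09:45:00Z",
  "quota_remaining": 41
}
```

Per the description, a quota-exhausted call instead returns `error_code=QUOTA_EXCEEDED` along with a `Retry-After` header.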
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the output format in detail (direction, confidence score, entry price, etc.) and mentions an API key requirement, but lacks information on rate limits, error handling, or whether this is a read-only or mutating operation. It adds some context but leaves gaps in behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and output details, followed by the API key requirement. It is appropriately sized with two sentences, but could be slightly more structured (e.g., separating output details from prerequisites). Overall, it is efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a crypto trading signal tool with no annotations and no output schema, the description does a fair job by detailing the output components and mentioning the API key. However, it lacks information on error cases, performance characteristics, or how to interpret the output, leaving room for improvement in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description does not add any additional meaning or clarification beyond what is provided in the input schema, such as explaining interactions between parameters or providing usage examples. Baseline 3 is appropriate when the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Generate an AI-powered crypto trading signal') and resource (crypto trading signals), distinguishing it from the sibling tool 'get_account' which likely retrieves account information rather than generating trading signals. It provides comprehensive details about what the tool produces.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions a prerequisite ('Requires a valid NeuroTrade B2B API key') but provides no guidance on when to use this tool versus alternatives or other trading strategies. There is no explicit mention of when-not-to-use scenarios or comparisons with other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account (A)
Return the current NeuroTrade B2B API quota status: plan tier, calls used, calls remaining, and quota reset date. Requires a valid NeuroTrade B2B API key.
| Name | Required | Description | Default |
|---|---|---|---|
| *No parameters* | | | |
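The description lists four data points (plan tier, calls used, calls remaining, quota reset date), so a response might look like the sketch below. The key names are assumptions; only the field meanings come from the description.

```json
{
  "plan": "pro",
  "calls_used": 59,
  "calls_remaining": 41,
  "quota_reset_at": "2025-02-01T00:00:00Z"
}
```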
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a read operation (implied by 'Return'), requires authentication ('Requires a valid NeuroTrade B2B API key'), and specifies the exact data returned. It doesn't mention rate limits or error behaviors, but covers the essential safety and operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences: the first states the purpose and output details, the second specifies the prerequisite. Every word earns its place, and information is front-loaded with no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is nearly complete: it covers purpose, output format, and authentication. It doesn't specify the exact return structure (e.g., JSON fields) or error cases, but for a simple quota check tool, this is adequate with minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so the baseline would be 3. The description adds value by explicitly stating there are no parameters needed ('Return the current... status' implies no inputs) and clarifies the authentication requirement, which compensates beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Return') and resource ('current NeuroTrade B2B API quota status'), listing the exact data points returned (plan tier, calls used, calls remaining, quota reset date). It distinguishes from the sibling tool 'generate_signal' by focusing on account/quota status rather than signal generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Return the current NeuroTrade B2B API quota status') and includes a prerequisite ('Requires a valid NeuroTrade B2B API key'). However, it doesn't specify when NOT to use it or mention alternatives, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!