AlgoVault — Crypto Quant Trade Calls
Server Details
Quant trading signals, funding rate arb scanning, and market regime detection for crypto perps.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: AlgoVaultLabs/crypto-quant-signal-mcp
- GitHub Stars: 1
- Server Listing: crypto-quant-signal-mcp
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 3 of 3 tools scored.
Each tool has a clearly distinct purpose: market regime classification, trade signal generation, and funding arbitrage scanning. There is no overlap in functionality, and an agent can easily differentiate between them based on their specific objectives.
All tool names follow a consistent verb_noun pattern (get_market_regime, get_trade_signal, scan_funding_arb), with clear and descriptive naming that aligns well across the set.
With only 3 tools, the set feels thin for a crypto quant trading server, lacking operations like order placement, risk management, or backtesting. While the tools are well-defined, the scope is limited and may not cover essential trading workflows.
There are significant gaps in the tool surface for crypto quant trading, such as no tools for executing trades, managing positions, or accessing historical data. The server provides analysis and signals but lacks the ability to act on them, which is critical for the domain.
Available Tools
3 tools

get_market_regime (Read-only)
Classifies the current market regime (TRENDING_UP, TRENDING_DOWN, RANGING, VOLATILE) for a Hyperliquid perp using ADX(14), volatility ratio, price structure, and cross-venue funding sentiment.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Asset symbol, e.g. 'BTC', 'ETH', 'SOL' | |
| exchange | No | Exchange to analyze. 'HL' = Hyperliquid (default), 'BINANCE' = Binance USDT-M Futures, 'BYBIT' = Bybit Linear, 'OKX' = OKX Swap, 'BITGET' = Bitget USDT-M. | HL |
| timeframe | No | Candle timeframe | 4h |
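Assuming a standard MCP JSON-RPC transport, a `tools/call` request for this tool might be shaped as follows. This is a sketch of the request payload only; the envelope follows the MCP spec, the argument values are examples, and nothing here has been verified against this particular server:

```python
import json

# Sketch of an MCP tools/call request for get_market_regime.
# The JSON-RPC envelope follows the MCP specification; the
# server's actual response is not shown here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_market_regime",
        "arguments": {
            "coin": "BTC",       # required: asset symbol
            "exchange": "HL",    # optional: defaults to Hyperliquid
            "timeframe": "4h",   # optional: defaults to 4h
        },
    },
}

payload = json.dumps(request)
print(payload)
```

A client library would normally build this envelope for you; the point is that only `coin` is required, so `{"coin": "BTC"}` alone is a valid argument set.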
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true, indicating a safe read operation with open-world data. The description adds valuable context about the classification methodology (ADX(14), volatility ratio, price structure, cross-venue funding sentiment) that goes beyond what annotations provide, though it doesn't mention rate limits or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently conveys the tool's purpose, output categories, and methodology without unnecessary words. Every element serves a clear purpose in helping the agent understand what the tool does.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only classification tool with good annotations and comprehensive parameter documentation, the description provides adequate context about what the tool does and how it works. The main gap is lack of output format details (since no output schema exists), but the description does specify the four possible classification categories.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation but doesn't provide additional semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Classifies'), the resource ('current market regime'), and the output categories (TRENDING_UP, TRENDING_DOWN, RANGING, VOLATILE). It distinguishes from siblings by focusing on regime classification rather than trade signals or funding arbitrage scanning.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for market regime analysis on Hyperliquid perps using specific technical indicators, but doesn't explicitly state when to use this tool versus the sibling tools (get_trade_signal, scan_funding_arb). No explicit exclusions or alternative scenarios are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trade_signal (Read-only)
Returns a composite BUY/SELL/HOLD signal for a Hyperliquid perp. Combines RSI(14), EMA(9/21) crossover, funding rate, OI momentum, and volume into a weighted score with confidence percentage.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Asset symbol, e.g. 'ETH', 'BTC', 'SOL' | |
| exchange | No | Exchange to analyze. 'HL' = Hyperliquid (default), 'BINANCE' = Binance USDT-M Futures, 'BYBIT' = Bybit Linear, 'OKX' = OKX Swap, 'BITGET' = Bitget USDT-M. | HL |
| timeframe | No | Candle timeframe. All Hyperliquid intervals supported. 1m/3m for HFT scalping, 5m/15m for intraday agents (most popular), 30m/1h/2h for swing, 4h/8h/12h/1d for position trading. Free tier: 15m and 1h only. | 15m |
| includeReasoning | No | Include human-readable reasoning | |
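Because the server publishes no output schema, an agent has to assume the shape of the result. The sketch below illustrates one plausible way to gate decisions on the documented outputs (a BUY/SELL/HOLD signal plus a confidence percentage); the `signal`, `score`, and `confidence` field names are assumptions for illustration, not confirmed by the server:

```python
# Hypothetical response handling for get_trade_signal.
# The result shape (signal, score, confidence) is an assumption;
# the server documents the outputs but not their field names.
def should_act(result: dict, min_confidence: float = 60.0) -> bool:
    """Act only on BUY/SELL signals at or above a confidence threshold."""
    return result["signal"] in ("BUY", "SELL") and result["confidence"] >= min_confidence

example = {"signal": "BUY", "score": 0.72, "confidence": 81.0}
print(should_act(example))  # True: BUY with 81% confidence
```

Filtering on confidence like this is one way an agent can avoid acting on low-conviction composite scores.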
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true, indicating safe read operations with open-ended data. The description adds valuable behavioral context beyond annotations: it explains the composite nature of the signal, the specific indicators used (RSI, EMA, funding rate, OI momentum, volume), and that it produces a weighted score with confidence percentage. This provides important implementation details not captured in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, methodology, and output format. Every element serves a purpose: the action ('Returns'), target ('Hyperliquid perp'), output type ('composite BUY/SELL/HOLD signal'), methodology ('Combines...into a weighted score'), and confidence metric. Zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with comprehensive schema coverage and no output schema, the description provides good context about what the tool does and how it works. It explains the composite nature of the signal and the specific indicators used, which helps the agent understand the tool's behavior. The main gap is lack of explicit output format details, but given the annotations and schema completeness, this is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents all 4 parameters. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate when the schema does the heavy lifting for parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a composite BUY/SELL/HOLD signal for a Hyperliquid perp' with specific technical indicators listed (RSI, EMA crossover, funding rate, OI momentum, volume). It distinguishes from siblings like 'get_market_regime' and 'scan_funding_arb' by focusing on trade signals rather than market regimes or arbitrage opportunities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the mention of 'Hyperliquid perp' and technical indicators, but doesn't explicitly state when to use this tool versus alternatives. It doesn't provide explicit 'when-not' guidance or named alternatives beyond what's implied by sibling tool names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_funding_arb (Read-only)
Scans cross-venue funding rate differences between Hyperliquid, Binance, and Bybit. Returns top arbitrage opportunities ranked by annualized spread.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (free: max 5) | |
| minSpreadBps | No | Minimum spread in basis points to include | |
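The ranking metric can be reproduced from raw funding rates. Assuming funding accrues at a fixed interval and annualizes linearly (no compounding), a per-interval spread in basis points scales by the number of funding periods per year; the exact convention this server uses is not documented, so treat this as a sketch:

```python
# Annualizing a per-interval funding spread, as a sketch of the
# ranking metric. Funding intervals vary by venue (e.g. hourly on
# Hyperliquid, 8-hourly on Binance), so the interval is a parameter.
def annualized_spread_bps(spread_bps: float, funding_interval_hours: float) -> float:
    periods_per_year = 365 * 24 / funding_interval_hours
    return spread_bps * periods_per_year

# A 1 bps per-hour spread scales linearly to 8760 bps per year.
print(annualized_spread_bps(1.0, 1.0))  # 8760.0
print(annualized_spread_bps(1.0, 8.0))  # 1095.0
```

The venue comparison is what makes the interval matter: the same per-period spread annualizes to an eightfold difference between hourly and 8-hourly funding schedules.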
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and open-world hints, which the description aligns with by describing a scanning/returning operation without implying mutation. The description adds valuable behavioral context beyond annotations: it specifies the exact exchanges scanned (Hyperliquid, Binance, Bybit), the ranking criterion (annualized spread), and the output format (top opportunities), enhancing the agent's understanding of the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, scope, and output without any redundant or unnecessary information. It is front-loaded with the core action and immediately follows with key details, making it highly concise and effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (scanning multiple exchanges for arbitrage), rich annotations (readOnlyHint, openWorldHint), and 100% schema coverage, the description is largely complete. It covers the what, where, and output ranking, though it lacks details on response format (e.g., structure of returned opportunities) and any rate limits or authentication needs, which are not provided elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, fully documenting both parameters (limit and minSpreadBps) with defaults and constraints. The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining basis points or free tier limits, so it meets the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Scans'), resources ('cross-venue funding rate differences between Hyperliquid, Binance, and Bybit'), and outcome ('Returns top arbitrage opportunities ranked by annualized spread'). It precisely distinguishes this tool from its siblings (get_market_regime, get_trade_signal) by focusing on funding rate arbitrage scanning rather than market regime analysis or trade signal generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when seeking arbitrage opportunities based on funding rate differences across specific exchanges, but provides no explicit guidance on when to use this tool versus alternatives, nor any prerequisites or exclusions. The context is clear but lacks comparative or conditional direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
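Before publishing, you can sanity-check the file locally. This sketch validates only the two fields shown above; the authoritative schema lives at the `$schema` URL and may require more:

```python
import json

# Minimal local sanity check for a glama.json file.
# Checks only that maintainers is a non-empty list of
# entries carrying an email field.
def check_glama_json(text: str) -> bool:
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    return bool(maintainers) and all("email" in m for m in maintainers)

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
print(check_glama_json(sample))  # True
```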
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.