DeltaSignal ATLAS-7
Server Details
Real-time financial intelligence MCP server for crypto public companies. Provides covenant stress analysis, alpha signals, peer ranking, risk distribution, SEC XBRL fundamentals, and daily changes. x402 micropayments on Base USDC. First 5 calls free.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 8 of 8 tools scored.
Each tool has a clearly distinct purpose; 'covenant_stress' and 'top_stressed' both deal with stress data, which could cause minor confusion, but the descriptions differentiate them well.
All tools follow a consistent 'deltasignal_' prefix with descriptive snake_case names (e.g., 'alpha_signals', 'company_fundamentals'), making the pattern predictable.
With 8 tools covering readiness, analysis, and monitoring for crypto public companies, the set is well-scoped: each tool earns its place, and the count is neither too small nor too large.
The tool set covers the full lifecycle of financial analysis: readiness check, risk distribution, stress screening, fundamentals, alpha signals, peer ranking, and daily changes. No obvious gaps for the domain.
Available Tools
8 tools

deltasignal_alpha_signals — DeltaSignal alpha signals (Read-only, Idempotent)
Use this read-only tool to retrieve DeltaSignal ATLAS-7 alpha signals for one crypto public company ticker. It returns issuer-level signal evidence such as resilience, treasury pressure, regime fit, phase-one opportunity indicators, and the active source date used by the DeltaSignal data plane. Use it for research triage on supported public-company tickers such as COIN, MSTR, MARA, RIOT, HUT, and CLSK; do not pass crypto asset symbols unless they are listed public-company tickers. Parameters: ticker is required and normalized to uppercase; source_date is optional YYYY-MM-DD and should be used only to replay a known historical DeltaSignal slice. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, does not trade, does not store user data, and does not handle wallet secrets. Usage guidelines: call readiness first if you need service freshness, use covenant_stress for covenant-only questions, use company_fundamentals for raw SEC XBRL facts, and treat the result as issuer intelligence rather than investment advice.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Required crypto public company ticker symbol. The server normalizes it to uppercase. Good examples: COIN, MSTR, MARA, RIOT, HUT, CLSK. Do not use asset symbols like BTC or ETH unless they are also public-company tickers. | |
| source_date | No | Optional DeltaSignal source slice date in YYYY-MM-DD format. Omit this for the latest active slice; set it only when reproducing a prior dated run. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| period | No | Filing or reporting period date backing the current issuer slice when available. |
| ticker | Yes | Normalized public-company ticker symbol used for the query, such as COIN or MARA. |
| provenance | No | Filing/data provenance, including quality flag, linkbase status, source date, computed timestamp, and resolution methods. |
| alpha_score | No | Composite DeltaSignal alpha score from 0 to 100. Higher values indicate stronger modeled issuer setup. |
| entity_name | No | SEC issuer entity name when available. |
| source_date | Yes | DeltaSignal source slice date used for the alpha calculation. |
| score_inputs | No | Source input values used by the scoring layer, including stress, peer, quality, and treasury inputs when available. |
| tower_status | No | DeltaSignal tower/data-plane status for the issuer slice. |
| market_regime | No | Market-regime label used by the current DeltaSignal slice. |
| tower_coherence | No | Numeric coherence score for the active DeltaSignal tower state. |
| regime_fit_score | No | Model score from 0 to 100 describing current regime alignment for the issuer. |
| treasury_strength | No | Categorical treasury-strength label derived from filing-backed balance-sheet context. |
| alpha_interpretation | No | Plain-English explanation of the main drivers behind the alpha score. |
| short_recommendation | No | Short research-triage label such as Monitor, Reduce exposure, or Build watchlist. |
| alpha_score_breakdown | No | Component scores that contribute to alpha_score, such as stress_resilience, regime_fit, treasury_strength, and debt_cleanliness. |
| dominant_treasury_asset | No | Dominant treasury asset classification when available. |
| stress_resilience_score | No | Modeled resilience score from 0 to 100 under DeltaSignal stress assumptions. |
| treasury_strength_score | No | Treasury strength score from 0 to 100. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, and the description goes beyond them: it states that the tool performs one HTTPS read, has no destructive side effects, does not trade, does not store user data, and does not handle wallet secrets.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and free of redundancy: purpose first, then parameters, behavior, and usage guidance, with each sentence covering distinct ground.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and comprehensive annotations, the description is fairly complete. It lists the types of indicators, which is useful context. Minor missing elements such as rate-limit details are not critical for a read-only tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters having descriptions. The tool description adds minimal extra meaning beyond the schema, mentioning 'one crypto public company ticker' which aligns with the ticker param, but does not enhance parameter understanding significantly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool retrieves alpha signals for one crypto public company ticker, listing specific indicators (resilience, treasury pressure, regime fit, phase-one opportunity). This clearly distinguishes it from sibling tools like deltasignal_company_fundamentals or deltasignal_risk_distribution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description closes with explicit routing guidance: call readiness first for freshness, use covenant_stress for covenant-only questions, and use company_fundamentals for raw SEC XBRL facts. It also warns against passing crypto asset symbols that are not public-company tickers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deltasignal_company_fundamentals — DeltaSignal company fundamentals (Read-only, Idempotent)
Use this read-only tool to retrieve SEC XBRL-backed fundamentals for one crypto public company ticker. It returns filing period, entity identifiers, filing form, core financial values, provenance, and optional segment or related-party containers when requested. Parameters: ticker is required; period is optional YYYY-MM-DD; include_segments and include_related_party request additional containers when available and otherwise return availability metadata. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not modify SEC data, accounts, files, or wallets. Use it when the user asks for revenue, net income, assets, cash, liabilities, equity, SEC filing context, or fact provenance; use alpha_signals or covenant_stress for modeled signal interpretation.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Optional YYYY-MM-DD reporting period. Omit for the latest available filing period. | |
| ticker | Yes | Required crypto public company ticker. Examples: COIN, MSTR, MARA, RIOT, HUT, CLSK. | |
| include_segments | No | Request segment fact container when available. Current runtime returns availability metadata even when segment facts are not populated. | |
| include_related_party | No | Request related-party transaction container when available. Current runtime returns availability metadata even when related-party facts are not populated. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| cik | No | SEC Central Index Key. |
| period | No | Reporting period or filing date backing the facts. |
| ticker | Yes | Normalized public-company ticker symbol. |
| provenance | No | SEC/XBRL provenance including root resolution, mapping hit count, linkbase status, data source, and quality flag. |
| entity_name | No | SEC issuer entity name. |
| filing_form | No | SEC filing form used for the facts when available. |
| source_date | No | DeltaSignal source date used to resolve CompanyFacts inputs. |
| fundamentals | No | Core SEC XBRL financial values. Values may be null when the issuer or period lacks that fact. |
| last_updated | No | UTC timestamp for the precomputed fundamentals row. |
| segment_data | No | Segment fact availability and segment rows when requested and available. |
| subsequent_events | No | Subsequent-event availability and rows when available. |
| related_party_transactions | No | Related-party transaction availability and rows when requested and available. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds value by revealing that the data is SEC XBRL-backed and that segment and related-party facts are optional, providing context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the core functionality, then covers parameters, behavior, and routing without redundancy. Each sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema and strong annotations, the description covers the main purpose, data source, and optional parameters. It lacks details on data coverage or period handling, but for a fundamentals tool with rich schema, it is complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the input schema documents all parameters adequately. The description restates the parameter semantics (optional period, availability metadata for the optional containers) but adds little beyond the schema, so a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves SEC XBRL-backed fundamentals for one ticker, with optional segment and related-party containers. The verb 'retrieve' and resource 'fundamentals' are specific and make it distinguishable from sibling tools like deltasignal_alpha_signals or deltasignal_covenant_stress.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit usage guidance: use this tool when the user asks for revenue, net income, assets, cash, liabilities, equity, SEC filing context, or fact provenance, and use alpha_signals or covenant_stress for modeled signal interpretation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deltasignal_covenant_stress — DeltaSignal covenant stress (Read-only, Idempotent)
Use this read-only tool for ATLAS-7 covenant stress analysis on crypto public companies. Pass ticker for a single-issuer detail view, or omit ticker to screen the active issuer universe with risk, quality, debt-coverage, linkbase, minimum-stress, limit, and offset filters. It returns filing-backed stress, headroom, risk tier, debt coverage, quality, linkbase provenance, live-price deltas, and metadata needed for covenant-risk research. Parameters: ticker selects detail mode; without ticker, limit/offset paginate list mode, min_stress is 0-100, and risk_tier, quality_flag, debt_coverage_status, and linkbase_only narrow the screen. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not change filings, portfolios, wallets, or account state. Use top_stressed for the default ranked universe, peer_ranking for relative peer context, and alpha_signals when the user asks for opportunity or edge rather than covenant stress.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum rows for list mode. Use 10-25 for concise screens. | |
| offset | No | Pagination offset for list mode. | |
| period | No | Optional YYYY-MM-DD filing period for ticker detail mode. | |
| ticker | No | Optional crypto public company ticker. When set, returns detail for that issuer. Examples: MARA, RIOT, COIN, MSTR. | |
| risk_tier | No | Optional risk tier filter for list mode, such as HIGH, MODERATE, LOW, or UNCLASSIFIED. | |
| min_stress | No | Optional minimum stress score for list mode. | |
| source_date | No | Optional YYYY-MM-DD DeltaSignal source date for list mode. | |
| quality_flag | No | Optional data quality filter for list mode, such as High, Medium, or Low. | |
| linkbase_only | No | Restrict list mode to issuers backed by SEC XBRL linkbase/rooted facts. | |
| debt_coverage_status | No | Optional debt coverage status filter, such as resolved_nonzero, legitimate_zero_debt, or low_confidence_missing_debt. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | List-mode covenant stress rows. |
| meta | No | List-mode filters, pagination, and count metadata. |
| period | No | Filing/reporting period backing the stress calculation. |
| ticker | No | Ticker for detail mode. |
| risk_tier | No | ATLAS risk tier such as HIGH, MODERATE, LOW, or UNCLASSIFIED. |
| atlas_tier | No | ATLAS tier label. |
| data_source | No | Source category such as linkbase_rooted_facts or direct_facts_fallback. |
| entity_name | No | SEC issuer entity name when available. |
| source_date | No | DeltaSignal source slice date. |
| quality_flag | No | Quality flag for the issuer slice. |
| stress_30pct | No | Modeled stress value under 30 percent stress assumptions. |
| stress_50pct | No | Modeled stress value under 50 percent stress assumptions. |
| stress_70pct | No | Modeled stress value under 70 percent stress assumptions. |
| market_regime | No | Market regime label when available. |
| live_risk_tier | No | Live-price-adjusted risk tier when available. |
| resonance_status | No | Stress resonance status from the model. |
| live_stress_50pct | No | Live-price-adjusted 50 percent stress value when live prices are available. |
| is_linkbase_backed | No | Whether the result is backed by SEC XBRL linkbase/rooted facts. |
| debt_coverage_status | No | Debt coverage classification for the issuer. |
| root_resolution_method | No | Method used to resolve roots in SEC XBRL facts. |
| covenant_headroom_50pct | No | Modeled covenant headroom under 50 percent stress assumptions. |
| live_covenant_headroom_50pct | No | Live-price-adjusted covenant headroom when available. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, and the description is consistent with them, adding that the tool performs one HTTPS read and does not change filings, portfolios, wallets, or account state.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded: purpose and modes first, then returned fields, parameters, behavior, and routing. Every clause is meaningful, and there is no unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (10 parameters, two modes) and the presence of an output schema, the description is complete. It covers the two primary use cases (ticker detail and universe screening) and the available filters, enabling an agent to select and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by grouping parameters into two modes and explaining their filtering purpose (e.g., 'risk tier, quality flags, debt coverage status, linkbase-backed rows, and minimum stress score'). This contextualizes the parameters beyond their individual schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it performs ATLAS-7 covenant stress analysis for a single ticker or screens the issuer universe with filters. The two modes are spelled out, differentiating it from sibling tools like 'deltasignal_top_stressed' or 'deltasignal_risk_distribution'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description outlines usage in two modes (single-ticker detail with an optional period, and list mode with filters such as risk tier and quality flags) and names alternatives explicitly: top_stressed for the default ranked universe, peer_ranking for relative peer context, and alpha_signals for opportunity questions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deltasignal_daily_changes — DeltaSignal daily changes (Read-only, Idempotent)
Use this read-only monitoring tool to retrieve the latest meaningful DeltaSignal daily change snapshot. It highlights tracked crypto filing deltas, newly discovered crypto issuers, source dates, computed timestamps, classification summary, and change statistics. Parameters: none; call it exactly as-is when the user asks what changed today or needs a monitoring summary. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not write notifications, files, accounts, or wallet state. Use it for daily monitoring and freshness narratives; use readiness for service health and issuer-specific tools for detailed research on any ticker it mentions.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| stats | No | Counts for changed companies, new companies, and classification totals. |
| message | No | Plain-English explanation of the daily change snapshot. |
| delta_basis | No | Basis for the change set. |
| request_key | Yes | Request key, normally daily_changes. |
| source_date | Yes | Source date for the served daily change snapshot. |
| new_companies | No | Newly discovered crypto public-company issuers. |
| computed_at_utc | No | UTC compute timestamp for the snapshot. |
| snapshot_status | No | Status label such as meaningful_delta or rolled_back_to_latest_meaningful_delta. |
| changed_companies | No | Tracked crypto issuers with meaningful daily filing deltas. |
| classification_summary | No | Aggregated classification summary for the daily changes. |
| comparison_source_date | No | Prior source date used for comparison when available. |
| latest_computed_source_date | No | Latest computed source date, which may differ from source_date if empty recomputation slices were skipped. |
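Two schema fields above carry freshness caveats: snapshot_status can report a rollback, and latest_computed_source_date may run ahead of source_date when empty recomputation slices were skipped. A small sketch of surfacing those caveats to an agent (`snapshot_note` is a hypothetical helper; the sample dates in the example are invented):

```python
def snapshot_note(snapshot: dict) -> str:
    """Summarize freshness caveats of a daily_changes response.

    Uses only fields from the output schema: snapshot_status,
    source_date, and latest_computed_source_date.
    """
    notes = []
    if snapshot.get("snapshot_status") == "rolled_back_to_latest_meaningful_delta":
        notes.append("served snapshot was rolled back to the last meaningful delta")
    latest = snapshot.get("latest_computed_source_date")
    if latest and latest != snapshot.get("source_date"):
        notes.append(f"latest computed slice is {latest}, "
                     f"served slice is {snapshot.get('source_date')}")
    return "; ".join(notes) or "snapshot is current"


# Invented example values for illustration only
print(snapshot_note({
    "snapshot_status": "rolled_back_to_latest_meaningful_delta",
    "source_date": "2026-02-09",
    "latest_computed_source_date": "2026-02-10",
}))
```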
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the description's burden is lower. It still adds behavioral detail: one HTTPS read, no destructive side effects, and no writes to notifications, files, accounts, or wallet state. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the core function (the latest meaningful daily change snapshot), then lists contents, behavior, and routing. No redundancy or unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, clear purpose, and the presence of an output schema, the description provides sufficient context for an agent to understand the tool's role. It specifies the type of updates and targets monitoring workflows, making it complete enough for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has no parameters, so the baseline score is 4. Schema coverage is 100% by default. The description does not need to add parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the latest meaningful daily change snapshot, highlighting tracked filing deltas, newly discovered issuers, and change statistics. It also routes away from siblings: readiness for service health and issuer-specific tools for detailed research.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit guidance: call it when the user asks what changed today or needs a monitoring summary, use readiness for service health, and hand any mentioned ticker to the issuer-specific tools. For a zero-parameter tool, this is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deltasignal_peer_ranking — DeltaSignal peer ranking (Read-only, Idempotent)
Use this read-only tool to compare one crypto public company against its current peer group. It returns peer rank, peer percentile, peer score, stressed leverage, risk tier, debt coverage, quality flags, linkbase provenance, and period/source-date context. Parameters: ticker is required and must be one public-company symbol such as COIN, MSTR, MARA, RIOT, HUT, or CLSK; period is optional and only for reproducing a known filing date. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not write external systems or access user accounts. Use it when the user asks whether one issuer is better or worse than peers; use covenant_stress for absolute stress, top_stressed for universe-wide ranking, and alpha_signals for opportunity signals.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Optional YYYY-MM-DD filing period. Omit for the latest peer ranking. | |
| ticker | Yes | Required crypto public company ticker. Examples: COIN, MSTR, MARA, RIOT, HUT, CLSK. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| mp_z | No | Momentum/market pressure z-score when available. |
| period | No | Filing/reporting period used for the ranking. |
| ticker | Yes | Normalized public-company ticker symbol. |
| peer_rank | No | Issuer rank within its peer group. |
| risk_tier | No | ATLAS risk tier. |
| atlas_tier | No | ATLAS tier label. |
| peer_count | No | Number of peers in the comparison set. |
| peer_score | No | Composite peer score when available. |
| percentile | No | Issuer percentile within the peer group. |
| data_source | No | Source category for the issuer row. |
| entity_name | No | SEC issuer entity name when available. |
| source_date | No | DeltaSignal source slice date. |
| total_peers | No | Alias for peer_count. |
| quality_flag | No | Data quality flag. |
| stress_50pct | No | Alias for stressed leverage/stress at 50 percent. |
| computed_at_utc | No | UTC compute timestamp for the ranking row. |
| peer_percentile | No | Alias for percentile. |
| resonance_status | No | Model resonance status. |
| is_linkbase_backed | No | Whether the result is backed by SEC XBRL linkbase/rooted facts. |
| debt_coverage_status | No | Debt coverage classification. |
| root_resolution_method | No | SEC XBRL root resolution method. |
| stressed_leverage_50pct | No | Stressed leverage under 50 percent stress assumptions. |
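Several output fields above are aliases (total_peers for peer_count, peer_percentile for percentile), so a consumer should check both names. A sketch of turning a response row into a one-line research note, using only schema field names (`peer_summary` is a hypothetical helper; the sample values are invented):

```python
def peer_summary(row: dict) -> str:
    """One-line note from a deltasignal_peer_ranking response row.

    Checks alias fields from the output schema: total_peers mirrors
    peer_count, and peer_percentile mirrors percentile.
    """
    rank = row.get("peer_rank")
    count = row.get("peer_count") or row.get("total_peers")
    pct = row.get("percentile") or row.get("peer_percentile")
    parts = [row.get("ticker", "?")]
    if rank is not None and count is not None:
        parts.append(f"rank {rank} of {count}")
    if pct is not None:
        parts.append(f"percentile {pct:.0f}")
    if row.get("risk_tier"):
        parts.append(f"risk tier {row['risk_tier']}")
    return ", ".join(parts)


# Invented example values for illustration only
print(peer_summary({"ticker": "COIN", "peer_rank": 2, "peer_count": 6,
                    "percentile": 83.3, "risk_tier": "LOW"}))
```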
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint and idempotentHint, and the description reinforces them, stating that the tool performs one HTTPS read, has no destructive side effects, and does not write external systems or access user accounts.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded, directly stating the comparison purpose and the returned fields before covering parameters, behavior, and routing. No redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description adequately outlines the tool's purpose and returns. It does not define terms like peer score or stressed leverage, which may require agent inference, but it is sufficient for most contexts.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, with clear definitions for 'ticker' and 'period'. The description adds no extra parameter-level meaning beyond what the schema provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares a crypto public company against its current peer group, specifying outputs such as peer rank, peer percentile, peer score, stressed leverage, and risk tier. It distinguishes itself from sibling tools like deltasignal_covenant_stress and deltasignal_top_stressed by focusing on peer comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states when to use it (when the user asks whether one issuer is better or worse than peers) and names alternatives: covenant_stress for absolute stress, top_stressed for universe-wide ranking, and alpha_signals for opportunity signals.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deltasignal_readiness — DeltaSignal readiness (Read-only, Idempotent)
Use this read-only tool before analysis to verify that the DeltaSignal ATLAS-7 data plane is live, fresh, and safe to query. It returns service readiness, active source dates, issuer coverage, quality coverage, debt coverage, live-price status, market regime, and tower-coherence diagnostics. Parameters: none; call it exactly as-is when the user asks if DeltaSignal is ready or whether data freshness is acceptable. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, does not write external systems, and does not handle secrets or payments itself. Use it at the start of an agent workflow, after a deploy, or whenever results should be gated on freshness; use daily_changes for what changed and issuer tools for company-specific analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | Whether the current DeltaSignal slice is fresh enough for analysis. |
| reason | No | Explanation for why readiness is false, present when the service is unavailable. |
| row_count | No | Rows in the active covenant stress slice. |
| age_minutes | No | Age of the latest computed slice in minutes. |
| source_date | No | Latest active DeltaSignal computed source date. |
| time_windows | No | Latest day, week, and month coverage summaries. |
| tower_status | No | Wardenclyffe tower status such as isolated, building, synchronized, or insufficient_data. |
| market_regime | No | Market-regime label inferred from live price and stress context. |
| tower_coherence | No | Numeric tower-coherence score when enough stress data is available. |
| tower_thresholds | No | Thresholds and weights used by tower-coherence scoring. |
| low_quality_count | No | Rows marked low quality or missing quality flag. |
| filing_source_date | No | Most recent filing source date used to resolve CompanyFacts input data. |
| high_quality_count | No | Rows marked high quality. |
| live_prices_active | No | Whether live crypto price inputs are active in the current slice. |
| medium_quality_count | No | Rows marked medium quality. |
| distinct_issuer_count | No | Distinct issuer count in the active slice. |
| linkbase_backed_count | No | Rows with SEC XBRL linkbase-backed evidence. |
| freshness_window_hours | No | Configured freshness window used for readiness. |
| latest_computed_at_utc | No | UTC timestamp of the latest computed DeltaSignal slice. |
| resolved_nonzero_count | No | Issuers with resolved nonzero debt coverage. |
| legitimate_zero_debt_count | No | Issuers classified as legitimate zero debt. |
| low_confidence_missing_debt_count | No | Issuers with low-confidence missing debt coverage. |
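A consuming agent would typically gate its workflow on this output before calling issuer tools. The sketch below is a hypothetical helper, not part of the server: it assumes the readiness payload has already been fetched and parsed into a dict whose field names follow the output schema above, and the freshness threshold is illustrative.

```python
def is_safe_to_query(readiness: dict, max_age_minutes: int = 120) -> bool:
    """Decide whether DeltaSignal analysis should proceed.

    `readiness` is assumed to be the parsed result of a
    deltasignal_readiness call; field names mirror the output schema.
    The 120-minute threshold is an illustrative choice, not a server
    default.
    """
    if not readiness.get("ok"):
        return False
    # Treat a missing age as stale rather than fresh.
    age = readiness.get("age_minutes")
    if age is None or age > max_age_minutes:
        return False
    # Require at least one issuer in the active slice.
    return readiness.get("distinct_issuer_count", 0) > 0

# Example payload shaped like the schema above (values are illustrative).
sample = {"ok": True, "age_minutes": 45, "distinct_issuer_count": 12}
print(is_safe_to_query(sample))  # → True
```

A gate like this keeps downstream issuer calls from running against a stale or empty slice, which matches the description's advice to call readiness at the start of a workflow.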
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare the tool read-only and idempotent. The description adds specific details about what is checked, providing behavioral context beyond the annotations without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and conveys all necessary information without extra words. It is well-structured and front-loaded with the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and the description covers the key checks, it is complete for a readiness assessment tool. The output schema handles return values, so no further description is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the baseline is 4. The description explains the purpose effectively, so no additional parameter information is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it checks the live ATLAS-7 data plane before analysis, specifying four distinct aspects (service readiness, data freshness, issuer coverage, safety to query). This distinguishes it from sibling tools that perform other analyses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'before analysis', indicating the tool is a prerequisite for other operations. While it doesn't list alternatives, the context of usage is clear and helpful for selecting the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deltasignal_risk_distributionDeltaSignal risk distributionARead-onlyIdempotentInspect
Use this read-only tool to summarize the active crypto public company universe by ATLAS-7 risk tier. It returns risk-tier buckets such as HIGH, MODERATE, LOW, and UNCLASSIFIED with issuer counts and percentages. Parameters: none; call it exactly as-is when the user asks for market-wide risk mix or high-level distribution. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not write external systems or access user accounts. Use it for market-wide context before issuer drilldown; use top_stressed to name the issuers in the high-risk bucket and use issuer tools for company-level analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| LOW | No | Risk tier bucket. |
| HIGH | No | Risk tier bucket. |
| MODERATE | No | Risk tier bucket. |
| UNCLASSIFIED | No | Risk tier bucket. |
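If each bucket resolves to a plain issuer count (the actual payload may already include percentages, per the tool description), an agent can derive the market-wide risk mix itself. The helper below is an illustrative sketch under that assumption, not server code.

```python
def risk_mix(buckets: dict) -> dict:
    """Convert per-tier issuer counts into percentage shares.

    `buckets` is assumed to map tier names (HIGH, MODERATE, LOW,
    UNCLASSIFIED) to integer issuer counts, keyed as in the output
    schema above.
    """
    total = sum(buckets.values())
    if total == 0:
        return {tier: 0.0 for tier in buckets}
    return {tier: round(100 * count / total, 1)
            for tier, count in buckets.items()}

# Illustrative counts; the real universe and mix will differ.
sample = {"HIGH": 3, "MODERATE": 5, "LOW": 10, "UNCLASSIFIED": 2}
print(risk_mix(sample))
# → {'HIGH': 15.0, 'MODERATE': 25.0, 'LOW': 50.0, 'UNCLASSIFIED': 10.0}
```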
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description adds no further behavioral details. The description explains what the tool summarizes but doesn't mention performance, pagination, or any hidden side effects. Given good annotation coverage, a score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A front-loaded description that conveys purpose and usage context efficiently with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with a documented output schema, the description adequately covers the tool's function and typical use case. No additional details are necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so baseline is 4 per guidelines. The description does not need to add parameter information, and it doesn't attempt to invent any.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'summarizes' and resource 'active issuer universe by ATLAS-7 risk tier', making the tool's purpose clear and distinct from siblings like 'deltasignal_top_stressed' (top stressed issuers) and 'deltasignal_covenant_stress' (specific covenant stress).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context: it's for understanding market-wide stress concentration before drilling into individual companies. It implies a sequential usage pattern but does not explicitly state when not to use the tool or list alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deltasignal_top_stressedDeltaSignal top stressed issuersARead-onlyIdempotentInspect
Use this read-only screening tool to rank the most stressed crypto public companies in the active DeltaSignal slice. It returns issuer rows sorted by stress, including ticker, period, risk tier, stress values, debt-coverage status, quality flags, linkbase provenance, live-price indicators, and pagination metadata. Parameters: limit is 1-100 and should usually be 5-20 for summaries; offset is only for pagination after a previous screen. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and never writes orders, files, accounts, or wallet state. Use it for portfolio triage, issuer watchlists, and deciding which companies deserve deeper covenant or alpha analysis; use covenant_stress with ticker for detail on one issuer.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum issuers to return. Use 5-20 for agent summaries and up to 100 for full screening. | |
| offset | No | Pagination offset for continuing a previous ranked screen. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | Rows returned by the screen. |
| meta | Yes | Pagination and count metadata. |
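The limit/offset pair supports paging through the full ranked screen. The sketch below shows one way an agent might do that; `fetch` is a hypothetical stand-in for the actual MCP call to deltasignal_top_stressed, assumed to return a dict with `data` and `meta` keys as in the output schema.

```python
def collect_top_stressed(fetch, page_size: int = 20, max_rows: int = 100) -> list:
    """Page through the ranked screen using limit/offset.

    `fetch(limit=..., offset=...)` stands in for the real tool call
    and is assumed to return {"data": [...rows...], "meta": {...}}.
    page_size follows the description's 5-20 guidance for summaries;
    max_rows caps the screen at the tool's 100-row limit.
    """
    rows = []
    offset = 0
    while len(rows) < max_rows:
        page = fetch(limit=page_size, offset=offset)
        batch = page.get("data", [])
        if not batch:
            break  # no more rows in the ranked screen
        rows.extend(batch)
        offset += len(batch)
    return rows[:max_rows]

# Fake fetcher over 45 illustrative rows, to demonstrate paging.
universe = [{"ticker": f"T{i}"} for i in range(45)]
fake_fetch = lambda limit, offset: {"data": universe[offset:offset + limit],
                                    "meta": {}}
print(len(collect_top_stressed(fake_fetch)))  # → 45
```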
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, covering safety. Description adds ranking logic and context but no additional behavioral traits beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A well-structured description front-loaded with key information. No redundant or extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With an output schema present, description adequately covers core functionality and use case. Could mention ordering direction but overall complete for a ranking tool with documented parameters and schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and provides descriptions for both parameters (limit, offset). Description does not add any new parameter semantics beyond what schema already offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'ranks', resource 'most stressed crypto public companies', and specifies inputs (covenant stress, debt coverage, etc.) and use case (triage workflows). Distinguishes from siblings like 'deltasignal_alpha_signals' which focus on different aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage for triage workflows, but no explicit when-to-use or when-not-to-use guidance compared to sibling tools. The description does not mention alternatives or exclude other contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama cannot connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.