
DeltaSignal ATLAS-7

Ownership verified

Server Details

SEC/XBRL issuer intelligence for crypto public companies via MCP, OpenAPI, x402, and MPP.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: aitrailblazer/deltasignal-atlas-codex-plugin
GitHub Stars: 1
Server Listing
DeltaSignal ATLAS-7

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.7/5 across 25 of 25 tools scored. Lowest: 4.1/5.

Server Coherence: A
Disambiguation: 5/5

Every tool has a distinct purpose, well-described. Composite workflows are clearly delineated from individual tools, and natural-language variants are separate. No overlaps cause ambiguity.

Naming Consistency: 5/5

All tools follow the 'deltasignal_' prefix with descriptive, lowercase, underscore-separated names. Pattern is consistent: descriptive noun phrases for individuals, composite workflows like 'alpha_sweep', and explicit 'natural' suffixes.

Tool Count: 4/5

25 tools is slightly above typical range but well-justified by the breadth of functionality: screening, single-issuer analysis, historical data, composites, audits, and natural-language outputs. Each tool serves a distinct role.

Completeness: 5/5

The tool set covers the full domain of crypto public company analysis: screening, drilldown, historical series, risk monitoring, daily changes, audit checks. There are no obvious gaps for the server's stated purpose.

Available Tools

25 tools
deltasignal_alpha_opportunities (DeltaSignal alpha opportunities): A
Read-only, Idempotent

Use this read-only screening tool to rank the active DeltaSignal issuer universe by deterministic Phase 1 alpha score. It returns opportunity rows with ticker, CIK/entity metadata when available, issuer type, raw alpha score, board rank score, risk tier, debt coverage, quality, treasury, regime, and provenance fields. Parameters: limit is 1-100; source_date replays a known YYYY-MM-DD slice; risk_tier, quality_flag, issuer_type, include_funds, and debt_coverage_status narrow the screen. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not handle wallets, payments, orders, or account state. Default behavior returns operating-company issuers. Use include_funds=true or issuer_type=etf_trust|fund_vehicle|all only when the user asks for ETF, trust, fund, or product-vehicle screens. High scores are drilldown candidates, not standalone conclusions.

Parameters (JSON Schema)
- limit (optional): Maximum rows to return. Use 10-25 for agent summaries and up to 100 for full screening.
- risk_tier (optional): Risk tier filter such as HIGH, MODERATE, LOW, or UNCLASSIFIED.
- issuer_type (optional): Issuer-type filter: operating_company, etf_trust, fund_vehicle, foreign_issuer, unresolved_identifier, or all.
- source_date (optional): DeltaSignal source slice date in YYYY-MM-DD format.
- quality_flag (optional): Normalized quality filter such as high, medium, low, or unknown.
- include_funds (optional): Set true to include ETF, trust, fund, and product-vehicle rows in the screen.
- debt_coverage_status (optional): Debt coverage filter such as resolved_nonzero, legitimate_zero_debt, or low_confidence_missing_debt.
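As an illustration of how an agent might assemble these arguments, the sketch below builds a JSON-RPC 2.0 tools/call payload and clamps limit to the documented 1-100 range. The helper name and payload framing are assumptions for illustration, not a published client API.

```python
# Hypothetical sketch: assemble a tools/call payload for
# deltasignal_alpha_opportunities, clamping limit to the
# documented 1-100 range. Names here are illustrative only.
import json


def alpha_opportunities_payload(limit=25, risk_tier=None, include_funds=False):
    """Build a tools/call payload with limit clamped to 1-100."""
    args = {"limit": max(1, min(100, limit)), "include_funds": include_funds}
    if risk_tier is not None:
        args["risk_tier"] = risk_tier
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "deltasignal_alpha_opportunities",
            "arguments": args,
        },
    }


# An out-of-range limit is clamped rather than rejected.
payload = alpha_opportunities_payload(limit=500, risk_tier="HIGH")
print(json.dumps(payload["params"]["arguments"], sort_keys=True))
```

Under this sketch, optional filters are only included when explicitly set, matching the schema's all-optional parameter list.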

Output Schema (JSON Schema)
- data (required): Ranked Phase 1 alpha-opportunity screen for the active DeltaSignal issuer universe. The default screen focuses on operating-company issuers and requires issuer drilldown; ETF/trust/fund/product vehicles require explicit opt-in.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds behavioral context: it performs one HTTPS read, has no side effects, and does not handle wallets or payments. It also clarifies deterministic scoring (Phase 1). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph that covers purpose, parameters, and behavior without excessive verbosity. It could be slightly more structured (e.g., bullet points) but remains clear and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lists all significant return fields (ticker, CIK, alpha score, etc.), making it complete for an agent. With an output schema present and strong annotations, the description fills remaining gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds value by suggesting limit ranges (10-25 for summaries, up to 100 for full screening), default issuer_type, and when to use include_funds. This supplements the schema's parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: a read-only screening tool that ranks issuers by alpha score. It specifies the resource (DeltaSignal issuer universe) and the action (rank by Phase 1 alpha score), and the returned fields. This distinguishes it from sibling tools like deltasignal_alpha_signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides usage guidance, including default behavior (operating-company issuers) and when to use include_funds. It implicitly differentiates from other tools by focusing on screening, but does not explicitly name alternative tools for other alpha-related tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_alpha_opportunities_audit (DeltaSignal alpha opportunities audit): A
Read-only, Idempotent

Use this read-only diagnostic tool to explain why the alpha-opportunity board includes, excludes, or demotes rows. It returns issuer-type, identity, quality-gate, and raw-alpha-versus-board-rank summaries from the same scoring universe used by deltasignal_alpha_opportunities. Parameters: limit is 1-100 for bounded samples; source_date replays a known YYYY-MM-DD slice; issuer_type narrows the audit to operating_company, etf_trust, fund_vehicle, foreign_issuer, unresolved_identifier, or all; include_rows=true attaches full publishable audit rows and should be used only for explicit debugging. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not change board scoring, payments, wallets, files, or account state. Use it after deltasignal_alpha_opportunities or deltasignal_alpha_sweep when the user asks why a high raw alpha row is missing, why ETF/trust/fund rows are excluded by default, why a row was demoted, or whether a screen is safe to summarize.

Parameters (JSON Schema)
- limit (optional): Maximum sample rows to return in each audit sample. Use 10-25 for agent summaries and up to 100 for broad debugging.
- issuer_type (optional): Issuer-type audit filter: operating_company, etf_trust, fund_vehicle, foreign_issuer, unresolved_identifier, or all.
- source_date (optional): DeltaSignal source slice date in YYYY-MM-DD format.
- include_rows (optional): Set true only when the user explicitly asks for the full publishable audit rows.

Output Schema (JSON Schema)
- data (required): Alpha-opportunity board audit explaining issuer-type filters, identity quality, quality gates, and raw-alpha versus board-rank demotions.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so the description adds context beyond those: it specifies 'read-only and idempotent; it performs one HTTPS read, has no destructive side effects' and enumerates what it doesn't affect. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences with front-loaded purpose. Every sentence provides essential information—no filler or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, presence of output schema, and annotations, the description covers usage scenario, behavioral boundaries, parameter guidance, and links to siblings. It is complete without needing elaboration on return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds meaning beyond parameter names and types: it explains limit range purpose, source_date replay concept, issuer_type values, and a warning for include_rows. This significantly aids correct usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('explain why... includes, excludes, or demotes rows') and identifies the resource ('alpha-opportunity board'). It distinguishes itself from siblings by stating when to use it (after deltasignal_alpha_opportunities or deltasignal_alpha_sweep) and lists return summaries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('after deltasignal_alpha_opportunities or deltasignal_alpha_sweep when the user asks why...') and provides exclusions (no destructive side effects, does not change scoring/state). Also links to sibling tools by context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_alpha_signals (DeltaSignal alpha signals): A
Read-only, Idempotent

Use this read-only tool to retrieve DeltaSignal ATLAS-7 alpha signals for one crypto public company ticker. It returns issuer-level signal evidence such as resilience, treasury pressure, regime fit, phase-one opportunity indicators, and the active source date used by the DeltaSignal data plane. Use it for research triage on supported public-company tickers such as COIN, MSTR, MARA, RIOT, HUT, and CLSK; do not pass crypto asset symbols unless they are listed public-company tickers. Parameters: ticker is required and normalized to uppercase; source_date is optional YYYY-MM-DD and should be used only to replay a known historical DeltaSignal slice. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, does not trade, does not store user data, and does not handle wallet secrets. Usage guidelines: call readiness first if you need service freshness, use covenant_stress for covenant-only questions, use company_fundamentals for raw SEC XBRL facts, and treat the result as issuer intelligence rather than investment advice.

Parameters (JSON Schema)
- ticker (required): Crypto public company ticker symbol. The server normalizes it to uppercase. Good examples: COIN, MSTR, MARA, RIOT, HUT, CLSK. Do not use asset symbols like BTC or ETH unless they are also public-company tickers.
- source_date (optional): DeltaSignal source slice date in YYYY-MM-DD format. Omit this for the latest active slice; set it only when reproducing a prior dated run.
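The normalization and format rules above can be sketched client-side before the call; this helper is an assumption for illustration (the server itself does the authoritative uppercase normalization), not part of the published schema.

```python
# Hypothetical pre-flight helper for deltasignal_alpha_signals:
# uppercase the ticker the way the server says it will, and
# reject a source_date that is not YYYY-MM-DD.
import re


def alpha_signals_arguments(ticker, source_date=None):
    """Return the arguments dict for one issuer, validated locally."""
    args = {"ticker": ticker.strip().upper()}
    if source_date is not None:
        if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", source_date):
            raise ValueError("source_date must be in YYYY-MM-DD format")
        args["source_date"] = source_date
    return args


print(alpha_signals_arguments("mstr"))
print(alpha_signals_arguments("coin", source_date="2025-01-15"))
```

Omitting source_date keeps the call on the latest active slice, as the parameter description advises.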

Output Schema (JSON Schema)
- data (required): Alpha-signal research result for one crypto public company. Scores are issuer-level DeltaSignal values from 0 to 100 when present.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, destructiveHint. The description adds context: 'performs one HTTPS read, has no destructive side effects, does not trade, does not store user data, and does not handle wallet secrets.' This reinforces and slightly extends annotation info, but annotations already cover the key behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph with clear front-loading: it starts with the core purpose and then adds supporting details. Each sentence adds distinct value (behavior, parameters, usage guidelines). Slightly verbose but not wasteful; could arguably be shortened by a sentence without loss.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (many siblings, output schema exists, 2 parameters), the description covers all necessary aspects: what signals are returned (lists types), constraints (ticker format), behavioral traits, and sibling differentiation. Output schema presence makes return value details unnecessary. It is fully adequate for an AI agent to decide correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, each parameter is well-documented. The description adds value by summarizing ticker normalization (uppercase), source_date as optional for historical replay, and provides examples. This goes beyond the schema's baseline, but the schema already carries most of the burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool retrieves DeltaSignal ATLAS-7 alpha signals for one crypto public company ticker, lists example tickers (COIN, MSTR, etc.), and contrasts with sibling tools by naming alternatives like covenant_stress and company_fundamentals. The verb 'retrieve' and resource 'DeltaSignal alpha signals' are specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance: research triage on supported public-company tickers, warns against passing crypto asset symbols. It names specific alternative tools for different use cases (readiness for freshness, covenant_stress for covenant-only, company_fundamentals for raw XBRL). Also advises treating results as issuer intelligence.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_alpha_sweep (DeltaSignal alpha sweep): A
Read-only, Idempotent

Use this read-only composite workflow tool for opportunity and alpha screening across the current DeltaSignal issuer universe. It server-enforces the alpha-sweep call plan: readiness, alpha_opportunities with limit 15, and daily_changes; alpha_opportunities defaults to operating-company issuers. Parameters: optional output_mode=compact only; do not pass limit, offset, ticker, source_date, or issuer filters because this preset owns exact arguments internally. Behavior: read-only and idempotent; it performs three internal HTTPS reads, has no destructive side effects, never calls issuer-level tools, and preserves partial results if one internal call fails. Use it when the user asks for alpha opportunities, opportunity sweep, clean alpha board, or names worth follow-up research; treat the result as a screen requiring issuer drilldown.

Parameters (JSON Schema)
- output_mode (optional): Response mode. Only compact is accepted.
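Because this preset owns its internal arguments, a caller-side guard can reject anything beyond output_mode before the call goes out. This is a hedged sketch of that rule as stated in the description, not server code.

```python
# Hypothetical argument guard for deltasignal_alpha_sweep:
# the preset owns limit/offset/ticker/source_date internally,
# so only output_mode='compact' may be passed through.
def alpha_sweep_arguments(**kwargs):
    """Validate that only the allowed output_mode argument is present."""
    unexpected = set(kwargs) - {"output_mode"}
    if unexpected:
        raise ValueError(
            f"preset owns these arguments internally: {sorted(unexpected)}"
        )
    if kwargs.get("output_mode", "compact") != "compact":
        raise ValueError("only output_mode='compact' is accepted")
    return kwargs


print(alpha_sweep_arguments(output_mode="compact"))
```

Passing limit=15 here would raise, reflecting the instruction that the server enforces its own call plan (readiness, alpha_opportunities with limit 15, daily_changes).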

Output Schema (JSON Schema)
- data (required): Server-enforced DeltaSignal alpha sweep composite response.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the annotations (readOnlyHint, idempotentHint, destructiveHint), the description adds valuable behavioral details: three internal HTTPS reads, no destructive side effects, never calls issuer-level tools, and preserves partial results on failure. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the purpose, then details behavior and restrictions, then usage guidance. Every sentence is informative with no fluff. It is slightly lengthy but justified by the complexity; still efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only one parameter, an output schema (implied but not shown), and annotations covering safety, the description provides all necessary context: purpose, behavior, parameter constraints, and usage cues. It is fully adequate for decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already describes the single parameter (output_mode) with the constraint 'only compact is accepted'. The description adds critical context by explicitly listing parameters that should not be passed (limit, offset, etc.) and why, which compensates for the lack of enums and adds significant value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool is for opportunity and alpha screening across the DeltaSignal issuer universe. It distinguishes from siblings by noting it is a composite workflow that server-enforces a specific call plan, making its purpose specific and non-redundant.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use (e.g., user asks for alpha opportunities, sweep, clean alpha board) and what parameters not to pass (limit, offset, ticker, etc.). It lacks explicit exclusion of sibling tools but implies that issuer-level drilldown is separate, which is adequate guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_atlas7_audit_status (DeltaSignal ATLAS-7 audit status): A
Read-only, Idempotent

Use this read-only tool to check whether the Azure-native ATLAS-7 full-universe regression audit is healthy. It reads the latest audit summary artifact from Azure Blob and reports last successful run time, issuer count, operation count, failure counts, historical route status, composite route status, and artifact prefix. Parameters: none. Behavior: read-only and idempotent; it has no destructive side effects, does not run the audit, mutate data, or access raw issuer evidence. Use this before trusting historical ATLAS-7 surfaces in an agent workflow or when an operator asks whether the nightly 215-issuer audit is current.

Parameters (JSON Schema)

No parameters.

Output Schema (JSON Schema)
- data (required): Latest Azure ATLAS-7 regression audit health summary.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, idempotent, non-destructive. The description adds valuable context: it does not run the audit, mutate data, or access raw evidence. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the purpose and use case. Every sentence adds value; no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters, rich annotations, and an output schema, the description fully covers what the tool does and when to use it. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, the baseline is 4. The description notes 'Parameters: none', which is sufficient. No further explanation needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to check the health of the ATLAS-7 audit. It specifies the verb 'check' and the resource 'audit status', listing exact information returned. It distinguishes itself from sibling tools by focusing on ATLAS-7 specifically.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance: 'before trusting historical ATLAS-7 surfaces' or when asked about the nightly audit currency. It lacks explicit alternatives or when-not-to-use scenarios, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_atlas7_calculation_history (DeltaSignal ATLAS-7 complete calculation history): A
Read-only, Idempotent

Use this read-only tool when agents need the complete ATLAS-7 calculation bundle for an issuer and source_date. It assembles one calculation-history row from the existing ATLAS-7 precomputed surfaces: covenant stress, company fundamentals, peer ranking, alpha signals, alpha score breakdown, market regime context, SPECTRA inputs, quality flags, provenance, source fields, and hashes. Parameters: source_date replays one YYYY-MM-DD ATLAS-7 slice; source_date_from/source_date_to can page recent slices; ticker or CIK narrows to one issuer; mode=compact by default and full includes source_fields_json. Behavior: read-only and idempotent; it has no destructive side effects and performs no wallet, settlement, or trading actions. Use this before historical report rendering so agents do not mix latest-only fields into a historical answer.

Parameters (JSON Schema)
- cik (optional): CIK filter. Bare digits or CIK-prefixed digits are accepted.
- mode (optional): Response mode: compact or full. Compact omits source_fields_json.
- limit (optional): Maximum calculation rows to return. Defaults to 25 and is capped at 100 through MCP.
- offset (optional): Pagination offset over calculation rows.
- ticker (optional): Issuer ticker, for example MSTR.
- source_date (optional): Exact ATLAS-7 source date in YYYY-MM-DD format. Defaults to latest when no range is supplied.
- source_date_from (optional): Inclusive source-date lower bound in YYYY-MM-DD format.
- source_date_to (optional): Inclusive source-date upper bound in YYYY-MM-DD format.
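The date-range paging described above can be validated locally before a replay call; the helper below is a sketch under the stated defaults (limit 25, cap 100), with an ordering check that is an assumption rather than documented server behavior.

```python
# Hypothetical replay-window builder for the calculation-history
# tool: parse both bounds as ISO dates, require from <= to, and
# apply the documented limit default (25) and MCP cap (100).
from datetime import date


def calculation_history_window(source_date_from, source_date_to, limit=25):
    """Return validated range arguments for a historical replay."""
    lo = date.fromisoformat(source_date_from)
    hi = date.fromisoformat(source_date_to)
    if lo > hi:
        raise ValueError("source_date_from must not be after source_date_to")
    return {
        "source_date_from": lo.isoformat(),
        "source_date_to": hi.isoformat(),
        "limit": min(max(limit, 1), 100),
    }


print(calculation_history_window("2025-01-01", "2025-01-31", limit=250))
```

A limit above the cap is reduced to 100 rather than rejected, mirroring how the schema describes capping through MCP.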

Output Schema (JSON Schema)
- data (required): Complete ATLAS-7 calculation-history rows keyed by source_date and issuer.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnlyHint, idempotentHint, destructiveHint), the description adds that it has no wallet, settlement, or trading actions, and explains the parameter behavior (date ranges for paging, mode effect). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with 6-7 sentences that are front-loaded with purpose. Every sentence adds value, no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 optional params, output schema exists), the description covers all key aspects: what it returns, how to filter, mode options, safety guarantees. The output schema handles return value details, so completeness is high.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds meaningful context: source_date replays a slice, source_date_from/to page recent slices, ticker/CIK narrow, mode compact/full. This goes beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the action (assemble complete ATLAS-7 calculation history) and the resource (bundle from various precomputed surfaces). It distinguishes from siblings by enumerating the components it includes, which none of the other ATLAS-7 tools do comprehensively.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('when agents need the complete ATLAS-7 calculation bundle') and provides context ('before historical report rendering so agents do not mix latest-only fields'). It does not explicitly name alternative tools, but the listed components imply differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_atlas7_companyfacts_history (DeltaSignal ATLAS-7 CompanyFacts history): A
Read-only, Idempotent

Use this read-only tool to retrieve compact SEC CompanyFacts/XBRL materialization rows for the crypto public-company universe or a specific ticker/CIK. It returns one compact row per issuer for a materialized companyfacts source_date, including tag counts, crypto/digital-asset flags, top tags, taxonomies, evidence hashes, and fact source pointers. Parameters: source_date replays a known YYYY-MM-DD materialization slice; ticker or cik optionally narrows to one issuer; limit defaults to 25 and is capped at 250; offset paginates the universe. Behavior: read-only and idempotent; it performs no writes, has no destructive side effects, and never returns the raw facts_payload, so use fact_source_pointer plus evidence_hash for drilldown/audit. Use this for Mirror Pulse or ATLAS-7 historical joins when you need the compact issuer-level CompanyFacts inventory for all 215 crypto companies.

Parameters (JSON Schema)

- cik (optional): Optional SEC CIK. Accepts bare digits or CIK-prefixed form and normalizes to 10 digits.
- mode (optional): Optional response mode. Use compact by default; summary includes the compact summary object.
- limit (optional): Maximum issuer rows to return. Defaults to 25 and is capped at 250.
- offset (optional): Pagination offset over compact issuer rows.
- ticker (optional): Optional public-company ticker. Omit with cik to page the full materialized crypto issuer universe.
- source_date (optional): Optional materialized CompanyFacts source date in YYYY-MM-DD format. Defaults to the latest materialized source date.
- include_summary (optional): Include the compact summary JSON object in each row.

Output Schema (JSON Schema)

- data (required): Compact materialized SEC CompanyFacts rows keyed by source_date and issuer.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
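
The normalization and paging rules in the schema above (a CIK accepted as bare or CIK-prefixed digits and padded to 10 digits; limit defaulting to 25 with a 250 cap) can be sketched client-side. This is a minimal sketch; the helper names are illustrative, not part of the server API.

```python
def normalize_cik(raw: str) -> str:
    """Normalize a bare or 'CIK'-prefixed identifier to the 10-digit SEC form."""
    digits = raw.strip().upper().removeprefix("CIK").strip()
    if not digits.isdigit():
        raise ValueError(f"not a CIK: {raw!r}")
    return digits.zfill(10)

def companyfacts_history_params(ticker=None, cik=None, source_date=None,
                                limit=25, offset=0):
    """Build request arguments, applying the documented default and cap."""
    params = {"limit": min(limit, 250), "offset": offset}
    if ticker:
        params["ticker"] = ticker.upper()
    if cik:
        params["cik"] = normalize_cik(cik)
    if source_date:
        params["source_date"] = source_date  # YYYY-MM-DD replay slice
    return params
```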
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description reinforces these by stating 'read-only and idempotent; it performs no writes, has no destructive side effects' and adds context about not returning the raw facts_payload, providing drill-down guidance.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph that front-loads the purpose and key behavior. It is clear and efficient, though slightly verbose in listing parameter details that are already in the schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, output schema present), the description adequately covers use cases, behavior, and parameter usage. It leverages the existing schema and annotations to avoid redundancy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are well-documented. The description summarizes key parameters (source_date, ticker, cik, limit, offset) and their roles, adding value like default limit and cap, but does not substantially extend the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as a read-only resource to retrieve compact SEC CompanyFacts/XBRL materialization rows for the crypto public-company universe or a specific ticker/CIK. It specifies the verb 'retrieve' and the resource 'CompanyFacts history', making the purpose distinct from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the tool ('for Mirror Pulse or ATLAS-7 historical joins') and what to expect (compact issuer-level inventory). It also hints at what not to do (raw facts_payload is not returned) but does not explicitly name alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_atlas7_point_in_time_history (DeltaSignal ATLAS-7 point-in-time factor history): A
Read-only · Idempotent

Use this read-only structured history tool when Mirror Pulse, backtests, or agents need daily ATLAS-7 CompanyFacts-derived factor rows keyed by as_of_date. It returns point-in-time-safe rows derived from the latest CompanyFacts archive by applying only facts with filed <= as_of_date; rows are retrospective recomputations and include lookahead safety flags. Parameters: optional ticker or CIK, source_date, as_of_date_from, as_of_date_to, mode=compact|full, limit, and offset. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, does not generate Natural Language, and never executes trades, wallets, or settlement flows. Use compact mode by default for Mirror Pulse joins; use full mode only for selected audit pages because factor_payload can be larger.

Parameters (JSON Schema)

- cik (optional): Optional CIK filter. Bare digits or CIK-prefixed digits are accepted.
- mode (optional): Optional response mode. Use compact by default; full includes factor_payload.
- limit (optional): Maximum daily factor rows to return. Defaults to 250 on REST and is capped lower on MCP.
- offset (optional): Pagination offset for daily factor rows.
- ticker (optional): Optional issuer ticker filter, for example MSTR.
- source_date (optional): Optional CompanyFacts archive source date in YYYY-MM-DD format.
- as_of_date_to (optional): Optional inclusive as-of date upper bound in YYYY-MM-DD format.
- as_of_date_from (optional): Optional inclusive as-of date lower bound in YYYY-MM-DD format.

Output Schema (JSON Schema)

- data (required): CompanyFacts-derived point-in-time ATLAS-7 factor rows keyed by as_of_date and issuer.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
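
The point-in-time rule described above (apply only facts with filed <= as_of_date) can be illustrated with a minimal filter. The fact records below are invented for illustration; only the filtering rule comes from the tool description.

```python
from datetime import date

def point_in_time_view(facts, as_of_date):
    """Keep only facts filed on or before as_of_date (no lookahead)."""
    return [f for f in facts if f["filed"] <= as_of_date]

facts = [
    {"tag": "Assets", "value": 100, "filed": date(2024, 3, 1)},
    {"tag": "Assets", "value": 140, "filed": date(2024, 6, 1)},  # later restatement
]
# As of 2024-04-15, only the 2024-03-01 fact should be visible.
visible = point_in_time_view(facts, date(2024, 4, 15))
```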
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already show readOnly, idempotent, non-destructive. Description adds that it performs one HTTPS read, no destructive side effects, no NL generation, no trades, and includes lookahead safety flags, adding significant value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, front-loaded with purpose; each sentence provides unique value. No redundant or vague language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 optional parameters, output schema exists, and annotations cover safety, the description explains behavior, mode choices, safety features, and intended use. Complete for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and description adds practical guidance: use compact by default for Mirror Pulse joins, full only for audit pages due to payload size. This goes beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns 'daily ATLAS-7 CompanyFacts-derived factor rows keyed by as_of_date' with point-in-time safety. It distinguishes itself from sibling history tools by emphasizing retrospective recomputation and lookahead safety flags.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: for Mirror Pulse, backtests, or agents needing daily factor history. Provides mode guidance (compact vs full). Does not directly exclude siblings, but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_atlas_history (DeltaSignal ATLAS history): A
Read-only · Idempotent

Use this read-only tool to retrieve a historical ATLAS-7 covenant and stress series for one crypto public company ticker. It returns one compact row per ATLAS source_date, including debt, crypto fair value, BTC holdings, stress, risk tier, live-price fields, quality flags, and provenance needed for mNAV and Mirror Pulse joins. Parameters: ticker is required; source_date_from and source_date_to bound the inclusive ATLAS source-date range; limit defaults to 500 and is capped at 2000; offset paginates the dated series. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not apply the latest ATLAS snapshot retroactively across history. Use this when the user needs historical ATLAS data, MSTR/Strategy time series, mNAV backtests, Mirror Pulse joins, or dated stress/risk snapshots.

Parameters (JSON Schema)

- limit (optional): Maximum dated rows to return. Defaults to 500 and is capped at 2000.
- offset (optional): Pagination offset over dated ATLAS rows.
- ticker (required): Required crypto public company ticker. Examples: MSTR, COIN, MARA, RIOT.
- source_date_to (optional): Optional inclusive ATLAS source-date upper bound in YYYY-MM-DD format.
- source_date_from (optional): Optional inclusive ATLAS source-date lower bound in YYYY-MM-DD format.

Output Schema (JSON Schema)

- data (required): Historical ATLAS-7 issuer time series keyed by source_date.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
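
The inclusive source_date_from/source_date_to bounding described above can be sketched as a simple predicate, with an omitted bound treated as open-ended. The helper and sample dates are hypothetical.

```python
from datetime import date

def in_source_range(source_date, date_from=None, date_to=None):
    """Inclusive lower/upper bounds; an omitted bound is open-ended."""
    if date_from is not None and source_date < date_from:
        return False
    if date_to is not None and source_date > date_to:
        return False
    return True

series = [date(2024, 1, 2), date(2024, 2, 2), date(2024, 3, 2)]
# Both bounds are inclusive, so 2024-03-02 stays in the window.
window = [d for d in series if in_source_range(d, date(2024, 1, 15), date(2024, 3, 2))]
```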
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly and idempotent hints. The description adds further behavioral context: single HTTPS read, no destructive side effects, and no retroactive application of latest snapshot, providing full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loaded with purpose, and structured into two clear sentences covering usage, parameters, and behavior without unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of output schema and annotations, the description fully covers return fields (debt, fair value, stress, etc.), usage context, and parameter behaviors, leaving no significant gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the schema descriptions already document the parameters. The tool description summarizes them (ticker required, date bounds, limit cap, offset) but adds little beyond what the schema provides, resulting in a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves historical ATLAS-7 covenant and stress series for a ticker with specific return fields and use cases, distinguishing it from sibling tools like deltasignal_atlas7_point_in_time_history and deltasignal_covenant_stress.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists when to use the tool (historical ATLAS data, mNAV backtests, Mirror Pulse joins) with clear context, but does not explicitly state when not to use or mention specific alternatives beyond implied differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_company_fundamentals (DeltaSignal company fundamentals): A
Read-only · Idempotent

Use this read-only tool to retrieve SEC XBRL-backed fundamentals for one crypto public company ticker. It returns filing period, entity identifiers, filing form, core financial values, provenance, and optional segment or related-party containers when requested. Parameters: ticker is required; period is optional YYYY-MM-DD; include_segments and include_related_party request additional containers when available and otherwise return availability metadata. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not modify SEC data, accounts, files, or wallets. Use it when the user asks for revenue, net income, assets, cash, liabilities, equity, SEC filing context, or fact provenance; use alpha_signals or covenant_stress for modeled signal interpretation.

Parameters (JSON Schema)

- period (optional): Optional YYYY-MM-DD reporting period. Omit for the latest available filing period.
- ticker (required): Required crypto public company ticker. Examples: COIN, MSTR, MARA, RIOT, HUT, CLSK.
- include_segments (optional): Request segment fact container when available. Current runtime returns availability metadata even when segment facts are not populated.
- include_related_party (optional): Request related-party transaction container when available. Current runtime returns availability metadata even when related-party facts are not populated.

Output Schema (JSON Schema)

- data (required): SEC XBRL-backed fundamentals for one crypto public company ticker.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
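
The availability-metadata behavior noted for include_segments and include_related_party can be handled defensively on the client side. The container shape below is an assumption for illustration; the listing only documents that availability metadata is returned when facts are not populated.

```python
def extract_segments(container):
    """Return segment facts if populated, else surface an availability note.

    `container` is a hypothetical shape: {"facts": [...]} when populated,
    or {"availability": "..."} when the runtime returns metadata only.
    """
    if container.get("facts"):
        return container["facts"]
    return {"available": False,
            "note": container.get("availability", "not populated")}
```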
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, destructiveHint. Description adds further context: 'performs one HTTPS read, has no destructive side effects, does not modify SEC data, accounts, files, or wallets.' No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured paragraphs: the first states the purpose and a parameter summary; the second explains behavior and usage guidance. No wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters, output schema, and rich annotations, description fully covers purpose, parameter details, behavioral traits, and usage context. Output schema handles return format, so no gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. Description adds value by summarizing parameters and noting behavior like returning availability metadata for optional containers, which goes beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it retrieves SEC XBRL-backed fundamentals for one crypto public company ticker, listing return contents and differentiating from siblings like alpha_signals and covenant_stress for modeled signal interpretation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly specifies when to use this tool (e.g., user asks for revenue, net income, filing context) and when to use alternatives (alpha_signals or covenant_stress for modeled interpretation). Also notes read-only and idempotent behavior.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_company_report (DeltaSignal company report): A
Read-only · Idempotent

Use this read-only composite workflow tool for the default full single-issuer DeltaSignal ATLAS-7 company report. It server-enforces the complete company report call plan: readiness, company_fundamentals, alpha_signals, peer_ranking, covenant_stress, and SPECTRA field-map support for one normalized ticker. Parameters: ticker is required and normalized to uppercase; period, include_segments, include_related_party, and output_mode=compact are optional. SPECTRA is included when a field-map contract is available for the issuer. Behavior: read-only and idempotent; it performs six internal HTTPS reads, has no destructive side effects, rejects invalid tickers before fan-out, and preserves partial results if a required issuer leg fails. Use it when the user asks for a report, deep dive, issuer brief, or diligence package on one crypto public-company ticker; use low-level tools only for custom drilldowns.

Parameters (JSON Schema)

- period (optional): Optional YYYY-MM-DD reporting period to pass to fundamentals, peer ranking, and covenant stress when reproducing a known filing date.
- ticker (required): Required crypto public company ticker. The server trims whitespace and normalizes to uppercase before all internal calls. Examples: RIOT, MARA, COIN, MSTR.
- output_mode (optional): Optional response mode. Only compact is accepted in Phase 1.
- include_segments (optional): Request segment containers from company_fundamentals when available.
- include_related_party (optional): Request related-party containers from company_fundamentals when available.

Output Schema (JSON Schema)

- data (required): Server-enforced DeltaSignal company report composite response.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
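
The composite behavior described above (ticker normalized before fan-out, partial results preserved when a leg fails) can be sketched as a simple orchestrator. The function and leg callables below are hypothetical stand-ins for the server-enforced call plan, not the actual implementation.

```python
def run_company_report(ticker, legs):
    """Run each report leg in order, preserving partial results on failure.

    `legs` maps a leg name (e.g. "readiness", "company_fundamentals",
    "alpha_signals", "peer_ranking", "covenant_stress", "spectra") to a
    callable taking the normalized ticker.
    """
    ticker = ticker.strip().upper()  # documented normalization before fan-out
    results, errors = {}, {}
    for name, call in legs.items():
        try:
            results[name] = call(ticker)
        except Exception as exc:
            errors[name] = str(exc)  # keep partial results, record the failure
    return {"ticker": ticker, "results": results, "errors": errors}
```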
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant value beyond annotations: it details the six internal HTTPS reads, confirms read-only and idempotent behavior, explains rejection of invalid tickers before fan-out, and mentions preservation of partial results. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single well-structured paragraph that front-loads purpose and then details usage and behavior. Every sentence adds information, with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity as a composite workflow and the presence of an output schema, the description covers all necessary aspects: behavior, constraints, usage scenarios, and parameter specifics. It is fully adequate for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already defines parameters. The description adds context like ticker normalization, period usage, and conditional inclusion of SPECTRA, which goes beyond the schema. However, the schema is already thorough, so a 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool is a read-only composite workflow for the default full DeltaSignal ATLAS-7 company report, listing its components. It explicitly says to use it for reports, deep dives, issuer briefs, or diligence packages on a single crypto ticker, distinguishing it from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: use this tool when a user asks for a report or deep dive, and use low-level tools for custom drilldowns. It also notes the tool enforces the complete call plan, leaving no ambiguity about when to choose it over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_covenant_stress (DeltaSignal covenant stress): A
Read-only · Idempotent

Use this read-only tool for ATLAS-7 covenant stress analysis on crypto public companies. Pass ticker for a single-issuer detail view, or omit ticker to screen the active issuer universe with risk, quality, debt-coverage, linkbase, minimum-stress, limit, and offset filters. It returns filing-backed stress, headroom, risk tier, debt coverage, quality, linkbase provenance, live-price deltas, and metadata needed for covenant-risk research. Parameters: ticker selects detail mode; without ticker, limit/offset paginate list mode, min_stress is 0-100, and risk_tier, quality_flag, debt_coverage_status, and linkbase_only narrow the screen. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not change filings, portfolios, wallets, or account state. Use top_stressed for the default ranked universe, peer_ranking for relative peer context, and alpha_signals when the user asks for opportunity or edge rather than covenant stress.

Parameters (JSON Schema)

- limit (optional): Maximum rows for list mode. Use 10-25 for concise screens.
- offset (optional): Pagination offset for list mode.
- period (optional): Optional YYYY-MM-DD filing period for ticker detail mode.
- ticker (optional): Optional crypto public company ticker. When set, returns detail for that issuer. Examples: MARA, RIOT, COIN, MSTR.
- risk_tier (optional): Optional risk tier filter for list mode, such as HIGH, MODERATE, LOW, or UNCLASSIFIED.
- min_stress (optional): Optional minimum stress score for list mode.
- source_date (optional): Optional YYYY-MM-DD DeltaSignal source date for list mode.
- quality_flag (optional): Optional data quality filter for list mode, such as High, Medium, or Low.
- linkbase_only (optional): Restrict list mode to issuers backed by SEC XBRL linkbase/rooted facts.
- debt_coverage_status (optional): Optional debt coverage status filter, such as resolved_nonzero, legitimate_zero_debt, or low_confidence_missing_debt.

Output Schema (JSON Schema)

- data (required): Covenant stress detail when ticker is supplied, or list-mode response with data and meta when ticker is omitted.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
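
The detail-versus-list mode split can be mirrored when building arguments: the screen filters apply only when ticker is omitted, and min_stress is documented as 0-100. A hedged sketch; the helper name and the exact server-side filter handling are assumptions.

```python
def covenant_stress_params(ticker=None, **filters):
    """Detail mode when ticker is set; otherwise list mode with screen filters."""
    if ticker:
        allowed = {"period"}  # per the description, filters narrow list mode only
    else:
        allowed = {"risk_tier", "quality_flag", "debt_coverage_status",
                   "linkbase_only", "min_stress", "source_date", "limit", "offset"}
    params = {k: v for k, v in filters.items() if k in allowed}
    if ticker:
        params["ticker"] = ticker.upper()
    if "min_stress" in params and not 0 <= params["min_stress"] <= 100:
        raise ValueError("min_stress must be 0-100")
    return params
```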
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description reinforces these by stating 'read-only and idempotent; it performs one HTTPS read, has no destructive side effects' and further describes return data (stress, headroom, etc.). This adds useful behavioral context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at about 5 sentences, front-loading the main purpose and mode distinction. Every sentence adds value – no redundancy or fluff. It efficiently covers purpose, usage, behavior, and alternatives.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (10 parameters, no required, output schema exists, rich annotations), the description covers all key aspects: two modes, filter behaviors, read-only nature, return data summary, and sibling tool guidance. It is complete enough for effective tool selection and invocation, though it could optionally detail output schema fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description summarizes parameter roles (ticker for detail, filters for list mode) but does not add significant meaning beyond what the schema already provides. The added usage context is adequate but not exceptional.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool is for ATLAS-7 covenant stress analysis on crypto public companies. It specifies two modes (single-issuer detail via ticker, or screening the universe with filters) and explicitly distinguishes from sibling tools (top_stressed, peer_ranking, alpha_signals).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs. alternatives: 'Use top_stressed for the default ranked universe, peer_ranking for relative peer context, and alpha_signals when the user asks for opportunity or edge rather than covenant stress.' It also explains the detail vs. list mode behavior and filter usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_covenant_stress_natural (DeltaSignal covenant stress Natural Language brief): A
Read-only · Idempotent

Use this premium read-only Natural Language tool when the user wants ticker-specific covenant stress evidence explained in human-readable Markdown. It renders compact ATLAS-7 covenant, leverage, liquidity, filing, and stress evidence into an audit-grade brief while preserving returned ticker, issuer, values, source dates, nulls, quality flags, and caveats. Parameters: ticker is required; date is optional and maps to the evidence period when supported; style is professional, concise, trader, or detailed. Behavior: read-only and idempotent; it performs one HTTPS read against the Natural Language route, has no destructive side effects, and never infers covenant breach, default risk, insolvency, liquidity crisis, or trade direction unless returned by evidence.

Parameters (JSON Schema)

- style (optional): Rendering style. Style changes tone and density only, not facts.
- period (optional): Optional YYYY-MM-DD source/evidence period selector.
- ticker (required): Required crypto public company ticker. Examples: RIOT, MARA, COIN, MSTR.

Output Schema (JSON Schema)

- data (required): Natural Language Covenant Stress response.
- provenance (required): Traceability information for the MCP tool response.
- mcp_summary (required): Concise high-signal summary of the tool response. Maximum 140 characters.
- usage_metadata (optional): Performance and estimated cost metadata for this MCP tool call.
- suggested_follow_ups (required): Concrete next MCP calls an agent can run to continue the workflow.
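
A small sketch of validating the documented style values before calling the tool; the helper is illustrative and only mirrors the enum stated in the description (professional, concise, trader, detailed).

```python
ALLOWED_STYLES = {"professional", "concise", "trader", "detailed"}

def natural_brief_params(ticker, style="professional", period=None):
    """Build arguments for the NL brief; style changes tone only, not facts."""
    if style not in ALLOWED_STYLES:
        raise ValueError(f"style must be one of {sorted(ALLOWED_STYLES)}")
    params = {"ticker": ticker.upper(), "style": style}
    if period:
        params["period"] = period  # YYYY-MM-DD evidence period, when supported
    return params
```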
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and idempotentHint. The description adds significant behavioral details: it performs one HTTPS read, has no destructive side effects, and explicitly states it never infers breach or risk unless evidenced. This goes beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose, parameter details, and behavioral notes. It is slightly verbose but every sentence adds value. Front-loading the intended use is effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (3 params, high schema coverage, output schema exists), the description fully explains what the tool does, what it returns (Markdown brief with specific content), and its limitations (no inferences). No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds extra meaning by explaining that 'period' maps to the evidence period and 'style' has specific values (professional, concise, trader, detailed) and notes that style only changes tone not facts.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: generating human-readable Markdown briefs on ticker-specific covenant stress evidence. It distinguishes from siblings like deltasignal_covenant_stress (raw data) and deltasignal_top_stressed_natural (different scope) by specifying ticker-specific natural language output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description begins with 'Use this premium read-only Natural Language tool when...', providing clear context for when to invoke it. It does not explicitly name alternatives or when-not to use, but the use case is well-defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_daily_change_evidence: DeltaSignal daily change evidence (A)
Read-only · Idempotent

Use this read-only drilldown tool only when the user asks why one issuer or CIK was flagged in daily changes. It returns paginated raw CompanyFacts tag evidence for a specific ticker or CIK, plus page metadata and issuer identity. Parameters: ticker or cik is required; source_date is optional; limit defaults to 100 and is capped at 250; offset paginates the raw tag page. Behavior: read-only and idempotent; it performs one internal daily-changes read, filters evidence for one issuer/change, and has no destructive side effects. Do not use it for routine monitoring, Morning Brief, or Alpha Sweep unless the user explicitly asks for proof.
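The argument rules above (ticker or cik required, limit defaulting to 100 and capped at 250, offset for pagination) can be sketched as client-side validation. This is a minimal illustration; the helper name and shape are hypothetical, not part of the server API:

```python
def build_evidence_request(ticker=None, cik=None, source_date=None,
                           limit=100, offset=0):
    """Assemble arguments the way the description implies the server treats them."""
    if not ticker and not cik:
        # the description says ticker or cik is required
        raise ValueError("ticker or cik is required")
    # limit defaults to 100 and is capped at 250
    limit = min(max(1, limit), 250)
    args = {"limit": limit, "offset": offset}
    if ticker:
        args["ticker"] = ticker
    if cik:
        args["cik"] = cik
    if source_date:
        args["source_date"] = source_date
    return args
```

An over-large limit is clamped rather than rejected here; whether the real server clamps or errors is not stated in the description.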

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| cik | No | Optional SEC CIK. Required when ticker is absent. | |
| limit | No | Maximum raw tag evidence rows to return. Default 100, maximum 250. | 100 |
| offset | No | Pagination offset for raw tag evidence rows. | |
| ticker | No | Optional public-company ticker. Required when cik is absent. | |
| source_date | No | Optional CompanyFacts activity source date. Omit for latest activity date. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| data | Yes | Issuer-specific paginated daily-change evidence. |
| provenance | Yes | Traceability information for the MCP tool response. |
| mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters. |
| usage_metadata | No | Performance and estimated cost metadata for this MCP tool call. |
| suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow. |

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds internal details: 'performs one internal daily-changes read, filters evidence for one issuer/change.' This goes beyond the annotations without contradicting them, explaining the single-read behavior.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with purpose and usage condition, then parameters, then behavior. Every sentence adds value, with no redundant information.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 5 parameters fully documented, rich annotations, and an output schema present, the description covers all necessary context: when to use, what it returns, and behavioral guarantees. It is complete enough for an agent to select and invoke the tool correctly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, but the description adds value by stating that ticker or cik is required (though the schema marks neither as required), explaining the limit default and cap, and describing offset pagination. This clarifies usage beyond the schema descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool is a read-only drilldown for when a user asks why an issuer or CIK was flagged in daily changes. It specifies the action (drilldown) and the resource (daily change evidence), and distinguishes the tool from siblings like deltasignal_daily_changes.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool: 'only when the user asks why one issuer or CIK was flagged in daily changes.' It also provides a clear exclusion: 'Do not use it for routine monitoring, Morning Brief, or Alpha Sweep unless the user explicitly asks for proof.' This gives excellent guidance on alternatives.

deltasignal_daily_changes: DeltaSignal daily changes (A)
Read-only · Idempotent

Use this read-only monitoring tool to retrieve the latest meaningful DeltaSignal daily change snapshot. It highlights tracked crypto filing deltas, newly discovered crypto issuers, source dates, computed timestamps, classification summary, and change statistics. Parameters: none; call it exactly as-is when the user asks what changed today or needs a monitoring summary. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not write notifications, files, accounts, or wallet state. Use it for daily monitoring and freshness narratives; use readiness for service health and issuer-specific tools for detailed research on any ticker it mentions.

Parameters (JSON Schema)

No parameters.

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| data | Yes | Latest promoted DeltaSignal daily change snapshot for monitoring workflows, with separate compute and CompanyFacts artifact freshness fields. |
| provenance | Yes | Traceability information for the MCP tool response. |
| mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters. |
| usage_metadata | No | Performance and estimated cost metadata for this MCP tool call. |
| suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow. |

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context beyond the annotations: 'performs one HTTPS read, has no destructive side effects, does not write notifications, files, accounts, or wallet state.' There is no contradiction with the annotations.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, with a front-loaded purpose followed by behavior and usage guidance. It is slightly verbose, but each sentence adds value; it could be trimmed only minimally.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully complete given no parameters, a present output schema, and rich annotations. It covers all necessary aspects: purpose, usage, behavior, and sibling distinction.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%. The description explicitly states 'Parameters: none; call it exactly as-is,' which is clear and helpful.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the 'latest DeltaSignal daily change snapshot' and lists specific content types (crypto filing deltas, newly discovered issuers, etc.). It distinguishes the tool from siblings like readiness (service health) and issuer-specific tools.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says when to use the tool ('daily monitoring and freshness narratives') and when not to ('use readiness for service health and issuer-specific tools for detailed research'). It provides clear alternatives.

deltasignal_morning_brief: DeltaSignal morning brief (A)
Read-only · Idempotent

Use this read-only composite workflow tool as the default first-pass DeltaSignal ATLAS-7 daily scan. It server-enforces the complete morning brief call plan: readiness, daily_changes, risk_distribution, top_stressed with limit 10, alpha_opportunities with limit 10, and alpha_opportunities_audit with limit 10. Parameters: optional output_mode=compact only; do not pass limit, offset, ticker, source_date, or issuer filters because this preset owns exact arguments internally. Behavior: read-only and idempotent; it performs six internal HTTPS reads, has no destructive side effects, never calls issuer-level tools, and preserves partial results if one required internal call fails. Use it for morning brief, daily brief, daily scan, current risk board, and newsroom first-pass requests; use low-level tools only when the user explicitly asks for a custom drilldown.
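The server-enforced call plan and partial-result behavior described above can be sketched as a fixed sequential pipeline. This is an illustrative model only; `call_tool` is a hypothetical stand-in for the server's internal reads, not a documented API:

```python
# the fixed call plan the description says the server enforces
CALL_PLAN = [
    ("readiness", {}),
    ("daily_changes", {}),
    ("risk_distribution", {}),
    ("top_stressed", {"limit": 10}),
    ("alpha_opportunities", {"limit": 10}),
    ("alpha_opportunities_audit", {"limit": 10}),
]

def run_morning_brief(call_tool):
    """Execute each leg in order, keeping earlier results if a later leg fails."""
    results, ledger = {}, []
    for name, args in CALL_PLAN:
        try:
            results[name] = call_tool(name, args)
            ledger.append({"tool": name, "args": args, "ok": True})
        except Exception as exc:
            # a failed leg is recorded in the ledger; partial results survive
            ledger.append({"tool": name, "args": args, "ok": False,
                           "error": str(exc)})
    return {"data": results, "ledger": ledger}
```

The deterministic ledger mirrors the 'internal call ledger' the output schema mentions; the exact ledger fields here are assumptions.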

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| output_mode | No | Optional response mode. Only compact is accepted in Phase 1. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| data | Yes | Server-enforced DeltaSignal morning brief composite response. The data object preserves successful subtool payloads plus a deterministic internal call ledger. |
| provenance | Yes | Traceability information for the MCP tool response. |
| mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters. |
| usage_metadata | No | Performance and estimated cost metadata for this MCP tool call. |
| suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow. |

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the annotations (readOnlyHint, idempotentHint, destructiveHint), the description adds that the tool performs six internal HTTPS reads, has no destructive side effects, never calls issuer-level tools, and preserves partial results on failure. There is no contradiction with the annotations.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately long, but every sentence adds substantive value, and it front-loads the purpose prominently. Minimal redundancy justifies a 4 rather than a 5, as it could be slightly more concise.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's composite nature, the presence of an output schema, and rich annotations, the description covers all necessary context: the internal call plan, behavior on partial failure, explicit usage guidelines, and constraints. It is fully complete for agent decision-making.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter output_mode, which already has a description. The tool description adds value by stating only 'compact' is accepted and clarifying that other parameters (limit, offset, etc.) must not be passed because the preset owns its arguments internally.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a read-only composite workflow tool for the default first-pass DeltaSignal ATLAS-7 daily scan, listing the specific internal calls and distinguishing it from siblings by directing custom drilldowns to low-level tools.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly specifies when to use the tool (morning brief, daily brief, daily scan, current risk board, newsroom first-pass) and when not to (use low-level tools for custom drilldowns). It also forbids passing certain parameters, providing clear usage boundaries.

deltasignal_morning_brief_natural: DeltaSignal Morning Brief Natural Language brief (A)
Read-only · Idempotent

Use this premium read-only Natural Language tool when the user wants the server-composed Morning Brief rendered as audit-grade Markdown. It compiles backend-composed compact evidence across readiness, daily changes, risk distribution, top stressed issuers, and alpha opportunities. The renderer never fans out into tools and never generates social drafts or trade recommendations. Parameters: style is professional, concise, trader, or detailed. Date and limit are accepted only where the backend composite supports them. Behavior: read-only and idempotent; it performs the server-enforced Morning Brief workflow, has no destructive side effects, then renders the returned compact evidence as a bounded Natural Language response.
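The style values named above, and the 140-character mcp_summary cap that appears throughout these output schemas, can be sketched as simple guards. These helpers are illustrative assumptions, not the server's actual implementation:

```python
# the four rendering styles the description enumerates
ALLOWED_STYLES = {"professional", "concise", "trader", "detailed"}

def validate_style(style="professional"):
    # style changes tone and density only, never facts
    if style not in ALLOWED_STYLES:
        raise ValueError(f"unknown style: {style!r}")
    return style

def clip_mcp_summary(text, max_len=140):
    # mcp_summary is capped at 140 characters in every response
    if len(text) <= max_len:
        return text
    return text[:max_len - 1] + "…"
```

Whether the real server truncates or rejects over-long summaries is not stated; truncation here is an assumption.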

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Optional compact record limit where supported. | |
| style | No | Rendering style. Style changes tone and density only, not facts. | |
| period | No | Optional YYYY-MM-DD date selector when supported by the backend composite. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| data | Yes | Natural Language Morning Brief response rendered from backend-composed compact evidence. |
| provenance | Yes | Traceability information for the MCP tool response. |
| mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters. |
| usage_metadata | No | Performance and estimated cost metadata for this MCP tool call. |
| suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow. |

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond the annotations: it states the tool never fans out, never generates social drafts or trade recommendations, and is read-only and idempotent with no side effects. This fully aligns with the annotations (readOnlyHint, idempotentHint) and enhances transparency.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with the core purpose, and every sentence adds meaningful information. No wasted words.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description fully covers purpose, behavior, parameter conditions, and constraints. It explains what the tool does and what it avoids, leaving no ambiguity for a simple tool with optional parameters and an output schema.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100% and schema descriptions exist, the description adds valuable conditional guidance: the 'date' and 'limit' parameters are accepted only where the backend supports them, and 'style' is explained as changing tone and density only. This clarifies usage beyond the raw schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the specific resource (Morning Brief) and action (render as audit-grade Markdown). It distinguishes this tool from its siblings by emphasizing it is the natural language version and does not fan out into other tools.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states when to use the tool ('when the user wants the server-composed Morning Brief rendered as audit-grade Markdown'). It also clarifies what the tool does not do (no social drafts or trade recommendations), though it lacks explicit exclusions or alternatives.

deltasignal_peer_ranking: DeltaSignal peer ranking (A)
Read-only · Idempotent

Use this read-only tool to compare one crypto public company against its current peer group. It returns peer rank, peer percentile, peer score, stressed leverage, risk tier, debt coverage, quality flags, linkbase provenance, and period/source-date context. Parameters: ticker is required and must be one public-company symbol such as COIN, MSTR, MARA, RIOT, HUT, or CLSK; period is optional and only for reproducing a known filing date. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not write external systems or access user accounts. Use it when the user asks whether one issuer is better or worse than peers; use covenant_stress for absolute stress, top_stressed for universe-wide ranking, and alpha_signals for opportunity signals.
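The ticker/period contract described above (ticker required; period optional and only a YYYY-MM-DD filing date) can be sketched as a small argument builder. The helper is hypothetical and exists only to make the constraints concrete:

```python
from datetime import datetime

def build_peer_ranking_args(ticker, period=None):
    """Assemble peer-ranking arguments under the documented constraints."""
    args = {"ticker": ticker}
    if period is not None:
        # period must be a YYYY-MM-DD filing date; omit it for the latest ranking
        datetime.strptime(period, "%Y-%m-%d")
        args["period"] = period
    return args
```

Passing a malformed period (for example '12/31/2024') raises ValueError before any request is made.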

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| period | No | Optional YYYY-MM-DD filing period. Omit for the latest peer ranking. | |
| ticker | Yes | Required crypto public company ticker. Examples: COIN, MSTR, MARA, RIOT, HUT, CLSK. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| data | Yes | Peer ranking and percentile context for one crypto public company. |
| provenance | Yes | Traceability information for the MCP tool response. |
| mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters. |
| usage_metadata | No | Performance and estimated cost metadata for this MCP tool call. |
| suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow. |

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description confirms these but adds only minor context (e.g. 'performs one HTTPS read'). There are no contradictions, but since the annotations cover most behavioral traits, the description adds limited value.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph but front-loaded with purpose, followed by return fields, parameter details, behavior, and usage guidance. Every sentence is informative, with no wasted words.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given two parameters, an output schema, and safety annotations, the description covers purpose, usage, parameter semantics, behavioral transparency, and sibling alternatives. The agent has all the context needed to invoke this tool correctly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with descriptions for both parameters. The description adds meaning: ticker must be a public-company symbol (with examples), and period is for reproducing a known filing date. This goes beyond the schema's format constraints.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs and resources ('compare one crypto public company against its current peer group') and enumerates the return fields. It distinguishes the tool from siblings by naming alternatives like covenant_stress and top_stressed.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'Use this read-only tool' and provides clear context on when to use it ('when the user asks whether one issuer is better or worse than peers'). It also lists explicit alternatives: covenant_stress, top_stressed, and alpha_signals.

deltasignal_pressure_board: DeltaSignal pressure board (A)
Read-only · Idempotent

Use this read-only composite workflow tool for risk and stress monitoring across the current DeltaSignal issuer universe. It server-enforces the pressure-board call plan: readiness, top_stressed with limit 15, and risk_distribution. Parameters: optional output_mode=compact only; do not pass limit, offset, ticker, source_date, or issuer filters because this preset owns exact arguments internally. Behavior: read-only and idempotent; it performs three internal HTTPS reads, has no destructive side effects, never calls issuer-level tools, and preserves partial results if one internal call fails. Use it when the user asks for risk monitoring, pressure board, stress board, top stressed overview, or current risk mix.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| output_mode | No | Optional response mode. Only compact is accepted. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| data | Yes | Server-enforced DeltaSignal pressure board composite response. |
| provenance | Yes | Traceability information for the MCP tool response. |
| mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters. |
| usage_metadata | No | Performance and estimated cost metadata for this MCP tool call. |
| suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow. |

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, and destructiveHint. The description adds significant behavioral detail: it server-enforces a specific call plan, performs three internal HTTPS reads, has no destructive side effects, never calls issuer-level tools, and preserves partial results on failure. There is no contradiction with the annotations.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately long, but all sentences are informative, and it is well structured: purpose first, then parameters, then behavior, then usage suggestions. There is no redundancy. It could be slightly more concise, but given the tool's complexity it is appropriate.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema is present, the description does not need to detail return values. It covers purpose, when to use the tool, parameter constraints, behavioral guarantees (read-only, idempotent, partial failure handling), and internal mechanics. It is complete enough for an agent to invoke the tool correctly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already describes the single optional parameter output_mode with full coverage. The description adds value by explicitly stating that only 'compact' is accepted and by instructing callers not to pass other filters such as limit, offset, or ticker. This additional guidance improves parameter semantics beyond the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool is for risk and stress monitoring across the DeltaSignal issuer universe. It explicitly lists the built-in call plan (readiness, top_stressed with limit 15, risk_distribution) and differentiates itself from sibling tools like deltasignal_top_stressed and deltasignal_risk_distribution by being a composite workflow.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool: when the user asks for risk monitoring, a pressure board, a stress board, a top stressed overview, or the current risk mix. It also warns about parameters NOT to pass (limit, offset, ticker, source_date, issuer filters) and notes that only output_mode=compact is accepted, providing clear context for appropriate invocation.

deltasignal_quick_ticker_check: DeltaSignal quick ticker check (A)
Read-only · Idempotent

Use this read-only composite workflow tool for a fast single-ticker sanity check without the full company-report payload. It server-enforces the quick-check call plan: readiness, covenant_stress, and alpha_signals for one normalized ticker. Parameters: ticker is required and normalized to uppercase; output_mode=compact is optional. Fundamentals, peer ranking, and SPECTRA are intentionally excluded. Behavior: read-only and idempotent; it performs three internal HTTPS reads, has no destructive side effects, rejects invalid tickers before fan-out, and preserves partial results if a required issuer leg fails. Use it when the user asks whether one ticker is clean, stressed, actionable, or needs deeper diligence.
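The normalization and early-rejection behavior described above (trim, uppercase, reject invalid tickers before fan-out) can be sketched as follows. The validation regex is an assumption about what "invalid" means; the description does not specify the server's actual rule:

```python
import re

# assumed shape of a US equity symbol; not the server's documented rule
TICKER_RE = re.compile(r"^[A-Z]{1,5}$")

def normalize_ticker(raw):
    """Trim whitespace and uppercase, as the description says the server does,
    then reject obviously invalid symbols before any fan-out."""
    ticker = raw.strip().upper()
    if not TICKER_RE.fullmatch(ticker):
        raise ValueError(f"invalid ticker: {raw!r}")
    return ticker
```

Rejecting before fan-out means the three internal reads (readiness, covenant_stress, alpha_signals) are never attempted for a bad symbol.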

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| ticker | Yes | Required crypto public company ticker. The server trims whitespace and normalizes to uppercase before all internal calls. | |
| output_mode | No | Optional response mode. Only compact is accepted. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| data | Yes | Server-enforced DeltaSignal quick ticker check composite response. |
| provenance | Yes | Traceability information for the MCP tool response. |
| mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters. |
| usage_metadata | No | Performance and estimated cost metadata for this MCP tool call. |
| suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow. |

Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description provides detailed behavioral traits beyond annotations: it performs three internal HTTPS reads, has no destructive side effects, rejects invalid tickers before fan-out, and preserves partial results upon failure. This adds significant context to the already provided annotations (readOnlyHint, idempotentHint).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph that efficiently covers purpose, parameters, behavior, and usage. It is front-loaded with the main purpose and contains no filler, but could be slightly more streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's composite nature and the presence of annotations, schema, and output schema, the description fully covers what the agent needs to know: purpose, parameters, behavior, and usage guidance. It is complete and self-contained.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by explaining ticker normalization (uppercase) and the optional output_mode with accepted value 'compact', going beyond the schema's basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
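
The normalization noted above (tickers uppercased before fan-out) can be mirrored client-side. The sketch below is hypothetical: the ticker set and function name are illustrative examples drawn from this page, not part of the server's API.

```python
# Hypothetical client-side pre-check; KNOWN_TICKERS lists only the example
# symbols this page mentions, not the server's full universe.
KNOWN_TICKERS = {"RIOT", "MARA", "COIN", "MSTR", "HUT", "CLSK"}

def normalize_ticker(raw: str) -> str:
    """Trim and uppercase a ticker, mirroring the server-side
    normalization the tool description discloses."""
    ticker = raw.strip().upper()
    if not ticker.isalpha():
        raise ValueError(f"not a plausible ticker: {raw!r}")
    return ticker
```

Rejecting malformed input before the call avoids wasting a round trip, matching the server's own "rejects invalid tickers before fan-out" behavior.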

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: a fast single-ticker sanity check for crypto public companies, distinguishing it from full report payloads. It uses a specific verb ('check') and resource ('ticker'), and contrasts with 'full company-report payload'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool ('when the user asks whether one ticker is clean, stressed, actionable, or needs deeper diligence') and what is excluded (fundamentals, peer ranking, SPECTRA). However, it does not specify when NOT to use it or directly reference sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deltasignal_readiness — DeltaSignal readiness — Grade: A
Read-only · Idempotent

Use this read-only tool before analysis to verify that the DeltaSignal ATLAS-7 data plane is live, fresh, and safe to query. It returns service readiness, active source dates, issuer coverage, quality coverage, debt coverage, live-price status, market regime, and tower-coherence diagnostics. Parameters: none; call it exactly as-is when the user asks if DeltaSignal is ready or whether data freshness is acceptable. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, does not write external systems, and does not handle secrets or payments itself. Use it at the start of an agent workflow, after a deploy, or whenever results should be gated on freshness; use daily_changes for what changed and issuer tools for company-specific analysis.

Parameters

No parameters

Output Schema

Name | Required | Description
data | Yes | Readiness and data-freshness diagnostics for the live DeltaSignal ATLAS-7 data plane.
provenance | Yes | Traceability information for the MCP tool response.
mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters.
usage_metadata | No | Performance and estimated cost metadata for this MCP tool call.
suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow.
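
As a rough sketch of how an agent might invoke this zero-parameter tool and validate the envelope above, assuming the standard MCP JSON-RPC 2.0 tools/call shape (the helper names are illustrative, not part of the server):

```python
import json

# Hypothetical sketch of an MCP JSON-RPC 2.0 tools/call request for
# deltasignal_readiness, which takes no arguments.
def build_readiness_call(request_id: int = 1) -> str:
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "deltasignal_readiness", "arguments": {}},
    }
    return json.dumps(payload)

# Required envelope fields per the output schema; usage_metadata is optional,
# and mcp_summary is capped at 140 characters.
REQUIRED_KEYS = {"data", "provenance", "mcp_summary", "suggested_follow_ups"}

def envelope_ok(envelope: dict) -> bool:
    if not REQUIRED_KEYS.issubset(envelope):
        return False
    return len(envelope["mcp_summary"]) <= 140
```

A gate like envelope_ok lets a workflow fail fast before acting on stale or malformed readiness data.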
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds behavioral context beyond annotations: 'read-only and idempotent', 'performs one HTTPS read, has no destructive side effects, does not write external systems, does not handle secrets or payments itself.' No contradiction with annotations (readOnlyHint, idempotentHint, destructiveHint are all consistent).

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficient and well-structured, with the purpose front-loaded. Every sentence adds value: usage, behavior, and alternatives are clearly separated. No wasted words.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters, rich output (listed in description), and existence of an output schema, the description is complete. It covers purpose, use cases, behavior, and relationship to siblings, leaving no gaps.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so the baseline is 4. The description explicitly states 'Parameters: none; call it exactly as-is', reinforcing the schema. No additional semantics needed.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'verify that the DeltaSignal ATLAS-7 data plane is live, fresh, and safe to query.' It lists the return values and distinguishes from siblings like daily_changes and issuer tools, meeting the 'specific verb+resource' and 'distinguishes from siblings' criteria.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use ('before analysis', 'at the start of an agent workflow', 'after a deploy', 'whenever results should be gated on freshness') and when not to use (use 'daily_changes' for what changed, 'issuer tools' for company-specific analysis), providing complete guidance.

deltasignal_risk_distribution — DeltaSignal risk distribution — Grade: A
Read-only · Idempotent

Use this read-only tool to summarize the active crypto public company universe by ATLAS-7 risk tier. It returns risk-tier buckets such as HIGH, MODERATE, LOW, and UNCLASSIFIED with issuer counts and percentages. Parameters: none; call it exactly as-is when the user asks for market-wide risk mix or high-level distribution. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not write external systems or access user accounts. Use it for market-wide context before issuer drilldown; use top_stressed to name the issuers in the high-risk bucket and use issuer tools for company-level analysis.

Parameters

No parameters

Output Schema

Name | Required | Description
data | Yes | Risk-tier distribution for the active DeltaSignal issuer universe. Tier names are object keys such as HIGH, MODERATE, LOW, and UNCLASSIFIED.
provenance | Yes | Traceability information for the MCP tool response.
mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters.
usage_metadata | No | Performance and estimated cost metadata for this MCP tool call.
suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, idempotent, non-destructive. The description adds that it performs one HTTPS read and does not write to external systems or access user accounts, which is useful but not necessary given the annotations.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose and usage, and every sentence adds value. It is slightly verbose but not overly so.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and the existence of an output schema, the description fully conveys what the tool returns (risk-tier buckets with counts and percentages) and how to invoke it. No gaps.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist and schema coverage is 100%. The description clearly states 'Parameters: none; call it exactly as-is', which is sufficient for a zero-parameter tool.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool summarizes the active crypto public company universe by ATLAS-7 risk tier, returning buckets with issuer counts and percentages. It distinguishes itself from siblings like top_stressed and issuer tools.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: use it for market-wide risk context before issuer drilldown; for high-risk issuer names use top_stressed, and for company-level analysis use issuer tools.

deltasignal_spectra_field_map — DeltaSignal SPECTRA field map — Grade: A
Read-only · Idempotent

Use this read-only tool to retrieve the SPECTRA historical field-map contract for one crypto public company ticker. It returns issuer-specific filing choreography and pressure-map context used by DeltaSignal report and visualization workflows. Parameters: ticker is required and must be one public-company symbol such as RIOT, MARA, COIN, MSTR, HUT, or CLSK. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and does not write files, wallets, orders, or account state. Use it when the user asks for SPECTRA, field-map, historical pressure, filing choreography, or report-visualization context for a named issuer.

Parameters

Name | Required | Description | Default
ticker | Yes | Required crypto public company ticker symbol. Examples: RIOT, MARA, COIN, MSTR, HUT, CLSK. | —

Output Schema

Name | Required | Description
data | Yes | SPECTRA historical field-map contract for one issuer.
provenance | Yes | Traceability information for the MCP tool response.
mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters.
usage_metadata | No | Performance and estimated cost metadata for this MCP tool call.
suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow.
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already set readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds specific behaviors: performs one HTTPS read, no destructive side effects, does not write files/wallets/orders/account state. This aligns with annotations and provides extra context.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each earning its place: purpose, return context, parameter requirements, and behavioral note. Front-loaded with the main action. No verbosity or redundancy.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read-only tool with an output schema, the description covers purpose, usage context, parameter details, and behavioral transparency. Includes return specifics about filing choreography and pressure-map context. No gaps identified.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a description for the ticker parameter. The description reiterates that ticker is required and must be a public-company symbol, listing examples already in the schema. This adds minimal value beyond the schema, so baseline 3 is appropriate.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the SPECTRA historical field-map contract for one crypto public company ticker. It distinguishes from siblings by specifying the return of issuer-specific filing choreography and pressure-map context used in report and visualization workflows.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use it when the user asks for SPECTRA, field-map, historical pressure, filing choreography, or report-visualization context for a named issuer.' While it doesn't mention alternatives or when not to use it, the sibling tools list provides context for differentiation.

deltasignal_top_stressed — DeltaSignal top stressed issuers — Grade: A
Read-only · Idempotent

Use this read-only screening tool to rank the most stressed crypto public companies in the active DeltaSignal slice. It returns issuer rows sorted by stress, including ticker, period, risk tier, stress values, debt-coverage status, quality flags, linkbase provenance, live-price indicators, and pagination metadata. Parameters: limit is 1-100 and should usually be 5-20 for summaries; offset is only for pagination after a previous screen. Behavior: read-only and idempotent; it performs one HTTPS read, has no destructive side effects, and never writes orders, files, accounts, or wallet state. Use it for portfolio triage, issuer watchlists, and deciding which companies deserve deeper covenant or alpha analysis; use covenant_stress with ticker for detail on one issuer.

Parameters

Name | Required | Description | Default
limit | No | Maximum issuers to return. Use 5-20 for agent summaries and up to 100 for full screening. | —
offset | No | Pagination offset for continuing a previous ranked screen. | —
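
The limit/offset contract suggests a simple pagination loop. The sketch below is hypothetical: it assumes a generic call_tool helper, and the "issuers" field name inside data is a guess, not confirmed by the schema shown here.

```python
# Hypothetical pagination sketch for deltasignal_top_stressed. Assumes a
# call_tool(name, arguments) helper that returns the tool's JSON envelope;
# the data["issuers"] field name is an assumption for illustration.
def screen_all_stressed(call_tool, page_size: int = 20, max_issuers: int = 100):
    """Walk the ranked screen in pages of page_size (docs suggest 5-20)."""
    if not 1 <= page_size <= 100:
        raise ValueError("limit must be between 1 and 100")
    rows, offset = [], 0
    while offset < max_issuers:
        envelope = call_tool(
            "deltasignal_top_stressed",
            {"limit": page_size, "offset": offset},
        )
        page = envelope["data"].get("issuers", [])
        rows.extend(page)
        if len(page) < page_size:  # a short page means the screen is exhausted
            break
        offset += page_size
    return rows
```

Stopping on a short page avoids issuing an extra empty-page request at the end of the ranked screen.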

Output Schema

Name | Required | Description
data | Yes | Ranked stressed issuer screen.
provenance | Yes | Traceability information for the MCP tool response.
mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters.
usage_metadata | No | Performance and estimated cost metadata for this MCP tool call.
suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow.
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, destructiveHint. The description adds valuable behavioral details: it performs one HTTPS read, has no destructive side effects, and never writes orders, files, accounts, or wallet state. No contradiction with the annotations.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is thorough but slightly verbose for a top-scoring tool. Every sentence adds value, but the whole could be tightened without losing clarity. It is well-structured, with clear sections for behavior and usage.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists and the annotations are rich, the description is fully complete. It explains the return fields (issuer rows with specific data), pagination metadata, and intended use cases. There are no gaps for agent invocation.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage for both parameters. The description adds usage guidance: limit should be 5-20 for summaries, and offset is only for pagination after a previous screen. This provides practical meaning beyond the schema descriptions.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a read-only screening tool that ranks stressed crypto public companies, with the specific verb 'rank' and resource 'most stressed issuers in the DeltaSignal slice'. It distinguishes itself from the sibling covenant_stress by directing users there for detail on one issuer.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool (portfolio triage, issuer watchlists, deciding which companies deserve deeper analysis) and when not to (for detail on one issuer, use covenant_stress), giving the agent clear context for its decision.

deltasignal_top_stressed_natural — DeltaSignal top stressed Natural Language brief — Grade: A
Read-only · Idempotent

Use this premium read-only Natural Language tool when the user wants the Top Stressed screen explained in human-readable Markdown. It renders compact ATLAS-7 Top Stressed evidence into an audit-grade brief while preserving returned ranks, stress values, quality flags, nulls, source dates, and caveats. Parameters: limit is 1-100, offset paginates, and style is professional, concise, trader, or detailed. Style changes tone and density only, not facts. Behavior: read-only and idempotent; it performs one HTTPS read against the Natural Language route, has no destructive side effects, and never executes trades, wallets, settlements, or writes. Use raw deltasignal_top_stressed for cheap structured JSON and this tool for premium human-facing summaries.

Parameters

Name | Required | Description | Default
limit | No | Maximum issuers to include in the brief. Use 5-10 for concise human summaries. | —
style | No | Rendering style. Style changes tone and density only, not facts. | —
offset | No | Pagination offset for continuing a previous ranked screen. | —
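
A hedged sketch of assembling arguments for this tool, enforcing the documented limit range (1-100) and the four accepted style values; the builder function itself is illustrative, not part of the server.

```python
# Accepted style values and the 1-100 limit range come from the tool
# description; the builder function name is hypothetical.
ALLOWED_STYLES = {"professional", "concise", "trader", "detailed"}

def build_natural_args(limit: int = 5, style: str = "concise", offset: int = 0) -> dict:
    """Validate and assemble arguments; style changes tone only, not facts."""
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    if style not in ALLOWED_STYLES:
        raise ValueError(f"style must be one of {sorted(ALLOWED_STYLES)}")
    if offset < 0:
        raise ValueError("offset must be non-negative")
    return {"limit": limit, "style": style, "offset": offset}
```

Validating before the call keeps the premium Natural Language route from being spent on requests the server would reject.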

Output Schema

Name | Required | Description
data | Yes | Natural Language Top Stressed response.
provenance | Yes | Traceability information for the MCP tool response.
mcp_summary | Yes | Concise high-signal summary of the tool response. Maximum 140 characters.
usage_metadata | No | Performance and estimated cost metadata for this MCP tool call.
suggested_follow_ups | Yes | Concrete next MCP calls an agent can run to continue the workflow.
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds behavioral context beyond annotations: it performs one HTTPS read, has no destructive side effects, never executes trades, wallets, settlements, or writes. This supplements the readOnlyHint and idempotentHint annotations effectively.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with three sentences plus a parameter summary. It is front-loaded with purpose, followed by behavioral details and parameter hints. Every sentence is informative and earns its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description need not explain return values. It covers all necessary aspects: purpose, usage, behavior, parameters, and differentiation from siblings. It is fully adequate for an agent to select and invoke the tool correctly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds value by summarizing parameters (limit 1-100, offset paginates, style changes tone) and providing usage nuance like '5-10 for concise summaries'. While the schema already describes each parameter, the description offers practical guidance.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: a premium read-only Natural Language tool to explain the Top Stressed screen in human-readable Markdown. It distinguishes from the sibling deltasignal_top_stressed by specifying this tool provides premium human-facing summaries while the other is for cheap structured JSON.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use this tool ('when the user wants the Top Stressed screen explained in human-readable Markdown') and when not to ('Use raw deltasignal_top_stressed for cheap structured JSON'), providing clear alternatives.
