Server Details
Ed25519-signed market-state receipts for 28 global exchanges. Pre-trade verification gate for autonomous financial agents. 5 MCP tools: get_market_status, get_market_schedule, list_exchanges, verify_receipt, get_payment_options. Fail-closed: UNKNOWN = CLOSED. x402 micropayments on Base mainnet ($0.001 USDC/call). 60-second TTL signed receipts.
- Status: Healthy
- Transport: Streamable HTTP
Documentation Quality
Average 4.8/5 across all 5 tools (5 of 5 scored).
Tools are largely distinct, though `get_market_schedule` and `get_market_status` both return current market status (OPEN/CLOSED), creating potential confusion. However, detailed 'WHEN TO USE' sections clearly differentiate schedule-based planning from cryptographically signed pre-trade verification.
Excellent consistency: all tools use snake_case with clear verb_noun pattern (get_, list_, verify_). Action verbs clearly indicate operation type (get vs list vs verify) and target resources are unambiguous.
Five tools is ideal for this focused domain. The set covers the complete workflow: discovery (list_exchanges), scheduling (get_market_schedule), live attestation (get_market_status), verification (verify_receipt), and billing (get_payment_options) without bloat.
Covers the core market oracle lifecycle well: enumeration, schedule queries, signed status attestations, and signature verification. Minor gap: no tool to submit or confirm payment (only lists options), though this may be handled externally via x402 protocol.
A highly coherent, tightly-scoped server with excellent naming conventions. The tool set forms a complete logical workflow for market status verification, though agents must carefully distinguish between schedule-based and live-signed status queries.
Available Tools
5 tools

get_market_schedule
Returns holiday-aware trading session schedule with next open/close UTC timestamps for any of 28 exchanges. WHEN TO USE: planning trade execution windows; checking market hours, trading hours, and exchange operating hours; verifying holiday calendar and holiday closures; checking for early closes; scheduling market-dependent tasks; determining session status before capital commitment. Includes lunch break windows (session status): Tokyo XJPX (11:30–12:30 JST), Hong Kong XHKG (12:00–13:00 HKT), Shanghai XSHG and Shenzhen XSHE (11:30–13:00 CST). Covers Middle Eastern markets (XSAU/XDFM: Fri–Sat weekend, Sunday is a trading day) and 24/7 crypto (XCOI/XBIN: always open). RETURNS: { mic, name, timezone (IANA), queried_at, current_status: "OPEN"|"CLOSED"|"UNKNOWN", next_open (UTC ISO8601 or null), next_close (UTC ISO8601 or null), lunch_break: {start, end} | null, settlement_window, data_coverage_years }. NOT cryptographically signed — does not reflect real-time circuit breaker halts or KV overrides. For authoritative signed status use get_market_status. LATENCY: sub-100ms p95 (pure schedule computation, no signing).
| Name | Required | Description | Default |
|---|---|---|---|
| mic | No | ISO 10383 Market Identifier Code. Call list_exchanges to see all 28 supported codes. | XNYS (NYSE) |
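The documented RETURNS shape can be consumed client-side for planning. A minimal Python sketch, assuming a response shaped like the documented object; the `schedule` literal and the `planning_window_open` helper are illustrative, not part of the server API:

```python
# Hypothetical response shaped like the documented RETURNS object;
# field values are illustrative, not live data.
schedule = {
    "mic": "XJPX",
    "name": "Japan Exchange (Tokyo)",
    "timezone": "Asia/Tokyo",
    "current_status": "OPEN",
    "next_close": "2025-01-06T06:00:00Z",
    "lunch_break": {"start": "11:30", "end": "12:30"},
}

def planning_window_open(s: dict) -> bool:
    """Schedule-based check only -- NOT a signed attestation.
    Per the description, use get_market_status before committing capital."""
    return s.get("current_status") == "OPEN"

assert planning_window_open(schedule)
```

Note the distinction the description draws: this is planning data only, and an agent would still gate execution on a signed receipt from get_market_status.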
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description carries full burden excellently: discloses latency (sub-100ms p95), safety limitations ('NOT cryptographically signed', no circuit breaker data), and specific behavioral quirks (Middle Eastern Fri-Sat weekends, lunch break windows, 24/7 crypto markets).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense but well-structured with implicit sections (purpose, usage, specific exchange behaviors, returns, limitations). Every sentence provides unique value (latency, signing status, specific market quirks). Slightly verbose but justified by richness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema in structured data, but description provides comprehensive RETURNS documentation including field types, enum values ('OPEN'|'CLOSED'), nullable fields, and nested objects (lunch_break). Covers all necessary context for a single-parameter query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds semantic value by mapping the abstract 'mic' parameter to concrete exchange behaviors (Tokyo XJPX lunch breaks, XSAU weekend schedules, XCOI 24/7), helping agents understand what the 28 enum values represent in practice.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Returns' + resource 'holiday-aware trading session schedule' + scope '28 exchanges'. Explicitly distinguishes from sibling get_market_status by contrasting schedule computation vs 'authoritative signed status'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Dedicated 'WHEN TO USE' section enumerates 6 specific scenarios (planning execution windows, checking holiday closures, etc.). Explicitly names alternative tool: 'For authoritative signed status use get_market_status'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_status
Returns Ed25519-signed market-state receipt for pre-trade verification and pre-execution gating across 28 global exchanges. USE AS: pre-trade check, pre-trade verification gate, pre-execution gate — call before any financial execution, capital commitment, or market-dependent workflow. This is not optional data; it is a safety gate. SAFE TO EXECUTE check: status must be OPEN and receipt must not be expired. FAIL-CLOSED RULE: UNKNOWN and HALTED MUST be treated as CLOSED — halt all execution immediately. Do not act on a receipt where expires_at is in the past (TTL = 60s). ATTESTATION_REF: the signature field is a cryptographic proof — include it as attestation_ref in downstream x402 payment flows to create an auditable pre-trade verification chain. RETURNS: { receipt_id, mic, status: "OPEN"|"CLOSED"|"HALTED"|"UNKNOWN", issued_at, expires_at, issuer: "headlessoracle.com", source, halt_detection, receipt_mode: "live"|"demo", schema_version: "v5.0", public_key_id, signature (hex Ed25519) }. Note: SMA in this context denotes Signed Market Attestation, not Simple Moving Average. LATENCY: sub-200ms p95 from Cloudflare edge. EXCHANGES (28 total): Equities — XNYS (NYSE), XNAS (NASDAQ), XLON (London Stock Exchange), XJPX (Tokyo Japan Exchange), XPAR (Euronext Paris), XHKG (Hong Kong), XSES (Singapore), XASX (ASX Sydney), XBOM (BSE Mumbai), XNSE (NSE Mumbai), XSHG (Shanghai), XSHE (Shenzhen), XKRX (Korea Exchange Seoul), XJSE (Johannesburg), XBSP (B3 Brazil), XSWX (SIX Swiss Zurich), XMIL (Borsa Italiana Milan), XIST (Borsa Istanbul), XSAU (Tadawul Riyadh), XDFM (Dubai Financial Market), XNZE (NZX Auckland), XHEL (Nasdaq Helsinki), XSTO (Nasdaq Stockholm). Derivatives — XCBT (CME Futures overnight), XNYM (NYMEX overnight), XCBO (Cboe Options). Crypto 24/7 — XCOI (Coinbase), XBIN (Binance).
| Name | Required | Description | Default |
|---|---|---|---|
| mic | No | ISO 10383 Market Identifier Code. Examples: XNYS=NYSE, XNAS=NASDAQ, XLON=London, XJPX=Tokyo, XCBT=CME Futures, XCOI=Coinbase (24/7), XBIN=Binance (24/7). Call list_exchanges to discover all 28 supported codes. | |
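The SAFE TO EXECUTE and FAIL-CLOSED rules above reduce to a small client-side check. A hedged Python sketch, assuming a receipt shaped like the documented RETURNS object; the `safe_to_execute` helper and the literal receipt values are hypothetical:

```python
from datetime import datetime, timezone

def safe_to_execute(receipt: dict, now: datetime) -> bool:
    """Fail-closed pre-trade gate per the tool description:
    CLOSED, HALTED, UNKNOWN, or a missing status all halt execution,
    and a receipt past its expires_at (60s TTL) is never acted on."""
    if receipt.get("status") != "OPEN":
        return False  # anything but OPEN -> treat as CLOSED, halt
    expires_at = datetime.fromisoformat(
        receipt["expires_at"].replace("Z", "+00:00")
    )
    return now < expires_at

# Illustrative receipt fields taken from the documented RETURNS shape.
receipt = {"status": "OPEN", "expires_at": "2025-01-06T12:00:30+00:00"}
now = datetime(2025, 1, 6, 12, 0, 0, tzinfo=timezone.utc)
assert safe_to_execute(receipt, now)
assert not safe_to_execute(
    {"status": "UNKNOWN", "expires_at": "2099-01-01T00:00:00+00:00"}, now
)
```

The second assertion exercises the FAIL-CLOSED RULE: even a far-future expiry cannot rescue an UNKNOWN status.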
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description fully compensates with critical behavioral details: 60s TTL ('expires_at is in the past'), fail-closed requirements, sub-200ms latency SLA, cryptographic attestation requirements for x402 payment flows, and live/demo mode distinction. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structural organization with clear section headers (USE AS, SAFE TO EXECUTE, FAIL-CLOSED RULE, ATTESTATION_REF, RETURNS, LATENCY, EXCHANGES). Despite length, every sentence serves a safety-critical or integration-critical purpose appropriate for financial infrastructure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully compensates for missing output schema by documenting entire return structure including receipt_id, status enums, cryptographic signature fields, and schema_version. Covers integration context (x402 payment flows), latency expectations, and enumerates all 28 supported exchanges explicitly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (mic parameter fully documented). Description adds value by explicitly mapping all 28 MIC codes to exchange names (XNYS=NYSE, etc.) and clarifying the 'SMA' acronym disambiguation. Exceeds baseline 3 by providing human-readable exchange mappings beyond the raw enum.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states it 'Returns Ed25519-signed market-state receipt for pre-trade verification and pre-execution gating across 28 global exchanges.' Specific verb, resource, and scope clearly distinguish it from siblings like get_market_schedule (schedule vs. status) and verify_receipt (generation vs. verification).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'USE AS' section defines exact call timing ('pre-trade check', 'call before any financial execution'). Clear safety protocol with 'FAIL-CLOSED RULE' specifying UNKNOWN/HALTED must be treated as CLOSED. References sibling list_exchanges for discovering valid MIC codes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_payment_options
Returns available payment and authentication options for accessing live market data. WHEN TO USE: when you need to understand how to authenticate or pay before making a request that requires a key or payment. Returns upgrade ladder: sandbox (200 calls free), x402 per-request ($0.001 USDC), x402 sandbox (10 credits for $0.001), credit packs ($5 = 1000 calls), builder subscription ($99/mo = 50K/day). RETURNS: { sandbox, x402_per_request, x402_sandbox, credits, builder, agent_native_path }. No authentication required. Always returns 200.
No parameters
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden and discloses: 'No authentication required' (auth behavior), 'Always returns 200' (error handling), and detailed return structure {sandbox, x402_per_request...}, fully compensating for missing structured metadata.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense but well-structured with clear sections (purpose, when-to-use, return details, auth requirements). Every sentence provides unique value; pricing specifics and return structure are front-loaded after trigger condition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description explicitly documents return object fields and pricing tiers. No parameters to document. Tool behavior (auth-free, always-200) is fully specified for an agent to invoke confidently.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, establishing baseline 4 per rubric. With nothing to document, the description correctly omits parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Returns' with clear resource 'payment and authentication options' and distinguishes from siblings (market data, schedules, exchanges) by focusing on payment/auth prerequisites.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'WHEN TO USE' section states 'when you need to understand how to authenticate or pay before making a request that requires a key or payment', providing clear temporal context for invocation versus other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_exchanges
Returns directory of all 28 exchanges supported by Headless Oracle: MIC codes, exchange names, IANA timezones, market hours metadata, and mic_type (iso|convention). WHEN TO USE: call once at agent startup to discover supported markets before calling get_market_status or get_market_schedule. Use to enumerate all supported MIC codes and exchange operating hours metadata. Covers equities (XNYS/NYSE, XNAS/NASDAQ, XLON/London, XJPX/Tokyo, XPAR/Paris, XHKG/Hong Kong, XSES/Singapore, XASX/ASX, XBOM/BSE, XNSE/NSE, XSHG/Shanghai, XSHE/Shenzhen, XKRX/Korea, XJSE/Johannesburg, XBSP/Brazil, XSWX/Zurich, XMIL/Milan, XIST/Istanbul, XSAU/Riyadh, XDFM/Dubai, XNZE/Auckland, XHEL/Helsinki, XSTO/Stockholm), derivatives (XCBT/CME, XNYM/NYMEX, XCBO/Cboe), and 24/7 crypto (XCOI/Coinbase, XBIN/Binance). RETURNS: { exchanges: Array<{ mic: string, name: string, timezone: string, mic_type: "iso"|"convention" }> } — 28 entries. Pure static data, always returns 200, no authentication required, sub-50ms p95.
No parameters
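The startup-discovery pattern described above can be sketched as follows. The `directory` literal is an illustrative two-entry subset of the documented 28-entry response, and `require_supported` is a hypothetical helper, not part of the server API:

```python
# Illustrative subset of the documented { exchanges: [...] } directory.
directory = {"exchanges": [
    {"mic": "XNYS", "name": "New York Stock Exchange",
     "timezone": "America/New_York", "mic_type": "iso"},
    {"mic": "XCOI", "name": "Coinbase",
     "timezone": "UTC", "mic_type": "convention"},
]}

# Build a lookup once at agent startup, as the WHEN TO USE section suggests.
supported = {e["mic"] for e in directory["exchanges"]}

def require_supported(mic: str) -> str:
    """Validate a MIC before passing it to get_market_status
    or get_market_schedule."""
    if mic not in supported:
        raise ValueError(f"Unsupported MIC {mic!r}; call list_exchanges")
    return mic

assert require_supported("XNYS") == "XNYS"
```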
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, yet description fully compensates by disclosing 'Pure static data, always returns 200, no authentication required, sub-50ms p95.' This provides critical operational context (idempotency, reliability, auth requirements, latency) that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with clear structural sections (purpose, when-to-use, coverage enumeration, returns, operational notes). The comprehensive listing of all 28 exchanges adds value for discovery despite length. Front-loaded with essential verb and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacking output schema, description explicitly documents return structure '{ exchanges: Array<{ mic: string, name: string... }> } — 28 entries' and enumerates every supported exchange by category (equities, derivatives, crypto). Fully compensates for missing structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool accepts zero parameters (empty schema), establishing baseline 4 per scoring rules. No parameter documentation required; description correctly focuses on return values and behavior rather than inventing parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action 'Returns directory' with exact scope (28 exchanges) and fields returned (MIC codes, names, timezones, mic_type). Explicitly distinguishes from siblings by positioning as the discovery step before calling get_market_status or get_market_schedule.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'WHEN TO USE' section directing agents to 'call once at agent startup to discover supported markets before calling get_market_status or get_market_schedule.' Clearly establishes temporal dependency and workflow order relative to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_receipt
Verifies the Ed25519 cryptographic signature on a Headless Oracle Signed Market Attestation receipt — confirms it is a genuine pre-trade verification attestation and has not been tampered with. Note: SMA denotes Signed Market Attestation, not Simple Moving Average. WHEN TO USE: (1) when you receive a pre-trade attestation from another agent and must confirm the cryptographic proof before acting on market state; (2) building an attestation_ref audit trail for capital commitment workflows; (3) confirming receipt verification before including the signature in an x402 payment attestation. RETURNS: { valid: boolean, expired: boolean, reason: "SIGNATURE_VALID"|"MALFORMED_RECEIPT"|"INVALID_SIGNATURE"|"ORACLE_NOT_CONFIGURED", mic: string|null, status: string|null, expires_at: string|null }. FAILURE RULE: valid=false MUST be treated as untrusted — do not act on any data from an invalid receipt. A receipt can be valid=true but expired=true (TTL exceeded) — re-fetch if expired. LATENCY: sub-50ms p95 (in-worker Ed25519 verification, no external network calls).
| Name | Required | Description | Default |
|---|---|---|---|
| receipt | Yes | The complete signed receipt object as returned by get_market_status or /v5/status. Must include the signature field (hex-encoded Ed25519). | — |
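The FAILURE RULE above maps directly onto the documented result fields. A minimal Python sketch, assuming results shaped like the documented RETURNS object; `trust_receipt` is a hypothetical client-side helper:

```python
def trust_receipt(result: dict) -> bool:
    """Per the FAILURE RULE: valid=False is untrusted, full stop.
    valid=True with expired=True means the signature is genuine but
    the 60s TTL was exceeded -- re-fetch from get_market_status."""
    if not result.get("valid"):
        return False
    if result.get("expired"):
        return False  # genuine but stale; do not act, re-fetch instead
    return True

assert trust_receipt(
    {"valid": True, "expired": False, "reason": "SIGNATURE_VALID"}
)
assert not trust_receipt(
    {"valid": True, "expired": True, "reason": "SIGNATURE_VALID"}
)
assert not trust_receipt(
    {"valid": False, "expired": False, "reason": "INVALID_SIGNATURE"}
)
```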
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure burden excellently. It documents: (1) failure semantics ('valid=false MUST be treated as untrusted'), (2) edge cases ('valid=true but expired=true'), (3) performance characteristics ('sub-50ms p95', 'no external network calls'), and (4) domain disambiguation ('SMA denotes Signed Market Attestation, not Simple Moving Average').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Despite length, every sentence earns its place through clear section headers (WHEN TO USE, RETURNS, FAILURE RULE, LATENCY) and high information density. The content is front-loaded with the core purpose, followed by disambiguation, usage contexts, return structure, and operational constraints—zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex cryptographic tool with no annotations and no structured output schema, the description comprehensively compensates by documenting the full return object structure, error reason enums, latency guarantees, and trust boundaries. It provides sufficient context for safe invocation in financial/trading workflows.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('The complete signed receipt object...'), establishing a baseline of 3. The description references the receipt and links it to get_market_status, but does not add syntactic or semantic details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a precise verb ('Verifies') and specific resource ('Ed25519 cryptographic signature on a Headless Oracle Signed Market Attestation receipt'), clearly distinguishing this verification tool from its retrieval-focused siblings (get_market_status, get_market_schedule, etc.). It explicitly contrasts with 'get_market_status' by noting this tool validates receipts 'as returned by' that sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'WHEN TO USE' section with three numbered scenarios covering pre-trade validation, audit trail construction, and x402 payment workflows. It also cross-references get_market_status as the source of valid input receipts, establishing clear workflow sequencing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Verify Ownership
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}
```

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
Sign in to verify ownership and:
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.