Glama
Ownership verified

Server Details

Pre-reasoned Bitcoin & macro financial briefings for AI agents. Trend signals, regimes, 17 contexts.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: PreReason/mcp
GitHub Stars: 0
Server Listing: PreReason-mcp

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.1/5 across 5 of 5 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_context retrieves a comprehensive market briefing, get_health checks API status, get_metric fetches a single analyzed metric, list_briefings enumerates available briefings, and list_metrics lists available metrics. There is no overlap in functionality, making tool selection straightforward for an agent.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (get_context, get_health, get_metric, list_briefings, list_metrics), using lowercase with underscores. This uniformity enhances readability and predictability for agents interacting with the server.

Tool Count: 5/5

With 5 tools, the server is well-scoped for its purpose of providing pre-reasoned market data. The tools cover essential operations like retrieving data, checking health, and listing available resources, without being overly sparse or bloated, ensuring efficient agent use.

Completeness: 4/5

The tool set offers strong coverage for accessing and listing market briefings and metrics, with clear CRUD-like operations (get and list). A minor gap exists in lacking update or delete tools, but this is reasonable given the server's data retrieval focus, and agents can work around this by relying on the provided get and list functions.

Available Tools

5 tools
get_context: Get Market Briefing (Grade: A)
Read-only

Get a pre-reasoned market briefing with trend signals, confidence scores, regime classification, and causal narratives for the requested domain (Bitcoin, macro, cross-asset, FX). Includes crypto, macro financial data, and cross-asset correlations (indices, bonds, volatility as BTC relationship signals). Output is decision-ready — designed to be consumed directly by reasoning agents, not parsed for individual values. Briefings are tier-gated: Free (btc.quick-check, btc.context, macro.snapshot, cross.correlations, btc.pulse, btc.grid-stress), Basic (btc.momentum, macro.liquidity, btc.on-chain, cross.breadth, btc.miner-survival), Pro (btc.full, btc.factors, cross.regime, fx.liquidity, btc.energy, btc.treasury). Requires API key authentication via Authorization header.

Parameters (JSON Schema)

date (optional): Historical date in YYYY-MM-DD format (tier-gated: Free=30d, Basic=from Jan 2025, Pro=all)
format (optional, default: markdown): Response format: markdown (human-readable, default) or json (structured data)
briefing (required): Briefing ID to fetch
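For illustration, a minimal tools/call request body for get_context might be assembled as follows. This is a sketch assuming JSON-RPC 2.0 framing as used by MCP over Streamable HTTP; the briefing ID btc.context comes from the Free tier list above, and the API key value is a placeholder:

```python
import json

API_KEY = "YOUR_API_KEY"  # placeholder: supplied via the Authorization header

# Hypothetical JSON-RPC 2.0 body for an MCP tools/call to get_context.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_context",
        "arguments": {
            "briefing": "btc.context",  # required; a Free-tier briefing ID
            "format": "json",           # optional; defaults to "markdown"
            "date": "2025-06-01",       # optional; history depth is tier-gated
        },
    },
}
body = json.dumps(payload)
```

An HTTP client would then POST `body` with `headers` to the server URL; the response carries the briefing in the requested format.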
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds substantial value beyond readOnlyHint=true by disclosing authentication requirements (API key via Authorization header), tier-based access constraints, and output consumption patterns. Does not cover rate limits or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded value proposition. The tier enumeration is lengthy but necessary for clarity. Each sentence serves distinct purposes: capability, domain scope, consumption pattern, access control, and authentication.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a read-only retrieval tool without output schema. Covers authentication, access tiers, domain coverage, and output nature. Missing only error handling specifics or rate limit details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage, adds crucial semantic context by organizing the 17 briefing enum values into tier categories (Free/Basic/Pro) and explaining the date parameter's historical limitations in narrative form, making the tier system comprehensible.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity with 'Get a pre-reasoned market briefing' followed by exact outputs (trend signals, confidence scores, regime classification). Clearly distinguishes from sibling metric tools by emphasizing aggregate, decision-ready analysis rather than individual data points.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear consumption guidance ('decision-ready... not parsed for individual values') implying when NOT to use it, and details tier-gating requirements (Free/Basic/Pro) which directly impacts usage eligibility. Stops short of explicitly naming sibling alternatives like get_metric.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_health: Health Check (Grade: A)
Read-only

Check API health status, version, and available endpoints. No authentication required.

Parameters (JSON Schema)

verbose (optional): Include detailed endpoint descriptions, pricing tiers, and data coverage info. Defaults to false.
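Since no authentication is needed, get_health is a natural first call for an agent. A sketch of the arguments it might pass (verbose is the only parameter; the build_call helper is hypothetical):

```python
# Argument sets for a get_health tools/call; no Authorization header
# is needed for this tool.
args_quick = {}                    # default: terse status/version/endpoints
args_detailed = {"verbose": True}  # adds pricing tiers and data coverage info

def build_call(name, arguments):
    """Assemble the params object of an MCP tools/call request."""
    return {"name": name, "arguments": arguments}

call = build_call("get_health", args_detailed)
```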
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true; description adds valuable behavioral context that authentication is not required, which impacts invocation timing. No contradictions with annotations. Could improve by mentioning if caching occurs or rate limits apply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First establishes purpose and return values; second provides critical auth requirement. Front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple health-check tool with one optional parameter and read-only annotation. Describes return value categories (status, version, endpoints) and auth requirements. No output schema exists but description hints at content structure sufficient for agent planning.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the 'verbose' parameter (detailed endpoint descriptions, pricing tiers). Description provides no additional parameter semantics, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Check') and resources (health status, version, available endpoints). Clear distinction from siblings like get_metric or list_briefings by focusing on API metadata rather than business data, though lacks explicit comparative language.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides useful authentication context ('No authentication required'), informing the agent it can be called without credentials. However, lacks explicit guidance on when to prefer this over get_context or get_metric, or prerequisites for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_metric: Get Metric Analysis (Grade: A)
Read-only

Get a single metric with pre-reasoned signal interpretation, trend analysis (7d/30d/90d), confidence score with factor breakdown, momentum, and percentile ranking. Available metrics: bitcoin-price, bitcoin-market-cap, bitcoin-volume-24h, bitcoin-change-24h, bitcoin-dominance, bitcoin-hash-rate, bitcoin-difficulty, bitcoin-tx-count, bitcoin-tx-volume, bitcoin-active-addresses, fed-balance, m2-supply, treasury-10y, rrp, eur-usd, cny-usd, yield-curve, tga, net-liquidity, hash-ribbon, mvrv-proxy, etf-spy, etf-qqq, etf-iwm, etf-vxx, etf-uup, usdt-market-cap, usdt-dominance, btc-price-200d-ma, btc-price-200d-distance. Requires API key authentication. Returns analyzed context (signal, trend direction, momentum, 90d percentile), not raw values. Use when you need to understand what a metric means for decision-making, not just its number.

Parameters (JSON Schema)

format (optional, default: markdown): Response format: markdown (default) or json
metric (required): Metric name to fetch
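Because the metric parameter must match one of the names enumerated in the description, a client can validate it before making the call. A sketch (the metric list is copied from the description above; the validation helper is hypothetical):

```python
# Metric names as enumerated in the get_metric description above.
KNOWN_METRICS = {
    "bitcoin-price", "bitcoin-market-cap", "bitcoin-volume-24h",
    "bitcoin-change-24h", "bitcoin-dominance", "bitcoin-hash-rate",
    "bitcoin-difficulty", "bitcoin-tx-count", "bitcoin-tx-volume",
    "bitcoin-active-addresses", "fed-balance", "m2-supply",
    "treasury-10y", "rrp", "eur-usd", "cny-usd", "yield-curve",
    "tga", "net-liquidity", "hash-ribbon", "mvrv-proxy",
    "etf-spy", "etf-qqq", "etf-iwm", "etf-vxx", "etf-uup",
    "usdt-market-cap", "usdt-dominance",
    "btc-price-200d-ma", "btc-price-200d-distance",
}

def metric_arguments(metric, fmt="markdown"):
    """Build get_metric arguments, rejecting unknown metric names early."""
    if metric not in KNOWN_METRICS:
        raise ValueError(f"unknown metric: {metric}")
    return {"metric": metric, "format": fmt}

args = metric_arguments("net-liquidity", fmt="json")
```

Validating locally avoids spending an authenticated API call on a typo.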
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only provide readOnlyHint. Description adds substantial value: API key authentication requirement, explicit disclosure that it returns 'analyzed context... not raw values', and detailed breakdown of analysis components (signal, trend direction, percentile ranking, momentum) that agents can rely on.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences each earning their place: core capability, metric reference list, auth requirement, return value nature, and usage guidance. The metric list makes it slightly long but justified as reference data; front-loading puts key capability first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates well by detailing return structure (analyzed context with signal, trend, momentum) and covering authentication needs. Does not mention rate limiting or error conditions, but sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing baseline 3. Description adds significant semantic value by exhaustively listing all 30 available metrics (bitcoin-price, fed-balance, etf-spy, etc.), giving the agent domain context and practical reference beyond the schema enum.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with clear resource 'single metric' and detailed scope 'pre-reasoned signal interpretation, trend analysis...'. It effectively distinguishes this from raw data retrieval by emphasizing analyzed context with specific components (7d/30d/90d trends, confidence scores, momentum).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'Use when...' clause stating decision-making context vs just getting numbers. Provides clear positive guidance for when analysis is needed, though it does not explicitly name sibling alternatives (e.g., list_metrics) for contrast.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_briefings: List Briefings (Grade: A)
Read-only

List all available pre-reasoned market briefings with tier requirements. Returns 17 briefings across Free (6), Basic (5), and Pro (6) tiers. Covers Bitcoin, macro, FX, and cross-asset financial data categories. No authentication required.

Parameters (JSON Schema)

tier (optional): Filter briefings by subscription tier (free, basic, or pro). Returns all tiers if omitted.
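The tier filter accepts only three values, so a small guard keeps agent calls well-formed. A sketch (the tier names come from the schema above; the helper is hypothetical):

```python
VALID_TIERS = ("free", "basic", "pro")

def briefing_list_arguments(tier=None):
    """Build list_briefings arguments; omit 'tier' to list all 17 briefings."""
    if tier is None:
        return {}
    if tier not in VALID_TIERS:
        raise ValueError(f"tier must be one of {VALID_TIERS}, got {tier!r}")
    return {"tier": tier}

all_args = briefing_list_arguments()          # all tiers (17 briefings)
free_args = briefing_list_arguments("free")   # Free tier only (6 briefings)
```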
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true. Description adds significant value: specific count (17 briefings), tier distribution breakdown (6/5/6), content categories, and authentication requirements. Goes beyond annotations by describing the actual payload characteristics and access constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: purpose (sentence 1), quantity/tier breakdown (sentence 2), categories (sentence 3), auth (sentence 4). Front-loaded with key operational info. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple listing tool with one optional parameter and no output schema, description adequately compensates by specifying exact return volume (17 items), tier distribution, and content domains. No authentication note is crucial. Covers all necessary bases for agent selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the 'tier' parameter fully documented (enum values, optional). Description mentions 'tier requirements' in content breakdown but does not add validation rules, format details, or examples beyond what schema provides. Baseline score appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'List' with resource 'briefings' and qualifies with 'pre-reasoned market' and 'tier requirements'. Clearly distinguishes from siblings (get_metric, list_metrics, etc.) by specifying it returns pre-reasoned financial briefings rather than raw metrics or health data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use through content specification ('pre-reasoned market briefings', 'Bitcoin, macro, FX'). States 'No authentication required' which is a critical usage constraint. Lacks explicit 'when not to use' or named alternatives, but content description sufficiently distinguishes from siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_metrics: List Metrics (Grade: A)
Read-only

List all available individual metrics (30 metrics across bitcoin, macro, and calculated categories). Includes Bitcoin price, hash rate, Fed balance sheet, M2 supply, Treasury 10Y yield, net liquidity, and cross-asset indices (SPY, QQQ, VXX, UUP) used as BTC correlation signals. Optionally filter by category or search term. No authentication required.

Parameters (JSON Schema)

search (optional): Search term to filter metrics by name or description
category (optional): Filter by metric category
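A discovery flow would typically call list_metrics before get_metric. A sketch of building the optional filters (both parameters may be combined; the schema does not enumerate category values, so "macro" below is an assumption based on the description):

```python
def metrics_list_arguments(search=None, category=None):
    """Build list_metrics arguments; both filters are optional and combinable."""
    args = {}
    if search is not None:
        args["search"] = search      # matched against metric name or description
    if category is not None:
        args["category"] = category  # e.g. "macro" (assumed category name)
    return args

args = metrics_list_arguments(search="liquidity", category="macro")
```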
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The readOnlyHint annotation confirms this is safe read-only, but the description adds valuable specific context: the exact count (30 metrics), concrete examples of included metrics (Bitcoin price, hash rate, M2 supply, specific tickers like SPY/QQQ), and explicit 'No authentication required' note. This domain-specific detail helps the agent understand the data catalog scope beyond the generic annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences efficiently pack the metric inventory (count + categories), specific examples (8+ distinct metrics listed), filtering capabilities, and auth status. Information density is high without redundancy. Could benefit from clearer separation between 'what this returns' and 'how to filter', but every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema, but description compensates well by enumerating representative metrics and categories. Clarifies optional nature of both parameters (0 required params) through 'Optionally filter.' Covers the discovery/listing use case adequately; agent understands this returns metadata catalog, not time-series values (implied by examples vs get_metric sibling).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete descriptions for 'search' (filter by name/description) and 'category' (filter by category). Description adds minimal semantic value beyond schema baseline—only noting the optional nature. No syntax examples or validation details beyond schema limits.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb-resource combination: 'List all available individual metrics.' Specific scope indicated by '30 metrics across bitcoin, macro, and calculated categories.' Distinguishes from get_metric (single metric retrieval vs catalog listing) implicitly through plural 'metrics' and inventory examples, though explicit sibling distinction would strengthen this.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States filtering capabilities ('Optionally filter by category or search term') implying when to use filters, but lacks explicit when/when-not guidance versus siblings. No mention of using this discovery tool before get_metric, or when list_briefings might be preferred. 'No authentication required' adds operational context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

