Glama

Server Details

Send a coupling matrix, get zone classifications and optimal factorization strategy.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Bwana7/factorguide
GitHub Stars: 0
Server Listing
FactorGuide

Tool Descriptions: B

Average 3.6/5 across 7 of 7 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 4/5

Most tools serve distinct purposes in the analysis workflow (navigate for terrain mapping, diagnose for single-pair checks, explain for interpretation). While navigate and diagnose both involve risk prediction, their scopes (full model vs. single pair) are clearly differentiated in descriptions. The two detection tools target different mathematical domains (time series vs. spectral).

Naming Consistency: 3/5

The naming mixes conventions: three tools use bare verbs (diagnose, explain, navigate), two use verb_noun (report_outcome, submit_payment), and two use noun_verb (regime_detect, synergy_detect). While all use snake_case consistently, the inconsistent word order and structure (simple vs. compound) creates mild unpredictability.

Tool Count: 5/5

Seven tools is well-suited for this specialized statistical factorization domain. The set covers the complete workflow (analysis, diagnosis, explanation, outcome reporting, payment) without bloat. Even with two pending specifications, the count represents appropriate scope for the server's purpose.

Completeness: 4/5

The core diagnostic loop is well-covered (navigate → explain → report_outcome). However, two analysis tools (regime_detect and synergy_detect) are marked as specification pending for v1.1, leaving gaps in time-series and spectral analysis capabilities. Additionally, there are no tools for retrieving or managing prediction history beyond the explain function.

Available Tools

7 tools
factorguide_diagnose: C

Quick single-pair diagnostic. IC with risk prediction for both model classes. Include variances for sign detectability. Requires X-Wallet header with your EVM wallet address (0x...). First 5 queries are free trial.

Parameters (JSON Schema)

Name            Required  Description
i               Yes       First variable name
j               Yes       Second variable name
variance_i      No
variance_j      No
sample_size     Yes
coupling_value  Yes       IC or coupling value
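Based on the parameter table and description above, a call payload might look like the following sketch. Field names come from the schema; the variable names, values, and the placeholder wallet address are illustrative assumptions, not output from the server.

```python
import json

# Hypothetical arguments for a factorguide_diagnose call. The four fields
# the table marks as required are present; the optional variances are
# included because the description says they enable sign detectability.
diagnose_args = {
    "i": "beta_1",            # first variable name (illustrative)
    "j": "beta_2",            # second variable name (illustrative)
    "variance_i": 4.0,        # optional: enables sign detectability
    "variance_j": 0.15,
    "sample_size": 500,
    "coupling_value": 0.42,   # IC or coupling value
}

# Per the description, every call needs an X-Wallet header with an EVM
# address; the zero address below is a placeholder.
headers = {"X-Wallet": "0x0000000000000000000000000000000000000000"}

required = {"i", "j", "sample_size", "coupling_value"}
missing = required - diagnose_args.keys()
assert not missing, f"missing required fields: {missing}"

payload = json.dumps(diagnose_args)
```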
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fails to disclose critical behavioral traits like safety (read-only vs. mutation), idempotency, or return format. It mentions calculation details ('risk prediction,' 'sign detectability') but omits operational risks, side effects, or performance characteristics necessary for safe invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact (three sentences) with information-dense phrasing. While efficient, the heavy reliance on acronyms and technical shorthand ('IC,' 'model classes') slightly undermines accessibility. No redundant words, though front-loading could better define 'IC' on first use.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a diagnostic tool with 6 parameters, but incomplete given the lack of output schema and annotations. Missing: definition of returned diagnostic metrics, explanation of 'model classes,' and clarification of 'IC.' Description covers inputs reasonably but fails to set expectations for outputs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Effectively compensates for the 50% schema description gap by mapping description terms to parameters: 'variances' clarifies variance_i/variance_j, 'IC' clarifies coupling_value, and 'single-pair' clarifies i/j. However, it omits guidance on sample_size's role or the significance of the schema's 'minimum: 10' constraint.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

Identifies the tool as a 'single-pair diagnostic' with 'risk prediction,' but relies heavily on undefined domain jargon ('IC,' 'model classes') that hinders immediate comprehension. The 'single-pair' scope slightly distinguishes it from siblings like synergy_detect or regime_detect, but the core purpose remains opaque without domain knowledge.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit when-to-use guidance or alternatives. The phrase 'Include variances for sign detectability' could imply optional parameter usage but is grammatically ambiguous (imperative vs. descriptive) and does not clearly signal when to provide optional variance_i/variance_j versus omitting them.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_explain: A

Plain-language explanation of a previous navigate response, including wave mechanics grounding for observational cost guidance. Requires a prediction_hash from a prior factorguide_navigate call. Consumes 1 query allocation. Available for starter and professional tiers. Requires X-Wallet header with your EVM wallet address (0x...). First 5 queries are free trial.

Parameters (JSON Schema)

Name             Required  Description
prediction_hash  Yes       prediction_hash from a previous navigate response
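The single-parameter workflow above can be sketched as follows; the hash value is a placeholder standing in for a real navigate response, not actual server output.

```python
# Hypothetical factorguide_explain arguments: the only parameter is a
# prediction_hash taken from a prior factorguide_navigate response.
navigate_response = {"prediction_hash": "abc123"}  # placeholder prior result

explain_args = {"prediction_hash": navigate_response["prediction_hash"]}

# The description says this call consumes 1 query allocation and requires
# the starter or professional tier, so gate it before invoking.
assert explain_args["prediction_hash"], "run factorguide_navigate first"
```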
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses cost behavior ('Consumes 1 query allocation'), access restrictions ('starter and professional tiers'), and content characteristics ('wave mechanics grounding for observational cost guidance').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three efficiently structured sentences covering purpose, prerequisites, and constraints. Every sentence earns its place with no redundancy, and the information is front-loaded with the core function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description provides substantial context including domain-specific details ('wave mechanics grounding'), cost implications, and tier availability. It adequately covers operational context despite lacking explicit return value documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage and already describes the parameter as coming from 'a previous navigate response,' the description adds value by explicitly naming the sibling tool 'factorguide_navigate,' reinforcing the cross-tool dependency and workflow context beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool provides a 'plain-language explanation of a previous navigate response,' using a specific verb and resource. It clearly distinguishes itself from siblings by specifically referencing 'factorguide_navigate' and 'navigate response,' establishing it as a follow-up tool in a workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear prerequisites ('Requires a prediction_hash from a prior factorguide_navigate call') and operational constraints ('Consumes 1 query allocation,' 'Available for starter and professional tiers'). It implies the workflow sequence but does not explicitly state when NOT to use it or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_navigate: A

Map the factorization terrain of your model. Send coupling structure (precision matrix preferred for n>2; covariance matrix recommended if sign or CC information is needed) and receive a block-diagonal strategy with calibrated risk prediction. Answers: 'How should I factorize, and what will it cost me?' Optional: set report_sign_detectability=true to get sign(ρ) for high-leverage pairs at no additional cost when variance ratio > 20. Requires X-Wallet header with your EVM wallet address (0x...). First 5 queries are free trial.

Parameters (JSON Schema)

Name                       Required  Default
coupling                   Yes
task_type                  No        inference
cost_model                 No        cubic
model_class                No        unknown
sample_size                Yes
synergy_check              No
compute_budget             No        minimize
encoding_label             No
variable_names             No
accuracy_target            No
report_marginal_ic         No
distribution_diagnostics   No
report_sign_detectability  No
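Putting the description and parameter table together, a minimal call might look like this sketch. The 3x3 precision matrix, variable names, and chosen optional fields are illustrative assumptions; only coupling and sample_size are required per the table.

```python
# Hypothetical factorguide_navigate arguments. The description recommends a
# precision matrix for n > 2; the matrix below is illustrative only.
navigate_args = {
    "coupling": [
        [1.00, 0.30, 0.00],
        [0.30, 1.00, 0.05],
        [0.00, 0.05, 1.00],
    ],
    "sample_size": 1000,
    "task_type": "inference",           # table default, stated explicitly
    "cost_model": "cubic",              # table default, stated explicitly
    "variable_names": ["a", "b", "c"],
    "report_sign_detectability": True,  # sign(rho) for high-leverage pairs
}

# Sanity checks an agent could run locally before paying for a query:
# precision/covariance matrices must be square and symmetric.
m = navigate_args["coupling"]
n = len(m)
assert all(len(row) == n for row in m), "coupling matrix must be square"
assert all(m[r][c] == m[c][r] for r in range(n) for c in range(n)), \
    "coupling matrix must be symmetric"
```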
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that the tool performs 'calibrated risk prediction' and reveals a specific behavioral condition: report_sign_detectability provides sign(ρ) at 'no additional cost when variance ratio > 20.' However, it lacks information on compute intensity, idempotency, or whether this is a read-only analysis versus a state-changing operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three dense, well-structured sentences with zero waste. First sentence establishes purpose, second details input/output contract, third covers optional behavior. Every clause provides actionable guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 13 parameters, nested matrix input structures, and no output schema, the description covers the essential workflow but leaves significant gaps. It does not characterize the return value structure beyond 'block-diagonal strategy,' nor explain the semantics of optional parameters like compute_budget, accuracy_target, or distribution_diagnostics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate. It successfully explains the coupling parameter (with matrix selection logic) and report_sign_detectability. However, with 13 total parameters including critical enums (TaskType, CostModel, ModelClass) and flags like synergy_check, the description leaves too many parameters semantically undefined.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'maps the factorization terrain' and outputs a 'block-diagonal strategy with calibrated risk prediction.' It specifically answers 'How should I factorize, and what will it cost me?' which distinguishes it from diagnostic or explanatory siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear guidance on input selection: 'precision matrix preferred for n>2; covariance matrix recommended if sign or CC information is needed.' Also explains when report_sign_detectability provides value. However, it does not explicitly differentiate when to use this tool versus siblings like factorguide_diagnose or factorguide_synergy_detect.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_regime_detect: B

Detect coupling regime changes in time series via windowed IC. Specification pending — v1.1 target.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but offers minimal behavioral disclosure. It mentions 'windowed IC' (implementation approach) and warns that the specification is pending, but lacks details on return format, side effects, input data requirements, or resource constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences. The first conveys the tool's purpose; the second communicates critical implementation status. No redundant or wasteful text is present, and information is appropriately front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the empty input schema (unusual for a time series analysis tool) and lack of output schema or annotations, the description is insufficient for operational use. While the 'v1.1 target' note honestly flags intentional incompleteness, it does not compensate for missing critical context such as how time series data is provided or what the tool returns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. According to scoring rules, 0 parameters establishes a baseline score of 4. There are no parameters requiring semantic clarification beyond the schema structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific action (Detect), resource (coupling regime changes in time series), and method (via windowed IC), distinguishing it from sibling synergy_detect and other non-analysis tools. However, the 'Specification pending' caveat slightly tempers the clarity regarding current availability.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use guidance or alternative recommendations are provided. While 'Specification pending' implies readiness constraints, it does not specify conditions for use versus sibling tools like factorguide_synergy_detect or factorguide_diagnose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_report_outcome: A

Complete the prediction loop — report inference diagnostics so future predictions improve. After running the approach FactorGuide recommended, return your ESS ratio, PSIS-khat, or log-likelihood gap. Zero additional computation required. Does not consume a query allocation.

Parameters (JSON Schema)

Name              Required
ess_ratio         No
psis_khat         No
log_lik_gap       No
approach_taken    Yes
n_replications    No
prediction_hash   Yes
runtime_seconds   No
actual_mse_ratio  No
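The feedback loop described above could be exercised with a payload like this sketch. The diagnostic values and the approach_taken label are illustrative assumptions; the prediction_hash placeholder stands in for a value returned by an earlier navigate call.

```python
# Hypothetical factorguide_report_outcome arguments: after running the
# recommended approach, the agent reports back inference diagnostics
# (ESS ratio, PSIS-khat, or log-likelihood gap). Values are illustrative.
outcome_args = {
    "prediction_hash": "abc123",         # from the earlier navigate response
    "approach_taken": "block_diagonal",  # hypothetical label
    "ess_ratio": 0.87,
    "psis_khat": 0.42,
    "runtime_seconds": 12.5,
}

# Only prediction_hash and approach_taken are required by the table; at
# least one diagnostic should accompany them for the report to be useful.
required = {"prediction_hash", "approach_taken"}
diagnostics = {"ess_ratio", "psis_khat", "log_lik_gap"}
assert required <= outcome_args.keys()
assert diagnostics & outcome_args.keys(), "include at least one diagnostic"
```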
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It successfully adds critical behavioral constraints not inferable from the schema: 'Zero additional computation required' and 'Does not consume a query allocation.' It also clarifies the feedback loop purpose ('so future predictions improve').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste. Front-loaded with the core purpose ('Complete the prediction loop'), followed by specific metrics, then critical behavioral constraints. Every sentence provides unique value not available in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the specialized statistical domain and lack of annotations/output schema, the description adequately establishes the workflow position and data flow (reporting back after prediction). It could improve by explaining how to obtain prediction_hash or what constitutes valid diagnostic values, but sufficiently covers the tool's functional role.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage, so the description must compensate. It explains the purpose of ess_ratio, psis_khat, and log_lik_gap by naming them as diagnostics to return, and implies approach_taken. However, it omits prediction_hash (required), n_replications, runtime_seconds, and actual_mse_ratio entirely, leaving half the parameters undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Complete[s] the prediction loop' and specifies reporting 'inference diagnostics' (ESS ratio, PSIS-khat, log-likelihood gap). It effectively distinguishes from siblings like 'diagnose' or 'regime_detect' by focusing specifically on post-prediction outcome reporting to improve future predictions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides clear temporal context ('After running the approach FactorGuide recommended') and specifies exactly which metrics to return. However, it lacks explicit contrast with siblings (e.g., when to use 'diagnose' vs this tool) or prerequisite details about obtaining the prediction_hash.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_submit_payment: A

Submit payment proof after sending stablecoins to a FactorGuide wallet address. For x402: provide tx_hash and chain. For MPP: use in-band Authorization header instead — no separate submission needed.

Parameters (JSON Schema)

Name     Required  Description
chain    Yes       Chain identifier, e.g. 'eip155:8453' or 'tempo:4217'
tx_hash  Yes       On-chain transaction hash
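For the x402 path, a submission could look like this sketch. The chain identifier format and field names come from the table above; the transaction hash is a placeholder, not a real transaction.

```python
# Hypothetical factorguide_submit_payment arguments (x402 path only; per
# the description, MPP uses an in-band Authorization header instead and
# needs no separate submission).
payment_args = {
    "chain": "eip155:8453",
    "tx_hash": "0x" + "ab" * 32,  # placeholder 32-byte transaction hash
}

# The example identifiers follow a "namespace:reference" shape, so a quick
# local check can catch malformed input before submission.
namespace, reference = payment_args["chain"].split(":")
assert namespace in {"eip155", "tempo"}
assert payment_args["tx_hash"].startswith("0x")
```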
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the workflow distinction between x402 and MPP integration patterns, but fails to mention side effects, idempotency, verification behavior, or what constitutes success/failure for a financial transaction operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero waste. The first establishes the core operation; the second immediately provides the critical conditional logic for protocol selection. Perfectly front-loaded and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description adequately covers input requirements and protocol selection logic, but lacks disclosure of return values, confirmation behavior, or post-submission state changes that would help an agent handle the response appropriately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description maps parameters to the x402 protocol use case but does not add syntax details, format constraints, or semantic relationships beyond what the schema already provides for tx_hash and chain.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (submit payment proof), the context (after sending stablecoins), and the target resource (FactorGuide wallet address). It distinguishes from siblings (diagnose, explain, navigate) by being the only payment-related operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent explicit guidance: specifies exactly when to use this tool ('For x402: provide tx_hash and chain') and when NOT to use it ('For MPP: use in-band Authorization header instead — no separate submission needed'), clearly distinguishing between two payment protocols.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_synergy_detect: B

Detect hidden synergistic structure via Walsh-Hadamard spectral analysis. Accepts pre-computed Walsh coefficients — agent performs the transform locally and sends only the spectral summary. Specification pending — v1.1 target.

Parameters (JSON Schema)

Name                Required
n_samples           No
n_variables         No
ic_matrix_ref       No
transform_method    No
walsh_coefficients  No
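Since the description says the agent performs the transform locally and sends only a spectral summary, a minimal local sketch might look like this. The tool is marked specification pending, so the argument shape, the transform_method value, and the coefficient-key format below are all guesses from the parameter names, not the server's actual contract.

```python
def walsh_hadamard(values):
    """In-place fast Walsh-Hadamard transform; input length must be a
    power of 2. A generic textbook sketch of the transform the agent
    would run locally, not FactorGuide's implementation."""
    a = list(values)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

coeffs = walsh_hadamard([1, 0, 1, 0])  # -> [2, 2, 0, 0]

# Hypothetical factorguide_synergy_detect arguments built from that
# spectral summary; every field here is speculative pending the v1.1 spec.
synergy_args = {
    "n_samples": 4,
    "n_variables": 2,
    "transform_method": "local",          # hypothetical value
    "walsh_coefficients": coeffs,
}
```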
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but fails to disclose safety profile (read-only vs destructive), return format, or side effects. It does mention the 'Specification pending — v1.1 target' status, which is relevant behavioral context, but this is insufficient for a tool with complex mathematical processing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no wasted words. The information density is high, though the critical 'Specification pending' disclaimer might be more effective if front-loaded rather than placed at the end.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (Walsh-Hadamard analysis, 5 parameters with nested object structures, 0% schema coverage, no annotations, no output schema), the description is inadequate. It fails to explain parameter relationships, expected return values, or error conditions necessary for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage, requiring the description to compensate. While it mentions 'pre-computed Walsh coefficients' (referencing the walsh_coefficients parameter), it completely omits explanations for n_samples, n_variables, ic_matrix_ref, and transform_method, leaving the majority of parameters undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the specific action ('Detect hidden synergistic structure') and method ('Walsh-Hadamard spectral analysis'), which distinguishes it from sibling tools like regime_detect or diagnose. However, it does not explicitly differentiate when to choose this over other analysis tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage context by stating it 'Accepts pre-computed Walsh coefficients' and that the 'agent performs the transform locally,' indicating prerequisites. However, lacks explicit when-to-use guidance compared to siblings and does not mention the 'Specification pending' status as a usage blocker until the final sentence.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

