Server Details

Send a coupling matrix, get zone classifications and optimal factorization strategy.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Bwana7/factorguide
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

7 tools
factorguide_diagnose (grade C)

Quick single-pair diagnostic. IC with risk prediction for both model classes. Include variances for sign detectability. Requires X-Wallet header with your EVM wallet address (0x...). First 5 queries are free trial.

Parameters (JSON Schema)
i (required): First variable name
j (required): Second variable name
variance_i (optional)
variance_j (optional)
sample_size (required)
coupling_value (required): IC or coupling value
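
As a rough illustration of how an agent might invoke this tool over the server's Streamable HTTP transport, the sketch below posts a raw MCP tools/call request with Python's requests library. The endpoint URL, wallet address, and all argument values are placeholders (the page does not list the server URL); only the parameter names come from the schema above, and a real client would normally use an MCP SDK and complete the initialize handshake before calling tools.

    # Minimal sketch, not an official client. The URL, wallet address, and
    # argument values are placeholders; only parameter names match the schema.
    import requests

    SERVER_URL = "https://example.com/mcp"  # placeholder endpoint
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
        "X-Wallet": "0x0000000000000000000000000000000000000000",  # your EVM wallet address
    }
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "factorguide_diagnose",
            "arguments": {
                "i": "theta_1",           # first variable name
                "j": "theta_2",           # second variable name
                "coupling_value": 0.42,   # IC or coupling value for the pair
                "sample_size": 500,
                "variance_i": 1.0,        # optional; enables sign-detectability analysis
                "variance_j": 25.0,
            },
        },
    }
    response = requests.post(SERVER_URL, headers=headers, json=payload, timeout=30)
    print(response.json())  # assumes a plain JSON response rather than an SSE stream
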
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses that it performs 'risk prediction' and 'sign detectability' analysis (behaviors not in the schema), but lacks details on side effects, persistence, or cost despite the existence of a payment sibling tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at three sentences with no fluff; front-loaded with 'single-pair diagnostic', though its density sacrifices clarity for non-experts.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks an output schema and fails to describe return values (what does the diagnostic return?); the description mentions 'risk prediction' but is insufficient for invocation confidence.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Mentions 'variances' (supporting variance_i/j) and 'IC' (linking to coupling_value), compensating partially for 50% schema coverage, but leaves sample_size unexplained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States 'single-pair diagnostic', which distinguishes it from the regime/synergy siblings, but 'IC' is unexplained jargon that is unclear without domain context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use versus factorguide_regime_detect or factorguide_synergy_detect; 'quick' implies speed but no selection criteria provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_explain (grade A)

Plain-language explanation of a previous navigate response, including wave mechanics grounding for observational cost guidance. Requires a prediction_hash from a prior factorguide_navigate call. Consumes 1 query allocation. Available for starter and professional tiers. Requires X-Wallet header with your EVM wallet address (0x...). First 5 queries are free trial.

Parameters (JSON Schema)
prediction_hash (required): prediction_hash from a previous navigate response
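
The only argument is the prediction_hash handed back by a prior factorguide_navigate call. A minimal sketch of the call payload follows; the hash value is a placeholder and the JSON-RPC envelope mirrors the earlier example.

    # Sketch: factorguide_explain consumes 1 query allocation and only needs the
    # prediction_hash returned by a previous factorguide_navigate response.
    explain_call = {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "factorguide_explain",
            "arguments": {
                "prediction_hash": "d41d8cd98f00b204e9800998ecf8427e",  # placeholder hash
            },
        },
    }
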
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent; the description fully carries the burden, disclosing the query allocation cost (1 unit), tier availability constraints, and the dependency on a prior tool invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with core purpose ('Plain-language explanation...'), followed by domain context ('wave mechanics grounding'), then operational constraints; every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Sufficiently complete for a single-parameter tool; describes the nature of the return value despite the lack of an output schema, though it could briefly characterize the depth of the explanation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3); description mirrors schema ('prediction_hash from a previous navigate call') with minor specificity added ('prior factorguide_navigate'), contributing no new semantic meaning beyond structured definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific action ('explanation') plus a specific resource ('previous navigate response'), and it clearly distinguishes itself from sibling tools by requiring their output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisite ('Requires a prediction_hash from a prior factorguide_navigate call') establishing when to use, but lacks explicit 'when-not-to-use' vs siblings like diagnose or regime_detect.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_navigate (grade A)

Map the factorization terrain of your model. Send coupling structure (precision matrix preferred for n>2; covariance matrix recommended if sign or CC information is needed) and receive a block-diagonal strategy with calibrated risk prediction. Answers: 'How should I factorize, and what will it cost me?' Optional: set report_sign_detectability=true to get sign(ρ) for high-leverage pairs at no additional cost when variance ratio > 20. Requires X-Wallet header with your EVM wallet address (0x...). First 5 queries are free trial.

Parameters (JSON Schema)
coupling (required)
task_type (optional, default: inference)
cost_model (optional, default: cubic)
model_class (optional, default: unknown)
sample_size (required)
synergy_check (optional)
compute_budget (optional, default: minimize)
encoding_label (optional)
variable_names (optional)
accuracy_target (optional)
report_marginal_ic (optional)
distribution_diagnostics (optional)
report_sign_detectability (optional)
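
Only coupling and sample_size are required. A sketch of plausible arguments follows; the nested-list matrix encoding and the optional values shown are assumptions, since the page documents defaults (task_type=inference, cost_model=cubic, model_class=unknown, compute_budget=minimize) but not value ranges or exact shapes.

    # Sketch of factorguide_navigate arguments. The matrix encoding and optional
    # values are assumptions; only parameter names and defaults come from the schema.
    navigate_args = {
        "coupling": [                 # precision matrix preferred for n > 2
            [1.00, 0.30, 0.05],
            [0.30, 1.00, 0.60],
            [0.05, 0.60, 1.00],
        ],
        "variable_names": ["alpha", "beta", "gamma"],
        "sample_size": 1000,
        "report_sign_detectability": True,  # sign(rho) for high-leverage pairs when variance ratio > 20
    }
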
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses critical behavioral details absent from annotations: conditional zero-cost feature activation ('no additional cost when variance ratio > 20'), risk prediction calibration, and sign detection limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently front-loaded with metaphorical context ('terrain'), followed by clear input/output mapping and specific optional feature details; minimal fluff despite technical density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers the core workflow (input matrices → strategy output) but fails to address the majority of the 13 input parameters (synergy_check, compute_budget, accuracy_target, distribution_diagnostics), leaving significant API surface unexplained.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates partially by explaining the 'coupling' input structure and 'report_sign_detectability' flag, but leaves 11 other parameters (task_type, cost_model, model_class, etc.) undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool provides factorization strategies ('block-diagonal strategy') and cost predictions, distinguishing it from diagnostic or explanatory siblings, though 'terrain' metaphor slightly obscures the concrete action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides specific guidance on matrix input selection (precision vs. covariance based on dimension and information needs) and explains the conditional benefit of report_sign_detectability, but lacks explicit comparison to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_regime_detect (grade B)

Detect coupling regime changes in time series via windowed IC. Specification pending — v1.1 target.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Lacking annotations, the description carries the burden and adds crucial lifecycle context (Specification pending/v1.1 target), but omits other behavioral traits like side effects, idempotency, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise with zero fluff: first clause states purpose, second clause states implementation status—every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a stub tool (acknowledges pending status), but lacks return value documentation and usage context given the analytical complexity implied by the domain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero-parameter schema establishes baseline 4; description appropriately provides no parameter details since none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific analytical purpose (detect coupling regime changes in time series via windowed IC) and distinguishes from siblings like synergy_detect, though the 'Specification pending' caveat introduces minor ambiguity about current capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to select this tool versus alternatives (e.g., factorguide_diagnose or factorguide_synergy_detect) or prerequisites for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_report_outcome (grade A)

Complete the prediction loop — report inference diagnostics so future predictions improve. After running the approach FactorGuide recommended, return your ESS ratio, PSIS-khat, or log-likelihood gap. Zero additional computation required. Does not consume a query allocation.

Parameters (JSON Schema)
ess_ratio (optional)
psis_khat (optional)
log_lik_gap (optional)
approach_taken (required)
n_replications (optional)
prediction_hash (required)
runtime_seconds (optional)
actual_mse_ratio (optional)
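
A sketch of a report that closes the loop after running the recommended factorization. prediction_hash and approach_taken are the only required fields; the value formats shown (a free-form approach label, typical diagnostic values) are assumptions rather than documented constraints.

    # Sketch: feeding inference diagnostics back to FactorGuide. The label format
    # for approach_taken and the metric values are placeholders/assumptions.
    outcome_args = {
        "prediction_hash": "d41d8cd98f00b204e9800998ecf8427e",  # from the earlier navigate call
        "approach_taken": "block_diagonal",                     # assumed free-form label
        "ess_ratio": 0.87,          # observed effective-sample-size ratio
        "psis_khat": 0.41,          # PSIS-khat diagnostic
        "runtime_seconds": 312.5,   # optional timing information
    }
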
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses critical behavioral traits absent from annotations: zero computation cost, no query allocation consumption, and feedback loop effect ('future predictions improve').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly crafted sentences front-loaded with purpose; every clause delivers value regarding workflow position, parameters, or resource costs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers domain-specific workflow (prediction loop completion) and required inputs given no output schema exists; minor gap on optional performance metrics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, partially compensates by explaining ess_ratio, psis_khat, and log_lik_gap as inference diagnostics, but omits n_replications, runtime_seconds, and actual_mse_ratio.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verbs ('complete', 'report') and resource ('inference diagnostics') clearly identify this as the post-prediction feedback tool, distinguishing it from diagnostic/analysis siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear temporal context ('after running the approach') and cost constraints ('zero computation', 'no query allocation'), though lacks explicit 'when not to use' guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_submit_payment (grade A)

Submit payment proof after sending stablecoins to a FactorGuide wallet address. For x402: provide tx_hash and chain. For MPP: use in-band Authorization header instead — no separate submission needed.

Parameters (JSON Schema)
chain (required): Chain identifier, e.g. 'eip155:8453' or 'tempo:4217'
tx_hash (required): On-chain transaction hash
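
For the x402 path, the arguments are just the chain identifier and the transaction hash of the on-chain stablecoin transfer; MPP payments skip this tool entirely and use the in-band Authorization header. The values below are placeholders ('eip155:8453' is one of the example identifiers from the schema).

    # Sketch of an x402 payment-proof submission. Both values are placeholders;
    # substitute the real chain identifier and transaction hash.
    payment_args = {
        "chain": "eip155:8453",
        "tx_hash": "0x" + "ab" * 32,  # placeholder 32-byte transaction hash
    }
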
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Clarifies workflow timing (post-transfer submission) and protocol variations (x402 vs MPP) despite no annotations; lacks details on submission outcomes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose followed by protocol guidance; no extraneous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for low complexity (2 params, no output schema); covers purpose, timing, and protocol distinctions adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

References parameters in x402 context, but schema coverage is 100% with clear descriptions, meeting baseline expectations without significant additional semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb (submit) + resource (payment proof) clearly stated; distinctly different from diagnostic/explanatory siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly distinguishes when to use (x402 payments with tx_hash/chain) vs when not to use (MPP with in-band auth alternative).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

factorguide_synergy_detect (grade B)

Detect hidden synergistic structure via Walsh-Hadamard spectral analysis. Accepts pre-computed Walsh coefficients — agent performs the transform locally and sends only the spectral summary. Specification pending — v1.1 target.

Parameters (JSON Schema)
n_samples (optional)
n_variables (optional)
ic_matrix_ref (optional)
transform_method (optional)
walsh_coefficients (optional)
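
The specification is still pending, but the description implies a client-side step: the agent computes the Walsh-Hadamard transform locally and submits only a spectral summary. The sketch below is a standard in-place fast Walsh-Hadamard transform (a textbook algorithm, not code from this server); how FactorGuide expects the resulting coefficients to be packaged into walsh_coefficients is not specified on this page.

    # Standard (unnormalized) fast Walsh-Hadamard transform; len(values) must be
    # a power of two. How to summarize the output for this tool is unspecified.
    def fwht(values):
        a = list(values)
        h = 1
        while h < len(a):
            for i in range(0, len(a), h * 2):
                for j in range(i, i + h):
                    x, y = a[j], a[j + h]
                    a[j], a[j + h] = x + y, x - y
            h *= 2
        return a

    walsh_coefficients = fwht([1.0, -1.0, 1.0, 1.0, -1.0, 1.0, 1.0, -1.0])
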
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses client-side computation pattern and v1.1 maturity status, but omits side effects, rate limits, or what 'spectral summary' entails given no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tight sentences with no fluff; purpose front-loaded, though 'specification pending' warning might ideally come earlier.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for the schema complexity (5 params, nested objects, 0% coverage); critical gaps around reference parameters and return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, description only explains 'walsh_coefficients' conceptually while leaving 'ic_matrix_ref', 'n_variables', and nested order structures completely unexplained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (detect synergistic structure) and method (Walsh-Hadamard spectral analysis), though doesn't clearly differentiate from sibling 'regime_detect'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies the usage context (when the agent has pre-computed Walsh coefficients) but lacks explicit when-not-to-use guidance or a comparison to the diagnose/regime_detect alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
