factorguide
Server Details
Send a coupling matrix, get zone classifications and optimal factorization strategy.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Bwana7/factorguide
- GitHub Stars: 0
- Server Listing: FactorGuide
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 7 of 7 tools scored. Lowest: 2.9/5.
Most tools serve distinct purposes in the analysis workflow (navigate for terrain mapping, diagnose for single-pair checks, explain for interpretation). While navigate and diagnose both involve risk prediction, their scopes (full model vs. single pair) are clearly differentiated in descriptions. The two detection tools target different mathematical domains (time series vs. spectral).
The naming mixes conventions: three tools use bare verbs (diagnose, explain, navigate), two use verb_noun (report_outcome, submit_payment), and two use noun_verb (regime_detect, synergy_detect). While all use snake_case consistently, the inconsistent word order and structure (simple vs. compound) creates mild unpredictability.
Seven tools is well-suited for this specialized statistical factorization domain. The set covers the complete workflow (analysis, diagnosis, explanation, outcome reporting, payment) without bloat. Even with two pending specifications, the count represents appropriate scope for the server's purpose.
The core diagnostic loop is well-covered (navigate → explain → report_outcome). However, two analysis tools (regime_detect and synergy_detect) are marked as specification pending for v1.1, leaving gaps in time-series and spectral analysis capabilities. Additionally, there are no tools for retrieving or managing prediction history beyond the explain function.
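The covered loop can be sketched with a hypothetical `call_tool` helper (a stand-in for any MCP client; `factorguide_navigate`'s input schema is not shown on this page, so the `coupling_matrix` argument name and all response fields are assumptions):

```python
def call_tool(name, arguments):
    """Stand-in for a real MCP client call. A real client would POST a
    JSON-RPC tools/call request over Streamable HTTP with the X-Wallet
    header set; here we return canned responses so the flow is runnable."""
    if name == "factorguide_navigate":
        return {"prediction_hash": "demo-hash", "recommendation": "strategy-A"}
    return {"ok": True, "echo": arguments}

# 1. Map the terrain: send the coupling matrix, get a strategy back.
#    (navigate's real parameter names are undocumented here; coupling_matrix
#    is an assumed name.)
nav = call_tool("factorguide_navigate",
                {"coupling_matrix": [[1.0, 0.4], [0.4, 1.0]]})

# 2. Interpret: prediction_hash threads the explain call back to step 1.
explanation = call_tool("factorguide_explain",
                        {"prediction_hash": nav["prediction_hash"]})

# 3. Close the loop: report inference diagnostics so future predictions improve.
report = call_tool("factorguide_report_outcome", {
    "prediction_hash": nav["prediction_hash"],
    "approach_taken": nav["recommendation"],
    "ess_ratio": 0.93,  # illustrative value
})
```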
Available Tools
7 tools
factorguide_diagnose (Grade C)
Quick single-pair diagnostic. IC with risk prediction for both model classes. Include variances for sign detectability. Requires X-Wallet header with your EVM wallet address (0x...). First 5 queries are free trial.
| Name | Required | Description | Default |
|---|---|---|---|
| i | Yes | First variable name | |
| j | Yes | Second variable name | |
| variance_i | No | | |
| variance_j | No | | |
| sample_size | Yes | | |
| coupling_value | Yes | IC or coupling value | |
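A call to this tool can be sketched as the JSON-RPC 2.0 envelope the MCP `tools/call` method defines. All argument values below are illustrative, and the X-Wallet address travels as an HTTP header, outside this body:

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Wrap tool arguments in the JSON-RPC 2.0 envelope MCP uses for tools/call."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

request_body = build_tool_call("factorguide_diagnose", {
    "i": "x1",               # first variable name
    "j": "x2",               # second variable name
    "sample_size": 500,      # illustrative; note the minimum: 10 constraint
    "coupling_value": 0.42,  # IC or coupling value (illustrative)
})
print(json.dumps(request_body, indent=2))
```

`variance_i` and `variance_j` are optional; per the description, include them when you want sign detectability.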
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fails to disclose critical behavioral traits like safety (read-only vs. mutation), idempotency, or return format. It mentions calculation details ('risk prediction,' 'sign detectability') but omits operational risks, side effects, or performance characteristics necessary for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely compact (three sentences) with information-dense phrasing. While efficient, the heavy reliance on acronyms and technical shorthand ('IC,' 'model classes') slightly undermines accessibility. No redundant words, though defining 'IC' on first use would improve the front-loading.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a diagnostic tool with 6 parameters, but incomplete given the lack of output schema and annotations. Missing: definition of returned diagnostic metrics, explanation of 'model classes,' and clarification of 'IC.' Description covers inputs reasonably but fails to set expectations for outputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Effectively compensates for the 50% schema description gap by mapping description terms to parameters: 'variances' clarifies variance_i/variance_j, 'IC' clarifies coupling_value, and 'single-pair' clarifies i/j. However, it omits guidance on sample_size's role or the significance of the minimum: 10 constraint.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies the tool as a 'single-pair diagnostic' with 'risk prediction,' but relies heavily on undefined domain jargon ('IC,' 'model classes') that hinders immediate comprehension. The 'single-pair' scope slightly distinguishes it from siblings like synergy_detect or regime_detect, but the core purpose remains opaque without domain knowledge.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit when-to-use guidance or alternatives. The phrase 'Include variances for sign detectability' could imply optional parameter usage but is grammatically ambiguous (imperative vs. descriptive) and does not clearly signal when to provide optional variance_i/variance_j versus omitting them.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
factorguide_explain (Grade A)
Plain-language explanation of a previous navigate response, including wave mechanics grounding for observational cost guidance. Requires a prediction_hash from a prior factorguide_navigate call. Consumes 1 query allocation. Available for starter and professional tiers. Requires X-Wallet header with your EVM wallet address (0x...). First 5 queries are free trial.
| Name | Required | Description | Default |
|---|---|---|---|
| prediction_hash | Yes | prediction_hash from a previous navigate response | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses cost behavior ('Consumes 1 query allocation'), access restrictions ('starter and professional tiers'), and content characteristics ('wave mechanics grounding for observational cost guidance').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three efficiently structured sentences covering purpose, prerequisites, and constraints. Every sentence earns its place with no redundancy, and the information is front-loaded with the core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description provides substantial context including domain-specific details ('wave mechanics grounding'), cost implications, and tier availability. It adequately covers operational context despite lacking explicit return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage and already describes the parameter as coming from 'a previous navigate response,' the description adds value by explicitly naming the sibling tool 'factorguide_navigate,' reinforcing the cross-tool dependency and workflow context beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool provides a 'plain-language explanation of a previous navigate response,' using a specific verb and resource. It clearly distinguishes itself from siblings by specifically referencing 'factorguide_navigate' and 'navigate response,' establishing it as a follow-up tool in a workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear prerequisites ('Requires a prediction_hash from a prior factorguide_navigate call') and operational constraints ('Consumes 1 query allocation,' 'Available for starter and professional tiers'). It implies the workflow sequence but does not explicitly state when NOT to use it or name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
factorguide_regime_detect (Grade B)
Detect coupling regime changes in time series via windowed IC. Specification pending — v1.1 target.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but offers minimal behavioral disclosure. It mentions 'windowed IC' (implementation approach) and warns that the specification is pending, but lacks details on return format, side effects, input data requirements, or resource constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences. The first conveys the tool's purpose; the second communicates critical implementation status. No redundant or wasteful text is present, and information is appropriately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the empty input schema (unusual for a time series analysis tool) and lack of output schema or annotations, the description is insufficient for operational use. While the 'v1.1 target' note honestly flags intentional incompleteness, it does not compensate for missing critical context such as how time series data is provided or what the tool returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. According to scoring rules, 0 parameters establishes a baseline score of 4. There are no parameters requiring semantic clarification beyond the schema structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific action (Detect), resource (coupling regime changes in time series), and method (via windowed IC), distinguishing it from sibling synergy_detect and other non-analysis tools. However, the 'Specification pending' caveat slightly tempers the clarity regarding current availability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or alternative recommendations are provided. While 'Specification pending' implies readiness constraints, it does not specify conditions for use versus sibling tools like factorguide_synergy_detect or factorguide_diagnose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
factorguide_report_outcome (Grade A)
Complete the prediction loop — report inference diagnostics so future predictions improve. After running the approach FactorGuide recommended, return your ESS ratio, PSIS-khat, or log-likelihood gap. Zero additional computation required. Does not consume a query allocation.
| Name | Required | Description | Default |
|---|---|---|---|
| ess_ratio | No | ||
| psis_khat | No | ||
| log_lik_gap | No | ||
| approach_taken | Yes | ||
| n_replications | No | ||
| prediction_hash | Yes | ||
| runtime_seconds | No | ||
| actual_mse_ratio | No |
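An arguments payload for this tool might look like the following sketch. Field names come from the table above; every value is illustrative, and the accepted vocabulary for `approach_taken` is undocumented, so that string is an assumption:

```python
outcome_args = {
    "prediction_hash": "hash-from-navigate",   # placeholder; copy it verbatim
                                               # from the earlier navigate response
    "approach_taken": "recommended-approach",  # assumed label; real values undocumented
    "ess_ratio": 0.91,        # effective-sample-size ratio from your run
    "psis_khat": 0.48,        # PSIS k-hat diagnostic
    "log_lik_gap": None,      # not computed in this run
    "runtime_seconds": 12.5,
}

# Send only the diagnostics you actually computed rather than nulls.
payload = {k: v for k, v in outcome_args.items() if v is not None}
```

Per the description, only one of the three diagnostics (ESS ratio, PSIS-khat, or log-likelihood gap) is needed, so dropping the uncomputed ones keeps the payload honest.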
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully adds critical behavioral constraints not inferable from the schema: 'Zero additional computation required' and 'Does not consume a query allocation.' It also clarifies the feedback loop purpose ('so future predictions improve').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste. Front-loaded with the core purpose ('Complete the prediction loop'), followed by specific metrics, then critical behavioral constraints. Every sentence provides unique value not available in structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the specialized statistical domain and lack of annotations/output schema, the description adequately establishes the workflow position and data flow (reporting back after prediction). It could improve by explaining how to obtain prediction_hash or what constitutes valid diagnostic values, but sufficiently covers the tool's functional role.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, so the description must compensate. It explains the purpose of ess_ratio, psis_khat, and log_lik_gap by naming them as diagnostics to return, and implies approach_taken. However, it omits prediction_hash (required), n_replications, runtime_seconds, and actual_mse_ratio entirely, leaving half the parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Complete[s] the prediction loop' and specifies reporting 'inference diagnostics' (ESS ratio, PSIS-khat, log-likelihood gap). It effectively distinguishes from siblings like 'diagnose' or 'regime_detect' by focusing specifically on post-prediction outcome reporting to improve future predictions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides clear temporal context ('After running the approach FactorGuide recommended') and specifies exactly which metrics to return. However, it lacks explicit contrast with siblings (e.g., when to use 'diagnose' vs this tool) or prerequisite details about obtaining the prediction_hash.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
factorguide_submit_payment (Grade A)
Submit payment proof after sending stablecoins to a FactorGuide wallet address. For x402: provide tx_hash and chain. For MPP: use in-band Authorization header instead — no separate submission needed.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | Chain identifier, e.g. 'eip155:8453' or 'tempo:4217' | |
| tx_hash | Yes | On-chain transaction hash | |
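For the x402 path, the two required arguments can be assembled as below. The chain identifier follows the CAIP-2 style shown in the table ('eip155:8453' is Base mainnet); the transaction hash here is a placeholder, not a real transaction:

```python
payment_args = {
    "chain": "eip155:8453",       # CAIP-2 style identifier from the table above
    "tx_hash": "0x" + "ab" * 32,  # placeholder 32-byte hash, not a real tx
}

# Light client-side sanity checks before submitting:
assert payment_args["chain"].count(":") == 1
assert payment_args["tx_hash"].startswith("0x")
assert len(payment_args["tx_hash"]) == 66  # 0x prefix + 64 hex chars
```

On the MPP path, per the description, no call to this tool is made at all; authorization rides in-band.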
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the workflow distinction between x402 and MPP integration patterns, but fails to mention side effects, idempotency, verification behavior, or what constitutes success/failure for a financial transaction operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero waste. The first establishes the core operation; the second immediately provides the critical conditional logic for protocol selection. Perfectly front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description adequately covers input requirements and protocol selection logic, but lacks disclosure of return values, confirmation behavior, or post-submission state changes that would help an agent handle the response appropriately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description maps parameters to the x402 protocol use case but does not add syntax details, format constraints, or semantic relationships beyond what the schema already provides for tx_hash and chain.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (submit payment proof), the context (after sending stablecoins), and the target resource (FactorGuide wallet address). It distinguishes from siblings (diagnose, explain, navigate) by being the only payment-related operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: specifies exactly when to use this tool ('For x402: provide tx_hash and chain') and when NOT to use it ('For MPP: use in-band Authorization header instead — no separate submission needed'), clearly distinguishing between two payment protocols.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
factorguide_synergy_detect (Grade B)
Detect hidden synergistic structure via Walsh-Hadamard spectral analysis. Accepts pre-computed Walsh coefficients — agent performs the transform locally and sends only the spectral summary. Specification pending — v1.1 target.
| Name | Required | Description | Default |
|---|---|---|---|
| n_samples | No | ||
| n_variables | No | ||
| ic_matrix_ref | No | ||
| transform_method | No | ||
| walsh_coefficients | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but fails to disclose safety profile (read-only vs destructive), return format, or side effects. It does mention the 'Specification pending — v1.1 target' status, which is relevant behavioral context, but this is insufficient for a tool with complex mathematical processing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no wasted words. The information density is high, though the critical 'Specification pending' disclaimer might be more effective if front-loaded rather than placed at the end.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (Walsh-Hadamard analysis, 5 parameters with nested object structures, 0% schema coverage, no annotations, no output schema), the description is inadequate. It fails to explain parameter relationships, expected return values, or error conditions necessary for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, requiring the description to compensate. While it mentions 'pre-computed Walsh coefficients' (referencing the walsh_coefficients parameter), it completely omits explanations for n_samples, n_variables, ic_matrix_ref, and transform_method, leaving the majority of parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies the specific action ('Detect hidden synergistic structure') and method ('Walsh-Hadamard spectral analysis'), which distinguishes it from sibling tools like regime_detect or diagnose. However, it does not explicitly differentiate when to choose this over other analysis tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage context by stating it 'Accepts pre-computed Walsh coefficients' and that the 'agent performs the transform locally,' indicating prerequisites. However, lacks explicit when-to-use guidance compared to siblings and does not mention the 'Specification pending' status as a usage blocker until the final sentence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.