Glama

Server Details

Finds the simplest equation consistent with your data. SINDy and PySR symbolic regression via MCP.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality


Available Tools

3 tools
feature.request (Grade: A)

Request a feature that Occam doesn't support yet.

Use this when you need a capability that Occam doesn't currently offer. Requests are logged and used to prioritize development.

Parameters (JSON Schema):
- description (required): A short description of the feature you need. Examples: 'LaTeX output for equations', 'support for ODE constraints', 'GPU-accelerated search', 'larger dataset limits'. Helps prioritize development.
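As a sketch of how a client would invoke this tool, the following builds a JSON-RPC `tools/call` request following the MCP tool-call shape. The request `id` and framing are illustrative; an MCP client library would normally handle these details, and only the `name` and `arguments` fields come from this page's documentation.

```python
import json

# Minimal sketch of an MCP "tools/call" request for feature.request.
# Only "description" is required per the documented schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "feature.request",
        "arguments": {
            "description": "LaTeX output for equations",
        },
    },
}

payload = json.dumps(request)  # serialized wire payload
```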
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint: false (write operation) and destructiveHint: false. The description adds valuable behavioral context by noting that 'Requests are logged and used to prioritize development,' explaining the side effect (logging) and downstream impact (development prioritization) beyond the structured annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence states purpose immediately; second provides usage context. No redundant information or verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter logging tool with no output schema requirement, the description adequately covers purpose, usage trigger, and behavioral outcome (logging/prioritization). No gaps given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the 'description' parameter fully documented in the schema including examples. The tool description provides no additional parameter-specific semantics, which is appropriate when the schema carries full documentation burden (baseline 3).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Request[s] a feature that Occam doesn't support yet' with a specific verb and resource. It clearly distinguishes from siblings pysr.run and sindy.run (which execute algorithms) by positioning this as a meta/feedback mechanism.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('Use this when you need a capability that Occam doesn't currently offer'). Lacks explicit when-NOT-to-use or named alternatives, though the distinction from algorithm-execution siblings is clear from context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pysr.run (Grade: A)
Read-only

Evolutionary Symbolic Regression (PySR).

Discovers algebraic equations from feature/target data.
Returns a Pareto front of expressions ranked by the tradeoff
between complexity and accuracy. Slower than SINDy (10-60s).
Best for finding closed-form relationships without time structure.

Data policy: https://occam.fit/privacy — Citation info: https://occam.fit/cite
Parameters (JSON Schema):
- X (required): 2D array of input features. Each row is an observation, each column is a feature. Max 1,000 rows, 10 features.
- y (required): Target values, one per row of X.
- populations (optional): Number of evolutionary populations for the search. Default 15, max 20.
- feature_names (optional): Names for each variable/feature column. Defaults to x0, x1, ...
- max_complexity (optional): Maximum expression tree size. Higher allows more complex expressions. Default 20, max 25.
- timeout_seconds (optional): Wall clock time limit in seconds. Default 60, max 60.
- unary_operators (optional): Allowed unary operators. Options: sin, cos, tan, exp, log, log2, log10, sqrt, abs, sinh, cosh, tanh. Default: sin, cos, exp, log, sqrt.
- binary_operators (optional): Allowed binary operators. Options: +, -, *, /, ^. Default: +, -, *, /.
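As a sketch, the arguments for a pysr.run call might be assembled as below. The data is synthetic (y = 2·sin(x0) + x1, a made-up relationship for illustration), and the keys and limits come from the documented schema; everything else is an assumption about how a client would prepare the payload.

```python
import math

# Synthetic feature matrix within the documented limits
# (max 1,000 rows, 10 features): two features, 100 observations.
X = [[i * 0.1, i * 0.2] for i in range(100)]
y = [2 * math.sin(x0) + x1 for x0, x1 in X]

arguments = {
    "X": X,
    "y": y,
    "feature_names": ["x0", "x1"],
    "max_complexity": 15,            # must be <= 25 per the schema
    "timeout_seconds": 60,           # hard cap of 60 s per the schema
    "unary_operators": ["sin", "cos"],
    "binary_operators": ["+", "*"],
}

# Client-side checks mirroring the documented constraints, so a bad
# payload fails fast instead of being rejected by the server.
assert len(arguments["X"]) <= 1000 and len(arguments["X"][0]) <= 10
assert len(arguments["y"]) == len(arguments["X"])
assert arguments["timeout_seconds"] <= 60
```

Restricting the operator sets, as above, shrinks the search space and tends to speed up the evolutionary search at the cost of expressiveness.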
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only safety (readOnlyHint=true), and the description adds valuable behavioral context: explains return format ('Pareto front of expressions ranked by...complexity and accuracy'), time complexity ('10-60s'), and data policy constraints. Does not contradict annotations. Could improve by mentioning timeout behavior or failure modes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured and front-loaded. Opens with technical identifier, immediately follows with action and return value, then comparative performance, then use-case constraints. Every sentence adds distinct value (purpose, output, performance, suitability, compliance). No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains the return value ('Pareto front of expressions'). Covers algorithm type, performance characteristics (10-60s), computational scope (closed-form, no time structure), and external links for compliance. Appropriately complete for a complex ML tool, though could mention behavior on timeout or convergence failure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for all 8 parameters (X, y, populations, feature_names, max_complexity, timeout_seconds, unary_operators, binary_operators). The description provides no additional parameter guidance, but the baseline of 3 is appropriate when the schema carries the full semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states it 'Discovers algebraic equations from feature/target data' using 'Evolutionary Symbolic Regression (PySR)'. Clearly distinguishes from sibling tool sindy.run by noting it is 'Slower than SINDy' and specifically 'Best for finding closed-form relationships without time structure', implying SINDy handles time series.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit comparative guidance: notes performance tradeoff ('Slower than SINDy (10-60s)') and domain suitability ('Best for...without time structure'). This clearly signals when to prefer this over sindy.run and sets performance expectations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sindy.run (Grade: A)
Read-only, Idempotent

Sparse Identification of Nonlinear Dynamics (SINDy).

Discovers governing differential equations from time series data.
Returns human-readable sparse expressions. Fast (seconds).
Best for systems where you have time-resolved measurements of
multiple state variables and want to recover the dynamics.

Data policy: https://occam.fit/privacy — Citation info: https://occam.fit/cite
Parameters (JSON Schema):
- t (required): Timestamps corresponding to each row of data. Length must match row count.
- data (required): 2D array of time series data. Each row is a timestep, each column is a state variable. Max 10,000 rows, 20 variables.
- max_iter (optional): Maximum STLSQ optimizer iterations. Default 20.
- threshold (optional): STLSQ sparsity threshold. Higher values produce sparser equations. Default 0.1.
- poly_degree (optional): Polynomial library degree for SINDy candidate functions. Default 2.
- feature_names (optional): Names for each variable/feature column. Defaults to x0, x1, ...
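As a sketch, a sindy.run payload could be built from synthetic time series as below. The trajectory is a simple harmonic oscillator (x0 = cos t, x1 = −sin t, whose true dynamics are x0' = x1 and x1' = −x0), chosen only for illustration; the argument keys and limits come from the documented schema.

```python
import math

# Synthetic time series within the documented limits
# (max 10,000 rows, 20 variables): 500 samples at dt = 0.01.
dt = 0.01
t = [i * dt for i in range(500)]
data = [[math.cos(ti), -math.sin(ti)] for ti in t]

arguments = {
    "t": t,
    "data": data,
    "feature_names": ["x0", "x1"],
    "poly_degree": 2,   # candidate library: polynomials up to degree 2
    "threshold": 0.1,   # higher values prune more terms -> sparser model
}

# The schema requires t and data to have matching row counts.
assert len(arguments["t"]) == len(arguments["data"])
```

Raising `threshold` trades fit accuracy for sparser, more interpretable equations; tuning it against held-out data is a common workflow.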
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive behavior. The description adds valuable context about output format ('human-readable sparse expressions') and performance ('Fast (seconds)'), plus policy/citation links. However, it lacks details on error handling, computational limits, or what happens with ill-conditioned data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with no wasted words: it opens with the method name, states core functionality, describes outputs/performance, provides usage context, and ends with policy links. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description partially compensates by stating the tool returns 'human-readable sparse expressions.' However, it lacks specifics on the return structure (e.g., JSON format, field names) or error scenarios that would help agents handle responses correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'time series data' generally but does not add syntax details, validation rules, or semantic relationships between parameters (e.g., how 'threshold' affects sparsity) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Discovers governing differential equations from time series data' using the SINDy method, providing specific verbs and resources. However, it does not explicitly differentiate from the sibling tool 'pysr.run' (which also performs equation discovery), preventing a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear positive guidance with 'Best for systems where you have time-resolved measurements of multiple state variables and want to recover the dynamics.' This gives agents clear context on when to select this tool, though it lacks explicit exclusions or comparisons to alternatives like 'pysr.run'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

