Server Details

Finds the simplest equation consistent with your data. SINDy and PySR symbolic regression via MCP.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions (Grade A)

Average 4.4/5 across 4 of 4 tools scored.

Server Coherence (Grade A)
Disambiguation: 5/5

Each tool has a distinct purpose with clear boundaries. feature_request is for capability requests, pysr_run is for algebraic equation discovery, pysr_uncertainty is for confidence intervals on pysr results, and sindy_run is for differential equation discovery. The descriptions explicitly differentiate between pysr_run and sindy_run for algebraic vs. time-series data.

Naming Consistency: 4/5

Three tools follow a consistent verb_noun pattern (pysr_run, pysr_uncertainty, sindy_run), all using snake_case. feature_request deviates slightly as a noun_noun pattern, but it's still snake_case and semantically clear. The naming is mostly consistent with one minor deviation.

Tool Count: 5/5

Four tools is well-scoped for a symbolic regression and dynamics identification server. Each tool earns its place: two core analysis tools (pysr_run, sindy_run), one follow-up analysis tool (pysr_uncertainty), and one administrative tool (feature_request). This is a lean, focused set without redundancy.

Completeness: 5/5

The tool surface covers the core workflows for symbolic regression and dynamics identification comprehensively. pysr_run and sindy_run handle the primary analysis tasks for algebraic and differential equations respectively, pysr_uncertainty provides essential follow-up uncertainty quantification, and feature_request allows for capability expansion. There are no obvious gaps for the server's stated purpose.

Available Tools

4 tools
feature_request (Grade A)

Request a feature that Occam doesn't support yet.

Use this when you need a capability that Occam doesn't currently
offer. Requests are logged and used to prioritize development.

Rate limit: 5 requests/hour per IP, 50/hour global — stricter than
the compute tools' 10/hour to prevent log flooding. Descriptions
longer than 500 characters are truncated.
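
An illustrative call in the same style as the other tools' examples, borrowing a value from the parameter docs below:

Example request:
  description="LaTeX output for equations"
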
Parameters (JSON Schema)

description (required): A short description of the feature you need. Examples: 'LaTeX output for equations', 'support for ODE constraints', 'GPU-accelerated search', 'larger dataset limits'. Helps prioritize development.

Output Schema (JSON Schema)

ok (required), message (required), description (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide basic hints (non-readOnly, non-destructive, non-idempotent, non-openWorld), but the description adds valuable context: it explains that requests are 'logged and used to prioritize development,' and it discloses rate limits (5 requests/hour per IP, 50/hour global) and input truncation at 500 characters. What it leaves out are authentication requirements and the shape of the response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: a handful of short sentences that directly address purpose, usage, and operational limits. Every word earns its place, with no redundant information. It's front-loaded with the core purpose and follows with clear guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple nature (single parameter, minimal output schema, basic annotations), the description provides adequate context. It explains what the tool does and when to use it, which covers the essential information needed for this type of feedback/submission tool. The main gap is lack of information about what happens after submission (confirmation, tracking, etc.), but for a simple request tool, this is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the single 'description' parameter. The tool description doesn't add any parameter-specific information beyond what's in the schema. According to the rubric, when schema coverage is high (>80%), the baseline score is 3 even without parameter details in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Request a feature that Occam doesn't support yet.' It specifies the verb ('request') and resource ('feature'), though it doesn't explicitly differentiate from sibling tools like 'pysr_run' or 'sindy_run' (which appear to be execution tools rather than feedback mechanisms). The purpose is specific enough for an agent to understand this is for submitting development requests.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this when you need a capability that Occam doesn't currently offer.' It clearly defines the triggering condition (missing capability) and explains the outcome ('Requests are logged and used to prioritize development'). This gives the agent clear context about when to invoke this tool versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pysr_run (Grade A, Read-only)

Evolutionary Symbolic Regression (PySR).

Discovers algebraic equations y = f(x1, x2, ...) from feature/target
data. Returns a Pareto front ranked by the complexity/accuracy
tradeoff. Slower than SINDy (10-60s); searches often terminate early
on convergence. For differential equations from time series, use
sindy_run instead.

Pricing: free tier up to 100 rows × 8 features, 60s timeout. Beyond
that, $0.25 + $0.03 per 100 extra rows + $0.01 per extra feature
squared, timeout up to 300s (5 min), via x402 (USDC on Base) or
MPP/Stripe. MPP/Stripe adds a flat $0.35 per-transaction fee (Stripe
processing), so the MPP challenge amount in a `payment_required`
response is $0.35 higher than the x402 amount for the same base
price; x402 gets the lower rate. Omit `payment` for free-tier
requests; paid requests without a valid credential receive a
`payment_required` result with pricing and accepted schemes. Full
pricing: occam://pricing
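
A sketch of how that formula composes, reading "$0.01 per extra feature squared" as 0.01 times the square of the extra-feature count and pro-rating partial row blocks (both readings are assumptions; occam://pricing is authoritative):

  def pysr_price_usd(rows, features, via_mpp=False):
      # Illustrative reading of the published formula; not authoritative
      if rows <= 100 and features <= 8:
          return 0.0  # free tier
      extra_rows = max(0, rows - 100)
      extra_features = max(0, features - 8)
      price = 0.25 + 0.03 * extra_rows / 100 + 0.01 * extra_features ** 2
      return price + (0.35 if via_mpp else 0.0)  # flat MPP/Stripe fee

  # e.g. 500 rows x 10 features: 0.25 + 0.03*4 + 0.01*2**2 = 0.41 via x402,
  # 0.76 via MPP/Stripe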

Advisory limits: jobs over 50,000 rows or 20 features are accepted
but may not converge; response carries a top-level `warning`.

Operators: fixed supported set only — custom operators (e.g.
'inv(x) = 1/x') are rejected. Unary: sin, cos, tan, exp, log, log2,
log10, sqrt, abs, sinh, cosh, tanh. Binary: +, -, *, /, ^.
See also prompt `supported_operators`.

Loss metric: `loss` (in `pareto_front[].loss` and `best_loss`) is
mean squared error between model prediction and `y` on the full
training set — not RMSE, and not normalized by Var(y). A threshold
appropriate for one dataset scales with y's magnitude, so set
`loss_threshold` with that in mind (e.g. for y values near 1.0,
1e-6 is a tight fit; for y near 1000, the equivalent is 1.0).
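
A minimal numpy illustration of that scaling, assuming only that `loss` is plain MSE as stated:

  import numpy as np

  def loss(y_true, y_pred):
      # The server's `loss`: mean squared error, not RMSE, not normalized by Var(y)
      return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

  y = np.array([1.0, 1.1, 0.9])
  print(loss(y, 1.001 * y))                # ~1e-6: a tight fit for y near 1.0
  print(loss(1000 * y, 1.001 * 1000 * y))  # ~1.0: the same relative error for y near 1000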

Early termination: set `loss_threshold` to stop at your noise floor.
The server also stops when the search stalls (<1% improvement in the
last third of the budget); disable with `stall_detection=false`.
Response `stop_reason` is one of: loss_threshold, stall, timeout,
natural.

If `feature_names` is supplied, its length must equal the number of
columns in `X`; a mismatch is rejected with a validation error.

Follow-up: call `pysr_uncertainty` with a chosen expression and the
same dataset for bootstrap confidence intervals on its fit constants
and optional prediction bands.

Rate limit: 10 requests/hour per IP, 200/hour global, max queue
depth 20 (shared with sindy_run and pysr_uncertainty).

Response (success) includes `pareto_front[]` (each with `complexity`,
`loss`, `expression`, `expression_latex`), `best_expression`,
`best_expression_latex`, `best_loss`, `best_complexity`, `stop_reason`,
`elapsed_seconds`, `queue_seconds` (>0 = server saturated; use as
backoff signal), optional `warning`, optional `_meta` (MPP receipt).
Full response and payment-required schemas: occam://tool-schemas

Example request:
  X=[[0.0], [1.0], [2.0], [3.0]], y=[1.0, 3.0, 5.0, 7.0],
  feature_names=["x"], max_complexity=10, timeout_seconds=15

Policy: occam://privacy-policy — Citation: occam://citation-info
Parameters (JSON Schema)

X (required): 2D array of input features. Each row is an observation, each column is a feature. Free tier: 100 rows, 8 features. Paid tier: up to 50,000 rows, 20 features.
y (required): Target values, one per row of X.
payment (optional): Payment credential as a JSON string. Required when the dataset exceeds the free tier (100 rows, 8 variables). Omit for free-tier requests. For x402: {"transaction":"0x...","network":"...","priceToken":"..."}. For MPP/Stripe: {"challenge":{...},"payload":"..."}.
populations (optional): Number of evolutionary populations for the search. Default 15, max 20.
feature_names (optional): Names for each variable/feature column. Defaults to x0, x1, ...
loss_threshold (optional): Early-stop threshold on the best loss found. If set, the search terminates as soon as any Pareto-front member reaches a loss at or below this value, even if the timeout has not been reached. Useful when you know your noise floor. Default: none (the search runs until the stall detector or timeout).
max_complexity (optional): Maximum expression tree size. Higher allows more complex expressions. Default 20, max 25.
stall_detection (optional): When true (default), the server stops the search early if the best loss has not improved by more than 1% during the last third of the time budget. This reclaims compute once the search has converged. Set to false only if you want the search to run for the full timeout regardless of progress.
timeout_seconds (optional): Wall clock time limit in seconds. Free tier: max 60. Paid tier: max 300 (5 minutes). Default 60.
unary_operators (optional): Allowed unary operators, drawn from the fixed supported set: sin, cos, tan, exp, log, log2, log10, sqrt, abs, sinh, cosh, tanh. Custom operators (e.g. 'inv(x) = 1/x') are NOT supported. Default: sin, cos, exp, log, sqrt. Pass [] for none.
binary_operators (optional): Allowed binary operators, drawn from the fixed supported set: +, -, *, /, ^. Custom operators are NOT supported. Default: +, -, *, /. Pass [] for none.

Output Schema (JSON Schema)

All fields optional: warning, best_loss, stop_reason, pareto_front, queue_seconds, best_complexity, best_expression, elapsed_seconds, best_expression_latex.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds substantial behavioral context beyond annotations, including performance characteristics ('10-60s'), early termination behavior with loss_threshold and stall detection, rate limits ('10 requests/hour per IP'), and details about the stop_reason field. Annotations indicate readOnly and non-destructive operations, which the description doesn't contradict.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with purpose upfront, usage guidelines, behavioral details, and an example. While comprehensive, some sections (like the response-field rundown and policy links) could be more concise. Most sentences earn their place by adding unique value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, the description provides excellent completeness. It explains what the tool returns (Pareto front, best expression), includes an example request and a rundown of the response fields, covers behavioral traits, and addresses limitations. The combination of annotations, schemas, and description gives the agent sufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds value by explaining the practical use of loss_threshold ('stop at your noise floor') and stall_detection behavior, and the example request shows concrete X, y, and feature_names usage. However, it doesn't add significant semantic context for most parameters beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Discovers algebraic equations y = f(x1, x2, ...) from feature/target data' and 'Returns a Pareto front ranked by the complexity/accuracy tradeoff.' It distinguishes from sibling tools by explicitly pointing to sindy_run for differential equations versus algebraic relationships.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: 'For differential equations from time series, use sindy_run instead.' It also includes performance context ('Slower than SINDy (10-60s)') and policy/citation links.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pysr_uncertainty (Grade A, Read-only)

Bootstrap confidence intervals for the numeric constants of a frozen expression, plus optional prediction bands on an x-grid.

Typical flow: call pysr_run, pick an expression from the response
(best_expression or a pareto_front entry), pass it back here with
the same dataset to get CIs on its fit constants.

Returns frequentist bootstrap confidence intervals, not Bayesian
credible intervals — posterior inference over expression structures
is an open research problem. This tool freezes the expression
chosen by the caller and bootstraps only its numeric constants;
uncertainty about *which* expression is correct is not quantified.

Bootstrap semantics:
  - If y_sigma is supplied, uses parametric bootstrap
    (y_b = y + Normal(0, y_sigma)). CI reflects user-stated
    measurement noise.
  - Otherwise uses residual bootstrap: fit once, resample residuals.
    CI reflects estimated-from-residuals noise.
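
A minimal numpy sketch of the two resampling schemes; `y_pred` (predictions from the initial fit) and the surrounding refit loop are assumed, since the server's optimizer is not exposed:

  import numpy as np

  def bootstrap_targets(y, y_pred, y_sigma=None, rng=None):
      # One bootstrap draw of the targets, mirroring the semantics above
      rng = rng or np.random.default_rng()
      if y_sigma is not None:
          # Parametric: perturb y with the user-stated measurement noise
          return y + rng.normal(0.0, y_sigma, size=y.shape)
      # Residual: keep the fitted curve, resample observed residuals with replacement
      residuals = y - y_pred
      return y_pred + rng.choice(residuals, size=residuals.shape, replace=True)

Each draw is then refit with only the expression's Float constants free, and the CI is read off the empirical distribution of the refit values.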

Only Float constants in the expression become free parameters.
Integers stay structural (the 2 in x**2 is a function-class choice,
not a fit constant). Expressions with no Float constants
(e.g. "x + y") will be rejected with a validation error.

Expression grammar: the `expression` string is parsed by sympy.
Accepted operators are the same set pysr_run emits: unary `sin`,
`cos`, `tan`, `exp`, `log`, `log2`, `log10`, `sqrt`, `abs`, `sinh`,
`cosh`, `tanh`; binary `+`, `-`, `*`, `/`, `^` (or `**`). Whitespace
and parenthesization are free. Every free symbol in the expression
must correspond to an entry in `feature_names` — an unrecognised
symbol is silently treated as a fresh sympy Symbol and the fit will
fail downstream rather than reject early. Parse failures (syntax
errors, malformed operators) surface as tool errors.
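
A sketch of that parse in sympy itself: `convert_xor` maps `^` to `**` per the grammar above, and the early free-symbol check is our addition (the server, as noted, fails downstream instead of rejecting early):

  import sympy
  from sympy.parsing.sympy_parser import (
      parse_expr, standard_transformations, convert_xor,
  )

  feature_names = ["x", "y"]
  expr = parse_expr("3.5*sin(x) + x^2 - 0.25*y",
                    transformations=standard_transformations + (convert_xor,))

  # Every free symbol should name a feature; checking here avoids the
  # silent fresh-Symbol failure mode described above
  unknown = {str(s) for s in expr.free_symbols} - set(feature_names)
  assert not unknown, f"unrecognised symbols: {unknown}"

  # Only Floats become free fit constants: the 3.5 and the (signed) 0.25
  # appear here, but the structural integer 2 in x^2 does not
  print(expr.atoms(sympy.Float))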

If `feature_names` is supplied, its length must equal the number of
columns in `X`; a mismatch is rejected with a validation error.

Pricing: always free, regardless of dataset size. This tool has no
`payment` parameter and is never subject to the x402/Stripe gate.
Large bootstrap jobs still count against the shared rate limit
below, so budget `n_resamples` accordingly.

Rate limit: 10 requests/hour per IP, 200/hour global, max queue
depth 20 (shared with sindy_run and pysr_run).
Parameters (JSON Schema)

X (required): 2D array of input features. Each row is an observation, each column is a feature. Up to 50,000 rows, 20 features; unlike pysr_run, all sizes are free here (see the pricing note above).
y (required): Target values, one per row of X.
expression (required): The expression to bootstrap, as returned by pysr_run (`best_expression` or a `pareto_front[i].expression`). Only numeric Float constants are treated as free parameters; integers in the expression (e.g. the 2 in x**2) stay structural.
alpha (optional): Significance level. 0.05 → 95% CI. Default 0.05.
x_grid (optional): 2D grid of feature values at which to report a prediction band. Must have the same number of columns as X. Omit to skip prediction-band computation.
y_sigma (optional): Per-point measurement standard deviations, or a single scalar applied to all points. When supplied, the helper uses parametric bootstrap (y_b = y + Normal(0, y_sigma)); otherwise it uses residual bootstrap. Supplying y_sigma also improves the initial weighted fit.
n_resamples (optional): Number of bootstrap resamples. Higher = tighter CIs, more compute. Default 100.
feature_names (optional): Names for each variable/feature column. Defaults to x0, x1, ...

Output Schema (JSON Schema)

Required: note, alpha, coefficients, bootstrap_method, n_requested_resamples, n_successful_resamples. Optional: prediction_ci.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, indicating a safe read operation. The description adds valuable behavioral context beyond annotations: it explains bootstrap semantics (parametric vs. residual), validation rules for expressions with no Float constants, and rate limits (10 requests/hour per IP, shared with pysr_run and sindy_run). It doesn't contradict annotations, but could mention more about error handling or output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and efficiently organized. It starts with a clear purpose statement, provides usage flow, explains key concepts (bootstrap types, constant handling), and ends with rate limit information. Every sentence adds value without redundancy, making it easy to parse and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex statistical tool with 8 parameters, the description does an excellent job covering purpose, usage, behavioral nuances, and limitations. It explains what the tool does and doesn't do (e.g., no Bayesian intervals, no expression-structure uncertainty). The main gap is that the output fields named in the schema (coefficients, prediction_ci, bootstrap_method) are not explained in the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds some semantic context by explaining the role of y_sigma in bootstrap type selection and the treatment of Float vs. Integer constants in the expression, but doesn't provide significant additional parameter details beyond what the schema offers. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Bootstrap confidence intervals for the numeric constants of a frozen expression, plus optional prediction bands on an x-grid.' It specifies the exact operation (bootstrap confidence intervals) and distinguishes it from sibling tools like pysr_run and sindy_run by focusing on uncertainty quantification rather than expression discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Typical flow: call pysr_run, pick an expression from the response (best_expression or a pareto_front entry), pass it back here with the same dataset to get CIs on its fit constants.' It clearly indicates when to use this tool (after pysr_run) and distinguishes it from alternatives by noting it doesn't handle Bayesian credible intervals or expression structure uncertainty.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sindy_run (Grade A, Read-only, Idempotent)

Sparse Identification of Nonlinear Dynamics (SINDy).

Recovers governing differential equations (dx/dt = f(x)) from time
series data. Returns human-readable sparse expressions. Fast (seconds).
For algebraic y = f(x) relationships without time structure, use
pysr_run instead.

Pricing: free tier up to 100 rows and 8 variables. Beyond that,
$0.05 + $0.01 per 100 extra rows + $0.01 per extra variable squared,
via x402 (USDC on Base) or MPP/Stripe. MPP/Stripe adds a flat $0.35
per-transaction fee (Stripe processing), so the MPP challenge amount
in a `payment_required` response is $0.35 higher than the x402 amount
for the same base price; x402 gets the lower rate. Omit `payment`
for free-tier requests; paid requests without a valid credential
receive a `payment_required` result with pricing and accepted schemes.
Full pricing table as structured JSON: occam://pricing

Advisory limits: jobs over 500,000 rows or 50 variables are accepted
but may not converge within the time budget; the response carries a
top-level `warning` the agent should surface and treat as tentative.

If `feature_names` is supplied, its length must equal the number of
data columns; a mismatch is rejected with a validation error.

Rate limit: 10 requests/hour per IP, 200/hour global, max queue
depth 20 (shared with pysr_run and pysr_uncertainty).

Response (success) includes `equations[]` (each with `variable`,
`equation`, `expression`, `expression_latex`, `r2`), `library_terms`,
`nonzero_terms`, `elapsed_seconds`, `canonical_match` (dict with
`system`, `form`, `variable_map`, `parameter_map`, `confidence` if
the discovered system matches one of Lorenz / Lotka-Volterra /
Van der Pol / Duffing; `null` otherwise), optional `warning`,
optional `_meta` (MPP receipt on paid calls). Full response and
payment-required schemas: occam://tool-schemas

Example request:
  data=[[1.0, 0.0], [0.95, -0.31], [0.81, -0.59]], t=[0.0, 0.1, 0.2],
  feature_names=["x", "y"], poly_degree=2, threshold=0.1

Policy: occam://privacy-policy — Citation: occam://citation-info
Parameters (JSON Schema)

t (required): Timestamps corresponding to each row of data. Length must match row count.
data (required): 2D array of time series data. Each row is a timestep, each column is a state variable. Free tier: 100 rows, 8 variables. Paid tier: up to 500,000 rows, 50 variables.
payment (optional): Payment credential as a JSON string. Required when the dataset exceeds the free tier (100 rows, 8 variables). Omit for free-tier requests. For x402: {"transaction":"0x...","network":"...","priceToken":"..."}. For MPP/Stripe: {"challenge":{...},"payload":"..."}.
max_iter (optional): Maximum STLSQ optimizer iterations. Default 20.
threshold (optional): STLSQ sparsity threshold. Higher values produce sparser equations. Default 0.1.
poly_degree (optional): Polynomial library degree for SINDy candidate functions. Default 2.
feature_names (optional): Names for each variable/feature column. Defaults to x0, x1, ...

Output Schema (JSON Schema)

All fields optional: warning, equations, library_terms, nonzero_terms, canonical_match, elapsed_seconds.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it discloses rate limits ('10 requests/hour per IP, 200/hour global'), performance characteristics ('Fast (seconds)'), and provides links to data policy and citation info. While annotations cover readOnlyHint=true and destructiveHint=false, the description adds practical operational constraints that help the agent use the tool appropriately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear sections: purpose statement, when to use, what it doesn't do, rate limits, examples, and policy links. Every sentence earns its place, and the information is front-loaded with the core purpose in the first sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex mathematical tool, the description provides excellent context: clear purpose, usage guidelines, behavioral constraints, an example request, and a rundown of the response fields (equations, canonical_match, warnings). The main gap is that the output schema leaves its fields undescribed, so the description's response paragraph has to carry that weight.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 7 parameters thoroughly. The description provides example usage showing how parameters work together, but doesn't add significant semantic meaning beyond what's in the schema. The baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific purpose: 'Recovers governing differential equations (dx/dt = f(x)) from time series data,' with a specific verb and resource. It explicitly distinguishes from sibling pysr_run: 'For algebraic y = f(x) relationships without time structure, use pysr_run instead.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (time-resolved measurements of state variables) and when not to ('For algebraic y = f(x) relationships without time structure, use pysr_run instead'). It names the specific alternative tool and clearly defines the scope boundary.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
