Occam
Server Details
Finds the simplest equation consistent with your data. SINDy and PySR symbolic regression via MCP.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
3 tools

feature.request (A)
Request a feature that Occam doesn't support yet.
Use this when you need a capability that Occam doesn't currently offer. Requests are logged and used to prioritize development.
| Name | Required | Description | Default |
|---|---|---|---|
| description | Yes | A short description of the feature you need. Examples: 'LaTeX output for equations', 'support for ODE constraints', 'GPU-accelerated search', 'larger dataset limits'. Helps prioritize development. | |
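As a concrete illustration, the single required argument can be supplied in a standard MCP `tools/call` JSON-RPC request. The sketch below builds one in Python; the `id` value and the example description text are arbitrary placeholders, not values mandated by the server.

```python
import json

# Hypothetical MCP "tools/call" request for feature.request.
# Only "description" is required by the tool's schema.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "feature.request",
        "arguments": {"description": "LaTeX output for equations"},
    },
}

# Serialize to the wire format sent over the Streamable HTTP transport.
wire = json.dumps(payload)
```

In practice an MCP client library would construct and send this envelope for you; the point here is simply the shape of the `params` object.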
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint: false (write operation) and destructiveHint: false. The description adds valuable behavioral context by noting that 'Requests are logged and used to prioritize development,' explaining the side effect (logging) and downstream impact (development prioritization) beyond the structured annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence states purpose immediately; second provides usage context. No redundant information or verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter logging tool with no output schema requirement, the description adequately covers purpose, usage trigger, and behavioral outcome (logging/prioritization). No gaps given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the 'description' parameter fully documented in the schema, including examples. The tool description provides no additional parameter-specific semantics, which is appropriate when the schema carries the full documentation burden (baseline score of 3).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Request[s] a feature that Occam doesn't support yet' with a specific verb and resource. It clearly distinguishes from siblings pysr.run and sindy.run (which execute algorithms) by positioning this as a meta/feedback mechanism.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('Use this when you need a capability that Occam doesn't currently offer'). Lacks explicit when-NOT-to-use or named alternatives, though the distinction from algorithm-execution siblings is clear from context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pysr.run (A, Read-only)
Evolutionary Symbolic Regression (PySR).
Discovers algebraic equations from feature/target data.
Returns a Pareto front of expressions ranked by the tradeoff
between complexity and accuracy. Slower than SINDy (10-60s).
Best for finding closed-form relationships without time structure.
Data policy: https://occam.fit/privacy — Citation info: https://occam.fit/cite

| Name | Required | Description | Default |
|---|---|---|---|
| X | Yes | 2D array of input features. Each row is an observation, each column is a feature. Max 1,000 rows, 10 features. | |
| y | Yes | Target values, one per row of X. | |
| populations | No | Number of evolutionary populations for the search. Default 15, max 20. | |
| feature_names | No | Names for each variable/feature column. Defaults to x0, x1, ... | |
| max_complexity | No | Maximum expression tree size. Higher allows more complex expressions. Default 20, max 25. | |
| timeout_seconds | No | Wall clock time limit in seconds. Default 60, max 60. | |
| unary_operators | No | Allowed unary operators. Options: sin, cos, tan, exp, log, log2, log10, sqrt, abs, sinh, cosh, tanh. Default: sin, cos, exp, log, sqrt. | |
| binary_operators | No | Allowed binary operators. Options: +, -, *, /, ^. Default: +, -, *, /. | |
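To make the documented limits concrete, here is a minimal Python sketch that assembles a pysr.run argument payload from synthetic data and checks it against the constraints in the table above (max 1,000 rows, 10 features, 60-second timeout). The linear relationship y = 3*x0 + x1 is purely illustrative.

```python
# Synthetic dataset: 50 observations, 2 features per row.
X = [[k * 0.1, (k * 0.1) ** 2] for k in range(50)]
y = [3.0 * a + b for a, b in X]  # illustrative target: y = 3*x0 + x1

# Sanity-check against the documented pysr.run limits before calling.
assert len(X) <= 1000 and len(X[0]) <= 10
assert len(y) == len(X)

args = {
    "X": X,
    "y": y,
    "feature_names": ["x0", "x1"],
    "max_complexity": 10,       # <= documented max of 25
    "timeout_seconds": 30,      # <= documented max of 60
    "binary_operators": ["+", "-", "*", "/"],
}
```

Validating shapes locally avoids burning a 10-60s search on a request the server would reject.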
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only safety (readOnlyHint=true), and the description adds valuable behavioral context: explains return format ('Pareto front of expressions ranked by...complexity and accuracy'), time complexity ('10-60s'), and data policy constraints. Does not contradict annotations. Could improve by mentioning timeout behavior or failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured and front-loaded. Opens with technical identifier, immediately follows with action and return value, then comparative performance, then use-case constraints. Every sentence adds distinct value (purpose, output, performance, suitability, compliance). No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the return value ('Pareto front of expressions'). Covers algorithm type, performance characteristics (10-60s), computational scope (closed-form, no time structure), and external links for compliance. Appropriately complete for a complex ML tool, though could mention behavior on timeout or convergence failure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear documentation for all 8 parameters (X, y, populations, feature_names, max_complexity, timeout_seconds, operators). The description provides no additional parameter guidance, but the baseline of 3 is appropriate when the schema carries the full semantic load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states it 'Discovers algebraic equations from feature/target data' using 'Evolutionary Symbolic Regression (PySR)'. Clearly distinguishes from sibling tool sindy.run by noting it is 'Slower than SINDy' and specifically 'Best for finding closed-form relationships without time structure', implying SINDy handles time series.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit comparative guidance: notes performance tradeoff ('Slower than SINDy (10-60s)') and domain suitability ('Best for...without time structure'). This clearly signals when to prefer this over sindy.run and sets performance expectations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sindy.run (A, Read-only, Idempotent)
Sparse Identification of Nonlinear Dynamics (SINDy).
Discovers governing differential equations from time series data.
Returns human-readable sparse expressions. Fast (seconds).
Best for systems where you have time-resolved measurements of
multiple state variables and want to recover the dynamics.
Data policy: https://occam.fit/privacy — Citation info: https://occam.fit/cite

| Name | Required | Description | Default |
|---|---|---|---|
| t | Yes | Timestamps corresponding to each row of data. Length must match row count. | |
| data | Yes | 2D array of time series data. Each row is a timestep, each column is a state variable. Max 10,000 rows, 20 variables. | |
| max_iter | No | Maximum STLSQ optimizer iterations. Default 20. | |
| threshold | No | STLSQ sparsity threshold. Higher values produce sparser equations. Default 0.1. | |
| poly_degree | No | Polynomial library degree for SINDy candidate functions. Default 2. | |
| feature_names | No | Names for each variable/feature column. Defaults to x0, x1, ... | |
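The key constraint above is that t and data must have the same number of rows. A minimal Python sketch, using a synthetic exponential-decay time series (dx/dt = -0.5x, chosen only for illustration), shows an argument payload that satisfies the documented limits:

```python
import math

# Synthetic time series: one state variable sampled at 100 timesteps,
# following dx/dt = -0.5 * x, so x(t) = exp(-0.5 * t).
t = [0.05 * k for k in range(100)]
data = [[math.exp(-0.5 * tk)] for tk in t]

# Documented constraints: len(t) == row count; <= 10,000 rows, 20 variables.
assert len(t) == len(data)
assert len(data) <= 10000 and len(data[0]) <= 20

args = {
    "t": t,
    "data": data,
    "threshold": 0.1,    # documented default; higher => sparser equations
    "poly_degree": 2,
    "feature_names": ["x"],
}
```

On clean data like this, SINDy should recover a sparse expression close to the true dynamics; noisy measurements typically call for tuning the threshold.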
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, idempotent, non-destructive behavior. The description adds valuable context about output format ('human-readable sparse expressions') and performance ('Fast (seconds)'), plus policy/citation links. However, it lacks details on error handling, computational limits, or what happens with ill-conditioned data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with no wasted words: it opens with the method name, states core functionality, describes outputs/performance, provides usage context, and ends with policy links. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description partially compensates by stating the tool returns 'human-readable sparse expressions.' However, it lacks specifics on the return structure (e.g., JSON format, field names) or error scenarios that would help agents handle responses correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description mentions 'time series data' generally but does not add syntax details, validation rules, or semantic relationships between parameters (e.g., how 'threshold' affects sparsity) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Discovers governing differential equations from time series data' using the SINDy method, providing specific verbs and resources. However, it does not explicitly differentiate from the sibling tool 'pysr.run' (which also performs equation discovery), preventing a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear positive guidance with 'Best for systems where you have time-resolved measurements of multiple state variables and want to recover the dynamics.' This gives agents clear context on when to select this tool, though it lacks explicit exclusions or comparisons to alternatives like 'pysr.run'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
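Before publishing, a quick local sanity check of the file's structure can catch typos. The sketch below only verifies the two fields shown above; the email is a placeholder, and Glama's own verification may apply additional rules.

```python
import json

# Parse the glama.json document as it would be served from
# /.well-known/glama.json and check the expected fields exist.
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

assert doc["$schema"].startswith("https://glama.ai/")
assert "@" in doc["maintainers"][0]["email"]
```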
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!