Server Details

Math.js MCP — wraps the mathjs.org API (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-mathjs
GitHub Stars: 0
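
The listing above omits the endpoint URL, so the sketch below uses a placeholder. It is a minimal, hedged illustration of how a client might talk to this server over Streamable HTTP with plain JSON-RPC: an initialize handshake, the initialized notification, and a call_tool helper that the per-tool examples further down reuse. It also assumes the server answers with plain JSON bodies rather than the SSE streams the transport permits.

```python
# Minimal Streamable HTTP client sketch for this server.
# MCP_URL is a placeholder: the listing does not publish the real endpoint.
import requests

MCP_URL = "https://example.com/mcp"  # hypothetical endpoint

session = requests.Session()
session.headers.update({
    "Content-Type": "application/json",
    # Streamable HTTP clients must accept both response formats.
    "Accept": "application/json, text/event-stream",
})

def rpc(method: str, params: dict | None = None, msg_id: int | None = 1) -> dict:
    """Build a JSON-RPC 2.0 message; pass msg_id=None for notifications."""
    msg: dict = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    if msg_id is not None:
        msg["id"] = msg_id
    return msg

# 1. Handshake: initialize, capture any session id, confirm initialization.
init = session.post(MCP_URL, json=rpc("initialize", {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
}))
if sid := init.headers.get("Mcp-Session-Id"):
    session.headers["Mcp-Session-Id"] = sid
session.post(MCP_URL, json=rpc("notifications/initialized", msg_id=None))

# 2. Enumerate the seven tools documented below.
tools = session.post(MCP_URL, json=rpc("tools/list", msg_id=2)).json()

def call_tool(name: str, arguments: dict) -> dict:
    """Invoke one tool via tools/call; reused by the examples below."""
    payload = rpc("tools/call", {"name": name, "arguments": arguments}, msg_id=3)
    return session.post(MCP_URL, json=payload).json()
```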

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 7 of 7 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: C)
Disambiguation: 3/5

The tools have some distinct purposes, but there is significant overlap and confusion. 'ask_pipeworx' and 'discover_tools' both involve finding or using tools, which could cause misselection. 'remember', 'recall', and 'forget' form a clear memory management group, but 'ask_pipeworx' and 'evaluate' might be confused for general query handling. The descriptions help differentiate, but the set lacks clear boundaries.

Naming Consistency: 2/5

The naming is inconsistent with mixed conventions. 'ask_pipeworx' and 'discover_tools' use verb_noun patterns but include brand-specific terms, while 'convert_units', 'evaluate', 'forget', 'recall', and 'remember' are simple verbs or verb_noun. There is no uniform pattern across all tools, leading to a chaotic feel that reduces predictability.

Tool Count: 4/5

With 7 tools, the count is reasonable for a server that appears to mix mathematical computation with memory and tool discovery. It is slightly over-scoped due to the inclusion of unrelated functions like 'ask_pipeworx' and 'discover_tools', but overall, the number is manageable and not excessive.

Completeness: 2/5

The server has significant gaps in its tool surface. For a math-focused server, it lacks core operations like solving equations, matrix operations, or calculus functions, relying only on 'evaluate' and 'convert_units'. The inclusion of memory tools and Pipeworx-related tools creates a disjointed set that does not fully cover any single domain, leading to potential agent failures in mathematical tasks.

Available Tools

7 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
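
A hedged invocation sketch, reusing the call_tool helper from the Streamable HTTP sketch above; the question string is taken from the description's own examples:

```python
# Pipeworx selects the data source and fills its arguments server-side.
answer = call_tool("ask_pipeworx", {
    "question": "What is the US trade deficit with China?",
})
```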
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool automatically selects data sources and fills arguments, handles natural language questions, and returns results. However, it doesn't mention limitations like response time, error conditions, or data source availability constraints that would be helpful for a tool with this complexity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured: first sentence states the core functionality, second explains the automation mechanism, third provides usage guidance, and final sentence offers three diverse examples. Every sentence earns its place with no wasted words, and the most important information (what the tool does) appears first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema and no annotations, the description provides excellent context about functionality, usage, and examples. However, it doesn't describe what format the answer will be in (text, structured data, etc.) or potential limitations, which would be helpful given the tool's complexity in automatically selecting and executing data sources.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'question' parameter. The description adds meaningful context by emphasizing 'plain English' and 'natural language,' providing three concrete examples that illustrate appropriate question formats and scope. This goes beyond the schema's basic documentation of the parameter type.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). The description distinguishes this from sibling tools by emphasizing natural language input without needing to browse tools or learn schemas.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It provides three concrete examples illustrating appropriate use cases, and the natural language focus implicitly suggests alternatives (use other tools when you want direct control over specific data sources or parameters).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_units (Grade: C)

Convert between units: length, weight, temperature, volume, time, etc. Returns converted value. E.g., "5 m to ft", "100 kg to lbs", "32 degF to degC". Use for unit conversions.

Parameters (JSON Schema)
- to (required): Target unit (e.g., "cm", "lbs", "fahrenheit", "km/h")
- from (required): Source unit (e.g., "inches", "kg", "celsius", "mph")
- value (required): Numeric value to convert (e.g., 5)
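
A sketch of one call under the same assumptions as the transport sketch above; all three arguments are required, and the unit strings follow the mathjs-style names shown in the description (e.g., degF, degC):

```python
# "5 m to ft", expressed as the tool's three required arguments.
converted = call_tool("convert_units", {"value": 5, "from": "m", "to": "ft"})
```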
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a string but doesn't cover error handling, supported unit types, conversion accuracy, performance characteristics, or any constraints like rate limits or authentication needs. This leaves significant gaps for a tool performing mathematical conversions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of just two sentences that directly state the tool's function and output. Every word contributes essential information without any redundancy or fluff, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete for a tool with three parameters and mathematical operations. It doesn't explain the return format beyond 'string', error cases, unit compatibility, or examples, leaving the agent with insufficient context to use the tool effectively in complex scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description doesn't add any parameter-specific information beyond what's already in the input schema, which has 100% coverage with clear descriptions for 'value', 'from', and 'to'. Since the schema fully documents the parameters, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: converting values between units using mathjs syntax and returning the result as a string. It specifies the verb ('convert'), resource ('value'), and mechanism ('mathjs unit syntax'), but doesn't explicitly differentiate from the sibling tool 'evaluate', which appears to be a different mathematical operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention the sibling tool 'evaluate' or any other conversion methods, nor does it specify prerequisites, limitations, or typical use cases beyond the basic functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
- limit (optional): Maximum number of tools to return (default 20, max 50)
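
An illustrative call with both parameters, using a query string from the schema's examples; limit may be omitted to accept the documented default of 20:

```python
# Narrow a 500+ tool catalog before committing to a specific tool.
matches = call_tool("discover_tools", {
    "query": "find trade data between countries",
    "limit": 10,
})
```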
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: it's a search operation (implied read-only, though not explicitly stated), returns a limited set of tools ('most relevant'), and suggests it's for initial discovery. However, it lacks details on error handling, rate limits, or authentication needs, leaving gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and concise, with two sentences that each earn their place: the first defines the purpose and output, the second provides critical usage guidelines. There is no wasted text, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search function with 2 parameters), no annotations, and no output schema, the description is fairly complete. It covers purpose, usage context, and output format, but lacks details on behavioral aspects like errors or performance, which could be important for a discovery tool in a large catalog.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('query' and 'limit') thoroughly. The description adds minimal value beyond the schema, mentioning the query as a 'natural language description' but not elaborating on semantics or usage examples. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resource ('tool catalog'), distinguishing it from siblings like 'convert_units' and 'evaluate' by focusing on discovery rather than conversion or evaluation. It explicitly mentions what it returns ('most relevant tools with names and descriptions'), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This includes a specific condition (500+ tools) and timing (first), offering clear alternatives to not using it in smaller catalogs or as a secondary step.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

evaluate (Grade: A)

Evaluate mathematical expressions: arithmetic, algebra, trigonometry, statistics. Returns computed result. E.g., "2+2", "sin(pi/2)", "sqrt(16)", "mean([1,2,3])". Use when you need to calculate or simplify math.

Parameters (JSON Schema)
- expression (required): Mathematical expression to evaluate (e.g., "2 + 3 * 4", "sqrt(16)", "sin(pi/2)", "det([1,2;3,4])")
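
A few illustrative calls built from the description's and schema's own expressions; the commented results are what mathjs itself computes for these inputs, assuming the server simply forwards them:

```python
call_tool("evaluate", {"expression": "2 + 3 * 4"})      # mathjs computes 14
call_tool("evaluate", {"expression": "sin(pi/2)"})      # 1
call_tool("evaluate", {"expression": "det([1,2;3,4])"}) # -2; ";" separates matrix rows
call_tool("evaluate", {"expression": "mean([1,2,3])"})  # 2
```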
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the return type ('computed result as a string') and hints at functionality ('Supports arithmetic, algebra, trigonometry, statistics, and more'), but lacks details on error handling, precision, computational limits, or authentication needs. It adds some value but not rich behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with two sentences that efficiently convey the tool's purpose, scope, and output. Every sentence earns its place without redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no annotations, no output schema), the description is adequate but has clear gaps. It covers the basic purpose and output format, but lacks details on behavioral aspects like error cases or performance limits, which are important for a computational tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter 'expression' with examples. The description adds marginal value by reiterating the parameter's purpose ('Evaluate a mathematical expression') but does not provide additional syntax, format, or constraint details beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('evaluate') and resource ('mathematical expression'), and distinguishes it from the sibling tool 'convert_units' by focusing on computation rather than unit conversion. It specifies the scope of supported operations (arithmetic, algebra, trigonometry, statistics).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (evaluating mathematical expressions across various domains), but does not explicitly mention when not to use it or name alternatives. The sibling tool 'convert_units' is unrelated, so no explicit comparison is needed, but the description could note limitations (e.g., no symbolic algebra).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: C)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete
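
An illustrative call; the key reuses an example from the remember schema below. As the Behavior note under this tool observes, the description does not say whether deleting a missing key errors or succeeds silently:

```python
# Destructive: removes the stored value (reversibility is undocumented).
call_tool("forget", {"key": "subject_property"})
```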
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states this is a deletion operation, implying it's destructive/mutative, but doesn't disclose critical behavioral traits: whether deletion is permanent or reversible, what permissions are required, error handling (e.g., if key doesn't exist), or side effects. For a destructive tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste—it directly states the tool's action and target. It's appropriately sized for a simple tool with one parameter and is front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a destructive tool with no annotations and no output schema, the description is incomplete. It lacks information on behavioral consequences (e.g., permanence, error cases), expected outputs, or integration with sibling tools. The agent is left with significant gaps about how this tool behaves in practice.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'key' documented as 'Memory key to delete'. The description adds no additional meaning beyond this—it doesn't explain key format, constraints, or examples. With high schema coverage, the baseline is 3, as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and target resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly distinguish from sibling tools like 'recall' (which likely retrieves memories) or 'remember' (which likely stores them), but the verb 'Delete' provides inherent differentiation from those operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing memory key), exclusions, or relationships to sibling tools like 'recall' (retrieve) or 'remember' (store). The agent must infer usage context solely from the tool name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
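
Two illustrative calls showing the tool's dual behavior; the key is borrowed from the remember schema's examples:

```python
call_tool("recall", {"key": "subject_property"})  # fetch one stored memory
call_tool("recall", {})                           # omit key to list all keys
```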
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: retrieving or listing memories based on key presence, and clarifies persistence across sessions ('in previous sessions'). However, it doesn't mention potential limitations like memory size, retrieval speed, or error handling for non-existent keys.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with two sentences that efficiently convey purpose, usage, and parameter semantics. Every sentence earns its place: the first states the core functionality, and the second provides context and guidance without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (retrieve/list operations), no annotations, and no output schema, the description is mostly complete. It covers purpose, usage, and parameter behavior adequately. However, it lacks details on return values (e.g., format of retrieved memories or listed keys) and error cases, which would be helpful for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds value by explaining the semantic effect of omitting the key ('omit to list all keys'), which clarifies the tool's dual functionality beyond the schema's technical specification. It doesn't provide additional format or constraint details, but the enhancement justifies a score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'), and distinguishes it from sibling tools like 'remember' (store) and 'forget' (delete). It explicitly mentions retrieving context saved earlier in the session or previous sessions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter ('omit key to list all keys'), offering clear alternatives within the same tool. This directly addresses when to use this tool versus alternatives like 'remember' for storing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
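
An illustrative call; the key comes from the schema's examples, while the value string is invented for illustration. Per the description, the pair persists indefinitely only for authenticated users; anonymous sessions expire after 24 hours:

```python
call_tool("remember", {
    "key": "subject_property",
    "value": "123 Main St; appraisal pending",  # free-form text, value is illustrative
})
```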
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the persistence model (authenticated vs. anonymous), the 24-hour limit for anonymous sessions, and the cross-tool context capability. It doesn't mention rate limits or error conditions, but covers the essential operational characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence states the core function, the second provides critical usage context and behavioral details. No wasted words, front-loaded with the essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter tool with no annotations and no output schema, the description provides excellent context about persistence models and usage scenarios. It doesn't describe return values or error conditions, but given the tool's relative simplicity and the comprehensive parameter documentation in the schema, it's nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents both parameters thoroughly. The description doesn't add significant meaning beyond what's in the schema properties, though it provides context about what types of data should be stored (findings, addresses, preferences, notes). This meets the baseline expectation when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'forget' (delete) and 'recall' (retrieve). It explicitly identifies the tool's function as persistent storage for session data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls') and distinguishes between authenticated users (persistent memory) and anonymous sessions (24-hour duration). This gives clear context for appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
