numbersapi
Server Details
NumbersAPI MCP — wraps numbersapi.com (free, no auth)
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-numbersapi
- GitHub Stars: 0
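Since the transport is Streamable HTTP, a client interacts with this server by POSTing MCP JSON-RPC messages to the server URL (not recorded above). As a minimal sketch, assuming standard MCP framing, a request to enumerate the server's tools would look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
```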
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 3/5 across all 4 tools.
Multiple tools have unclear boundaries and overlapping purposes: 'math_fact' and 'number_fact' both provide facts about numbers, with 'math_fact' arguably a subset of 'number_fact'; 'date_fact' is distinct, but 'random_fact' can overlap with 'number_fact' in content. This ambiguity makes it difficult for an agent to reliably choose the right tool.
Tool names follow a consistent pattern throughout, pairing a subject noun with 'fact' (e.g., date_fact, math_fact). All names are in snake_case and lead with the fact type, making them predictable and easy to understand.
With 4 tools, the count is reasonable for a simple fact-retrieval API, though it feels slightly thin given the potential for more varied fact types. Each tool has a distinct role in the domain, but the scope could be expanded without becoming overwhelming.
The toolset covers basic fact retrieval for dates, numbers, and random numbers, but there are notable gaps. For example, there is no tool for facts about ranges of numbers, historical events, or other trivia categories, which limits the server's utility for broader agent tasks.
Available Tools
4 tools

date_fact (Quality: C)
Get an interesting fact about a specific calendar date.
| Name | Required | Description | Default |
|---|---|---|---|
| day | Yes | Day number (1–31) | |
| month | Yes | Month number (1–12) | |
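As a sketch of how an agent would invoke this tool, assuming standard MCP `tools/call` framing (the `id` and date values here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "date_fact",
    "arguments": { "day": 29, "month": 2 }
  }
}
```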
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only states what the tool does ('Get an interesting fact') without mentioning any behavioral traits like rate limits, error handling, or what happens with invalid dates (e.g., February 30). This leaves significant gaps in understanding how the tool behaves beyond its basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is front-loaded and appropriately sized for a simple tool, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, no annotations), the description is minimal but adequate for basic understanding. However, it lacks completeness in areas like behavioral context (e.g., error cases, fact sources) and usage guidelines, which could help an agent use it more effectively in varied scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with clear descriptions for both parameters ('day' and 'month'), so the schema does the heavy lifting. The description adds no additional meaning beyond implying that these parameters specify the date for the fact, which aligns with the schema. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('interesting fact about a specific calendar date'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'math_fact' or 'number_fact' beyond the calendar date focus, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'random_fact' or other sibling tools. It lacks context about scenarios where this tool is preferred, such as for date-specific queries rather than general random facts, leaving the agent with minimal usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
math_fact (Quality: B)
Get a mathematical fact about a specific number.
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | The number to get a mathematical fact about (e.g., 1729) | |
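A sketch of a call using the schema's own example number, together with the shape of a typical MCP text result; the exact fact text is served by numbersapi.com and will vary:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "math_fact",
    "arguments": { "number": 1729 }
  }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "content": [
      { "type": "text", "text": "1729 is the smallest number expressible as the sum of two positive cubes in two different ways." }
    ]
  }
}
```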
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'Get[s] a mathematical fact' but does not describe any behavioral traits such as rate limits, error handling, or what constitutes a 'mathematical fact' (e.g., trivia, properties). This leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that is front-loaded and wastes no words. It efficiently conveys the core purpose without unnecessary details, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is somewhat complete but lacks depth. It covers the basic purpose but does not address behavioral aspects or usage context, which are important for an agent to use it effectively. This results in an adequate but minimal description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds minimal meaning beyond the input schema, which has 100% coverage and fully documents the 'number' parameter. The description mentions 'a specific number' and provides an example (1729), but this does not significantly enhance the schema's information. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('mathematical fact about a specific number'), making it easy to understand what the tool does. However, it does not explicitly distinguish this tool from its siblings (date_fact, number_fact, random_fact), which all seem to provide different types of facts, so it misses full differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings (date_fact, number_fact, random_fact). It implies usage for mathematical facts about numbers but does not specify alternatives, exclusions, or context for selection among similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
number_fact (Quality: C)
Get an interesting trivia fact about a specific number.
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | The number to get a fact about (e.g., 42) | |
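The call shape mirrors math_fact; only the tool name, and therefore the category of fact returned, differs. A sketch using the schema's example value:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "number_fact",
    "arguments": { "number": 42 }
  }
}
```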
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but only states what the tool does, not how it behaves. It doesn't mention whether this is a read-only operation, what kind of API it calls, potential rate limits, error conditions, or what format the fact will be returned in. The description adds minimal behavioral context beyond the basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise: a single sentence that clearly communicates the tool's purpose with zero wasted words. It's front-loaded with the essential information and earns its place efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description is insufficiently complete. It doesn't describe what format the fact will be returned in, potential limitations (e.g., number ranges supported), error conditions, or how this differs from sibling tools. The description leaves too many contextual questions unanswered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'number' clearly documented in the schema. The description doesn't add any additional parameter semantics beyond what's already in the schema, so it meets the baseline score of 3 for adequate coverage when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('interesting trivia fact about a specific number'), making it immediately understandable. However, it doesn't explicitly distinguish this tool from its sibling tools (date_fact, math_fact, random_fact), which all seem to provide different types of facts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its sibling tools. While it's clear this tool provides number facts, there's no indication of when to choose number_fact over math_fact or random_fact, nor any mention of prerequisites or constraints for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
random_fact (Quality: B)
Get a trivia fact about a randomly chosen number.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
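Because the tool takes no parameters, a call simply passes an empty arguments object (a sketch, assuming standard MCP framing):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "random_fact",
    "arguments": {}
  }
}
```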
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool returns a trivia fact, but doesn't disclose behavioral traits such as rate limits, error handling, or what 'randomly chosen' entails (e.g., range, distribution). For a tool with zero annotation coverage, this leaves significant gaps in understanding its operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('Get a trivia fact') and specifies the scope ('about a randomly chosen number'). There is zero waste, and it's appropriately sized for a simple tool with no parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is complete enough to convey the basic purpose. However, it lacks details on output format (e.g., structure of the fact) and behavioral context, which could be important for an agent to use it effectively, though not critical for such a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so no parameter documentation is needed and schema description coverage is trivially 100%. The description adds no parameter information, but with no inputs to document, a baseline of 4 is appropriate: the phrase 'randomly chosen' makes clear that no user input is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('trivia fact about a randomly chosen number'). It distinguishes itself from siblings like 'date_fact', 'math_fact', and 'number_fact' by specifying 'randomly chosen number' rather than requiring input, though it doesn't explicitly contrast with 'number_fact' which might also handle numbers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description implies usage for random number trivia but doesn't mention when to choose it over siblings like 'number_fact' (which might require a specific number) or other fact tools. There's no explicit context or exclusions stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!