Server Details

IPInfo MCP — wraps ipinfo.io (free tier, no auth required for basic usage)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-ipinfo
GitHub Stars: 0
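
The server is reachable over Streamable HTTP, so any standard MCP client can talk to it directly. Below is a minimal connection sketch using the official MCP Python SDK; the endpoint URL is a placeholder (the listing above omits it), and the exact client API may vary between SDK versions:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint; substitute the server's real URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # Open the Streamable HTTP transport, then run the MCP handshake.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # the 7 tools listed below

asyncio.run(main())
```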

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 7 of 7 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: B)

Disambiguation: 3/5

The tools have some distinct purposes, but there is notable overlap and ambiguity. For example, 'ask_pipeworx' and 'discover_tools' both involve finding or accessing tools, which could confuse an agent about when to use each. However, other tools like 'get_my_ip' and 'lookup_ip' are clearly differentiated, and memory tools ('remember', 'recall', 'forget') form a coherent subset.

Naming Consistency: 2/5

The naming is inconsistent with mixed conventions and styles. Tools use snake_case (e.g., 'lookup_ip'), but there are deviations like 'ask_pipeworx' (which includes a brand name) and simple verbs like 'forget'. The verbs vary widely (e.g., 'ask', 'discover', 'get', 'lookup', 'recall', 'remember'), lacking a predictable pattern, which reduces clarity.

Tool Count: 4/5

With 7 tools, the count is reasonable and well-scoped for a server that combines IP geolocation with memory management and tool discovery. It's not excessive, and each tool appears to serve a purpose, though the mix of domains might feel slightly broad. This is borderline but acceptable for the apparent scope.

Completeness: 3/5

For the IP geolocation domain, the surface is complete, with 'get_my_ip' and 'lookup_ip' covering core needs. However, the server includes unrelated tools for memory management and tool discovery, creating a disjointed set. Each subset is functional, but the overall server lacks a cohesive domain, which makes gaps hard to assess; within the provided tools, no obvious dead ends exist.

Available Tools

7 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)

question (required): Your question or request in natural language
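
A hedged invocation sketch: the helper name `ask` is illustrative, `session` is an initialized `ClientSession` as in the connection sketch above, and the question is one of the description's own examples.

```python
from mcp import ClientSession

async def ask(session: ClientSession) -> None:
    # 'question' is the tool's only parameter and is required.
    result = await session.call_tool(
        "ask_pipeworx",
        {"question": "What is the US trade deficit with China?"},
    )
    print(result.content)
```
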
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, and it does this well by explaining that Pipeworx 'picks the right tool, fills the arguments, and returns the result', revealing the tool's intelligent routing behavior. It doesn't mention rate limits, authentication needs, or error conditions, but for a query tool with no annotations, this provides substantial behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and front-loaded: the first sentence establishes the core functionality, the second explains the automation benefit, and the third provides concrete examples. Every sentence earns its place with zero redundant information, making it highly efficient while remaining comprehensive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter query tool with no output schema, the description provides excellent context about what the tool does and when to use it. The examples help illustrate the range of possible queries. The main gap is the lack of information about return formats or error handling, but given the tool's simplicity and the absence of an output schema, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage with the parameter well-documented as 'Your question or request in natural language'. The description adds minimal value beyond this by mentioning 'plain English' and providing examples, but doesn't elaborate on parameter constraints or formats beyond what the schema already states. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Ask a question', 'get an answer') and resources ('best available data source'), distinguishing it from siblings like lookup_ip or discover_tools by emphasizing natural language processing rather than specific technical operations. It explicitly mentions that Pipeworx handles tool selection and argument filling, which is unique functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: 'No need to browse tools or learn schemas — just describe what you need' clearly positions this as the tool for natural language queries when you don't want to manually select tools. The examples further illustrate appropriate use cases like factual questions, data lookups, and document retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)

limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
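
An illustrative call, again assuming an initialized `session`; the query string is taken from the schema's own examples, and the helper name is made up:

```python
from mcp import ClientSession

async def find_tools(session: ClientSession) -> None:
    # 'query' is required; 'limit' is optional (default 20, max 50).
    result = await session.call_tool(
        "discover_tools",
        {"query": "find trade data between countries", "limit": 5},
    )
    print(result.content)
```
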
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: it's a search operation that returns relevant tools, and it should be called first in large tool environments. However, it doesn't mention potential limitations like rate limits, authentication requirements, or error conditions that might be important for a discovery tool.

Conciseness: 5/5

The description is perfectly concise and front-loaded. The first sentence states the core functionality, the second explains the return value, and the third provides crucial usage guidance. Every sentence earns its place with no wasted words.

Completeness: 4/5

Given the tool's moderate complexity (search operation with 2 parameters) and no output schema, the description is reasonably complete. It explains what the tool does, when to use it, and what it returns. The main gap is the lack of output format details (what the returned tool information looks like), which would be helpful since there's no output schema.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't elaborate on query formulation strategies or limit implications). Baseline 3 is appropriate when the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('search', 'returns') and resources ('Pipeworx tool catalog', 'most relevant tools with names and descriptions'). It distinguishes itself from siblings like 'get_my_ip' and 'lookup_ip' by focusing on tool discovery rather than IP-related operations.

Usage Guidelines: 5/5

The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This clearly indicates when to use this tool versus alternatives, establishing it as an entry point for tool discovery in large catalogs.

forget (Grade: C)

Delete a stored memory by key.

Parameters (JSON Schema)

key (required): Memory key to delete
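
A deletion sketch; the helper name is illustrative and the key is borrowed from the 'remember' schema examples. The description does not say what happens for an unknown key, so treat error handling as unspecified:

```python
from mcp import ClientSession

async def delete_memory(session: ClientSession) -> None:
    # Destructive: removes the stored memory under this key.
    result = await session.call_tool("forget", {"key": "subject_property"})
    print(result.content)
```
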
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. 'Delete' implies a destructive mutation, but there's no information about required permissions, whether deletion is permanent or reversible, what happens if the key doesn't exist, or any rate limits. The description states what happens but not how it behaves.

Conciseness: 5/5

The description is perfectly concise: a single sentence with zero wasted words that immediately communicates the core functionality. Every word earns its place, and the structure is front-loaded with the essential information.

Completeness: 2/5

For a destructive mutation tool with no annotations and no output schema, the description is insufficiently complete. It doesn't address critical context like what 'stored memory' means in this system, what confirmation or response to expect, error conditions, or integration with sibling tools in the memory management workflow.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents the single 'key' parameter adequately. The description adds no additional semantic context about key format, constraints, or examples beyond what the schema provides, meeting the baseline for high schema coverage.

Purpose: 4/5

The description clearly states the action ('Delete') and target resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the verb 'Delete' strongly implies this is a removal operation rather than retrieval or storage.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'recall' (likely for retrieval) and 'remember' (likely for storage), there's no indication of when deletion is appropriate, what prerequisites exist, or what happens after deletion.

get_my_ip (Grade: B)

Get your current IP address with geolocation data. Returns city, region, country, coordinates, org, postal code, timezone.

Parameters (JSON Schema)

No parameters.
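
With no parameters, a call is trivial; a sketch assuming an initialized `session` and a made-up helper name:

```python
from mcp import ClientSession

async def whoami(session: ClientSession) -> None:
    # No arguments; returns the caller's IP plus the geolocation fields above.
    result = await session.call_tool("get_my_ip", {})
    print(result.content)
```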

Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what information is retrieved (geolocation and network info) but doesn't mention behavioral traits like rate limits, authentication needs, response format, or potential errors. For a tool with zero annotation coverage, this is a significant gap in transparency.

Conciseness: 5/5

The description is two short, well-structured sentences that efficiently convey the tool's purpose without any wasted words. It's front-loaded with the key action ('Get') and resource, making it easy to parse. Every part earns its place.

Completeness: 2/5

Given the tool has no annotations, no output schema, and 0 parameters, the description is minimal. It states what the tool does but lacks context on behavioral aspects (e.g., response format, error handling) and doesn't differentiate from siblings. For a tool that retrieves potentially sensitive geolocation data, more completeness is needed to guide an agent effectively.

Parameters: 4/5

The tool has 0 parameters, and schema description coverage is 100% (since there are no parameters to describe). The description doesn't need to add parameter semantics, so it naturally compensates by focusing on the tool's purpose. Baseline for 0 parameters is 4, as it appropriately avoids unnecessary parameter details.

Purpose: 4/5

The description clearly states the tool's purpose: 'Get your current IP address with geolocation data.' It specifies the verb ('Get'), the resource (the caller's own IP address), and the returned fields. However, it doesn't explicitly differentiate from the sibling tool 'lookup_ip' (which looks up arbitrary IPs), so it doesn't reach the highest score.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention the sibling tool 'lookup_ip' or clarify that this tool is specifically for the current request's IP, while 'lookup_ip' might be for arbitrary IPs. There's no explicit when/when-not usage advice, so it scores low.

lookup_ip (Grade: A)

Get geolocation and network info for any IP address (e.g., "8.8.8.8"). Returns city, region, country, coordinates, org, postal code, timezone.

Parameters (JSON Schema)

ip (required): IPv4 or IPv6 address to look up (e.g., "8.8.8.8")
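
An illustrative lookup reusing the description's own example address (the helper name is made up):

```python
from mcp import ClientSession

async def locate(session: ClientSession) -> None:
    # 'ip' accepts an IPv4 or IPv6 address.
    result = await session.call_tool("lookup_ip", {"ip": "8.8.8.8"})
    print(result.content)
```
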
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses the return data structure (city, region, etc.) and that it's a lookup operation, but lacks details on error handling, rate limits, authentication needs, or data freshness. It adequately describes what the tool does but misses some behavioral traits.

Conciseness: 5/5

The description is two sentences, front-loaded with the core purpose and followed by specific return details. Every sentence adds value with no wasted words, making it highly efficient and well-structured.

Completeness: 4/5

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is mostly complete. It clearly states the purpose, usage, and return values. Without an output schema it could benefit from more detail on the return format or error cases, but it's sufficient for basic understanding.

Parameters: 3/5

The schema description coverage is 100%, with the parameter 'ip' fully documented in the schema. The description adds no additional parameter semantics beyond what the schema provides, such as format examples or constraints, so it meets the baseline of 3.

Purpose: 5/5

The description clearly states the tool's purpose with a specific verb and resource ('Get geolocation and network info for any IP address'). It distinguishes itself from the sibling tool 'get_my_ip' by specifying that it looks up a provided IP rather than retrieving the user's own IP.

Usage Guidelines: 4/5

The description implies usage context by stating it works for 'any IP address,' which differentiates it from the sibling tool 'get_my_ip' that likely retrieves the user's own IP. However, it doesn't explicitly state when to use this tool versus alternatives or provide exclusions.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)

key (optional): Memory key to retrieve (omit to list all keys)
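
A sketch of both modes, fetching one memory by key and listing all keys when 'key' is omitted; the helper name and key are illustrative:

```python
from mcp import ClientSession

async def recall_memories(session: ClientSession) -> None:
    # With 'key': fetch one memory. Without it: list all stored keys.
    one = await session.call_tool("recall", {"key": "subject_property"})
    all_keys = await session.call_tool("recall", {})
    print(one.content, all_keys.content)
```
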
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses that memories can be retrieved from current or previous sessions, which is useful behavioral context. However, it doesn't mention potential limitations like memory size, retrieval speed, or error conditions, leaving some behavioral aspects unclear.

Conciseness: 5/5

The description is perfectly concise, with two sentences that each serve distinct purposes: the first explains the core functionality, and the second provides usage context. There's no wasted language, and information is front-loaded effectively.

Completeness: 4/5

For a tool with one parameter, 100% schema coverage, and no output schema, the description provides good contextual completeness. It explains what the tool does, when to use it, and how parameters affect behavior. The main gap is the lack of output format details, which would be helpful to have given there is no output schema.

Parameters: 4/5

The description adds meaningful context beyond the schema by explaining that omitting the key parameter triggers listing all memories. With 100% schema description coverage and only one parameter, this additional semantic guidance elevates the score above the baseline of 3 for high coverage.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory by key', 'all stored memories'). It distinguishes itself from siblings by mentioning 'context you saved earlier', which relates to the 'remember' tool.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool: 'to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter to list all memories, offering clear usage instructions.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)

key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
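
A storage sketch using a key from the schema's examples; the value is free-form text and the helper name is made up:

```python
from mcp import ClientSession

async def save_note(session: ClientSession) -> None:
    # Both 'key' and 'value' are required strings.
    result = await session.call_tool(
        "remember",
        {"key": "target_ticker", "value": "AAPL"},
    )
    print(result.content)
```
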
Behavior: 4/5

Since no annotations are provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the storage mechanism (session memory), persistence differences between authenticated and anonymous users, and the 24-hour limit for anonymous sessions. This covers important operational context beyond basic functionality.

Conciseness: 5/5

The description is perfectly concise and well-structured in three sentences: the first states the core purpose, the second gives usage context, and the third adds important behavioral detail about persistence. Every word earns its place with no redundancy or unnecessary information.

Completeness: 4/5

For a tool with 2 parameters, 100% schema coverage, and no output schema, the description provides good contextual completeness. It explains the tool's purpose, usage context, and important behavioral details about persistence. The main gap is the lack of information about return values or error conditions, which would be helpful since there's no output schema.

Parameters: 3/5

The schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any meaningful parameter semantics beyond what's in the schema (e.g., it doesn't explain key constraints or value formatting). This meets the baseline expectation when schema coverage is high.

Purpose: 5/5

The description clearly states the tool's purpose with a specific verb ('store a key-value pair') and resource ('in your session memory'). It distinguishes itself from sibling tools like 'forget' (which likely removes) and 'recall' (which likely retrieves) by focusing on storage, and it goes beyond the name 'remember' by specifying the storage mechanism.

Usage Guidelines: 4/5

The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), which helps the agent understand its role in workflows. However, it doesn't explicitly state when NOT to use it or name alternatives (e.g., when to use 'recall' instead), which prevents a perfect score.
