
Server Details

OpenAQ MCP — wraps OpenAQ v2 API (free, no auth required)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-openaq
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 5 of 5 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes: ask_pipeworx for direct queries, discover_tools for finding other tools, and the memory utilities for storage. However, ask_pipeworx and discover_tools both involve data retrieval, so a careless agent could choose the wrong one.

Naming Consistency: 3/5

A mix of verb_noun names (ask_pipeworx, discover_tools) and bare verbs (forget, recall, remember). The pattern is inconsistent, though all names are readable and action-oriented.

Tool Count: 4/5

Five tools is a reasonable count for a server that appears to combine a data querying interface with memory capabilities. It is slightly on the smaller side but still appropriate given the focused scope.

Completeness: 3/5

Core functionality for querying and memory is present, but there are gaps: no tool to update a stored memory, no direct listing of all available tools (discover_tools only returns relevant ones), and no way to interact with other data sources beyond the natural language interface.

Available Tools

5 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)

question (required): Your question or request in natural language
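As a rough sketch of how an MCP client would invoke this tool: the JSON-RPC 2.0 envelope below follows the MCP specification's `tools/call` method, not anything stated on this page, and the helper name and request id are illustrative. The question text is taken from the tool's own examples.

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 tools/call request as an MCP client would send it."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# The question string is one of the examples from the tool's description.
request = build_tool_call(1, "ask_pipeworx", {
    "question": "What is the US trade deficit with China?",
})
print(request)
```

Because the tool takes a single free-text parameter, the agent's only real decision is how to phrase the question.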
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description reveals that Pipeworx picks the right tool and fills arguments automatically, which is key behavioral info. However, it does not mention any limitations, data recency, rate limits, or whether the tool can access all data sources. Since no annotations are provided, the description carries full burden but stops short of full disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences long with examples, front-loading the purpose. It is efficient and scannable, though the examples could be slightly trimmed without loss.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is a single-parameter natural language interface without output schema, the description adequately sets expectations for the agent. It explains the orchestration behavior and gives usage examples. It does not cover error scenarios or limitations, but the simplicity of the tool justifies this level of completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds no extra detail about the question parameter beyond what the schema provides ('Your question or request in natural language'), which is sufficient but not additive.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool takes a plain English question and returns an answer from the best available data source. It explicitly contrasts with browsing tools or learning schemas, and the verb 'ask' plus examples like 'What is the US trade deficit with China?' make the purpose unmistakable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description instructs users to 'just describe what you need' and provides three concrete examples of appropriate questions. However, it does not specify when not to use this tool or mention alternatives, even though sibling tools (discover_tools and the memory utilities) exist on the same server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)

query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
limit (optional): Maximum number of tools to return (default 20, max 50)
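The `limit` bound above (default 20, max 50) can be enforced client-side before the request is sent. A minimal sketch, again assuming the MCP `tools/call` envelope; the helper name is illustrative:

```python
import json

def build_discover_call(query: str, limit: int = 20) -> str:
    """Build a tools/call payload for discover_tools, clamping limit to the schema's stated max of 50."""
    limit = max(1, min(limit, 50))  # schema: default 20, max 50
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "discover_tools",
            "arguments": {"query": query, "limit": limit},
        },
    }
    return json.dumps(payload)

# A limit above the schema maximum is clamped client-side rather than rejected later.
request = build_discover_call("find trade data between countries", limit=75)
print(request)
```

Clamping locally keeps an out-of-range value from producing a server-side validation error mid-task.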
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It states that the tool returns 'the most relevant tools with names and descriptions.' This is helpful but does not elaborate on how relevance is determined or if there are any side effects. However, it is clear that it is a read-only search operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, containing three sentences that each serve a purpose: stating what the tool does, what it returns, and when to use it. There is no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is complete. It covers purpose, usage guidance, and return value. No additional information is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and the input schema already provides good descriptions for both parameters (query and limit). The description reinforces the query parameter's purpose ('Natural language description...') and adds context for the tool's usage, but does not add new semantics beyond the schema. A score of 4 is appropriate because the schema is already very informative.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It specifies the action (search), resource (tool catalog), and method (natural language description). It also distinguishes itself by instructing to call it first when many tools are available.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This sets clear context and priority relative to other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: A

Delete a stored memory by key.

Parameters (JSON Schema)

key (required): Memory key to delete
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries the full burden. It clearly indicates a destructive operation ('Delete') and specifies the key as identifier. However, it does not disclose whether deletion is permanent, reversible, or affects related data, which would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no filler. Every word contributes meaning. Front-loaded with action and target.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple delete-by-key tool with no output schema, the description is sufficiently complete. It defines the operation, target, and identifier. Could optionally add confirmation or idempotency info, but not necessary for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with one required parameter ('key'), described as 'Memory key to delete'. The description adds no further meaning beyond the schema, which is adequate given full coverage. Baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Delete') and resource ('stored memory by key'), clearly distinguishing it from sibling tools like 'recall' (retrieve) and 'remember' (store).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description is concise but does not explicitly state when to use this tool rather than alternatives like 'recall' or 'remember'. The sibling names provide implicit context, but no direct guidance is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)

key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It correctly identifies read-only behavior (retrieve/list) without side effects. However, it does not disclose what happens if key is missing or if list is empty, nor does it describe the return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, each earning its place: first states core action, second provides usage context. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has simple interface (1 optional param, no output schema). Description is sufficient for an agent to use it correctly. Could mention that memories persist across sessions (already implied in 'saved earlier in the session or in previous sessions').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the key parameter is described in schema). Description adds context: key can be omitted to list all. This goes beyond schema's 'omit to list all keys' hint, reinforcing the optional nature.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses clear verb 'retrieve' with resource 'memory by key' and 'list all' for the omit-key case. Distinguishes retrieval from storage and deletion (sibling tools 'remember', 'forget').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States when to use it ('retrieve context you saved earlier') and describes both modes (by key, or list all). However, it does not say when other tools such as 'ask_pipeworx' or 'discover_tools' should be preferred.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)

key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
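Taken together, remember, recall, and forget form a store/retrieve/delete lifecycle. A minimal sketch of the three calls an agent might make across a session, assuming the MCP `tools/call` envelope; the helper function, request ids, and the key/value pair are illustrative:

```python
import json

def tool_call(request_id: int, name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 tools/call envelope (MCP wire format)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# 1. Store a finding under a key.
store = tool_call(1, "remember", {"key": "target_ticker", "value": "AAPL"})
# 2. Retrieve it later; per the schema, omit "key" from recall's arguments to list all keys.
fetch = tool_call(2, "recall", {"key": "target_ticker"})
# 3. Delete it once it is no longer needed.
drop = tool_call(3, "forget", {"key": "target_ticker"})

for message in (store, fetch, drop):
    print(json.dumps(message))
```

Because all three tools key off the same identifier, agents benefit from the stable, descriptive key names the schema examples suggest.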
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description fully carries the behavioral disclosure burden. It clearly states the tool is a write operation, explains persistence behavior (authenticated users get persistent memory; anonymous sessions last 24 hours), and implies it is non-destructive (store vs delete).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each with a distinct purpose: what it does, when to use, and behavioral caveat. No wasted words; information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 required string params, no output schema), the description covers purpose, usage, and persistence. Could mention memory size limits or key overwriting behavior, but not strictly necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters. The description adds usage examples (e.g., 'subject_property') and clarifies that value is any text, but does not provide additional semantic nuance beyond what the schema's description fields already give.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'store' and the resource 'key-value pair in your session memory', distinguishing it from siblings like 'recall' (retrieval) and 'forget' (deletion). It gives concrete examples of usage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'save intermediate findings, user preferences, or context across tool calls'. Mentions persistence differences (authenticated vs anonymous), but does not explicitly contrast with alternatives like 'forget' or 'recall'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
