Server Details

passive-aggression MCP — wraps StupidAPIs (requires X-API-Key)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-passive-aggression
GitHub Stars: 0
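
For context, a minimal connection sketch is shown below. It assumes the official MCP Python SDK (the `mcp` package, which ships a Streamable HTTP client); the server URL is a hypothetical placeholder, since the endpoint is not shown on this page, and the X-API-Key header is inferred from the server title above.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Hypothetical placeholder; the real endpoint is not listed on this page.
SERVER_URL = "https://example.invalid/mcp"

async def main() -> None:
    # The server title says it wraps StupidAPIs and requires an X-API-Key.
    headers = {"X-API-Key": "YOUR_KEY"}
    async with streamablehttp_client(SERVER_URL, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # List the tools the server advertises.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```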

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Descriptions: A

Average 4.1/5 across the 5 of 6 tools scored.

Server Coherence: A
Disambiguation: 3/5

Most tools have clear, distinct purposes, but ask_pipeworx overlaps significantly with discover_tools and the other tools, since it claims to pick the right tool automatically; this can leave an agent unsure whether to call ask_pipeworx or a more specific tool.

Naming Consistency: 2/5

Naming conventions are inconsistent: all tools use snake_case (ask_pipeworx, discover_tools, passive_aggression_detect, forget, recall, remember), but passive_aggression_detect breaks the verb-first pattern (noun_verb instead of verb_noun). It is also overly specific compared with the more generic names of the other tools.

Tool Count: 5/5

Six tools is a well-scoped set for a server that combines a general query tool, tool discovery, memory management, and a specific detection feature. Each tool earns its place without being overwhelming.

Completeness: 4/5

The tool set covers query, discovery, memory create/read/delete, and a specialized detection function. An update-memory tool is missing, but the memory operations are otherwise complete for the session context. Passive-aggression detection is a single function, but it is likely the server's specialty.

Available Tools

5 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters:
  question (required): Your question or request in natural language
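
As a concrete reference, one possible raw MCP tools/call request body for this tool is sketched below; the request id is arbitrary and the question is copied from the description's own examples.

```python
# Sketch of a JSON-RPC "tools/call" body for ask_pipeworx; the question is
# one of the examples quoted in the tool description above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
```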
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool automatically picks the right tool and fills arguments, which is helpful. However, it does not mention any limitations, error handling, or what happens when no data source is suitable. Since no annotations are provided, the description could do more to clarify behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at three sentences, with the purpose stated upfront. The examples it includes are valuable, though they make it slightly longer than strictly necessary. Every sentence adds value, so it earns a high score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description is mostly complete. It explains the core functionality and provides examples. However, it could be more complete by mentioning potential limitations or how the tool selects data sources.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the 'question' parameter as 'Your question or request in natural language', and the description adds context with examples but does not provide additional semantic detail beyond what the schema offers. With 100% schema coverage, the baseline is 3, and the description does not exceed that.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: answering natural language questions by automatically selecting the right data source. It provides concrete examples to illustrate its functionality, making it easy to understand what the tool does and distinguishing it from other tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use this tool: when you want to ask a question in plain English and get an answer without browsing tools or learning schemas. It does not explicitly state when not to use it or mention alternatives, but the examples give clear guidance on appropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters:
  limit (optional): Maximum number of tools to return (default 20, max 50)
  query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
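
A similar sketch for discover_tools follows; the query string is taken from the parameter's own examples, and the limit of 10 is an arbitrary illustrative value within the documented max of 50.

```python
# Sketch of a JSON-RPC "tools/call" body for discover_tools; limit stays
# within the documented bounds (default 20, max 50).
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        "arguments": {
            "query": "find trade data between countries",
            "limit": 10,
        },
    },
}
```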
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description must disclose behavior. It states the tool is a search and returns tool names and descriptions, but does not mention if it modifies state, requires authentication, or has rate limits. However, given the nature of a search tool, these omissions are minor, and the description adequately conveys its read-only behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, all essential: what the tool does, what it returns, and when to use it. No wasted words. The key instruction is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (2 parameters, no nested objects, no output schema), the description fully covers the tool's purpose, usage guidance, and parameter semantics. There is no missing information needed for an agent to invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value by explaining the 'query' parameter expects natural language descriptions (with examples), which goes beyond the schema's simple description. It also clarifies the 'limit' default and max, which are not in the schema. This extra context earns a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches a tool catalog by natural language description, returns relevant tools, and should be called first when many tools are available. The verb 'search' and resource 'tool catalog' are specific, and it distinguishes itself from siblings by being the discovery tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task,' providing clear when-to-use guidance. It does not need to mention when not to use because it is the primary discovery tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: A

Delete a stored memory by key.

Parameters:
  key (required): Memory key to delete
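
A call sketch for forget is below; the key "target_ticker" is hypothetical, borrowed from the example keys listed under remember further down.

```python
# Sketch of a JSON-RPC "tools/call" body for forget; the key is a
# hypothetical example, not a key this server is known to hold.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "forget",
        "arguments": {"key": "target_ticker"},
    },
}
```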
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries full burden. It indicates a destructive action (delete), but does not disclose side effects, such as whether deletion is permanent or reversible, or if confirmation is required. The behavior is partially transparent but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the action and resource. No unnecessary words or details, achieving maximum efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 param, no output schema, no nested objects), the description is adequately complete. It covers purpose and key identification. It could briefly mention whether the operation is permanent, but that is not a critical omission.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single parameter 'key'. The description adds no semantic meaning beyond what the schema already provides, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete'), the resource ('a stored memory'), and the method of identification ('by key'). It effectively differentiates from sibling tools like 'recall' and 'remember' by specifying deletion rather than retrieval or storage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a specific memory needs to be removed, but it does not provide explicit guidance on when to use this tool versus alternatives like 'remember' or 'recall'. No mention of prerequisites or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters:
  key (optional): Memory key to retrieve (omit to list all keys)
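
The two modes described above can be sketched as a pair of calls; the key is again the hypothetical "target_ticker", and omitting the key entirely requests the full listing.

```python
# Sketch of the two recall modes: fetch one memory by key, or omit the key
# to list all stored keys.
recall_one = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {"name": "recall", "arguments": {"key": "target_ticker"}},
}
recall_all = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {"name": "recall", "arguments": {}},
}
```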
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the behavioral burden. It explains the tool is a read operation (retrieve/list). However, it does not disclose whether listing all memories has performance implications, if the memories are persistent across sessions, or if there are size limits. Adequate but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Front-loaded with the core function. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 1 optional parameter, no output schema, and no annotations. The description explains the two use cases but lacks details on return format, error behavior (e.g., key not found), or whether listing is limited. It is minimally sufficient but not complete for all scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (only one parameter with description). The description adds context that omitting the key lists all memories, which is already implied by the parameter being optional. Minimal added value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a stored memory by key, or lists all memories when key is omitted. It specifies the verb 'retrieve' and resource 'memory', and distinguishes the two modes of operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use it: 'retrieve context you saved earlier in the session or in previous sessions.' It implies when to omit the key (to list all). It does not explicitly mention when not to use it or alternative tools, but the context is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters:
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
  value (required): Value to store (any text — findings, addresses, preferences, notes)
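
A call sketch for remember follows; the key reuses one of the schema's example keys, and the value "AAPL" is a made-up illustration.

```python
# Sketch of a JSON-RPC "tools/call" body for remember; key and value are
# illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {"key": "target_ticker", "value": "AAPL"},
    },
}
```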
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It clearly discloses behavioral traits: persistence depends on authentication (authenticated users get persistent memory; anonymous sessions last 24 hours). This adds value beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loading the primary purpose ('Store a key-value pair'), then usage guidance, then persistence details. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 required string params, no output schema), the description is complete: purpose, usage context, and persistence behavior are covered. It could optionally mention whether storing to an existing key overwrites the value, but that is not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% and both parameters ('key' and 'value') have descriptive examples and explanations in the schema. The description does not add further parameter-level meaning, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores a key-value pair in session memory, specifying the verb 'store' and the resource 'key-value pair in session memory'. It distinguishes itself from siblings like 'recall' (which retrieves) and 'forget' (which deletes) by describing the saving action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly suggests when to use this tool: 'save intermediate findings, user preferences, or context across tool calls'. It implies usage for temporary data storage and differentiates persistence levels between authenticated and anonymous users, but it does not explicitly say when not to use the tool or name alternatives like 'recall' or 'forget'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
