Chaos Index
Server Details
chaos-index MCP — wraps StupidAPIs (requires X-API-Key)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-chaos-index
- GitHub Stars: 0
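Since the transport is Streamable HTTP, a client drives the server by POSTing JSON-RPC 2.0 messages to a single endpoint. Below is a minimal TypeScript sketch of the opening initialize request, assuming a hypothetical endpoint URL and assuming the required X-API-Key is sent as an HTTP request header (the listing does not say where the key goes).

```typescript
// Hypothetical values: the listing omits the server's real URL, and the
// placement of the API key is an assumption.
const ENDPOINT = "https://chaos-index.example.com/mcp";
const API_KEY = "your-api-key";

// MCP over Streamable HTTP: JSON-RPC 2.0 messages POSTed to one endpoint.
const initialize = {
  jsonrpc: "2.0",
  id: 0,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // a recent MCP protocol revision
    capabilities: {},
    clientInfo: { name: "example-client", version: "0.1.0" },
  },
};

const response = await fetch(ENDPOINT, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Streamable HTTP servers may answer with plain JSON or an SSE stream.
    Accept: "application/json, text/event-stream",
    "X-API-Key": API_KEY, // assumption: key travels as a request header
  },
  body: JSON.stringify(initialize),
});
console.log(response.status, await response.text());
```

A real client would follow up with a notifications/initialized message and echo back any Mcp-Session-Id header the server returns before issuing the tools/call requests sketched further down.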
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 5 of 5 tools scored. Lowest: 2.9/5.
Most tools are distinct: ask_pipeworx handles queries, discover_tools searches tools, and memory tools are separate. However, ask_pipeworx's broad 'answer from best data source' overlaps conceptually with discover_tools since both involve finding information. The boundary between asking a question and discovering tools is somewhat blurred.
Memory tools (forget, recall, remember) follow a consistent verb pattern, but ask_pipeworx and discover_tools use different naming styles: one pairs a verb with a brand name, the other is a plain verb-object imperative. Additionally, 'pipeworx' is a brand name rather than a descriptive term, which breaks the pattern.
Five tools is a reasonable number, neither too many nor too few. The memory functions (forget, recall, remember) could perhaps be a single tool with different arguments, but the count is still appropriate for the server's apparent purpose of providing a flexible query and memory interface.
The server covers query answering, tool discovery, and simple memory storage. However, the memory CRUD is incomplete: there is no tool for updating stored memories beyond deleting and re-storing them, which could limit agent workflows, and there is no tool to list available data sources or manage preferences.
Available Tools
5 tools

ask_pipeworx (grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
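For concreteness, here is a sketch of the tools/call body an MCP client would send for this tool, reusing one of the description's own example questions (endpoint and headers as in the connection sketch above):

```typescript
// tools/call is the standard MCP method for invoking a named tool.
const askPipeworx = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "ask_pipeworx",
    // The only parameter: a natural-language question.
    arguments: { question: "What is the US trade deficit with China?" },
  },
};
```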
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It clearly states the tool is a query interface that delegates to other data sources, implying no side effects. It doesn't specify response format, latency expectations, or error handling, but that is acceptable for a natural-language query tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences plus examples) and front-loaded with the core purpose. Every sentence adds value: the first defines the tool, the second explains the internal mechanism, the third gives usage guidance, and examples illustrate. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (single parameter, no output schema, natural language interface), the description is complete. It covers what the tool does, how it works, when to use it, and provides examples. The lack of return value explanation is acceptable for a conversational tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with only one parameter. The description adds meaning beyond the schema by explaining the 'question' parameter should be in natural language and providing examples. A minor gap: it doesn't clarify if follow-up questions are supported or if context is preserved.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It uses a specific verb ('ask'), defines the resource ('Pipeworx'), and distinguishes itself from siblings by explaining that it internally selects tools and fills arguments, unlike direct tool calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use: 'just describe what you need' and contrasts with browsing tools or learning schemas. It gives three concrete examples covering different domains, which helps the agent understand the tool's scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
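A matching sketch for discover_tools, exercising both parameters; the query text is taken from the schema's own examples:

```typescript
const discoverTools = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "discover_tools",
    arguments: {
      query: "find trade data between countries",
      limit: 10, // optional; defaults to 20, capped at 50
    },
  },
};
```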
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the search behavior and returns names/descriptions, but doesn't mention side effects, authentication needs, or rate limits. A score of 3 is adequate given the tool is read-only by nature, though more detail would help.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences and front-loaded with purpose. It is concise with no wasted words, though it could be slightly shorter by combining the first and third sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple parameters, the description covers the core use case well. It tells what it does, when to use it, and what to expect. Lacking return value details slightly reduces completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The parameter descriptions explain that 'query' is a natural language request, give examples, and state the default for 'limit'; this is useful context, but the tool description itself doesn't go beyond what the schema already indicates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs 'search' and 'returns', and identifies the resource as the Pipeworx tool catalog. It clearly distinguishes from siblings by stating 'Call this FIRST' and mentioning 500+ tools, which is a unique responsibility.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use it ('when you have 500+ tools available and need to find the right ones') and provides a usage directive ('Call this FIRST'). No alternatives are mentioned, but the 'FIRST' instruction implies prioritization over siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (grade C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
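The call shape for forget, with an illustrative key; since the description does not say whether deletion is reversible, it is safest to treat it as permanent:

```typescript
const forget = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: {
    name: "forget",
    arguments: { key: "subject_property" }, // hypothetical key name
  },
};
```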
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states deletion but does not disclose consequences (e.g., whether deletion is permanent, whether it requires confirmation, or whether it affects other data).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single six-word sentence, directly conveying the purpose. No unnecessary words, though it could be slightly more informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 param, no output schema, no nested objects), the description is minimally adequate. However, it lacks details about the result or side effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single 'key' parameter described as 'Memory key to delete'. The description adds minimal extra meaning beyond the schema, matching the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('stored memory by key'). It effectively distinguishes from siblings like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. For example, it does not mention that 'forget' removes a specific key, while 'remember' stores one, or that 'recall' retrieves without deleting.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
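Two sketches for recall, showing the optional-key behavior the description highlights: pass a key to fetch one memory, or omit it to list everything:

```typescript
// Fetch a single memory by key (key name is illustrative).
const recallOne = {
  jsonrpc: "2.0",
  id: 4,
  method: "tools/call",
  params: { name: "recall", arguments: { key: "subject_property" } },
};

// Omit "key" entirely to list all stored memories.
const recallAll = {
  jsonrpc: "2.0",
  id: 5,
  method: "tools/call",
  params: { name: "recall", arguments: {} },
};
```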
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It correctly describes the tool as read-only (retrieve/list) and indicates session persistence, and it already covers the behavior of omitting the key to list all keys. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. The first sentence states core functionality; the second gives usage context. Front-loaded with the essential action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description could mention the format of returned memories. But for a simple retrieval tool with one optional parameter, it's sufficiently complete. The mention of 'previously stored' and 'earlier in the session or in previous sessions' covers persistence context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (key param described in schema). The description adds the semantics that omitting key lists all memories, which is not in schema. This clarifies the optional behavior effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It uses specific verbs ('Retrieve', 'list') and a resource ('previously stored memory'). It distinguishes from siblings like 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use this tool ('retrieve context you saved earlier') and implies when not to use it (no key to list all). It provides clear context for the agent to decide between recall and other memory tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text: findings, addresses, preferences, notes) | |
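And the call shape for remember, using a key in the snake_case style of the schema's examples; the stored value here is illustrative:

```typescript
const remember = {
  jsonrpc: "2.0",
  id: 6,
  method: "tools/call",
  params: {
    name: "remember",
    arguments: {
      key: "target_ticker", // follows the schema's example keys
      value: "AAPL",        // any text: findings, notes, preferences
    },
  },
};
```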
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses behavioral traits: persistent memory for authenticated users, 24-hour expiration for anonymous sessions. This adds meaningful context beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adding distinct value: what it does, when to use it, and behavioral details. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and two simple parameters, the description is complete. It explains purpose, usage, and memory persistence. Could mention that values are overwritten on duplicate keys, but not essential for a simple key-value store.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds some context about what values might be stored (e.g., findings, addresses) but does not elaborate on key naming conventions or constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, distinguishing it from siblings like 'recall' (retrieve) and 'forget' (delete). It uses specific verbs ('store') and resource ('session memory').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit context for when to use this tool: to save intermediate findings, user preferences, or context across tool calls. However, it does not explicitly mention when not to use it or directly compare to alternatives like 'recall' or 'forget', so a slight deduction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.