Server Details

CATAAS MCP — Cat as a Service (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-cataas
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.2/5 across 8 of 8 tools scored. Lowest: 3.5/5.

Server Coherence (Grade: B)
Disambiguation: 3/5

The set mixes Pipeworx meta-tools (ask_pipeworx, discover_tools, remember/forget/recall) with Cataas cat-image tools (cat_by_tag, list_tags, random_cat). Within each subgroup, tools are distinct, but the overall set feels like two different servers combined, causing potential confusion about when to use which subgroup.

Naming Consistency: 3/5

Cataas tools use lowercase_snake_case (cat_by_tag, list_tags, random_cat) while Pipeworx tools use plain lower-case words (ask_pipeworx, discover_tools) and memory verbs (remember, forget, recall). No consistent pattern across the whole set.

Tool Count: 4/5

8 tools is a reasonable count, but the set is split into two distinct domains (cat images and Pipeworx utilities). Each domain individually would be well-scoped, but combined they feel slightly overloaded for a single server.

Completeness: 2/5

The Cataas side is complete for basic cat image retrieval (list_tags, cat_by_tag, random_cat), but the Pipeworx side is incomplete: ask_pipeworx and discover_tools are high-level, while the memory tools are generic. The split purpose makes it hard to say what a complete surface would look like, leaving gaps in both domains.

Available Tools

8 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the behavioral burden. It describes the tool as selecting the right tool and filling arguments, implying it may call other tools. This is a significant behavioral trait. However, it does not disclose potential side effects, rate limits, or whether the tool can fail (e.g., if no data source matches). Score 3 because it adds some context beyond the schema but lacks full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences plus examples. It is front-loaded with the purpose and key behavior. The examples are helpful but add length. It is appropriately concise for the complexity of the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has one parameter, no output schema, and no annotations, the description covers the essential behavior and provides usage examples. It could mention what happens if the tool cannot answer (e.g., fallback behavior), but for a simple tool, it is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one required parameter 'question', whose description is 'Your question or request in natural language'. The tool description elaborates on what constitutes a good question via examples (e.g., 'What is the US trade deficit with China?'), which adds value. However, the schema already provides adequate description, so baseline is 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verbs ('Ask', 'picks', 'fills', 'returns') and clearly identifies the resource ('best available data source'). It explicitly distinguishes itself from sibling tools by explaining that the agent does not need to browse tools or learn schemas, which is a key differentiator from other tools like cat_by_tag or discover_tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use this tool: when the user has a natural language question and does not want to manually select tools. It gives examples of appropriate usage. However, it does not explicitly state when NOT to use it or mention alternatives, so it scores a 4.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cat_by_tag (Grade: A)

Get a random cat image matching a specific tag (e.g., 'orange', 'cute', 'sleepy'). Returns image URL, cat ID, and tags.

Parameters (JSON Schema)
- tag (required): Tag to filter cats by (e.g. "cute", "orange", "grumpy"). Use list_tags to see available tags.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds context that the tool returns image URL, cat ID, and tags, which is useful. Since annotations are empty, the description carries the burden, but it does not disclose any destructive potential or side effects (unlikely for a read tool). Could mention that the result is random among matching tags, but acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences. First sentence states purpose, second gives usage guidance, third describes return value. No wasted words, front-loaded with key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (1 parameter, no output schema, no nested objects), the description covers purpose, usage, and return value completely. No gaps for an agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with detailed description for the single parameter, including example values and instruction to use list_tags. Description adds no additional parameter semantics beyond what schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'get' and resource 'random cat image matching a specific tag'. Distinguishes from sibling tools like list_tags (which discovers tags) and random_cat (which likely returns any random cat without tag filter).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs to use list_tags first to discover available tags, preventing incorrect tag values. This provides clear when-to-use guidance and distinguishes the tool's workflow dependency.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
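
Since neither cat tool publishes an output schema, the underlying CATAAS service is the clearest reference point. A minimal sketch, assuming this server wraps the public CATAAS GET /cat/{tag} route (the route is documented by CATAAS; whether the server calls exactly this URL is an assumption):

```python
from urllib.parse import quote

# Public CATAAS base URL; the MCP server is assumed to wrap this API.
CATAAS_BASE = "https://cataas.com"

def cat_by_tag_url(tag: str) -> str:
    """Build the CATAAS URL for a random cat matching the given tag."""
    # quote() keeps multi-word or accented tags URL-safe.
    return f"{CATAAS_BASE}/cat/{quote(tag)}"

print(cat_by_tag_url("orange"))  # https://cataas.com/cat/orange
```

Because cat_by_tag returns a random cat among matches, repeated calls with the same tag can yield different images.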

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses the tool's purpose and recommends calling it first, but does not mention potential behavioral traits like side effects (none expected), authentication requirements, rate limits, or performance characteristics (e.g., response time with large catalogs). The description is adequate but not rich in behavioral detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: three sentences, front-loaded with the core purpose, followed by the returned output and finally usage advice. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (a query plus an optional limit, no output schema, no nested objects) and 100% schema coverage, the description is largely complete. It explains what the tool does, what it returns, and when to use it. It omits ranking and result-format details, but those are not critical for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already describes both parameters. The description adds context for the 'query' parameter by providing examples (e.g., 'analyze housing market trends') and for 'limit' by stating defaults and maximums. However, the description does not explain edge cases (e.g., empty results) or provide additional nuance beyond schema examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb phrases: 'Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions.' It clearly distinguishes itself from sibling tools, which are about memory, cats, tags, and a general Q&A, by explicitly stating its role as a tool discovery mechanism for a large catalog.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This tells the agent when to use it (first step for tool discovery) and implies it's not for direct task execution, which differentiates it from siblings like ask_pipeworx or cat_by_tag.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
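
For context on what an agent actually sends, a discover_tools invocation can be sketched as an MCP tools/call request. MCP uses JSON-RPC 2.0 framing; the id and query below are made up for illustration, while the tool name and argument keys come from the schema above:

```python
import json

# Illustrative MCP "tools/call" request for discover_tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        "arguments": {
            "query": "find trade data between countries",
            "limit": 10,  # optional; schema default is 20, max 50
        },
    },
}

payload = json.dumps(request)
```

The exact envelope a given client produces depends on its transport (here, Streamable HTTP), but the params block is what the schema constrains.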

forget (Grade: A)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must carry full burden. It states deletion but lacks details on irreversibility, error handling (e.g., if key doesn't exist), or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no filler, front-loaded with action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple tool with 1 param and no output schema; description covers basics but lacks behavioral context for a destructive operation (e.g., permanence, error responses).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with description for 'key', but description adds no new semantics beyond 'Memory key to delete' already in schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Delete' and resource 'stored memory by key', clearly distinguishing it from siblings like 'recall' (read) and 'remember' (create).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or alternatives guidance, but the tool name and description imply it is for deleting a specific memory by key, which is distinct from other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tags (Grade: A)

List all available cat tags for filtering. Use tag names with cat_by_tag to find cats by appearance or behavior.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden. It clearly states the tool returns a list of tags and implies a read-only operation (listing). No destructive behavior or authentication concerns apply to this simple listing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences. The first states the action, the second provides usage guidance. Every word is useful and there is no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, no output schema, and no annotations, the description is complete. It explains what the tool does and how to use its results, which is sufficient for such a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has no parameters, so the description does not need to add parameter details; there are no required inputs to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all available cat tags and its purpose as a lookup for another tool (cat_by_tag). It uses specific verb 'list' and resource 'cat tags', and distinguishes from siblings like cat_by_tag and random_cat.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly mentions using the returned tags with cat_by_tag, providing clear context for when to use this tool. However, it doesn't explicitly state when not to use it or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

random_cat (Grade: A)

Get a random cat image. Returns image URL, cat ID, and associated tags.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden of behavioral disclosure. It states the tool is a read operation (get) and returns specific fields. However, it does not mention external service dependency, potential latency, or rate limits. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: two sentences that clearly state purpose and output. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description adequately covers what the tool does and what it returns. It could mention that the image is a URL or that tags might be empty, but it's sufficient for a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has no parameters, so description correctly adds meaning by explaining the tool takes no input and returns random data. This is adequate and adds value beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool returns a random cat image from CATAAS, listing what is returned (URL, ID, tags). While it distinguishes from 'cat_by_tag' implicitly, it does not explicitly differentiate from other cat-related tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance. The description implies usage for random cat images but does not mention alternatives like 'cat_by_tag' for tagged cats. No exclusion criteria are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description discloses that omitting the key lists all memories. However, it does not mention side effects (e.g., whether retrieval modifies memory) or access/rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences that front-load the purpose and usage. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one optional parameter and no output schema, the description sufficiently covers purpose and usage. Could mention return format briefly, but not essential.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters. Description restates the key parameter's behavior (omit to list all), but does not add new details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves a memory by key or lists all memories when key is omitted, distinguishing it from remember (save) and forget (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (to retrieve context from earlier), and implies when not to use (for saving/forgetting). Differentiates from sibling tools remember and forget.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text: findings, addresses, preferences, notes)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses key behavioral traits: memory persistence depends on authentication (persistent vs 24-hour). No annotations provided, so description carries full burden; it addresses the most important aspect. Could mention that overwriting a key replaces the previous value, but not a major gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no fluff. Front-loaded with core action, then usage guidelines, then behavioral note. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given tool simplicity (2 params, no output schema, no annotations), description fully covers purpose, usage, and behavioral context. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good descriptions. Description adds usage context (what kinds of values to store) but does not need to add more since schema already explains parameters well. Baseline 3, +1 for reinforcing practical examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'store' and resource 'key-value pair' in session memory. It clearly distinguishes from sibling tools like 'forget' and 'recall' by naming them and explaining the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: to save intermediate findings, user preferences, or context across tool calls. Also notes persistence differences between authenticated vs anonymous users, guiding usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
