
Shakespeare Insult

Server Details

shakespeare-insult MCP — wraps StupidAPIs (requires X-API-Key)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-shakespeare-insult
GitHub Stars: 0
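The server wraps StupidAPIs and requires an X-API-Key header. A minimal sketch of what a Streamable HTTP tools/call request to such a server might look like; the endpoint URL and key below are placeholders (the actual URL is not shown on this page), and the request shape follows the MCP JSON-RPC convention:

```python
import json

# Hypothetical endpoint and key -- not taken from this page.
MCP_ENDPOINT = "https://example.com/mcp"  # placeholder
API_KEY = "YOUR_STUPIDAPIS_KEY"           # placeholder

def build_tools_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",  # Streamable HTTP may answer with either
    "X-API-Key": API_KEY,                             # upstream auth requirement
}

body = json.dumps(build_tools_call("shakespeare_insult_generate", {}))
```

Any HTTP client can then POST `body` with `headers` to the endpoint; the helper only builds the request, it does not send it.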

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 6 of 6 tools scored.

Server Coherence: B
Disambiguation: 3/5

The tools have clear purposes, but 'ask_pipeworx' overlaps with the other tools by providing a generic query interface that may subsume their functions, causing potential confusion about which tool to use.

Naming Consistency: 4/5

Most tool names use a verb_noun pattern (ask_pipeworx, discover_tools, forget, recall, remember), but 'shakespeare_insult_generate' deviates by placing the verb last, making it slightly inconsistent.

Tool Count: 3/5

6 tools is a reasonable count, but the set includes both memory tools and a Shakespeare insult generator, which seem unrelated, suggesting the server's scope is unclear or too broad.

Completeness: 2/5

The memory tools lack an update or search functionality, and the insult generator stands alone without any related tools (e.g., list insults, customize). The server appears to be a random collection rather than a coherent domain.

Available Tools

5 tools
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
- question (required): Your question or request in natural language
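Given the single required `question` parameter, a tools/call invocation might look like the following sketch; the request shape follows the MCP JSON-RPC convention, and the example question is one of those quoted in the description above:

```python
import json

call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        # 'question' is the only parameter, and it is required.
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}

payload = json.dumps(call)
```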
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses that Pipeworx picks the right tool and fills arguments, abstracting away tool browsing. With no annotations, the description fully covers behavioral traits like automated tool selection and result return.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise three-sentence description plus examples. Front-loaded with purpose. Could be slightly more structured (e.g., list examples), but no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given single parameter, no output schema, and no annotations, the description is complete for its simplicity. It explains how the tool works and what to expect. Could add more about limitations, but not necessary for a straightforward Q&A tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with a clear parameter description. The tool description adds examples and context beyond the schema, like 'plain English' and 'no need to browse tools', enriching the meaning of the single parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it answers plain English questions by selecting the best data source. Specific verb 'ask' and resource 'Pipeworx' with clear examples distinguishing it from sibling tools like discover_tools or shakespeare_insult_generate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to describe what you need in natural language, with examples. There is no guidance on when not to use it (e.g., for non-factual questions or specific tool needs), but context signals and sibling names imply it's the general Q&A tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
- limit (optional): Maximum number of tools to return (default 20, max 50)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states the tool 'returns the most relevant tools with names and descriptions' and mentions default limit and max limit. However, it doesn't disclose whether the tool is read-only, any side effects, rate limits, or how relevance is determined. For a search tool, the description is adequate but could be more explicit about non-destructive behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loads the purpose, and each sentence serves a clear function: the first explains what it does, the second what it returns, and the third when to call it. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema, the description should mention what the return value contains. It does: 'Returns the most relevant tools with names and descriptions.' This is sufficient. The tool is simple (2 params, no enums, no nested objects), so the description covers the essentials. Could mention that it returns a list, but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both parameters have descriptions). The description adds value by explaining the query parameter format with examples (e.g., 'analyze housing market trends') and clarifying that limit has a default (20) and max (50). However, the schema already provides parameter names and descriptions, so the description's additional context is moderate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It specifies the verb 'search', the resource 'tool catalog', and the action 'returns the most relevant tools with names and descriptions.' This distinguishes it from siblings like 'ask_pipeworx' (which likely answers questions) and other memory tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear when-to-use guidance, especially given the large number of tools (500+). It also implies not to call this when you already know the tool, or when fewer tools are available.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (A)

Delete a stored memory by key.

Parameters (JSON Schema):
- key (required): Memory key to delete
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It correctly states the delete operation but does not disclose behavioral traits like whether deletion is permanent, if confirmation is required, or effects on other memories. Basic transparency, but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, front-loading the action and object. No wasted words; every part is essential.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple tool with 1 required parameter, no output schema, and no nested objects, the description is nearly complete. It lacks only behavioral nuance (e.g., irreversibility) which would make it perfect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage and already describes the key parameter. The description adds the context that the key identifies the memory to delete, which aligns with the schema. No additional meaning beyond the schema is necessary, so a score of 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete'), the resource ('stored memory'), and the identifier ('by key'). It is specific and distinguishes itself from siblings like 'recall' and 'remember' which handle retrieval and storage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (when you need to delete a memory by key) but provides no guidance on when not to use it or alternatives. Since there are related tools for memory operations, explicit guidance would be helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that the tool retrieves or lists memories, but does not mention behavioral traits like persistence across sessions, limitations on key format, or what happens when the requested key does not exist. Given no annotations, the description is adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, clear and front-loaded. No extraneous words. Slightly wordy with 'previously stored' and 'earlier in the session or in previous sessions', but acceptable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple tool with 1 optional parameter and no output schema, the description covers the core functionality. It explains both retrieval modes (by key vs list) and the context of use (previously stored memories). No major gaps for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds meaning by explaining the effect of omitting the 'key' parameter (list all keys) and the purpose ('retrieve context you saved earlier'). This goes beyond the schema's simple 'omit to list all keys'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'retrieve' and the resource 'stored memory', and distinguishes between retrieving by key vs listing all. This differentiates it from siblings like 'remember' and 'forget'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool ('to retrieve context you saved earlier') and implicitly distinguishes from 'remember' (store) and 'forget' (delete). However, it does not explicitly mention when not to use it or provide alternative tools for other scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses important behavioral traits: persistence varies by authentication status (authenticated users get persistent memory; anonymous sessions last 24 hours). However, it doesn't mention overwrite behavior (does storing to an existing key overwrite the old value?), storage limits, or data privacy implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with purpose, then usage guidance, then behavioral note. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (2 string params, no output schema, no nested objects), the description covers the essential behavioral aspects. It could mention that values are plain text (already implied) or maximum size limits, but overall it's sufficient for a straightforward key-value store.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% and provides clear parameter meanings (key examples, value as any text). The description adds general context but no additional parameter-specific details beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb+resource ('Store a key-value pair in your session memory') and clearly distinguishes from siblings like 'recall' (retrieve) and 'forget' (delete), making its purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('save intermediate findings, user preferences, or context across tool calls') and differentiates persistence based on authentication. It does not explicitly mention when not to use it or compare to alternatives, but the sibling distinction is implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
