Press Release
Server Details
press-release MCP — wraps StupidAPIs (requires X-API-Key)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-press-release |
| GitHub Stars | 0 |
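For orientation, here is a minimal connection sketch using the official TypeScript MCP SDK (@modelcontextprotocol/sdk). The endpoint URL and the environment variable name are placeholders, since the listing does not publish the server URL; the X-API-Key header is an assumption taken from the server description above.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder URL: the listing does not publish the server endpoint.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp"),
  {
    // The server description says the wrapped API requires an X-API-Key header.
    requestInit: { headers: { "X-API-Key": process.env.PRESS_RELEASE_API_KEY ?? "" } },
  },
);

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Enumerate the five tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```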
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 5 of 5 tools scored.
Most tools have distinct purposes: ask_pipeworx answers questions via an agent, discover_tools finds tools, and the memory trio handles storage. However, ask_pipeworx and discover_tools both involve 'finding' information; ask_pipeworx could potentially find tools too, which introduces slight ambiguity.
The memory tools consistently use bare verbs (forget, recall, remember), while discover_tools and ask_pipeworx follow a verb_noun pattern. However, the name ask_pipeworx does not clearly signal that it answers questions, and the split between the two naming patterns reads as mildly inconsistent.
Five tools is appropriate for a press release server. The set covers core operations: querying, discovering capabilities, and memory management. Not too few, not too many.
The server focuses on a Q&A agent with memory, which is a well-defined scope. However, it lacks memory-management tools beyond simple CRUD (e.g., searching memories or clearing all of them), and the discovery tool feels redundant if ask_pipeworx already selects tools on its own.
Available Tools
5 tools

ask_pipeworx
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
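A minimal invocation sketch, reusing the connected `client` from the Server Details section above and one of the listing's own example questions:

```typescript
// Assumes the connected `client` from the Server Details sketch above.
const answer = await client.callTool({
  name: "ask_pipeworx",
  arguments: { question: "What is the US trade deficit with China?" },
});

// No output schema is published, so inspect the raw result.
console.log(JSON.stringify(answer, null, 2));
```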
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden for behavioral transparency. It discloses that the tool selects the best data source, fills arguments, and returns results. However, it does not explain what happens if the question cannot be answered or if multiple tools are relevant.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (3 sentences plus examples) and front-loaded with the core action. The examples are helpful but could be integrated more succinctly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (1 param, no output schema) and the tool's straightforward purpose, the description provides sufficient completeness for an agent to invoke it correctly. It covers what the tool does and how to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds meaning beyond the schema by explaining how the parameter is used ('describe what you need') and providing examples, but does not specify format constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: asking questions in plain English and getting answers from the best available data source. It distinguishes itself from siblings by emphasizing natural language input and automatic tool selection, contrasting with tools like discover_tools or remember.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for use (natural language queries) and includes examples, but does not explicitly state when not to use it or mention alternatives. Given the tool's unique role as a natural language interface, this is adequate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
| limit | No | Maximum number of tools to return (max 50) | 20 |
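A sketch of a discovery call, again assuming the connected `client` from the Server Details section; the query string is one of the schema's own examples and the limit of 5 is arbitrary:

```typescript
// Assumes the connected `client` from the Server Details sketch above.
// `query` is required; `limit` is optional (default 20, max 50).
const matches = await client.callTool({
  name: "discover_tools",
  arguments: { query: "find trade data between countries", limit: 5 },
});
console.log(JSON.stringify(matches, null, 2));
```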
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behaviors: it searches by natural language description, returns the most relevant tools, and is designed for discovery. Since no annotations are provided, the description must carry the burden, and it does so effectively by stating the search mechanism and result type. A slight deduction because it doesn't mention any potential limitations (e.g., indexing delay) or side effects, but overall it's clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with three sentences that each add value: the first states the action, the second describes the result, and the third gives usage guidance. No unnecessary words or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there is no output schema, the description adequately explains what the tool returns (most relevant tools with names and descriptions). The context of having 500+ tools and the instruction to call it first provides sufficient completeness for a search/discovery tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already has high coverage (100%) with descriptions for both 'query' and 'limit'. The tool description adds value by framing how to use the 'query' parameter ('natural language description of what you want to do') and hints at the purpose of the tool, which complements the schema. However, it doesn't add additional details about 'limit' beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb-resource pair ('Search the Pipeworx tool catalog') and clearly states the purpose: returning relevant tool names and descriptions. It distinguishes itself by instructing the agent to call this FIRST when many tools are available, which sets it apart from siblings like 'ask_pipeworx' or 'recall'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear context for usage and implies alternatives (other tools are for different purposes).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
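A deletion sketch under the same assumptions; the key shown is one of the example keys from the remember schema below:

```typescript
// Assumes the connected `client` from the Server Details sketch above.
// Destructive: the description does not say whether deletion is reversible.
await client.callTool({
  name: "forget",
  arguments: { key: "target_ticker" }, // an example key from the remember schema
});
```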
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the behavioral burden. It clearly states the action is destructive (delete), which is sufficient. However, it doesn't disclose whether deletion is irreversible or if there are side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence with no wasted words. It is front-loaded with the action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema), the description is complete enough for the agent to understand its function. No additional details are strictly necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes the 'key' parameter as 'Memory key to delete'. The description adds no additional meaning beyond what the schema provides, so a baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Delete' and the resource 'a stored memory by key', making the tool's purpose unambiguous. It effectively distinguishes from siblings like 'remember' (create) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (when you need to remove a stored memory) but does not explicitly state when not to use it or mention alternatives among siblings like 'recall' for reading.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
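A sketch showing both retrieval modes, assuming the connected `client` from the Server Details section; the key is an example from the remember schema:

```typescript
// Assumes the connected `client` from the Server Details sketch above.
// With `key`: fetch one memory. Without `key`: list all stored keys.
const one = await client.callTool({
  name: "recall",
  arguments: { key: "subject_property" }, // example key from the remember schema
});
const all = await client.callTool({ name: "recall", arguments: {} });
console.log(JSON.stringify({ one, all }, null, 2));
```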
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It mentions retrieving previously stored memories, implying it is a read-only operation. However, it doesn't disclose whether retrieval affects memory state (e.g., marks as accessed) or any limits on number of memories.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no wasted words. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and single optional parameter, description adequately covers retrieval behavior. Could mention if listing all memories returns keys only or full content.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers parameter 'key' with description, but description adds meaning by explaining that omitting key lists all memories. This adds value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb (retrieve/list), resource (memory by key), and distinguishes from 'remember' by specifying retrieval vs. storage. Also clarifies that omitting key lists all memories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use ('to retrieve context you saved earlier') and notes that omitting key lists all memories. Could be improved by mentioning when to use alternatives like 'remember' or 'forget'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
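A storage sketch under the same assumptions; the key is one of the schema's own examples and the value is illustrative:

```typescript
// Assumes the connected `client` from the Server Details sketch above.
// Anonymous sessions keep memories for 24 hours; authenticated users get persistence.
await client.callTool({
  name: "remember",
  arguments: {
    key: "user_preference", // example key from the schema
    value: "prefers quarterly filings over annual reports", // illustrative value
  },
});
```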
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries full burden. It discloses behavioral traits: persistence based on authentication (persistent for authenticated users, 24-hour for anonymous). It also implies the tool is a write operation (store) and doesn't read or delete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each adding value: purpose, use cases, and persistence behavior. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and two simple parameters, the description is nearly complete. It covers what the tool does, when to use it, and persistence behavior. Could optionally mention size limits or overwrite behavior, but not necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the schema already provides clear descriptions for both parameters (key with examples, value as free text). The description adds context about the purpose of the parameters (intermediate findings, preferences, notes) but does not add technical details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'store' and the resource 'key-value pair in session memory'. It distinguishes itself from sibling tools like 'recall' (retrieve) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool: to save intermediate findings, user preferences, or context across tool calls. It also mentions authentication persistence and anonymous session limits, though it doesn't explicitly say when not to use it or compare to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.