Trademarks MCP — USPTO TSDR trademark lookup

Server Details

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-trademarks
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Descriptions: A

Average 4/5 across 8 of 8 tools scored.

Server Coherence: A

Disambiguation: 4/5

Tools are mostly distinct: ask_pipeworx is an umbrella tool, trademark lookups by registration/serial/document are clearly separated, and memory tools (remember/recall/forget) form their own category. However, ask_pipeworx overlaps conceptually with discover_tools and the specific trademark tools, potentially causing confusion about which to use first.

Naming Consistency: 3/5

Naming is inconsistent: ask_pipeworx and discover_tools use imperative phrases with product name, while trademark tools follow get_trademark_by_X pattern, and memory tools use single verbs (remember, recall, forget). No consistent verb_noun pattern across the set.

Tool Count: 4/5

8 tools is reasonable for a trademark-focused MCP server. The inclusion of a general-purpose query tool (ask_pipeworx) and memory tools broadens scope slightly, but overall count is appropriate.

Completeness: 3/5

Covers core trademark lookups (by registration, serial, documents) but lacks update/delete for trademark data (likely read-only by nature). Memory tools add tangential functionality. Notable gap: no search by trademark text or owner.

Available Tools

8 tools

ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
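
To make the calling convention concrete, here is a minimal sketch of the JSON-RPC `tools/call` payload an MCP client would send. The request id is arbitrary, the question reuses one of the description's own examples, and a real session would first complete the MCP initialize handshake:

```typescript
// Sketch of an MCP tools/call payload for ask_pipeworx.
// The question reuses an example from the tool's own description.
const askRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "ask_pipeworx",
    arguments: { question: "What is the US trade deficit with China?" },
  },
};
```
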
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states that the tool picks the right tool and fills in arguments, implying autonomy and potential side effects, but it does not disclose specifics, such as which tools it may invoke or whether it modifies state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (3 sentences) with a clear front-loaded purpose. It includes examples that add value without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input (one natural language parameter) and no output schema, the description is sufficiently complete for an agent to understand how to invoke the tool. The examples cover typical use cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the parameter 'question' is already described as 'Your question or request in natural language.' The description adds examples but not additional semantic details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool accepts a plain English question and returns an answer from the best data source. It uses a specific verb ('ask') and resource ('question'), and differentiates from sibling tools by emphasizing natural language interaction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use this tool (when you have a plain English question) and implies not to use other tools by saying 'No need to browse tools or learn schemas.' It provides example questions for guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
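
A sketch of the same `tools/call` shape for discovery. The query reuses an example from the parameter docs, and the limit value is an arbitrary choice within the documented maximum of 50:

```typescript
// Sketch: discover_tools call with an explicit limit (the default is 20).
const discoverRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "discover_tools",
    arguments: { query: "look up FDA drug approvals", limit: 5 },
  },
};
```
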
Behavior: 3/5

Annotations are not provided, so the description must fully cover behavioral aspects. It states the tool returns the most relevant tools with names and descriptions, but does not disclose whether the search is purely semantic, whether rate limits apply, or whether it has side effects. No contradictions, but it lacks depth.

Conciseness: 5/5

The description is three sentences, each adding value: what it does, what it returns, and when to use it. No redundancy, and the key action is front-loaded.

Completeness: 4/5

Given the tool's simplicity (two parameters, no output schema, no nested objects) and the presence of seven sibling tools, the description is nearly complete. It lacks detail on how results are ranked or whether it indexes all tool properties, but the guidance is sufficient for selection.

Parameters: 3/5

Schema coverage is 100% and both parameters have descriptions in the schema. The description adds minimal extra meaning beyond the schema, mentioning 'natural language' for query and the default/max for limit. The baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool searches a tool catalog using natural language and returns relevant tools. It specifies the resource ('Pipeworx tool catalog'), the action ('search'), and distinguishes itself from siblings by being the discovery tool to call first.

Usage Guidelines: 5/5

Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear context on when to use the tool, implying it should be used before other tools like get_trademark_* or ask_pipeworx.

forget: A

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
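
For illustration, a minimal call sketch. The key reuses an example from the remember tool's docs, and the behavior for a nonexistent key is not documented:

```typescript
// Sketch: delete one stored memory by key.
// "target_ticker" is an example key borrowed from remember's docs.
const forgetRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: { name: "forget", arguments: { key: "target_ticker" } },
};
```
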
Behavior: 3/5

With no annotations, the description carries the full burden. It clearly states the action is destructive ('Delete'), which implies mutability, but does not disclose whether deletion is reversible, requires confirmation, or affects other data. Minimal but acceptable for a simple deletion.

Conciseness: 5/5

The description is a single, clear sentence with no filler. It is front-loaded with the action and resource, and every word is necessary.

Completeness: 4/5

For a simple tool with one required parameter, no output schema, and a straightforward purpose, the description is complete enough. It explains what the tool does and what parameter is needed. No return-value explanation is required given there is no output schema.

Parameters: 3/5

The schema has 100% coverage with a single required parameter 'key', described as 'Memory key to delete'. The description adds no further semantic detail (e.g., format, length, or an example). The baseline score of 3 is appropriate since the schema already explains the parameter.

Purpose: 4/5

The description clearly states the action ('Delete') and the resource ('a stored memory by key'). It distinguishes the tool from siblings like 'remember' (create) and 'recall' (retrieve), though the phrase 'by key' is slightly ambiguous without specifying that the key identifies the memory.

Usage Guidelines: 3/5

The description implies usage when a memory needs to be removed, but provides no guidance on when not to use it (e.g., if memory is shared) or on alternatives (e.g., 'recall' to check before deleting). No exclusion or comparison with siblings is given.

get_trademark_by_registration: A

Look up a US trademark by registration number. Returns status, owner, mark text, goods/services, and classification. Requires USPTO API key.

Parameters (JSON Schema)
api_key (optional): USPTO API key
registration_number (required): USPTO registration number (e.g., "1234567")
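
A call sketch using the registration number format from the schema's own example. Omitting api_key presumably falls back to a server-configured key, though the listing does not say:

```typescript
// Sketch: look up a trademark by its USPTO registration number.
// "1234567" is the schema's format example, not a real lookup target.
const byRegistration = {
  jsonrpc: "2.0",
  id: 4,
  method: "tools/call",
  params: {
    name: "get_trademark_by_registration",
    arguments: { registration_number: "1234567" },
  },
};
```
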
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It mentions that an API key is required (a behavioral trait) but does not disclose rate limits, data freshness, or potential errors. Adequate but not detailed.

Conciseness: 4/5

Three sentences, front-loaded with purpose, then details and requirements. Efficient and clear, though it could be slightly more concise.

Completeness: 4/5

The tool has only two parameters, full schema coverage, and no output schema. The description covers purpose, return fields, and the auth requirement. Sufficient for this complexity.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description adds no extra meaning beyond what the schema already provides for the two parameters.

Purpose: 5/5

The description specifies 'Look up a US trademark by registration number' with a clear verb ('look up') and resource ('trademark'), and lists the returned fields (status, owner, etc.). It distinguishes itself from siblings like get_trademark_by_serial by keying on the registration number.

Usage Guidelines: 4/5

The description explicitly states 'Requires USPTO API key', guiding the agent to provide that parameter. No exclusion or alternative is mentioned, but the context of sibling tools (e.g., get_trademark_by_serial) provides implicit differentiation.

get_trademark_by_serial: A

Look up a US trademark by serial number. Returns status, owner, filing/registration dates, goods/services, and classification. Requires USPTO API key (free at account.uspto.gov).

Parameters (JSON Schema)
api_key (optional): USPTO API key (register free at account.uspto.gov/api-manager)
serial_number (required): USPTO serial number (e.g., "97123456")
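
Since the server speaks Streamable HTTP, a bare fetch sketch is shown below. The endpoint URL is a placeholder (the listing does not publish it), the Accept header follows the Streamable HTTP convention of allowing JSON or SSE replies, and a real client would first run the MCP initialize handshake:

```typescript
// Sketch: POST a tools/call to an MCP Streamable HTTP endpoint.
// MCP_URL is a placeholder; the real endpoint is not shown in the listing.
const MCP_URL = "https://example.com/mcp";

async function lookupBySerial(serial: string, apiKey?: string) {
  const res = await fetch(MCP_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Streamable HTTP servers may answer with plain JSON or an SSE stream.
      Accept: "application/json, text/event-stream",
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 5,
      method: "tools/call",
      params: {
        name: "get_trademark_by_serial",
        // An undefined api_key is dropped by JSON.stringify.
        arguments: { serial_number: serial, api_key: apiKey },
      },
    }),
  });
  return res.json(); // assumes a plain JSON response for simplicity
}
```
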
Behavior: 4/5

With no annotations provided, the description carries the full burden. It correctly notes that an external API key is required, a critical behavioral detail not obvious from the schema alone.

Conciseness: 5/5

Three sentences, front-loaded with purpose, then key fields, then the prerequisite. No wasted words.

Completeness: 5/5

Given a simple tool with two well-described parameters, no output schema, and no nested objects, the description covers purpose, parameters, and a critical prerequisite (the API key). Complete for this complexity level.

Parameters: 4/5

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the purpose of the api_key parameter (a USPTO requirement, obtainable for free) and the serial_number (with a format example), going beyond the schema's own descriptions.

Purpose: 5/5

The description specifies the exact resource (a US trademark by serial number) and action (look up), and lists the key returned fields (status, owner, dates, goods/services, classification), clearly distinguishing it from siblings like get_trademark_by_registration.

Usage Guidelines: 4/5

States the requirement for a USPTO API key and where to obtain it, but does not explicitly say when to use this tool versus alternatives (e.g., when to use a serial number versus a registration number).

get_trademark_documents: A

Get the prosecution history (office actions, responses, etc.) for a trademark by serial number. Requires USPTO API key.

Parameters (JSON Schema)
api_key (optional): USPTO API key
serial_number (required): USPTO serial number
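
The call shape mirrors the serial lookup. The serial number below reuses the format example from get_trademark_by_serial's schema; the return format is not documented:

```typescript
// Sketch: fetch prosecution history (office actions, responses) by serial.
const documentsRequest = {
  jsonrpc: "2.0",
  id: 6,
  method: "tools/call",
  params: {
    name: "get_trademark_documents",
    arguments: { serial_number: "97123456" },
  },
};
```
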
Behavior: 4/5

The description explicitly states that the tool requires a USPTO API key, a critical behavioral detail not captured in annotations (which are empty). It also clarifies the nature of the data (prosecution history: office actions, responses, etc.). However, no information about rate limits, pagination, or error handling is provided.

Conciseness: 5/5

The description is a single sentence that efficiently conveys the tool's purpose and a key requirement. Every word serves a purpose, and there is no extraneous information.

Completeness: 4/5

Given that the tool has only two parameters, no output schema, and no annotations, the description adequately covers the purpose and the api_key requirement. However, it could mention what the tool returns (e.g., format or structure) to improve completeness for an agent working without an output schema.

Parameters: 3/5

Schema description coverage is 100%, so the schema already provides clear descriptions for both parameters. The description adds context by indicating that api_key is a requirement and that serial_number identifies the trademark, but these are already implied by the schema. No additional semantics beyond the schema are provided.

Purpose: 5/5

The description clearly states the tool retrieves prosecution history for a trademark by serial number, using the verb 'get' and specifying the resource (prosecution history) and identifier (serial number). It also notes a requirement (USPTO API key) and distinguishes itself from sibling tools like get_trademark_by_serial, which likely retrieves basic record data.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool (to retrieve prosecution history) and mentions a prerequisite (USPTO API key). It does not explicitly state when not to use it, but the sibling tools (e.g., get_trademark_by_serial) imply alternatives.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
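
Because key is optional, two call shapes exist. Both sketches below follow the documented convention that omitting key lists all stored keys; the example key is borrowed from remember's docs:

```typescript
// Sketch: fetch a single memory, or omit "key" to list everything stored.
const recallOne = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: { name: "recall", arguments: { key: "user_preference" } },
};

const recallAll = {
  jsonrpc: "2.0",
  id: 8,
  method: "tools/call",
  params: { name: "recall", arguments: {} },
};
```
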
Behavior: 3/5

The description does not disclose any behavioral traits beyond what is obvious from the tool name and schema. No annotations are provided, so the description carries the full burden, but it lacks details on side effects, persistence guarantees, or session scope.

Conciseness: 4/5

The description is concise, with two sentences that efficiently convey the purpose and usage. No unnecessary words, though it could be slightly more structured.

Completeness: 3/5

Given the simple tool (a single optional parameter, no output schema, no annotations), the description is adequate but could mention the return format or whether memories persist across sessions. It does not fully compensate for the missing annotations.

Parameters: 3/5

Schema coverage is 100%, so the description adds minimal extra meaning. It explains that omitting key lists all memories, which aligns with the schema's 'omit to list all keys' description.

Purpose: 5/5

The description clearly states the verb 'retrieve' and the resource 'memory by key', and distinguishes between retrieving a specific key and listing all memories. It differentiates itself from sibling tools like 'remember' and 'forget' by specifying that this is for reading stored context.

Usage Guidelines: 4/5

The description says to use this tool for retrieving context saved earlier, implying when to use it. It does not explicitly state when not to use it or provide alternatives, but the context is clear for a memory-retrieval tool.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text: findings, addresses, preferences, notes)
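
A store sketch to pair with recall above. The key reuses a documented example; the value text is invented:

```typescript
// Sketch: persist an intermediate finding under a documented example key.
// The value is illustrative text, not real data.
const rememberRequest = {
  jsonrpc: "2.0",
  id: 9,
  method: "tools/call",
  params: {
    name: "remember",
    arguments: {
      key: "subject_property",
      value: "123 Main St, Springfield (saved for later valuation steps)",
    },
  },
};
```
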
Behavior: 4/5

With no annotations, the description carries the full burden. It discloses persistence behavior ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), adding valuable context beyond the schema about data lifespan.

Conciseness: 4/5

Three sentences with a front-loaded purpose. The first sentence states the core function, the second provides usage scenarios, and the third adds behavioral notes. Every sentence adds value, but it could be slightly more concise.

Completeness: 5/5

Given the simple key-value store operation, no output schema, and no nested objects, the description is complete. It covers purpose, usage, and behavioral traits adequately.

Parameters: 3/5

Schema description coverage is 100%, with examples for 'key' and a description for 'value'. The description adds minimal extra meaning beyond the schema, so the baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool stores a key-value pair in session memory, specifying the verb 'Store' and the resource 'key-value pair in your session memory'. It distinguishes itself from siblings like 'forget' and 'recall' by focusing on saving data.

Usage Guidelines: 4/5

The description provides explicit use cases: 'save intermediate findings, user preferences, or context across tool calls'. It implies when to use the tool (for persisting data) but does not explicitly contrast with 'forget' or 'recall' for when not to use it.
