Dockerhub
Server Details
Docker Hub MCP — wraps the Docker Hub v2 API (free, no auth required for public data)

| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-dockerhub |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4/5, with 8 of 8 tools scored.
Most tools are distinct: get_image, get_tags, and search_images are clearly Docker-focused, while ask_pipeworx and discover_tools seem to be general-purpose utilities. However, ask_pipeworx overlaps with discover_tools by also finding tools, and the memory tools (remember, recall, forget) feel unrelated to Docker operations, causing potential confusion about the server's primary domain.
Tool names follow a consistent verb_noun pattern: ask_pipeworx, discover_tools, get_image, get_tags, search_images, remember, recall, forget. The only minor deviation is 'ask_pipeworx' using a proper noun, but overall the pattern is clear.
8 tools is a reasonable count for a Docker-focused server, but 3 of them (ask_pipeworx, discover_tools, memory tools) are not Docker-specific, reducing the effective count to 5 for Docker operations. This is slightly below the ideal range but still acceptable.
The Docker operations cover search, metadata retrieval, and tag listing, but lack core actions like pulling, pushing, managing repositories, or handling authentication. The general-purpose tools (ask_pipeworx, discover_tools, memory) don't fill these gaps, leaving significant missing functionality for a Docker-focused server.
Available Tools
8 tools

ask_pipeworx
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
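As a sketch, calling this tool via JSON-RPC `tools/call` needs only the required `question` string; the example question below is illustrative, borrowed from the description above:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```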
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral traits. It explains that the tool picks the right tool and fills arguments, but does not disclose limitations, potential errors, or whether it can handle follow-ups. The description is adequate but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at three sentences, front-loaded with the core purpose, and includes examples. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input (one string parameter), no output schema, and the high-level intent of the tool, the description provides sufficient context for an agent to understand its use. The examples further clarify its scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds context that the question should be in plain English and provides examples, which adds value beyond the schema's 'natural language' hint.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts a plain English question and returns an answer from the best available data source, distinguishing it from sibling tools that are about tool discovery, memory, or image retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance to use the tool by asking questions in natural language, and gives examples. It does not explicitly state when not to use it or mention alternatives, but the context of sibling tools implies it's the primary question-answering tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
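A hedged example call: `query` is required, while `limit` may be omitted to accept the default of 20 (capped at 50). The query text here is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "find trade data between countries",
      "limit": 5
    }
  }
}
```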
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description reveals the tool's behavior: it returns 'the most relevant tools with names and descriptions' and is intended as a discovery step. With no annotations, the description carries the burden, and it does well by explaining what the tool returns and its role in a workflow.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences, each adding value. The first sentence defines the action, the second provides usage context. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema), the description is complete. It explains what the tool does, when to use it, and what to expect in return. No additional details are necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description doesn't add extra meaning beyond the schema. The parameter descriptions are clear in the schema, so the description doesn't need to elaborate. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It specifies the verb 'search', the resource 'tool catalog', and the action of returning relevant tools, distinguishing it from sibling tools like 'ask_pipeworx' or 'search_images'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use it: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear guidance on prioritization and context, making it highly effective.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
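A sketch of the destructive call; the key shown is hypothetical, reusing one of the example keys from the sibling `remember` tool's schema:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": {
      "key": "target_ticker"
    }
  }
}
```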
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It clearly states the action is destructive ('delete') and specifies the identifier. However, it does not disclose effects on related data (e.g., if key doesn't exist, idempotency) or authorization needs. Adequate for a simple operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. Front-loaded with action and resource. Perfectly concise for a simple operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 required param, no output schema), the description is sufficiently complete. It explains the purpose and identifies the key. No additional info about return value or side effects is needed for a straightforward delete operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context by stating the key identifies the memory to delete, confirming it's the target. The parameter is described in schema as 'Memory key to delete', which is clear; description reinforces this.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action ('delete'), the resource ('stored memory'), and the identifier ('by key'). It is specific and distinguishes from siblings like 'remember' (create) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives are mentioned. However, the single required parameter implies direct key-based deletion, and the context of sibling tools (remember, recall) suggests this is for removal after retrieval. No guidance on when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_image
Get detailed metadata for a Docker Hub repository: description, pull count, stars, last updated date, and official status. Use before pulling an image to verify quality and currency.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Repository name (e.g., "nginx", "redis") | |
| namespace | Yes | Repository namespace — use "library" for official images (e.g., "library", "bitnami") | |
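For example, fetching metadata for the official nginx image pairs `namespace: "library"` with the bare repository name. A sketch (the response shape is not published in this listing):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_image",
    "arguments": {
      "namespace": "library",
      "name": "nginx"
    }
  }
}
```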
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It correctly indicates a read-only operation (get metadata) without suggesting mutations. However, it does not disclose any behavioral details such as authentication requirements, rate limits, or what happens if the repository does not exist. The description is accurate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences: the first states the tool's action and the data fields returned, the second gives a usage hint. No wasted words; the structure is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (2 parameters, no output schema, no nested objects) and the sibling tools, the description is largely complete. It explains what the tool returns and gives context for the 'namespace' parameter. A minor gap is the lack of mention that 'name' should not include the namespace prefix, but overall it is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both parameters have descriptions in the schema). The tool description adds value by explaining that 'namespace' should be 'library' for official images, but does not elaborate on the parameters beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb phrase ('Get detailed metadata'), clearly identifies the resource ('Docker Hub repository'), and enumerates the fields returned (description, pull count, stars, last updated date, official status), making the tool's purpose immediately clear and distinct from siblings like 'get_tags' or 'search_images'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this tool is for retrieving metadata rather than tags or searching, but it does not explicitly state when to use this tool versus alternatives (e.g., 'get_tags' for tag lists, 'search_images' for discovery). No when-not-to-use guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tags
List available tags for a Docker image sorted by recency. Returns tag name, digest, size, and push date. Use to find and select specific versions to pull.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Repository name | |
| limit | No | Number of tags to return (default 20, max 100) | |
| namespace | Yes | Repository namespace (use "library" for official images) | |
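A sketch that lists the ten most recent tags for the official redis image; `limit` could be omitted to take the default of 20:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "get_tags",
    "arguments": {
      "namespace": "library",
      "name": "redis",
      "limit": 10
    }
  }
}
```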
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It states ordering and return fields, but does not mention whether the tool is read-only (likely) or any limits, rate limiting, or authentication needs. The default and max for 'limit' are in the schema but not repeated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundancy. First sentence states purpose and ordering, second lists return fields. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers the main return fields. Parameter descriptions are clear. No nested objects. Missing guidance on when to use pagination or the 'limit' parameter's effect, but overall complete for a list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds meaning by specifying that 'namespace' should be 'library' for official images, which is not in the schema description. This clarifies usage beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies a clear action ('List available tags'), a resource (a Docker image), ordering ('sorted by recency'), and returned fields ('tag name, digest, size, and push date'). This distinguishes it from siblings like 'get_image', which returns a single repository's metadata rather than its tags.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving tags but does not explicitly state when to use this vs alternatives like search_images. No exclusions or prerequisites are mentioned, though the parameter description for 'namespace' provides a hint about official images.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
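Because `key` is optional, the same tool serves two call shapes. A sketch of the list-all form, which simply sends empty arguments:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {}
  }
}
```

Passing `{ "key": "subject_property" }` instead would retrieve that single memory.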
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that listing all memories is possible by omitting key, but does not mention if the operation is read-only or if there are any side effects, such as marking memories as accessed. It also does not discuss persistence across sessions, though the description hints at cross-session retrieval.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, using two sentences to convey purpose and usage, and is front-loaded with the main action. One minor improvement: the retrieve-by-key and list-all behaviors could be integrated more smoothly rather than relying on a parenthetical.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not explain the return format (e.g., string, object, list of keys). For a simple retrieval tool, this is acceptable but could be improved. The tool has low complexity, and the description covers the core use case.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter 'key'. The description adds the nuance that omitting key lists all memories, which goes beyond the schema description. However, it does not specify the format of the key or any constraints (e.g., case sensitivity, allowed characters). Baseline 3 is appropriate as schema already covers the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a stored memory by key or lists all memories when key is omitted. The verb 'retrieve' is specific and the resource 'memory' is well-defined. It distinguishes from sibling tools like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool: to retrieve context saved earlier. It also mentions the behavior when omitting key (list all). However, it does not explicitly contrast with sibling tools like 'search_images' or 'get_image', which might also retrieve stored data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
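A sketch of storing a finding under one of the schema's example keys; both the key and value shown are illustrative free-form text:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "subject_property",
      "value": "123 Main St, 3bd/2ba, listed 2024-05-01"
    }
  }
}
```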
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses persistence behavior: authenticated users get persistent memory, anonymous sessions last 24 hours. This adds significant context beyond just 'store key-value'. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, all essential: first states core action, second gives usage context, third adds behavioral detail. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, no nested objects, and only 2 parameters, description covers purpose, usage, and persistence. Could mention value size limits or overwrite behavior, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and schema already describes both parameters with examples. Description does not add additional meaning beyond what schema provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool stores a key-value pair in session memory, specifying the purpose (saving intermediate findings, user preferences, context) and distinguishing it from siblings like 'recall' (retrieval) and 'forget' (deletion).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'use this to save intermediate findings, user preferences, or context across tool calls', providing good guidance. However, no explicit when-not-to-use or mention of alternatives for similar tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_images
Search Docker Hub for container images by keyword. Returns repository name, description, pull count, star count, and official status. Use when finding images for deployment.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (default 10, max 100) | |
| query | Yes | Search query (e.g., "nginx", "postgres") | |
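A sketch that searches for Postgres images and trims the results to five; omitting `limit` would return the default of 10:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "search_images",
    "arguments": {
      "query": "postgres",
      "limit": 5
    }
  }
}
```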
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It correctly implies a read-only operation by stating 'Search' and returning metadata. However, it does not mention if the tool has rate limits, requires authentication, or how pagination works (beyond the limit parameter).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that directly convey purpose and return data. No fluff or redundancy. Front-loaded with the primary action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there is no output schema, the description helpfully lists return fields. The tool has only two parameters (both described in schema) and no nested objects. The description is sufficient for an agent to understand what the tool does and what it returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add parameter-level details beyond the schema; it only mentions return fields. No additional semantic value is provided for parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it searches Docker Hub for container images by keyword and lists the specific return fields. The verb 'search' combined with 'Docker Hub' and 'container images' precisely defines the resource and action, distinguishing it from siblings like 'get_image' and 'get_tags'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. While siblings like 'get_image' and 'get_tags' imply different use cases (specific image vs tags), the description does not clarify trade-offs or provide conditions for choosing this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.