Server Details

PyPI MCP — wraps the PyPI JSON API (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-pypi
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Descriptions (A)

Average 4/5 across all 8 tools scored. Lowest: 3.4/5.

Server Coherence (C)
Disambiguation: 3/5

The tools 'get_package' and 'get_release' overlap in retrieving package metadata, potentially causing confusion. The memory tools (remember, recall, forget) form a separate subsystem unrelated to PyPI. The 'ask_pipeworx' tool claims to pick the right tool itself, which can resolve ambiguity but also blurs when it should be used instead of the other tools.

Naming Consistency: 2/5

Tool names are inconsistent: 'ask_pipeworx', 'discover_tools', 'get_package', 'get_release' use different verb styles (ask vs. discover vs. get). Memory tools use single verbs (remember, recall, forget) without noun context. No clear pattern is followed across the set.

Tool Count: 3/5

With 8 tools, the count is reasonable. However, 5 tools (ask_pipeworx, discover_tools, and the three memory operations) are not core to the PyPI domain, diluting focus. The actual PyPI functionality is covered by only 3 tools (get_package, get_release, search_packages), which is thin.

Completeness: 2/5

For a PyPI server, only basic read operations are present (search, get package, get release). Missing common operations like listing all versions, searching by keyword, or retrieving download statistics. The presence of unrelated memory tools does not compensate for these gaps.

Available Tools

8 tools
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
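
For concreteness, here is a minimal sketch of the JSON-RPC payload an MCP client would send to invoke this tool. The tools/call envelope follows the MCP spec; the question text is only an example, and a real client would first perform the MCP initialize handshake over the server's Streamable HTTP endpoint.

```python
import json

# Shape of an MCP "tools/call" request for ask_pipeworx. Only the payload is
# shown; a real client performs the MCP initialize handshake first and sends
# this over the server's Streamable HTTP endpoint.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            # The single required parameter: a plain-English question.
            "question": "What is the latest version of requests?",
        },
    },
}

print(json.dumps(payload, indent=2))
```
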
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description explains that the tool internally picks the right tool and fills arguments, which is a behavioral detail beyond the input schema. No annotations are provided, so the description compensates well by clarifying the tool's autonomous nature.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, uses two sentences plus three examples, and front-loads the core purpose. Every sentence adds value, and the examples quickly convey usage without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (one parameter, no output schema), the description is complete enough. It explains the tool's purpose, usage, and examples. It could mention that the answer may come from different sources, but this is implied.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description is not required to add parameter info. However, it provides useful context by explaining that the question should be in natural language, which complements the schema's 'Your question or request in natural language' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool accepts a natural language question and returns an answer by selecting the appropriate data source. It distinguishes itself from siblings by acting as a unified query interface, avoiding the need to browse individual tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool (for any natural language query) and provides examples. It does not explicitly say when not to use it or list alternatives, but the examples and the 'best available data source' phrasing imply broad applicability.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
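
The same tools/call envelope shown for ask_pipeworx applies here; only the arguments differ. Below is a small hypothetical client-side helper (not part of the server) that builds the arguments and clamps limit to the range documented in the schema.

```python
def discover_tools_args(query: str, limit: int = 20) -> dict:
    """Build the arguments object for a discover_tools call, clamping limit
    to the schema's documented bounds (default 20, max 50). A hypothetical
    client-side convenience; the server presumably enforces the same range."""
    return {"query": query, "limit": max(1, min(limit, 50))}

print(discover_tools_args("look up FDA drug approvals", limit=100))
# {'query': 'look up FDA drug approvals', 'limit': 50}
```
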
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It describes the tool as a search returning relevant tools, but does not disclose behavior like whether it modifies state, requires authentication, or has rate limits. The description is adequate but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no wasted words. Each sentence adds value: what it does, what it returns, and when to use it.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool is a search over tool names/descriptions (no complex output schema), the description is nearly complete. It lacks details about result ordering or default-limit behavior but is sufficient overall.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds context for the query parameter (example queries) but adds nothing beyond what the schema's description already provides for limit.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches a tool catalog by natural language query, returning relevant tool names and descriptions. It distinguishes from siblings by emphasizing discovery among 500+ tools, contrasting with siblings like ask_pipeworx or search_packages.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear when-to-use guidance, implying it should be used before other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (A)

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. 'Delete' indicates destructive action, but no details on irreversibility, authorization, or side effects are given.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence, front-loading the action and resource. No extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (1 param, no output schema), the description covers the basic purpose. However, it lacks context on effects, confirmation, or error states.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the 'key' parameter with full coverage (100%). The description adds no further meaning beyond 'by key', which is already implied by the schema's 'Memory key to delete'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Delete', resource 'a stored memory', and the mechanism 'by key'. It distinguishes from 'remember' (store) and 'recall' (retrieve) siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for deleting a specific memory when you know the key, but does not provide explicit guidance on when to use alternatives or when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_package (B)

Get full package metadata including all released versions, dependencies, Python version requirements, and license info.

Parameters (JSON Schema)
name (required): PyPI package name
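
Since the server wraps the PyPI JSON API, this tool presumably maps onto the public, unauthenticated package endpoint. A minimal sketch of that upstream call, under that assumption (the tool's exact output shape is not documented on this page):

```python
import json
import urllib.request

def get_package(name: str) -> dict:
    """Fetch full package metadata from the public PyPI JSON API,
    which this server wraps (no authentication required)."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

data = get_package("requests")
info = data["info"]
print(info["version"], info["requires_python"], info["license"])
print(len(data["releases"]), "released versions")
```
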
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the full burden. It discloses that the tool returns metadata including specified fields, but does not mention any side effects, rate limits, or other behavioral traits. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the purpose and key details. No filler words, but could be slightly more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains the returned content adequately. For a tool with simple parameters and no annotations it is sufficiently complete, but it leaves some behavioral questions unanswered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the parameter 'name' is already documented in the schema. The description does not add additional semantic context beyond what the schema provides, warranting a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves full metadata for a PyPI package, listing specific fields. It distinguishes from sibling 'get_release' by focusing on the package-level overview vs a specific release.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for fetching package metadata, but does not explicitly state when to use this over alternatives like search_packages or get_release. No exclusions or context are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_release (A)

Get details for a specific package version (e.g., "requests==2.31.0"). Returns Python requirements, release date, and download URLs.

Parameters (JSON Schema)
name (required): PyPI package name
version (required): Version string (e.g., "2.28.2")
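
This likely corresponds to PyPI's per-version JSON endpoint, which inserts the version into the URL path. A sketch under that assumption:

```python
import json
import urllib.request

def get_release(name: str, version: str) -> dict:
    """Fetch metadata for one specific release from the PyPI JSON API,
    e.g. get_release("requests", "2.31.0")."""
    url = f"https://pypi.org/pypi/{name}/{version}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

rel = get_release("requests", "2.31.0")
print(rel["info"]["requires_python"])
for f in rel["urls"]:  # one entry per uploaded file
    print(f["filename"], f["upload_time"], f["url"])
```
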
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It correctly identifies the tool as a read operation (no destructive side effects) and lists the type of data returned. However, it does not disclose behavior like error handling (e.g., if version doesn't exist), rate limits, or the structure of the response. A 3 is appropriate given the moderate transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the purpose and key output details without any fluff. It is front-loaded with the core action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with two well-documented parameters and no output schema, the description is nearly complete. It covers what the tool returns and the required inputs. A minor gap is the lack of mention of error cases or return format, but the tool is straightforward enough that this is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning both parameters ('name' and 'version') are already described clearly in the schema. The description adds no additional semantic meaning beyond what the schema provides, so baseline 3 is correct.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get'), the resource ('metadata for a specific version of a PyPI package'), and the key details included ('requires_python, upload time, and download URLs'). It effectively distinguishes itself from the sibling 'get_package' (which likely returns metadata for the latest version or a broader overview) by specifying 'specific version'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for retrieving metadata for a specific version, which is clear. However, it does not explicitly state when to use this tool versus alternatives like 'search_packages' or 'get_package', nor does it mention any prerequisites or context (e.g., the package must exist, version must be valid).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It explains the key behavior (omit key to list all), but does not mention any side effects, performance implications, or limitations (e.g., number of memories stored).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, each adding value: first sentence states the core function, second provides usage context. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is simple (single optional parameter, no output schema, no nested objects), the description is fairly complete. It explains both retrieval and listing, and provides usage context. Missing is mention of return format (e.g., plain text or structured data), but overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter, so baseline is 3. The description adds meaning by explaining the parameter's purpose and the behavior when omitted, but does not provide additional detail beyond what the schema already describes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a stored memory by key or lists all memories if key is omitted. It distinguishes its purpose from siblings like 'remember' (store) and 'forget' (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context for when to use it: 'to retrieve context you saved earlier in the session or in previous sessions.' However, it does not explicitly state when not to use it, nor does it mention alternatives such as 'search_packages', which serves a different kind of retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
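
A minimal in-process sketch of the remember/recall/forget semantics described by the three memory tools. The real server's storage backend is not shown on this page; the TTL below is an assumption used only to illustrate the documented 24-hour anonymous-session expiry.

```python
import time

SESSION_TTL = 24 * 60 * 60  # anonymous sessions last 24 hours (per the description)
_store: dict[str, tuple[str, float]] = {}  # key -> (value, expiry timestamp)

def remember(key: str, value: str) -> None:
    _store[key] = (value, time.time() + SESSION_TTL)

def recall(key: str | None = None):
    now = time.time()
    live = {k: v for k, (v, exp) in _store.items() if exp > now}
    return sorted(live) if key is None else live.get(key)  # omit key to list all keys

def forget(key: str) -> None:
    _store.pop(key, None)  # deleting a missing key is treated as a no-op here

remember("target_ticker", "AAPL")
print(recall())                  # ['target_ticker']
print(recall("target_ticker"))   # AAPL
forget("target_ticker")
```
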
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses persistence differences between authenticated users (persistent) and anonymous sessions (24-hour expiry). No annotations provided, so this description carries the full burden and does well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, clear and direct. The first sentence states the core action, second adds context. Could be slightly more compact but no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple store tool with two parameters and no output schema, the description covers purpose, usage, and behavioral differences (persistence). Adequately complete for its complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with descriptive parameter descriptions. The description adds context on purpose and memory lifetime but not new parameter info. Baseline 3 is elevated to 4 because of the rich schema descriptions and added context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool stores a key-value pair in session memory, with examples of use cases and differentiation from siblings like recall and forget.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (save intermediate findings, user preferences, context across calls), but does not mention when not to use or compare to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_packages (A)

Search PyPI for packages by name. Returns latest version, summary, author, license, and project URLs.

Parameters (JSON Schema)
name (required): Exact PyPI package name (e.g., "requests", "numpy")
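
PyPI's JSON API has no keyword-search endpoint, so an exact-name lookup presumably reuses the same package endpoint as get_package and projects out the fields listed above. A sketch under that assumption:

```python
import json
import urllib.request

def search_packages(name: str) -> dict:
    """Exact-name lookup against the PyPI JSON API; PyPI exposes no keyword
    search endpoint, so the name must match exactly."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url) as resp:
        info = json.load(resp)["info"]
    return {
        "name": info["name"],
        "latest_version": info["version"],
        "summary": info["summary"],
        "author": info["author"],
        "license": info["license"],
        "project_urls": info["project_urls"],
    }

print(search_packages("numpy")["latest_version"])
```
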
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden. It discloses that the tool is a read-only lookup (returns data, no side effects) and notes the PyPI API's limitation. However, it doesn't mention potential network latency, error handling, or rate limits, which are relevant for a remote API call.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first states purpose and return data, second provides critical usage constraint. No wasted words. Information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given single parameter, no output schema, and clear schema coverage, the description sufficiently covers purpose, usage, and constraints. Could mention that it returns only the latest version (not historical), but not required for typical use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the only parameter ('Exact PyPI package name'). The description reinforces the exact name requirement, adding context beyond the schema. No additional parameters to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Look up') and resource ('PyPI package by exact name'), clearly stating it returns version, summary, author, license, and project URLs. It distinguishes from siblings like 'get_package' by emphasizing exact name matching and PyPI limitation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use this tool (exact name lookup) and warns against using it for keyword search, noting that PyPI has no keyword search API. This effectively steers the agent toward alternative approaches when only keywords are known.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
