
Python Dependency Manager Companion

by KemingHe

search_py_dep_man_docs

Search official Python dependency manager documentation to find verified commands, workflow guides, and troubleshooting solutions for pip, Poetry, uv, Conda, and PDM.

Instructions

Find comprehensive answers from latest official Python dependency manager documentation.

🎯 CORE VALUE: Access to authoritative, up-to-date official docs that general knowledge can't provide.

⚡ EXECUTION STRATEGY (85%+ first-call success):

  • MANDATORY: Multi-call progress report format:

    ### 📊 [Topic] Research - Progress Report [X]
    
    - ✅ **Gathered**: [key findings]
    - 🔄 **Next**: [specific gap]
    - 🎯 **Goal**: [deliverable]
  • WHY: Users lose confidence without progress visibility; structured updates prevent confusion

  • WHEN: Use progress report format for ANY multi-step research (migration guides, comparisons, complex tutorials)

  • ENFORCEMENT: Show progress header AFTER EVERY INDIVIDUAL TOOL CALL, not just at final response (why: continuous user confidence)

  • TIMING: Tool call 1 → Progress Report 1 → Tool call 2 → Progress Report 2 → etc. → Final Answer (why: step-by-step transparency)

  • PATTERNS: Start with proven query patterns below for maximum hit rate

🎯 PROVEN QUERY PATTERNS (use these exact phrases for maximum results):

  • Learning: "project setup tutorial", "workflow guide", "dependency management guide" (why: comprehensive coverage)

  • Commands: "command reference", "syntax comparison", "installation commands" (why: precise syntax)

  • Comparing: "tool A vs tool B", "migration guide", "feature comparison" (why: structured analysis)

  • Troubleshooting: "troubleshooting guide", "common errors", "best practices" (why: solution-focused)
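
For example, the comparison pattern above might translate into the following arguments. This is a minimal Python sketch: the tool names and top_n value are illustrative assumptions, and the keys match the Input Schema below.

    # Illustrative only: a comparison query built from the "tool A vs tool B"
    # pattern. A top_n of 7-10 follows the response optimization rules below;
    # the tool names are assumed examples, not an enumerated list of values.
    comparison_arguments = {
        "query": "poetry vs uv feature comparison",
        "top_n": 8,
    }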

🧠 RESPONSE OPTIMIZATION RULES:

  • Specific question → focused query + top_n 3-5 + bullet format + show progress (why: targeted precision)

  • Broad/ambiguous → comprehensive query + top_n 7-10 + ranked list + track gaps (why: exploration needed)

  • Tool comparison → "X vs Y" + no filter + top_n 7-10 + scoring table + cite sources (why: comprehensive coverage)

  • Command help → expand terms + top_n 5-7 + code examples + update progress (why: actionable guidance)
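
A minimal sketch of the two ends of this spectrum, assuming illustrative argument values (only the keys come from the input schema):

    # Focused call: specific command question, tool filter set, few results.
    focused_arguments = {
        "query": "installation commands",
        "package_filter": "pip",  # assumed value; the schema does not enumerate names
        "top_n": 3,
    }

    # Broad call: exploratory research, no filter, more results.
    broad_arguments = {
        "query": "dependency management guide",
        "top_n": 10,
    }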

📚 CITATION REQUIREMENTS (builds user confidence):

  • MANDATORY: Cite for commands, claims, comparisons, best practices, migration steps, troubleshooting advice (why: user confidence)

  • DENSITY: 1 citation per major section, 2-3 for complex guides/tutorials (why: consistent coverage)

  • FORMAT: "According to the official X guide" or "Command reference shows" (why: developer-friendly)

  • PLACEMENT: Immediately after stating command syntax, making performance claims, or giving advice (why: contextual validation)

  • PROGRESS INTEGRATION: Include citations naturally within progress updates to show source validation (why: transparency)

🚨 CRITICAL: Ground ALL responses in search results with citations (why: this tool's unique value over general knowledge).

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Search intent: 'workflow tutorial', 'command reference', 'best practices', 'troubleshooting', or 'comparison' | |
| package_filter | No | Focus on a specific tool when comparing or learning tool-specific workflows | |
| top_n | No | Number of top results to return; use more (7-10) for broad/ambiguous requests, fewer (3-5) for specific questions | |
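
A minimal sketch of how an MCP client might wrap these parameters in a standard JSON-RPC tools/call request. The framing is the generic MCP convention; the argument values are assumptions for illustration.

    import json

    # Generic MCP JSON-RPC framing around the documented arguments.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "search_py_dep_man_docs",
            "arguments": {
                "query": "troubleshooting guide",  # required
                "package_filter": "conda",  # optional; assumed value
                "top_n": 5,  # optional; 3-5 suits a specific question
            },
        },
    }
    print(json.dumps(request, indent=2))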

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
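
The schema guarantees only a result field and does not document its shape, so any example is an assumption (see the Behavior note below); a response body might be as simple as:

    # Assumption: "result" may hold excerpts, full pages, or summaries;
    # the tool description does not say which.
    response = {"result": "<matching documentation content>"}
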
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but fails to disclose critical behavioral traits such as the return format (excerpts, full pages, or summaries?), whether results are real-time or cached, rate limits, or authentication requirements. Instead, it focuses heavily on response formatting instructions (progress reports, citations) that describe how the AI should present results to users rather than how the tool itself behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 1/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is severely bloated with meta-commentary ('why: user confidence'), emoji-laden headers, and extensive instructions about how the AI should format its final response to the user (progress report formats, citation placement). This content belongs in system prompts or tool-calling guidelines, not the tool description. Not every sentence earns its place; much of it is redundant with the AI's general instruction set.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema (present, though it documents only an undescribed result field) and 100% input schema coverage, the description adequately covers the tool's scope. It identifies the authoritative source (official docs) and citation requirements. However, it lacks clarity on the output structure despite the output schema existing, relying instead on formatting instructions that assume specific return types without describing them.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds significant value through the 'PROVEN QUERY PATTERNS' section, providing specific example strings for the query parameter (e.g., 'project setup tutorial', 'command reference'), and explaining the semantic rationale for top_n values in different contexts (targeted precision vs. exploration). This goes beyond the schema's basic type information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The opening sentence clearly states the tool 'Find[s] comprehensive answers from latest official Python dependency manager documentation,' identifying the specific resource (Python dependency manager docs) and action (find answers). The 'CORE VALUE' section further distinguishes it from general knowledge. However, the purpose is somewhat obscured by the extensive procedural instructions that follow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Extensive guidance is provided on when to use specific query patterns ('workflow tutorial' for learning, 'X vs Y' for comparisons, etc.) and when to adjust top_n (3-5 for specific questions, 7-10 for broad requests). While there are no sibling tools to differentiate from, the description effectively maps user intents to specific parameter configurations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/KemingHe/python-dependency-manager-companion-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.