
Documentation Search MCP Server

by gemini2026

compare_library_security

Compare security scores of multiple libraries to identify safer options for your project. Analyze vulnerabilities and get recommendations for informed selection.

Instructions

Compare security scores across multiple libraries to help with selection.

Args:
    libraries: List of library names to compare
    ecosystem: Package ecosystem for all libraries

Returns:
    Security comparison with rankings and recommendations

Input Schema

Name       Required  Description                           Default
libraries  Yes       List of library names to compare      —
ecosystem  No        Package ecosystem for all libraries   PyPI
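Under the schema above, a minimal argument payload looks like the following (the surrounding MCP call envelope is omitted; only the argument names, the 10-library limit, and the PyPI default come from this page):

```python
# Example arguments for compare_library_security.
# "libraries" is required; "ecosystem" defaults to "PyPI" when omitted.
arguments = {
    "libraries": ["fastapi", "flask", "django"],
    "ecosystem": "PyPI",  # optional
}

# The handler rejects more than 10 libraries, so it is worth
# validating client-side as well.
assert len(arguments["libraries"]) <= 10
```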

Implementation Reference

  • MCP tool handler for 'compare_library_security'. Scans each library in parallel via security_integration.get_security_summary, sorts the results by score, adds rankings and ratings, and generates an overall recommendation.
    async def compare_library_security(libraries: List[str], ecosystem: str = "PyPI"):
        """
        Compare security scores across multiple libraries to help with selection.
    
        Args:
            libraries: List of library names to compare
            ecosystem: Package ecosystem for all libraries
    
        Returns:
            Security comparison with rankings and recommendations
        """
        await enforce_rate_limit("compare_library_security")
    
        from .vulnerability_scanner import security_integration
    
        if len(libraries) > 10:
            return {"error": "Maximum 10 libraries allowed for comparison"}
    
        results = []
    
        # Scan all libraries in parallel for faster comparison
        scan_tasks = [
            security_integration.get_security_summary(lib, ecosystem) for lib in libraries
        ]
    
        try:
            summaries = await asyncio.gather(*scan_tasks, return_exceptions=True)
    
            for library, summary_item in zip(libraries, summaries):
                if isinstance(summary_item, Exception):
                    results.append(
                        {
                            "library": library,
                            "security_score": 0,
                            "status": "scan_failed",
                            "error": str(summary_item),
                        }
                    )
                else:
                    summary = summary_item
                    results.append(
                        {
                            "library": library,
                            "security_score": summary.get("security_score", 0),  # type: ignore
                            "status": summary.get("status", "unknown"),  # type: ignore
                            "vulnerabilities": summary.get("total_vulnerabilities", 0),  # type: ignore
                            "critical_vulnerabilities": summary.get(
                                "critical_vulnerabilities", 0
                            ),  # type: ignore
                            "recommendation": summary.get("primary_recommendation", ""),  # type: ignore
                        }
                    )
    
            # Sort by security score (highest first)
            results.sort(key=lambda x: x.get("security_score", 0), reverse=True)
    
            # Add rankings
            for i, result in enumerate(results):
                result["rank"] = i + 1
                score = result.get("security_score", 0)
                if score >= 90:
                    result["rating"] = "🛡️ Excellent"
                elif score >= 70:
                    result["rating"] = "✅ Secure"
                elif score >= 50:
                    result["rating"] = "⚠️ Caution"
                else:
                    result["rating"] = "🚨 High Risk"
    
            # Generate overall recommendation
            if results:
                best_lib = results[0]
    
                if best_lib.get("security_score", 0) >= 80:
                    overall_rec = (
                        f"✅ Recommended: {best_lib['library']} has excellent security"
                    )
                elif best_lib.get("security_score", 0) >= 60:
                    overall_rec = f"⚠️ Proceed with caution: {best_lib['library']} is the most secure option"
                else:
                    overall_rec = "🚨 Security concerns: All libraries have significant vulnerabilities"
            else:
                overall_rec = "Unable to generate recommendation"
    
            return {
                "comparison_results": results,
                "total_libraries": len(libraries),
                "scan_timestamp": datetime.now().isoformat(),
                "overall_recommendation": overall_rec,
                "ecosystem": ecosystem,
            }
    
        except Exception as e:
            return {
                "error": f"Security comparison failed: {str(e)}",
                "libraries": libraries,
                "ecosystem": ecosystem,
            }
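The handler's error-tolerant fan-out rests on asyncio.gather(..., return_exceptions=True): a failed scan comes back as an exception object in its original position instead of cancelling the whole batch. A self-contained sketch of that pattern (scan is a stand-in for get_security_summary):

```python
import asyncio

async def scan(name: str) -> dict:
    # Simulated per-library scan; one library fails.
    if name == "bad-lib":
        raise RuntimeError("registry lookup failed")
    return {"library": name, "security_score": 85}

async def main() -> list:
    names = ["fastapi", "bad-lib"]
    tasks = [scan(n) for n in names]
    # return_exceptions=True keeps results positional: failures are
    # returned as exception objects rather than raised.
    gathered = await asyncio.gather(*tasks, return_exceptions=True)
    out = []
    for name, item in zip(names, gathered):
        if isinstance(item, Exception):
            out.append({"library": name, "status": "scan_failed"})
        else:
            out.append({**item, "status": "ok"})
    return out

results = asyncio.run(main())
```

This is why the handler can always return a full comparison: a library whose scan fails is ranked with a score of 0 instead of aborting the request.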
  • Key helper function called by the handler. Fetches security summary for a single library using VulnerabilityScanner.scan_library, returns score, vuln counts, status, and recommendation.
    async def get_security_summary(
        self, library_name: str, ecosystem: str = "PyPI"
    ) -> Dict[str, Any]:
        """Get concise security summary"""
        try:
            report = await self.scanner.scan_library(library_name, ecosystem)
            return {
                "library": library_name,
                "security_score": report.security_score,
                "total_vulnerabilities": report.total_vulnerabilities,
                "critical_vulnerabilities": report.critical_count,
                "status": "secure" if report.security_score >= 70 else "at_risk",
                "primary_recommendation": (
                    report.recommendations[0]
                    if report.recommendations
                    else "No specific recommendations"
                ),
            }
        except Exception as e:
            return {
                "library": library_name,
                "security_score": 50.0,
                "error": str(e),
                "status": "unknown",
            }
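The 70-point secure/at_risk boundary and the neutral 50.0 error fallback in get_security_summary can be distilled into a standalone sketch (summarize is a hypothetical helper, not part of the server):

```python
from typing import Optional

def summarize(score: Optional[float]) -> dict:
    # A score of 70 is the secure/at_risk boundary; a failed scan
    # falls back to a neutral 50.0 rather than 0.
    if score is None:  # scan failed
        return {"security_score": 50.0, "status": "unknown"}
    return {
        "security_score": score,
        "status": "secure" if score >= 70 else "at_risk",
    }
```

Note the asymmetry with the handler: get_security_summary reports 50.0 for its own failures, while the handler assigns 0 when the gathered task itself raises, so the two failure paths rank differently.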
  • Core scanning logic in VulnerabilityScanner. Performs parallel scans on OSV, GitHub advisories, Safety DB; aggregates vulnerabilities; computes security score and report.
    async def scan_library(
        self, library_name: str, ecosystem: str = "PyPI"
    ) -> SecurityReport:
        """
        Comprehensive vulnerability scan for a library
    
        Args:
            library_name: Name of the library (e.g., "fastapi", "react")
            ecosystem: Package ecosystem ("PyPI", "npm", "Maven", etc.)
    
        Returns:
            SecurityReport with vulnerability details
        """
        cache_key = f"{library_name}_{ecosystem}"
    
        # Check cache first
        if self._is_cached(cache_key):
            return self.cache[cache_key]["data"]
    
        vulnerabilities = []
    
        # Scan multiple sources in parallel
        scan_tasks = [
            self._scan_osv(library_name, ecosystem),
            self._scan_github_advisories(library_name, ecosystem),
            (
                self._scan_safety_db(library_name)
                if ecosystem.lower() == "pypi"
                else self._empty_scan()
            ),
        ]
    
        try:
            results = await asyncio.gather(*scan_tasks, return_exceptions=True)
    
            for result in results:
                if isinstance(result, list):
                    vulnerabilities.extend(result)
                elif isinstance(result, Exception):
                    print(f"Scan error: {result}", file=sys.stderr)
    
        except Exception as e:
            print(f"Vulnerability scan failed for {library_name}: {e}", file=sys.stderr)
    
        # Generate security report
        report = self._generate_security_report(
            library_name, ecosystem, vulnerabilities
        )
    
        # Cache the result
        self._cache_result(cache_key, report)
    
        return report
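The excerpt calls _is_cached and _cache_result but does not show them. A plausible TTL-based sketch that mirrors the cache_key pattern above (the one-hour TTL and the internal structure are assumptions, not taken from the source):

```python
import time

CACHE_TTL_SECONDS = 3600  # assumed TTL; not shown in the excerpt

class ScanCache:
    """Minimal TTL cache keyed by "{library}_{ecosystem}" (sketch)."""

    def __init__(self) -> None:
        self.cache: dict = {}

    def _is_cached(self, key: str) -> bool:
        entry = self.cache.get(key)
        return entry is not None and (time.time() - entry["ts"]) < CACHE_TTL_SECONDS

    def _cache_result(self, key: str, data) -> None:
        self.cache[key] = {"data": data, "ts": time.time()}

cache = ScanCache()
cache._cache_result("fastapi_PyPI", {"security_score": 92})
```

Whatever the real TTL is, the cache explains why repeated comparisons of the same libraries may not trigger fresh OSV/GitHub/Safety DB queries.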
  • Data class defining the structure of security reports returned by scans, including scores, vuln counts, and details.
    @dataclass
    class SecurityReport:
        """Comprehensive security report for a library"""
    
        library_name: str
        ecosystem: str  # "pypi", "npm", "maven", etc.
        scan_date: str
        total_vulnerabilities: int
        critical_count: int
        high_count: int
        medium_count: int
        low_count: int
        security_score: float  # 0-100, higher is better
        recommendations: List[str]
        vulnerabilities: List[Vulnerability]
        latest_secure_version: Optional[str]
    
        def to_dict(self) -> Dict[str, Any]:
            return {
                "library_name": self.library_name,
                "ecosystem": self.ecosystem,
                "scan_date": self.scan_date,
                "summary": {
                    "total_vulnerabilities": self.total_vulnerabilities,
                    "critical": self.critical_count,
                    "high": self.high_count,
                    "medium": self.medium_count,
                    "low": self.low_count,
                    "security_score": self.security_score,
                },
                "latest_secure_version": self.latest_secure_version,
                "recommendations": self.recommendations,
                "vulnerabilities": [vuln.to_dict() for vuln in self.vulnerabilities],
            }
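A trimmed, self-contained version of the report structure shows how the bare field annotations become a usable record via dataclasses (vulnerability details and several counters are omitted here for brevity):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class SecurityReport:
    """Trimmed sketch of the report structure above."""

    library_name: str
    ecosystem: str
    scan_date: str
    total_vulnerabilities: int = 0
    critical_count: int = 0
    security_score: float = 100.0  # 0-100, higher is better
    recommendations: List[str] = field(default_factory=list)
    latest_secure_version: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        return {
            "library_name": self.library_name,
            "ecosystem": self.ecosystem,
            "summary": {
                "total_vulnerabilities": self.total_vulnerabilities,
                "critical": self.critical_count,
                "security_score": self.security_score,
            },
            "recommendations": self.recommendations,
        }

report = SecurityReport("fastapi", "pypi", "2024-01-01", security_score=92.0)
d = report.to_dict()
```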
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It says the tool compares security scores and provides rankings and recommendations, but it omits critical behavioral details: which data sources are queried, the rate limit it enforces, any authentication requirements, and whether results come from real-time scans or a cache. That is insufficient for a tool that reaches out to external services on every call.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose in the first sentence. The 'Args' and 'Returns' sections add structure, though they could be more integrated. There's minimal waste, but it could be slightly more polished for a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (comparing multiple libraries for security), lack of annotations, and no output schema, the description is incomplete. It doesn't explain the return format (e.g., what 'rankings and recommendations' entail), data freshness, or error handling, making it inadequate for reliable agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining 'libraries' as 'List of library names to compare' and 'ecosystem' as 'Package ecosystem for all libraries', which clarifies the parameters beyond their schema titles. However, it doesn't specify format constraints (e.g., library name conventions) or the default value for 'ecosystem', leaving gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare security scores across multiple libraries to help with selection.' It specifies the verb ('compare'), resource ('security scores'), and outcome ('help with selection'). However, it doesn't explicitly differentiate from sibling tools like 'snyk_scan_library' or 'suggest_secure_libraries', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'snyk_scan_library' (which might scan individual libraries) or 'suggest_secure_libraries' (which might recommend libraries), leaving the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
