
tool_compare_sources

Analyze differences and similarities across multiple sources to identify common topics and variations in information.

Instructions

Compare information across multiple sources.

Analyzes differences and similarities between sources.

Args:
    topic: Topic being compared.
    sources: List of URLs (2-5) to compare.

Returns: Comparison report with common topics and differences.
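An MCP client invokes this tool through a standard tools/call request. The sketch below builds such a request payload; the topic and URLs are illustrative values, not part of this server's documentation.

```python
import json

# Illustrative JSON-RPC payload for calling tool_compare_sources via MCP.
# The "arguments" object must satisfy the input schema below:
# both "topic" and "sources" are required.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "tool_compare_sources",
        "arguments": {
            "topic": "Python async",
            "sources": [
                "https://docs.python.org/3/library/asyncio.html",
                "https://realpython.com/async-io-python/",
            ],
        },
    },
}
print(json.dumps(request, indent=2))
```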

Input Schema

Name      Required  Description  Default
topic     Yes
sources   Yes

Output Schema

Name      Required  Description  Default
result    Yes

Implementation Reference

  • The implementation of the compare_sources function.
    async def compare_sources(topic: str, sources: list[str]) -> str:
        """Compare information across multiple sources.
    
        Args:
            topic: Topic being compared.
            sources: List of URLs to compare.
    
        Returns:
            Comparison report showing differences and similarities.
    
        Example:
            >>> report = await compare_sources(
            ...     "Python async",
            ...     ["https://realpython.com/async", "https://docs.python.org/3/library/asyncio.html"]
            ... )
        """
        if len(sources) < 2:
            return "Error: Need at least 2 sources to compare"
    
        if len(sources) > 5:
            sources = sources[:5]  # Limit to 5 sources
    
        # Fetch all sources in parallel
        async def fetch_with_title(url: str) -> tuple[str, str, str | None]:
            """Fetch source and return (url, title, content)."""
            try:
                doc = await _scraper.fetch(url, retry=1)
                return (url, doc.title, doc.content)
            except Exception:
                return (url, "Failed", None)
    
        results = await asyncio.gather(*[fetch_with_title(url) for url in sources])
    
        # Build comparison report
        report_lines = [
            f"# Source Comparison: {topic}\n",
            "## Sources\n",
        ]
    
        for i, (url, title, content) in enumerate(results, 1):
            status = "✓" if content else "✗"
            report_lines.append(f"{i}. {status} [{title}]({url})")
    
        report_lines.append("\n## Content Analysis\n")
    
        # Extract key terms from each source
        import re
    
        all_words: list[list[str]] = []
        for _, _, content in results:
  • The MCP tool wrapper tool_compare_sources, which calls compare_sources.
    async def tool_compare_sources(topic: str, sources: list[str]) -> str:
        """Compare information across multiple sources.
    
        Analyzes differences and similarities between sources.
    
        Args:
            topic: Topic being compared.
            sources: List of URLs (2-5) to compare.
    
        Returns:
            Comparison report with common topics and differences.
        """
        return await compare_sources(topic, sources)
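The published implementation is truncated just after the key-term extraction begins, so how the comparison itself works is not visible. The following is purely an illustrative sketch of one plausible keyword-overlap approach, not the server's actual code: extract frequent terms per source, intersect them for common topics, and difference them for per-source variations.

```python
import re
from collections import Counter

# Minimal stopword list; a real implementation would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "with", "that"}

def key_terms(content: str, top_n: int = 20) -> set[str]:
    """Return the most frequent non-stopword terms (4+ letters) in a document."""
    words = re.findall(r"[a-z]{4,}", content.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {w for w, _ in counts.most_common(top_n)}

def overlap_report(contents: list[str]) -> tuple[set[str], list[set[str]]]:
    """Return (terms shared by all sources, terms unique to each source)."""
    term_sets = [key_terms(c) for c in contents if c]
    common = set.intersection(*term_sets) if term_sets else set()
    unique = [
        ts - set().union(*(other for other in term_sets if other is not ts))
        for ts in term_sets
    ]
    return common, unique
```

Under this sketch, the "Content Analysis" section of the report would list the common terms as shared topics and each unique set as that source's distinguishing vocabulary.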
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions analyzing differences and similarities but doesn't specify how the comparison is performed (e.g., text analysis, semantic similarity), what limitations exist (e.g., source accessibility, content types), or potential side effects (e.g., rate limits, data storage). For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with four sentences that each add value: purpose statement, elaboration on analysis, parameter explanations, and return value description. It's front-loaded with the core functionality and wastes no words, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (comparison tool with 2 parameters, no annotations, but with an output schema), the description is minimally adequate. The output schema existence means return values don't need explanation, but the description still lacks details on behavioral traits (e.g., how comparison works, error handling) and usage context. It covers basics but leaves gaps for effective tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate. It adds meaningful semantics: 'topic' is described as 'Topic being compared,' and 'sources' as 'List of URLs (2-5) to compare,' including a cardinality constraint (2-5 URLs). This clarifies beyond the basic schema types (string, array of strings), though it doesn't detail format requirements (e.g., URL validation) or provide examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
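The implementation above shows the server already enforces the 2-5 bound itself (rejecting fewer than 2 URLs and silently truncating to the first 5). A caller could mirror those checks client-side; this sketch simply restates the documented constraints, with the error message and truncation behavior taken from the implementation:

```python
def validate_sources(sources: list[str]) -> list[str]:
    """Pre-validate the 'sources' argument against the documented 2-5 bound."""
    if len(sources) < 2:
        # The server returns an error string for fewer than 2 sources.
        raise ValueError("Need at least 2 sources to compare")
    # The server silently keeps only the first 5 sources.
    return sources[:5]
```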

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare information across multiple sources' and 'Analyzes differences and similarities between sources.' This specifies the verb (compare/analyze) and resource (information across sources). However, it doesn't explicitly differentiate from sibling tools like tool_find_related or tool_monitor_changes, which might also involve comparison or analysis of sources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like tool_find_related, tool_monitor_changes, and tool_search_web that might overlap in functionality, there's no indication of specific contexts, prerequisites, or exclusions for using tool_compare_sources. The agent must infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
