YaparAI MCP Server

by ilhankilic

compare_competitors

Compare up to 4 competitors on key metrics like PageSpeed score, followers, post frequency, and product count to support SWOT analysis and positioning decisions.

Instructions

Compare 2–4 competitors on key metrics.

Returns latest metric snapshots for each competitor including PageSpeed score, total followers, posts in last 30 days, and product count. Use this as the basis for SWOT analysis or positioning decisions.
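The Returns note can be made concrete with a sketch of a response. This is illustrative only: the snapshot field names below (pagespeed_score, total_followers, posts_last_30_days, product_count) are assumptions inferred from the metrics listed above, not a documented API contract.

```python
# Hypothetical response shape; field names are illustrative placeholders
# inferred from the listed metrics, not a documented contract.
response = {
    "metrics": [
        {
            "competitor_id": "11111111-1111-1111-1111-111111111111",
            "pagespeed_score": 87,
            "total_followers": 15200,
            "posts_last_30_days": 24,
            "product_count": 312,
        },
        {
            "competitor_id": "22222222-2222-2222-2222-222222222222",
            "pagespeed_score": 73,
            "total_followers": 48000,
            "posts_last_30_days": 9,
            "product_count": 120,
        },
    ]
}

# One snapshot entry per compared competitor, keyed by competitor_id.
assert len(response["metrics"]) == len({m["competitor_id"] for m in response["metrics"]})
```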

Input Schema

Name             Required   Description
competitor_ids   Yes        2–4 competitor UUIDs
org_id           No         Optional; overrides the org bound to the API key
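A valid arguments object can be sketched as follows. The UUIDs are placeholders, not real competitor IDs; the check mirrors the tool's own 2–4 constraint before any call is made.

```python
import uuid

# Example call arguments; the UUIDs below are placeholders, not real IDs.
arguments = {
    "competitor_ids": [
        "11111111-1111-1111-1111-111111111111",
        "22222222-2222-2222-2222-222222222222",
        "33333333-3333-3333-3333-333333333333",
    ],
    # org_id is optional; include it only to override the org bound to the API key:
    # "org_id": "44444444-4444-4444-4444-444444444444",
}

# Mirror the tool's own constraint locally before invoking it.
assert 2 <= len(arguments["competitor_ids"]) <= 4
for cid in arguments["competitor_ids"]:
    uuid.UUID(cid)  # raises ValueError for malformed UUID strings
```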

Output Schema

No fields are declared in the output schema; the response shape ({"metrics": [...]}) is documented in the tool's docstring.

Implementation Reference

  • The main tool handler function. Validates that 2–4 competitor IDs are provided, then delegates to the HTTP client.
    async def compare_competitors(
        competitor_ids: list[str],
        org_id: str | None = None,
    ) -> dict:
        """
        Compare 2–4 competitors on key metrics.
    
        Returns latest metric snapshots for each competitor including
        PageSpeed score, total followers, posts in last 30 days, and
        product count. Use this as the basis for SWOT analysis or
        positioning decisions.
    
        Args:
            competitor_ids: 2–4 competitor UUIDs
            org_id: Optional — override the org bound to the API key
    
        Returns:
            {"metrics": [...]} — one entry per competitor with KPI snapshot.
        """
        if not 2 <= len(competitor_ids) <= 4:
            raise ValueError("competitor_ids must have 2 to 4 items")
        client = YaparAIClient()
        return await client.enterprise_compare_competitors(competitor_ids, org_id=org_id)
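The range check above fails fast, before any network call. A standalone sketch of that guard, using the same error message:

```python
def check_competitor_count(competitor_ids: list[str]) -> None:
    # Same guard as the handler: reject fewer than 2 or more than 4 IDs.
    if not 2 <= len(competitor_ids) <= 4:
        raise ValueError("competitor_ids must have 2 to 4 items")

check_competitor_count(["a", "b"])  # accepted
try:
    check_competitor_count(["only-one"])
except ValueError as exc:
    print(exc)  # prints: competitor_ids must have 2 to 4 items
```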
  • Registers the compare_competitors function as an MCP tool on the FastMCP server.
    mcp.tool(compare_competitors)
  • Type signature and docstring define the input schema (list of 2-4 competitor UUIDs, optional org_id) and output shape (dict with metrics).
  • HTTP client method that sends a POST request to /v1/public/enterprise/competitors/compare with the competitor_ids payload.
    async def enterprise_compare_competitors(
        self, competitor_ids: list[str], org_id: str | None = None
    ) -> dict:
        headers = {"X-Organization-Id": org_id} if org_id else {}
        return await self._request(
            "POST",
            "/v1/public/enterprise/competitors/compare",
            json={"competitor_ids": competitor_ids},
            headers=headers,
        )
  • Import of the compare_competitors function into the server module from the enterprise tools module.
    from yaparai.tools.enterprise import (
        list_competitors,
        get_competitor,
        compare_competitors,
        list_org_products,
        create_org_product,
        update_product_stock,
    )
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries transparency burden. Describes 'latest metric snapshots' and lists returned metrics, making behavior clear. No hidden destructive actions or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with front-loaded purpose. No unnecessary words. Efficiently conveys purpose, metrics, and use case.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose, metrics, and use case. Has output schema. Missing potential limitations like data freshness or prerequisites, but sufficient for selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with good descriptions. The description adds context about the 2–4 competitor range and the returned metrics, but does not significantly extend the schema information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it compares 2-4 competitors and lists specific metrics (PageSpeed, followers, posts, product count). Distinguishes from sibling tools like get_competitor (individual) and list_competitors (list).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states use for SWOT analysis or positioning decisions, providing good context. Lacks explicit when-not-to-use guidance, but siblings offer alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
