
TrendRadar

by funinii

analyze_topic_trend

Analyzes topic trends by tracking popularity changes, detecting viral spikes, predicting future hotspots, and examining lifecycle patterns to understand topic evolution over time.

Instructions

Unified topic trend analysis tool - integrates multiple trend analysis modes

Important: date range handling. When the user says "本周" ("this week"), "最近7天" ("last 7 days"), or a similar natural-language range, first call the resolve_date_range tool to obtain exact dates:

  1. Call resolve_date_range("本周") → get {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}

  2. Pass the returned date_range into this tool

Args:
- topic: topic keyword (required)
- analysis_type: analysis mode, one of:
  - "trend": popularity trend analysis (tracks changes in a topic's popularity)
  - "lifecycle": lifecycle analysis (the full cycle from appearance to disappearance)
  - "viral": anomalous popularity detection (identifies topics that suddenly go viral)
  - "predict": topic prediction (forecasts likely future hotspots)
- date_range: date range (trend and lifecycle modes), optional
  - Format: {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}
  - How to obtain: call the resolve_date_range tool to parse natural-language dates
  - Default: the last 7 days when not specified
- granularity: time granularity (trend mode), default "day" (only "day" is supported, because the underlying data is aggregated per day)
- threshold: popularity-spike multiplier threshold (viral mode), default 3.0
- time_window: detection window in hours (viral mode), default 24
- lookahead_hours: hours to look ahead (predict mode), default 6
- confidence_threshold: confidence threshold (predict mode), default 0.7

Returns: trend analysis result in JSON format

Examples:
User: "分析AI本周的趋势" ("Analyze AI's trend this week")
Recommended call flow:
1. resolve_date_range("本周") → {"date_range": {"start": "2025-11-18", "end": "2025-11-26"}}
2. analyze_topic_trend(topic="AI", date_range={"start": "2025-11-18", "end": "2025-11-26"})

User: "看看特斯拉最近30天的热度" ("Show Tesla's popularity over the last 30 days")
Recommended call flow:
1. resolve_date_range("最近30天") → {"date_range": {"start": "2025-10-28", "end": "2025-11-26"}}
2. analyze_topic_trend(topic="特斯拉", analysis_type="lifecycle", date_range=...)
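The two-step flow above hinges on turning a relative phrase into an inclusive date range. As an illustration only (resolve_recent_days is a hypothetical helper; the real resolve_date_range tool runs server-side and may behave differently), a "最近N天" (last N days) phrase resolves like this:

```python
from datetime import date, timedelta

def resolve_recent_days(n_days: int, today: date) -> dict:
    """Resolve a "最近N天" (last N days) phrase to an inclusive date range in
    the {"date_range": {"start": ..., "end": ...}} shape this tool expects."""
    start = today - timedelta(days=n_days - 1)
    return {"date_range": {"start": start.isoformat(), "end": today.isoformat()}}

# "最近7天" (last 7 days) as of 2025-11-17
print(resolve_recent_days(7, date(2025, 11, 17)))
# {'date_range': {'start': '2025-11-11', 'end': '2025-11-17'}}
```

Note the range is inclusive on both ends, which matches the 7-day example in the docstrings below (2025-11-11 through 2025-11-17).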

Input Schema

Name                 | Required | Description | Default
---------------------|----------|-------------|--------
topic                | Yes      |             |
analysis_type        | No       |             | trend
date_range           | No       |             |
granularity          | No       |             | day
threshold            | No       |             |
time_window          | No       |             |
lookahead_hours      | No       |             |
confidence_threshold | No       |             |
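Read together with the defaults in the Instructions, one plausible argument payload per mode looks like the following (values are illustrative; only topic is required, and unused parameters fall back to their defaults):

```python
# Illustrative argument payloads for each analysis_type
# (shapes taken from the tool docstring; only "topic" is required)
trend_args = {
    "topic": "AI",
    "analysis_type": "trend",
    "date_range": {"start": "2025-11-18", "end": "2025-11-26"},
    "granularity": "day",          # the only supported granularity
}
viral_args = {
    "topic": "比特币",
    "analysis_type": "viral",
    "threshold": 3.0,              # flag topics whose count jumps 3x
    "time_window": 24,             # within a 24-hour window
}
predict_args = {
    "topic": "ChatGPT",
    "analysis_type": "predict",
    "lookahead_hours": 6,
    "confidence_threshold": 0.7,
}
```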

Output Schema

Name   | Required | Description | Default
-------|----------|-------------|--------
result | Yes      |             |

Implementation Reference

  • The handler 'analyze_topic_trend_unified' in the 'AnalyticsTools' class, which orchestrates the different topic trend analysis modes:
    def analyze_topic_trend_unified(
        self,
        topic: str,
        analysis_type: str = "trend",
        date_range: Optional[Dict[str, str]] = None,
        granularity: str = "day",
        threshold: float = 3.0,
        time_window: int = 24,
        lookahead_hours: int = 6,
        confidence_threshold: float = 0.7
    ) -> Dict:
        """
        Unified topic trend analysis tool - integrates multiple trend analysis modes
    
        Args:
            topic: topic keyword (required)
            analysis_type: analysis mode, one of:
                - "trend": popularity trend analysis (tracks changes in a topic's popularity)
                - "lifecycle": lifecycle analysis (the full cycle from appearance to disappearance)
                - "viral": anomalous popularity detection (identifies topics that suddenly go viral)
                - "predict": topic prediction (forecasts likely future hotspots)
            date_range: date range (trend and lifecycle modes), optional
                       - **Format**: {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}
                       - **Default**: the last 7 days when not specified
            granularity: time granularity (trend mode), default "day" (only "day" is
                supported, because the underlying data is aggregated per day)
            threshold: popularity-spike multiplier threshold (viral mode), default 3.0
            time_window: detection window in hours (viral mode), default 24
            lookahead_hours: hours to look ahead (predict mode), default 6
            confidence_threshold: confidence threshold (predict mode), default 0.7
    
        Returns:
            Trend analysis result dict
    
        Examples (assuming today is 2025-11-17):
            - User: "分析AI最近7天的趋势" → analyze_topic_trend_unified(topic="人工智能", analysis_type="trend", date_range={"start": "2025-11-11", "end": "2025-11-17"})
            - User: "看看特斯拉本月的热度" → analyze_topic_trend_unified(topic="特斯拉", analysis_type="lifecycle", date_range={"start": "2025-11-01", "end": "2025-11-17"})
            - analyze_topic_trend_unified(topic="比特币", analysis_type="viral", threshold=3.0)
            - analyze_topic_trend_unified(topic="ChatGPT", analysis_type="predict", lookahead_hours=6)
        """
        try:
            # Parameter validation
            topic = validate_keyword(topic)
    
            if analysis_type not in ["trend", "lifecycle", "viral", "predict"]:
                raise InvalidParameterError(
                    f"无效的分析类型: {analysis_type}",
                    suggestion="支持的类型: trend, lifecycle, viral, predict"
                )
    
            # Route to the matching method for the analysis type
            if analysis_type == "trend":
                return self.get_topic_trend_analysis(
                    topic=topic,
                    date_range=date_range,
                    granularity=granularity
                )
            elif analysis_type == "lifecycle":
                return self.analyze_topic_lifecycle(
                    topic=topic,
                    date_range=date_range
                )
            elif analysis_type == "viral":
                # viral mode does not use the topic parameter; run generic detection
                return self.detect_viral_topics(
                    threshold=threshold,
                    time_window=time_window
                )
            else:  # predict
                # predict mode does not use the topic parameter; run generic prediction
                return self.predict_trending_topics(
                    lookahead_hours=lookahead_hours,
                    confidence_threshold=confidence_threshold
                )
    
        except MCPError as e:
            return {
                "success": False,
                "error": e.to_dict()
            }
        except Exception as e:
            return {
                "success": False,
                "error": {
                    "code": "INTERNAL_ERROR",
                    "message": str(e)
                }
            }
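The mode routing above reduces to a validate-then-dispatch pattern; this standalone sketch mirrors it with hypothetical stub handlers (dispatch_analysis and the lambdas are illustrative, not the real methods):

```python
from typing import Callable, Dict

def dispatch_analysis(analysis_type: str,
                      handlers: Dict[str, Callable[[], str]]) -> dict:
    """Validate the requested mode, then route to its handler, mirroring the
    validate-then-if/elif chain in analyze_topic_trend_unified."""
    if analysis_type not in handlers:
        return {
            "success": False,
            "error": {
                "code": "INVALID_PARAMETER",
                "message": f"invalid analysis_type: {analysis_type}",
                "suggestion": "use one of: " + ", ".join(handlers),
            },
        }
    return {"success": True, "result": handlers[analysis_type]()}

# Stub handlers; note that viral and predict ignore the topic entirely
handlers = {
    "trend": lambda: "trend analysis",
    "lifecycle": lambda: "lifecycle analysis",
    "viral": lambda: "generic viral detection",
    "predict": lambda: "generic hotspot prediction",
}
print(dispatch_analysis("viral", handlers)["result"])  # generic viral detection
```

An unknown mode yields the same structured error shape the handler returns, rather than raising to the caller.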
  • The actual implementation of 'get_topic_trend_analysis', which performs the day-by-day trend tracking and statistics calculation:
    def get_topic_trend_analysis(
        self,
        topic: str,
        date_range: Optional[Dict[str, str]] = None,
        granularity: str = "day"
    ) -> Dict:
        """
        Popularity trend analysis - tracks how a specific topic's popularity changes over time
    
        Args:
            topic: topic keyword
            date_range: date range (optional)
                       - **Format**: {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}
                       - **Default**: the last 7 days when not specified
            granularity: time granularity; only "day" is supported
    
        Returns:
            Trend analysis result dict
    
        Examples:
            Sample user questions:
            - "帮我分析一下'人工智能'这个话题最近一周的热度趋势" (analyze the '人工智能' topic's trend over the past week)
            - "查看'比特币'过去一周的热度变化" (popularity changes for '比特币' over the past week)
            - "看看'iPhone'最近7天的趋势如何" (the 'iPhone' trend over the last 7 days)
            - "分析'特斯拉'最近一个月的热度趋势" (the '特斯拉' trend over the past month)
            - "查看'ChatGPT'2024年12月的趋势变化" (the 'ChatGPT' trend for December 2024)
    
            Code examples:
            >>> tools = AnalyticsTools()
            >>> # Analyze a 7-day trend (assuming today is 2025-11-17)
            >>> result = tools.get_topic_trend_analysis(
            ...     topic="人工智能",
            ...     date_range={"start": "2025-11-11", "end": "2025-11-17"},
            ...     granularity="day"
            ... )
            >>> # Analyze a past month's trend
            >>> result = tools.get_topic_trend_analysis(
            ...     topic="特斯拉",
            ...     date_range={"start": "2024-12-01", "end": "2024-12-31"},
            ...     granularity="day"
            ... )
            >>> print(result['trend_data'])
        """
        try:
            # Validate parameters
            topic = validate_keyword(topic)
    
            # Validate granularity (only "day" is supported)
            if granularity != "day":
                from ..utils.errors import InvalidParameterError
                raise InvalidParameterError(
                    f"不支持的粒度参数: {granularity}",
                    suggestion="当前仅支持 'day' 粒度,因为底层数据按天聚合"
                )
    
            # Resolve the date range (defaults to the last 7 days)
            if date_range:
                from ..utils.validators import validate_date_range
                date_range_tuple = validate_date_range(date_range)
                start_date, end_date = date_range_tuple
            else:
                # Default: the last 7 days
                end_date = datetime.now()
                start_date = end_date - timedelta(days=6)
    
            # Collect trend data
            trend_data = []
            current_date = start_date
    
            while current_date <= end_date:
                try:
                    all_titles, _, _ = self.data_service.parser.read_all_titles_for_date(
                        date=current_date
                    )
    
                    # Count topic occurrences for this day
                    count = 0
                    matched_titles = []
    
                    for _, titles in all_titles.items():
                        for title in titles.keys():
                            if topic.lower() in title.lower():
                                count += 1
                                matched_titles.append(title)
    
                    trend_data.append({
                        "date": current_date.strftime("%Y-%m-%d"),
                        "count": count,
                        "sample_titles": matched_titles[:3]  # keep only the first 3 samples
                    })
    
                except DataNotFoundError:
                    trend_data.append({
                        "date": current_date.strftime("%Y-%m-%d"),
                        "count": 0,
                        "sample_titles": []
                    })
    
                # Advance one day
                current_date += timedelta(days=1)
    
            # Compute trend metrics
            counts = [item["count"] for item in trend_data]
            total_days = (end_date - start_date).days + 1
    
            if len(counts) >= 2:
                # Compute the rise/fall rate
                first_non_zero = next((c for c in counts if c > 0), 0)
                last_count = counts[-1]
    
                if first_non_zero > 0:
                    change_rate = ((last_count - first_non_zero) / first_non_zero) * 100
                else:
                    change_rate = 0
    
                # Find the peak date
                max_count = max(counts)
                peak_index = counts.index(max_count)
                peak_time = trend_data[peak_index]["date"]
            else:
                change_rate = 0
                peak_time = None
                max_count = 0
    
            return {
                "success": True,
                "topic": topic,
                "date_range": {
                    "start": start_date.strftime("%Y-%m-%d"),
                    "end": end_date.strftime("%Y-%m-%d"),
                    "total_days": total_days
                },
                "granularity": granularity,
                "trend_data": trend_data,
                "statistics": {
                    "total_mentions": sum(counts),
                    "average_mentions": round(sum(counts) / len(counts), 2) if counts else 0,
                    "peak_count": max_count,
                    "peak_time": peak_time,
                    "change_rate": round(change_rate, 2)
                },
                "trend_direction": "上升" if change_rate > 10 else "下降" if change_rate < -10 else "稳定"
            }
    
        except MCPError as e:
            return {
                "success": False,
                "error": e.to_dict()
            }
        except Exception as e:
            return {
                "success": False,
                "error": {
                    "code": "INTERNAL_ERROR",
                    "message": str(e)
                }
            }
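The statistics block above can be exercised in isolation. This sketch reimplements just the change-rate, peak, and direction logic on a toy count series (a simplified extract under the assumption that the per-day counts are already collected; it is not the full method):

```python
from typing import List

def trend_stats(counts: List[int]) -> dict:
    """Change rate from the first non-zero count to the last count, plus the
    peak value and its index, mirroring the statistics block above."""
    if len(counts) >= 2:
        first_non_zero = next((c for c in counts if c > 0), 0)
        last_count = counts[-1]
        change_rate = (
            ((last_count - first_non_zero) / first_non_zero) * 100
            if first_non_zero > 0 else 0
        )
        peak_count = max(counts)
        peak_index = counts.index(peak_count)
    else:
        change_rate, peak_count, peak_index = 0, 0, None
    direction = "上升" if change_rate > 10 else "下降" if change_rate < -10 else "稳定"
    return {
        "change_rate": round(change_rate, 2),
        "peak_count": peak_count,
        "peak_index": peak_index,
        "trend_direction": direction,
    }

print(trend_stats([0, 2, 5, 8]))  # change rate from 2 to 8 is +300%
```

Note the baseline is the first non-zero count, not the first day, so leading zero-mention days never produce a division by zero.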
  • The MCP tool function 'analyze_topic_trend' in 'server.py', which serves as the entry point and registers the tool:
    async def analyze_topic_trend(
        topic: str,
        analysis_type: str = "trend",
        date_range: Optional[Dict[str, str]] = None,
        granularity: str = "day",
        threshold: float = 3.0,
        time_window: int = 24,
        lookahead_hours: int = 6,
        confidence_threshold: float = 0.7
    ) -> str:
        """
        Unified topic trend analysis tool - integrates multiple trend analysis modes
    
        **Important: date range handling**
        When the user says "本周" ("this week"), "最近7天" ("last 7 days"), or a similar natural-language range, first call the resolve_date_range tool to obtain exact dates:
        1. Call resolve_date_range("本周") → get {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}
        2. Pass the returned date_range into this tool
    
        Args:
            topic: topic keyword (required)
            analysis_type: analysis mode, one of:
                - "trend": popularity trend analysis (tracks changes in a topic's popularity)
                - "lifecycle": lifecycle analysis (the full cycle from appearance to disappearance)
                - "viral": anomalous popularity detection (identifies topics that suddenly go viral)
                - "predict": topic prediction (forecasts likely future hotspots)
            date_range: date range (trend and lifecycle modes), optional
                        - **Format**: {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}
                        - **How to obtain**: call the resolve_date_range tool to parse natural-language dates
                        - **Default**: the last 7 days when not specified
            granularity: time granularity (trend mode), default "day" (only "day" is supported, because the underlying data is aggregated per day)
            threshold: popularity-spike multiplier threshold (viral mode), default 3.0
            time_window: detection window in hours (viral mode), default 24
            lookahead_hours: hours to look ahead (predict mode), default 6
            confidence_threshold: confidence threshold (predict mode), default 0.7
    
        Returns:
            Trend analysis result as a JSON string
    
        Examples:
            User: "分析AI本周的趋势" ("Analyze AI's trend this week")
            Recommended call flow:
            1. resolve_date_range("本周") → {"date_range": {"start": "2025-11-18", "end": "2025-11-26"}}
            2. analyze_topic_trend(topic="AI", date_range={"start": "2025-11-18", "end": "2025-11-26"})
    
            User: "看看特斯拉最近30天的热度" ("Show Tesla's popularity over the last 30 days")
            Recommended call flow:
            1. resolve_date_range("最近30天") → {"date_range": {"start": "2025-10-28", "end": "2025-11-26"}}
            2. analyze_topic_trend(topic="特斯拉", analysis_type="lifecycle", date_range=...)
        """
        tools = _get_tools()
        result = tools['analytics'].analyze_topic_trend_unified(
            topic=topic,
            analysis_type=analysis_type,
            date_range=date_range,
            granularity=granularity,
            threshold=threshold,
            time_window=time_window,
            lookahead_hours=lookahead_hours,
            confidence_threshold=confidence_threshold
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by explaining the prerequisite dependency on resolve_date_range for natural language dates, default behaviors (e.g., default 7 days if date_range not specified), and mode-specific parameter usage. However, it doesn't mention rate limits, authentication needs, or potential side effects, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (important note, args, returns, examples) and uses bullet points efficiently. Although comprehensive, it contains some redundancy (e.g., the date_range instructions are repeated), but the length is justified given the tool's complexity. Every sentence serves a purpose in clarifying usage or parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, multiple analysis modes), no annotations, and an output schema present, the description is highly complete. It covers purpose, prerequisites, parameter semantics, usage examples, and behavioral context. The output schema handles return values, so the description appropriately focuses on input guidance and workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully compensate. It excels by providing detailed semantics for all 8 parameters: explaining required vs. optional status, listing analysis_type enum values with descriptions, specifying date_range format and acquisition method, detailing defaults, and clarifying which parameters apply to which analysis modes. This adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as '统一话题趋势分析工具 - 整合多种趋势分析模式' (unified topic trend analysis tool - integrating multiple trend analysis modes). It specifies the verb 'analyze' with the resource 'topic trends' and distinguishes itself from siblings like analyze_sentiment or get_trending_topics by focusing specifically on trend analysis rather than sentiment or listing trending topics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool, including a prerequisite step to call resolve_date_range for natural language date ranges. It also distinguishes usage by analysis_type (trend, lifecycle, viral, predict) and specifies when date_range is required (trend and lifecycle modes) versus optional or not applicable. The examples clearly demonstrate the recommended workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
