
TrendRadar

by funinii

analyze_sentiment

Analyze sentiment and popularity trends in news articles to understand public opinion on specific topics across social media platforms.

Instructions

Analyze the sentiment tendency and popularity trends of news.

Important: date range handling. When the user uses natural language such as "本周" (this week) or "最近7天" (last 7 days), first call the resolve_date_range tool to obtain exact dates:

  1. Call resolve_date_range("本周") → get {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}

  2. Pass the returned date_range into this tool

Args:

  • topic: Topic keyword (optional)

  • platforms: List of platform IDs, e.g. ['zhihu', 'weibo', 'douyin']. When unspecified, all platforms configured in config.yaml are used. Supported platforms come from the platforms section of config/config.yaml; each platform has a corresponding name field (e.g. "知乎", "微博") so the AI can recognize it.

  • date_range: Date range (optional). Format: {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}. Obtain it by calling the resolve_date_range tool to parse natural-language dates; when unspecified, today's data is queried by default.

  • limit: Number of news items to return; default 50, maximum 100. Note: this tool deduplicates news titles (the same title is kept only once across platforms), so the actual count returned may be lower than the requested limit.

  • sort_by_weight: Whether to sort by popularity weight; default True.

  • include_url: Whether to include URL links; default False (saves tokens).
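
To make the parameter shapes concrete, here is a minimal argument payload consistent with the description above. Every value is illustrative only; omitted optional fields fall back to the documented defaults (limit=50, sort_by_weight=True, include_url=False):

```python
# Hypothetical arguments for analyze_sentiment -- values are illustrative.
args = {
    "topic": "AI",                         # optional keyword filter
    "platforms": ["zhihu", "weibo"],       # platform IDs from config.yaml
    "date_range": {"start": "2025-11-18",  # obtained via resolve_date_range
                   "end": "2025-11-26"},
    "limit": 20,                           # capped at 100 by the tool
}
print(sorted(args["date_range"]))  # → ['end', 'start']
```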

Returns: Analysis results in JSON format, including sentiment distribution, popularity trends, and related news.

Examples:

User: "分析AI本周的情感倾向" ("Analyze this week's sentiment toward AI")
Recommended call flow:
1. resolve_date_range("本周") → {"date_range": {"start": "2025-11-18", "end": "2025-11-26"}}
2. analyze_sentiment(topic="AI", date_range={"start": "2025-11-18", "end": "2025-11-26"})

用户:"分析特斯拉最近7天的新闻情感"
推荐调用流程:
1. resolve_date_range("最近7天") → {"date_range": {"start": "2025-11-20", "end": "2025-11-26"}}
2. analyze_sentiment(topic="特斯拉", date_range={"start": "2025-11-20", "end": "2025-11-26"})
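
The two-step flow above can be mimicked locally. The sketch below is a stand-in for the resolve_date_range tool, assuming only that "last 7 days" means an inclusive window ending today; the helper name is hypothetical:

```python
from datetime import date, timedelta

def resolve_last_n_days(n, today=None):
    # Local stand-in for resolve_date_range("最近7天"): an inclusive
    # n-day window ending on `today`.
    today = today or date.today()
    start = today - timedelta(days=n - 1)
    return {"date_range": {"start": start.isoformat(), "end": today.isoformat()}}

# Reproduces the second example (today assumed to be 2025-11-26):
print(resolve_last_n_days(7, today=date(2025, 11, 26)))
# → {'date_range': {'start': '2025-11-20', 'end': '2025-11-26'}}
```

The returned date_range dict can then be passed straight into analyze_sentiment, as in the flow above.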

Important: data presentation strategy

  • This tool returns the complete analysis results and news list

  • Default presentation: show the complete analysis results (including all news items)

  • Filter only when the user explicitly asks to "summarize" or "pick the highlights"

Input Schema

Name            Required  Description  Default
topic           No
platforms       No
date_range      No
limit           No
sort_by_weight  No
include_url     No

Output Schema

Name    Required  Description  Default
result  Yes

Implementation Reference

  • The actual implementation of the 'analyze_sentiment' tool, which gathers news data, performs deduplication, sorts by weight, and generates an AI prompt for sentiment analysis.
    def analyze_sentiment(
        self,
        topic: Optional[str] = None,
        platforms: Optional[List[str]] = None,
        date_range: Optional[Dict[str, str]] = None,
        limit: int = 50,
        sort_by_weight: bool = True,
        include_url: bool = False
    ) -> Dict:
        """
        情感倾向分析 - 生成用于 AI 情感分析的结构化提示词
    
        本工具收集新闻数据并生成优化的 AI 提示词,你可以将其发送给 AI 进行深度情感分析。
    
        Args:
            topic: 话题关键词(可选),只分析包含该关键词的新闻
            platforms: 平台过滤列表(可选),如 ['zhihu', 'weibo']
            date_range: 日期范围(可选),格式: {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}
                       不指定则默认查询今天的数据
            limit: 返回新闻数量限制,默认50,最大100
            sort_by_weight: 是否按权重排序,默认True(推荐)
            include_url: 是否包含URL链接,默认False(节省token)
    
        Returns:
            包含 AI 提示词和新闻数据的结构化结果
    
        Examples:
            用户询问示例:
            - "分析一下今天新闻的情感倾向"
            - "看看'特斯拉'相关新闻是正面还是负面的"
            - "分析各平台对'人工智能'的情感态度"
            - "看看'特斯拉'相关新闻是正面还是负面的,请选择一周内的前10条新闻来分析"
    
            代码调用示例:
            >>> tools = AnalyticsTools()
            >>> # 分析今天的特斯拉新闻,返回前10条
            >>> result = tools.analyze_sentiment(
            ...     topic="特斯拉",
            ...     limit=10
            ... )
            >>> # 分析一周内的特斯拉新闻(假设今天是 2025-11-17)
            >>> result = tools.analyze_sentiment(
            ...     topic="特斯拉",
            ...     date_range={"start": "2025-11-11", "end": "2025-11-17"},
            ...     limit=10
            ... )
            >>> print(result['ai_prompt'])  # 获取生成的提示词
        """
        try:
            # Validate parameters
            if topic:
                topic = validate_keyword(topic)
            platforms = validate_platforms(platforms)
            limit = validate_limit(limit, default=50)
    
            # Handle the date range
            if date_range:
                date_range_tuple = validate_date_range(date_range)
                start_date, end_date = date_range_tuple
            else:
                # Default to today
                start_date = end_date = datetime.now()
    
            # Collect news data (supports multiple days)
            all_news_items = []
            current_date = start_date
    
            while current_date <= end_date:
                try:
                    all_titles, id_to_name, _ = self.data_service.parser.read_all_titles_for_date(
                        date=current_date,
                        platform_ids=platforms
                    )
    
                    # Collect the news for this date
                    for platform_id, titles in all_titles.items():
                        platform_name = id_to_name.get(platform_id, platform_id)
                        for title, info in titles.items():
                            # If a topic is specified, only collect titles containing it
                            if topic and topic.lower() not in title.lower():
                                continue
    
                            news_item = {
                                "platform": platform_name,
                                "title": title,
                                "ranks": info.get("ranks", []),
                                "count": len(info.get("ranks", [])),
                                "date": current_date.strftime("%Y-%m-%d")
                            }
    
                            # Conditionally add URL fields
                            if include_url:
                                news_item["url"] = info.get("url", "")
                                news_item["mobileUrl"] = info.get("mobileUrl", "")
    
                            all_news_items.append(news_item)
    
                except DataNotFoundError:
                    # No data for this date; continue to the next day
                    pass
    
                # Next day
                current_date += timedelta(days=1)
    
            if not all_news_items:
                time_desc = "今天" if start_date == end_date else f"{start_date.strftime('%Y-%m-%d')} 至 {end_date.strftime('%Y-%m-%d')}"
                raise DataNotFoundError(
                    f"未找到相关新闻({time_desc})",
                    suggestion="请尝试其他话题、日期范围或平台"
                )
    
            # Deduplicate (keep each title only once)
            unique_news = {}
            for item in all_news_items:
                key = f"{item['platform']}::{item['title']}"
                if key not in unique_news:
                    unique_news[key] = item
                else:
                    # Merge ranks (if the same news item appears on multiple days)
                    existing = unique_news[key]
                    existing["ranks"].extend(item["ranks"])
                    existing["count"] = len(existing["ranks"])
    
            deduplicated_news = list(unique_news.values())
    
            # Sort by weight (if enabled)
            if sort_by_weight:
                deduplicated_news.sort(
                    key=lambda x: calculate_news_weight(x),
                    reverse=True
                )
    
            # Cap the number of items returned
            selected_news = deduplicated_news[:limit]
    
            # Generate the AI prompt
            ai_prompt = self._create_sentiment_analysis_prompt(
                news_data=selected_news,
                topic=topic
            )
    
            # Build the time-range description
            if start_date == end_date:
                time_range_desc = start_date.strftime("%Y-%m-%d")
            else:
                time_range_desc = f"{start_date.strftime('%Y-%m-%d')} 至 {end_date.strftime('%Y-%m-%d')}"
    
            result = {
                "success": True,
                "method": "ai_prompt_generation",
                "summary": {
                    "total_found": len(deduplicated_news),
                    "returned_count": len(selected_news),
                    "requested_limit": limit,
                    "duplicates_removed": len(all_news_items) - len(deduplicated_news),
                    "topic": topic,
                    "time_range": time_range_desc,
                    "platforms": list(set(item["platform"] for item in selected_news)),
                    "sorted_by_weight": sort_by_weight
                },
                "ai_prompt": ai_prompt,
                "news_sample": selected_news,
                "usage_note": "请将 ai_prompt 字段的内容发送给 AI 进行情感分析"
            }
    
            # Add a note if fewer items are returned than requested
            if len(selected_news) < limit and len(deduplicated_news) >= limit:
                result["note"] = "返回数量少于请求数量是因为去重逻辑(同一标题在不同平台只保留一次)"
            elif len(deduplicated_news) < limit:
                result["note"] = f"在指定时间范围内仅找到 {len(deduplicated_news)} 条匹配的新闻"
    
            return result
    
        except MCPError as e:
            return {
                "success": False,
                "error": e.to_dict()
            }
        except Exception as e:
            return {
                "success": False,
                "error": {
                    "code": "INTERNAL_ERROR",
                    "message": str(e)
                }
            }
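
The deduplication step in the implementation above keys on platform plus title, so repeated sightings of the same story (e.g. on different days) merge their ranks into one entry. A self-contained sketch of just that step:

```python
def dedupe_news(items):
    # Mirrors the "platform::title" dedup in analyze_sentiment: the first
    # occurrence is kept, and later occurrences contribute their ranks.
    unique = {}
    for item in items:
        key = f"{item['platform']}::{item['title']}"
        if key not in unique:
            unique[key] = dict(item, ranks=list(item["ranks"]))
        else:
            existing = unique[key]
            existing["ranks"].extend(item["ranks"])
            existing["count"] = len(existing["ranks"])
    return list(unique.values())

items = [
    {"platform": "微博", "title": "AI 新进展", "ranks": [2], "count": 1, "date": "2025-11-25"},
    {"platform": "微博", "title": "AI 新进展", "ranks": [5], "count": 1, "date": "2025-11-26"},
    {"platform": "知乎", "title": "AI 新进展", "ranks": [1], "count": 1, "date": "2025-11-26"},
]
deduped = dedupe_news(items)
print(len(deduped))         # → 2
print(deduped[0]["ranks"])  # → [2, 5]
```

Note that because the key includes the platform, the same title seen on two different platforms yields two entries under this logic; only repeats within a platform are merged.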
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: data deduplication (news title deduplication across platforms), default values (e.g., limit=50, sort_by_weight=True), constraints (limit max=100), and output handling (full analysis vs. summaries only when requested). It lacks details on rate limits or error handling, but covers most operational aspects well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, date range handling, args, returns, examples, data strategy). Most sentences earn their place by providing essential guidance. It is slightly verbose in examples, but overall efficient and front-loaded with critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no annotations, 0% schema coverage, but has output schema), the description is highly complete. It covers purpose, usage, parameters, behavioral notes, examples, and output strategy. The output schema exists, so return values need not be detailed in the description, making this comprehensive for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully compensate. It provides detailed semantics for all 6 parameters: explains 'topic' as optional keywords, 'platforms' with examples and config references, 'date_range' with format and acquisition method, 'limit' with defaults and deduplication note, 'sort_by_weight' and 'include_url' with defaults and purposes. This adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '分析新闻的情感倾向和热度趋势' (analyze sentiment tendency and heat trends of news). It specifies the exact action (analyze) and resource (news), distinguishing it from siblings like 'analyze_topic_trend' or 'search_news' by focusing on sentiment and heat analysis rather than general trends or searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool, including prerequisites (calling 'resolve_date_range' for natural language dates), alternatives (e.g., using config defaults), and exclusions (e.g., not for summarization unless requested). It clearly differentiates from siblings by specifying its unique analysis focus.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
