TrendRadar
Server Quality Checklist
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v1.0.3
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Add a glama.json file to provide metadata about your server.
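A minimal glama.json sketch, placed at the repository root. The field names below are based on Glama's published schema as best understood here; verify against the official documentation before committing:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["your-github-username"]
}
```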
- This server provides 14 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the output format (JSON with Markdown content) and date range format requirements, but doesn't cover important aspects like whether this is a read-only operation, potential rate limits, authentication needs, what data sources are used, or whether the generation is synchronous/asynchronous. For a report generation tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
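The annotation gap described above can be made concrete. Below is a minimal sketch of MCP-style tool annotations using the specification's hint field names; the particular values are assumptions about this report tool, not something the server actually declares:

```python
# Sketch: MCP tool annotations that would close the behavioral gap.
# Field names follow the MCP specification's ToolAnnotations; the
# values are assumptions about this particular report tool.
annotations = {
    "readOnlyHint": True,      # generating a report does not mutate state (assumed)
    "destructiveHint": False,  # no destructive side effects (assumed)
    "idempotentHint": True,    # same args should yield the same report (assumed)
    "openWorldHint": False,    # operates on the server's own crawled data (assumed)
}

# Even with annotations, the prose description must still cover auth
# requirements and rate limits, which have no structured field.
assert all(isinstance(v, bool) for v in annotations.values())
```

Annotations reduce, but do not remove, the description's disclosure burden.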
- Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably structured with clear sections (Args, Returns) but contains some redundancy. The Chinese/English mixed title could be more concise, and the date_range format explanation is detailed but necessary given the schema gap. The '重要' (important) note about object format is valuable but could be integrated more smoothly. Overall efficient but not perfectly polished.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters), 0% schema coverage, no annotations, but presence of an output schema, the description does a reasonably complete job. It covers parameter meanings and formats thoroughly, specifies the return format, and provides examples. The main gaps are behavioral context and usage guidelines, but the output schema reduces the need to describe return values in detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides substantial parameter semantics beyond the bare schema. It explains report_type options (daily/weekly), documents date_range as an optional custom date range object, provides format specifications with examples, and warns about the object format requirement. This compensates well for the schema's lack of descriptions, though it doesn't fully explain all parameter implications.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
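The object-format warning the description makes can be illustrated with hypothetical call arguments. The key names inside date_range ("start"/"end") are guesses for illustration only; the actual format spec lives in the tool's own description:

```python
# Hypothetical arguments for generate_summary_report. The point the tool's
# description stresses is that date_range must be an object, not an integer.
good_args = {
    "report_type": "weekly",
    "date_range": {"start": "2024-01-01", "end": "2024-01-07"},  # object: accepted
}
bad_args = {
    "report_type": "weekly",
    "date_range": 7,  # integer: rejected per the description's "important" note
}

def date_range_is_object(args):
    """Mirror the documented constraint: date_range, if present, must be a mapping."""
    dr = args.get("date_range")
    return dr is None or isinstance(dr, dict)

assert date_range_is_object(good_args)
assert not date_range_is_object(bad_args)
```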
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as '自动生成热点摘要报告' (automatically generate hotspot summary reports) with daily/weekly options. It specifies the verb '生成' (generate) and resource '摘要报告' (summary report), distinguishing it from sibling tools like analyze_data_insights or get_trending_topics which perform different analytical functions. However, it doesn't explicitly differentiate from all siblings in terms of output format or scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, appropriate contexts, or comparisons with sibling tools like analyze_topic_trend or get_trending_topics that might provide similar insights. The only implicit usage cue is the report_type parameter, but no explicit when/when-not instructions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies a read-only operation by stating it '获取' (gets) information and returns JSON, but doesn't disclose behavioral traits like whether it requires authentication, has rate limits, affects system performance, or provides real-time vs. cached data. The description adds basic context (what it returns) but lacks critical operational details for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, starting with the core purpose in the first line. It uses three short sentences (in Chinese and English) that efficiently cover what the tool does and returns. There's minimal waste, though the bilingual format adds slight redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0 parameters, 100% schema coverage, and an output schema (implied by 'Has output schema: true'), the description doesn't need to explain parameters or return values. However, as a system status tool with no annotations, it should provide more context on usage scenarios or behavioral aspects (e.g., when to call it, what '健康检查' (health check) entails). It's adequate but has gaps in guidance and transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter details, which is appropriate. The baseline score for a zero-parameter tool is 4, and the description earns it by avoiding unnecessary repetition and focusing on the tool's function.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '获取系统运行状态和健康检查信息' (get system running status and health check information) and specifies what it returns ('系统版本、数据统计、缓存状态等信息' - system version, data statistics, cache status, etc.). It distinguishes from siblings like get_current_config (likely configuration) or get_latest_news (news content). However, it doesn't explicitly contrast with all siblings, keeping it at 4 rather than 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention scenarios like system monitoring, debugging, or health checks, nor does it reference sibling tools like get_current_config (which might overlap with configuration aspects) or trigger_crawl (which might affect system status). Without any usage context, it scores low.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool returns JSON-formatted configuration information, which is helpful. However, it doesn't mention whether this is a read-only operation (implied but not explicit), potential rate limits, authentication requirements, or error conditions. The description adds some value but leaves significant behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections for purpose, arguments, and returns. It's appropriately sized with no wasted sentences. The only minor improvement would be integrating the parameter explanation more seamlessly rather than as a separate 'Args' block, but overall it's efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter) and the presence of an output schema (which handles return value documentation), the description is reasonably complete. It covers the core purpose and parameter semantics well. The main gap is lack of behavioral context around permissions, rate limits, or error handling, but for a simple configuration retrieval tool, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description provides excellent parameter semantics despite 0% schema description coverage. It clearly explains the 'section' parameter with specific enum values ('all', 'crawler', 'push', 'keywords', 'weights'), their meanings, and the default value. This fully compensates for the lack of schema documentation and adds meaningful context beyond what a bare schema would provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as '获取当前系统配置' (get current system configuration), which is a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'get_system_status' or explain how configuration retrieval differs from status checking, keeping it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, timing considerations, or comparison with sibling tools like 'get_system_status' that might overlap in functionality. The agent must infer usage context independently.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses that returns are 'JSON format data insights analysis results' and provides important behavioral notes like date_range 'must be object format, cannot pass integer'. However, it doesn't mention rate limits, authentication requirements, data freshness, or whether this is a read-only vs. write operation. The description adds some behavioral context but leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (Args, Returns, Examples) and uses bullet points effectively. While somewhat lengthy due to detailed parameter explanations, every sentence earns its place by providing necessary information. The front-loaded statement establishes the tool's purpose immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with 0% schema coverage and no annotations, the description does an excellent job explaining parameters and providing usage examples. The presence of an output schema means the description doesn't need to detail return values. However, for a complex analytical tool with multiple modes, more behavioral context about performance, data sources, or limitations would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing comprehensive parameter documentation. It explains each parameter's purpose, valid values for insight_type, when parameters apply to specific modes, format requirements for date_range with examples, and default values for min_frequency and top_n. This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states this is a 'unified data insights analysis tool' that 'integrates multiple data analysis modes', with specific examples of insight types like platform comparison and keyword co-occurrence. It distinguishes itself from siblings by focusing on analytical insights rather than sentiment analysis, topic trends, or news retrieval. However, it doesn't explicitly contrast with all sibling tools like 'generate_summary_report' which might overlap.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use each insight_type mode, with examples showing appropriate parameter combinations. It explains that 'topic' is optional but applicable to 'platform_compare' mode, and 'date_range' is optional but shown in examples. However, it doesn't explicitly state when NOT to use this tool versus alternatives like 'analyze_topic_trend' or 'get_trending_topics' among the siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by explaining the relevance calculation (70% keyword overlap + 30% text similarity), threshold effects, limit behavior ('实际返回数量取决于相关性匹配结果,可能少于请求值' — the actual returned count depends on relevance matching and may be less than requested), and output format (JSON with relevance scores and time distribution). It also includes important data display policies about when to filter results. However, it doesn't mention rate limits, authentication needs, or potential errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and well-structured with clear sections (Args, Returns, important policies). Most sentences earn their place by providing essential information. However, the threshold explanation could be slightly more concise, and the structure mixes parameter details with behavioral information rather than strictly separating them.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, relevance scoring, historical search), no annotations, but with output schema present, the description provides excellent completeness. It covers purpose, all parameters in detail, behavioral traits, output format, and display policies. The output schema handles return structure, so the description appropriately focuses on semantic context rather than repeating schema details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description must fully compensate, which it does excellently. It provides detailed explanations for all 5 parameters: reference_text purpose, time_preset options with translations, threshold range and calculation details, limit defaults and constraints, and include_url token-saving rationale. This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
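The threshold-then-limit interaction described above can be sketched; the default values here are illustrative, not taken from the tool:

```python
# Candidates below the relevance threshold are dropped first, so the
# actual return count can be smaller than the requested limit.
def select_results(scored, threshold=0.6, limit=10):
    """scored: list of (item, relevance) pairs, relevance in [0, 1]."""
    kept = [item for item, score in scored if score >= threshold]
    return kept[:limit]

candidates = [("a", 0.9), ("b", 0.7), ("c", 0.4), ("d", 0.2)]
results = select_results(candidates, threshold=0.6, limit=10)
assert results == ["a", "b"]  # only 2 of the requested 10 cleared the threshold
```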
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '基于种子新闻,在历史数据中搜索相关新闻' (search for related news in historical data based on seed news). It specifies the verb '搜索' (search) and resource '相关新闻' (related news) in '历史数据' (historical data). However, it doesn't explicitly differentiate from sibling tools like 'search_news' or 'find_similar_news', which likely have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some implied usage context: it's for searching historical data based on a reference text, with time presets and relevance thresholds. However, it doesn't explicitly state when to use this tool versus alternatives like 'search_news' or 'find_similar_news', nor does it mention prerequisites or exclusions. The '重要:数据展示策略' (Important: data display policy) section offers output behavior guidance but not usage selection guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the data source (config/frequency_words.txt) and that users can customize this list, which adds useful context. However, it doesn't mention performance characteristics, rate limits, authentication requirements, or what happens if the config file is missing. The description doesn't contradict annotations (none exist), but could provide more operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear purpose statement, important note about what the tool is NOT, parameter explanations, and return format. It's appropriately sized for a tool with 2 parameters. The information is front-loaded with the core purpose first. Minor improvement could be making the parameter explanations slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 2 parameters with 0% schema coverage but the description fully documents them, and an output schema exists (so return values don't need explanation), the description is quite complete. It covers purpose, differentiation, parameters, and data source. The main gap is lack of behavioral details like error conditions or performance characteristics, but with output schema handling return format, this is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant value beyond the input schema, which has 0% description coverage. It clearly explains both parameters: top_n ('返回TOP N关注词' - returns TOP N focus words) with its default value, and mode with its two options (daily and current) and their meanings ('当日累计数据统计' - daily cumulative statistics vs '最新一批数据统计' - latest batch statistics). This fully compensates for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '获取个人关注词的新闻出现频率统计' (get news frequency statistics for personal focus words). It specifies the verb (统计/statistics) and resource (关注词/focus words), and explicitly distinguishes it from automatic news extraction: '本工具不是自动提取新闻热点' (this tool is not for automatically extracting news hotspots). This differentiation from potential sibling tools like analyze_topic_trend is clear and specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for statistics on personal focus words defined in config/frequency_words.txt. It explicitly states what the tool is NOT for (automatic news extraction), which helps differentiate it from siblings like analyze_topic_trend. However, it doesn't explicitly name alternative tools or provide detailed when-not-to-use scenarios beyond the basic distinction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the tool returns a JSON list with similarity scores, explains the threshold parameter's effect on result strictness, notes that actual returned count may be less than the limit, mentions token-saving with include_url default, and details output display strategies (full list by default, filtered only on user request). This covers operational behavior well beyond basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and well-structured with clear sections (purpose, Args, Returns, important notes). Every sentence adds value: the opening states purpose, parameter explanations are necessary given 0% schema coverage, return format is specified, and the display strategy section provides crucial usage context. Minor verbosity in the display strategy could be tightened, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, similarity matching logic), no annotations, and the presence of an output schema (implied by 'Returns: JSON格式的相似新闻列表', a JSON-formatted list of similar news), the description is highly complete. It covers purpose, all parameter semantics, behavioral traits, return format, and display strategies. The output schema handles return values, so the description appropriately focuses on usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must fully compensate. It provides detailed semantic explanations for all 4 parameters: reference_title as the news title (full or partial), threshold as similarity score range 0-1 with default and effect explanation, limit with default, maximum, and actual result caveat, and include_url with default and token-saving rationale. This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '查找与指定新闻标题相似的其他新闻' (find news similar to a specified news title). It specifies the verb '查找' (find) and resource '其他新闻' (other news), and distinguishes from siblings like 'search_news' or 'search_related_news_history' by focusing on similarity matching rather than general search or historical analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to find similar news based on title similarity. It doesn't explicitly state when not to use it or name alternatives, but the sibling tools list includes 'search_news' and 'search_related_news_history', which are implied alternatives for different search needs. The '重要:数据展示策略' (Important: data display policy) section adds usage guidance for output handling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's a read-only operation (implied by '获取' - get), includes rate limits (limit up to 1000), explains that actual returns may be less than requested, and provides token-saving options (include_url default False). It also details output handling and user interaction scenarios, though it doesn't cover error cases or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized but not optimally structured. Key information is front-loaded (purpose and args), but the extensive data display recommendations, while useful, are verbose and could be more concise. Every sentence earns its place, but the organization could be tighter for faster parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 parameters, no annotations, but with output schema), the description is highly complete. It covers purpose, parameters, usage guidelines, behavioral traits, and output handling. The output schema exists, so the description doesn't need to detail return values, and it adequately addresses all other aspects for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It does so excellently: it explains all three parameters (platforms, limit, include_url) with clear semantics, defaults, constraints (e.g., limit max 1000), and practical usage notes (e.g., platforms from config.yaml, include_url saves tokens). This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '获取最新一批爬取的新闻数据,快速了解当前热点' (get the latest batch of crawled news data to quickly understand current hot topics). It specifies the verb '获取' (get) and resource '新闻数据' (news data), distinguishing it from siblings like get_news_by_date (date-filtered) or search_news (keyword-based).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines, including when to use it ('快速了解当前热点' - quickly understand current hot topics) and when to use alternatives (e.g., get_news_by_date for date-filtered news). It also includes detailed data display recommendations, specifying when to show full data versus when to summarize, which helps the agent decide how to present results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool returns JSON-formatted news lists with titles, platforms, rankings, etc.; it may return fewer items than requested based on available data; and it includes important usage notes about data display and user expectations. However, it lacks details on error handling, rate limits, or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately front-loaded with the core purpose and parameter details, but it includes extensive additional sections on data display recommendations and user interaction scenarios. While these are useful, they extend the length and could be streamlined or moved to a separate guidelines section. Every sentence earns its place, but the structure could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 parameters, no annotations, 0% schema coverage, but with an output schema), the description is highly complete. It covers the tool's purpose, detailed parameter semantics, return format, and extensive usage guidelines. The output schema likely handles return values, so the description appropriately focuses on other aspects, making it well-rounded for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must fully compensate. It does so excellently by explaining all four parameters in detail: 'date_query' with format options and default; 'platforms' with examples, default behavior, and configuration source; 'limit' with default, max, and note on actual returns; and 'include_url' with default and rationale. This adds significant meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '获取指定日期的新闻数据,用于历史数据分析和对比' (Get news data for a specified date, for historical data analysis and comparison). It specifies the verb ('获取' - get) and resource ('新闻数据' - news data), and distinguishes it from siblings like 'get_latest_news' (which likely gets current news) by focusing on historical data by date.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines, including when to use this tool (for historical data analysis and comparison) and when not to (e.g., for current news, use 'get_latest_news' implicitly). It also offers detailed guidance on data presentation, such as showing full data unless the user requests a summary, and handling cases where users ask why only part of the data is shown.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: data deduplication (news title deduplication across platforms), default values (e.g., limit=50, sort_by_weight=True), constraints (limit max=100), and output handling (full analysis vs. summaries only when requested). It lacks details on rate limits or error handling, but covers most operational aspects well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, date range handling, args, returns, examples, data strategy). Most sentences earn their place by providing essential guidance. It is slightly verbose in examples, but overall efficient and front-loaded with critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, no annotations, 0% schema coverage, but has output schema), the description is highly complete. It covers purpose, usage, parameters, behavioral notes, examples, and output strategy. The output schema exists, so return values need not be detailed in the description, making this comprehensive for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must fully compensate. It provides detailed semantics for all 6 parameters: explains 'topic' as optional keywords, 'platforms' with examples and config references, 'date_range' with format and acquisition method, 'limit' with defaults and deduplication note, 'sort_by_weight' and 'include_url' with defaults and purposes. This adds substantial value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '分析新闻的情感倾向和热度趋势' (analyze sentiment tendency and heat trends of news). It specifies the exact action (analyze) and resource (news), distinguishing it from siblings like 'analyze_topic_trend' or 'search_news' by focusing on sentiment and heat analysis rather than general trends or searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool, including prerequisites (calling 'resolve_date_range' for natural language dates), alternatives (e.g., using config defaults), and exclusions (e.g., not for summarization unless requested). It clearly differentiates from siblings by specifying its unique analysis focus.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by explaining the prerequisite dependency on resolve_date_range for natural language dates, default behaviors (e.g., default 7 days if date_range not specified), and mode-specific parameter usage. However, it doesn't mention rate limits, authentication needs, or potential side effects, leaving some behavioral aspects uncovered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (important note, args, returns, examples) and uses bullet points efficiently. While comprehensive, some redundancy exists (e.g., repeating date_range instructions), and the length is justified given the complexity. Every sentence serves a purpose in clarifying usage or parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, multiple analysis modes), no annotations, and an output schema present, the description is highly complete. It covers purpose, prerequisites, parameter semantics, usage examples, and behavioral context. The output schema handles return values, so the description appropriately focuses on input guidance and workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must fully compensate. It excels by providing detailed semantics for all 8 parameters: explaining required vs. optional status, listing analysis_type enum values with descriptions, specifying date_range format and acquisition method, detailing defaults, and clarifying which parameters apply to which analysis modes. This adds substantial value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as '统一话题趋势分析工具 - 整合多种趋势分析模式' (unified topic trend analysis tool - integrating multiple trend analysis modes). It specifies the verb 'analyze' with the resource 'topic trends' and distinguishes itself from siblings like analyze_sentiment or get_trending_topics by focusing specifically on trend analysis rather than sentiment or listing trending topics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool, including a prerequisite step to call resolve_date_range for natural language date ranges. It also distinguishes usage by analysis_type (trend, lifecycle, viral, predict) and specifies when date_range is required (trend and lifecycle modes) versus optional or not applicable. The examples clearly demonstrate the recommended workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's a read-only transformation tool (no destructive hints implied), uses server-side time for consistency, returns JSON with success status and structured date range, and includes current_date and description fields. It doesn't mention rate limits or auth needs, but covers core operational behavior well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, why needed, usage flow, Args, Returns, Examples) but is somewhat lengthy. Every section adds value: the workflow examples are particularly helpful. It could be slightly more concise in the examples section, but overall it's efficiently organized with front-loaded key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (natural language parsing with multiple pattern types), no annotations, and an output schema that documents the return structure, the description is complete. It covers purpose, usage, parameters, return format with examples, and integration with sibling tools. The output schema handles return value documentation, so the description appropriately focuses on operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage (just 'expression: string'), so the description must fully compensate. It provides extensive parameter semantics: defines 'expression' as natural language date expressions, lists supported patterns (single day, week, month, recent N days, dynamic), gives specific examples in both Chinese and English, and explains the format directly in the Args section.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '将自然语言日期表达式解析为标准日期范围' (parse natural language date expressions into standard date ranges). It specifies the verb ('解析' - parse) and resource ('自然语言日期表达式' - natural language date expressions), and distinguishes itself from siblings by focusing on date resolution rather than analysis, search, or other operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidelines: it recommends prioritizing this tool ('推荐优先调用') for natural language date expressions to ensure consistency. It gives a clear workflow example with steps and alternatives (e.g., using analyze_sentiment or search_news after resolution), and explains why it's needed versus AI model self-calculation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
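The two-step workflow these reviews describe (resolve the natural-language date first, then pass the structured range to a data tool) can be sketched as below. The stubs are hypothetical: the real tools live on the MCP server, and the return shapes here only mirror the fields the descriptions mention (a success flag, a structured date range, a description), not the server's actual payloads.

```python
# Hypothetical sketch of the recommended call sequence, not the server's API.

def resolve_date_range(expression):
    """Step 1: turn a natural-language expression into a standard range.
    A real call would parse '最近7天' (last 7 days) against server time;
    the dates below are placeholder values."""
    return {
        "success": True,
        "date_range": {"start": "2024-01-01", "end": "2024-01-07"},
        "description": expression,
    }

def search_news(topic, date_range, limit=50):
    """Step 2: query a data tool with the resolved range instead of the
    raw natural-language expression."""
    return {"topic": topic, "date_range": date_range, "limit": limit}

resolved = resolve_date_range("最近7天")
if resolved["success"]:
    result = search_news("AI", resolved["date_range"])
```

Routing every date expression through the resolver, as the descriptions recommend, keeps all downstream tools anchored to the same server-side clock instead of each agent computing dates itself.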
- Behavior 5/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and does so comprehensively. It discloses important behavioral traits: the need to pre-process natural language dates via another tool, default behaviors (today's news when no date_range, all platforms when none specified), result filtering in fuzzy mode, token-saving considerations with include_url, and data display strategies (default to show all results).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (important notes, args, returns, examples, data strategy) but is quite lengthy. While every section adds value, some information could potentially be more concise. The front-loading of the date range processing requirement is excellent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, no annotations, 0% schema coverage) and the presence of an output schema, the description is remarkably complete. It covers purpose, usage guidelines, parameter semantics, behavioral traits, examples, and data handling strategies. The output schema handles return values, so the description appropriately focuses on usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 8 parameters, the description provides extensive semantic information beyond the bare schema. It explains each parameter's purpose, valid values, defaults, interactions (threshold only for fuzzy mode), and practical usage examples. The description fully compensates for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as a 'unified search interface supporting multiple search modes' for news, which is specific (verb+resource). However, it doesn't explicitly differentiate from sibling tools like 'get_latest_news', 'get_news_by_date', or 'search_related_news_history', which appear to be related news retrieval tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides excellent usage guidance, including explicit instructions for when to call the resolve_date_range tool first (for expressions like '本周' (this week) or '最近7天' (last 7 days)), when to use different search modes, default behaviors, and data display strategies. It also mentions config.yaml configuration for platforms.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by explaining the optional persistence ('可选持久化'), what happens with failed platforms, token-saving considerations for include_url, and the JSON return structure. However, it doesn't mention potential rate limits, authentication needs, or error handling beyond failed platforms.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (Args, Returns, Examples) and front-loaded purpose statement. While comprehensive, some information could be more concise (e.g., the platforms explanation has some redundancy). Every sentence adds value, but the overall length is slightly above optimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 parameters, 0% schema coverage, no annotations, but has output schema), the description is remarkably complete. It covers purpose, parameters, return values, usage examples, and behavioral context. The presence of an output schema means the description doesn't need to detail the return structure, which it appropriately references without over-explaining.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 3 parameters, the description fully compensates by providing rich semantic information. It explains platforms parameter options (list of IDs, null behavior, supported platforms from config), save_to_local's effect (save to output directory), and include_url's purpose (save tokens). The examples further clarify parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '手动触发一次爬取任务' (manually trigger a crawling task). It specifies the verb ('触发' - trigger) and resource ('爬取任务' - crawling task), and distinguishes itself from sibling tools like get_latest_news or search_news by focusing on initiating a crawl rather than retrieving existing data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives. It explains that not specifying platforms uses all configured platforms from config.yaml, and gives concrete examples for different scenarios (temporary crawl, crawl with save, default platforms). This clearly defines the tool's context and how it differs from data retrieval siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
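The weighting described above can be sketched as a small scoring function. The per-dimension weights, the 60/40 mean-vs-minimum blend, the 70/30 component split, and the tier cutoffs come directly from this page; the input format (plain dicts and lists) is illustrative, not Glama's actual data model.

```python
def tool_definition_quality(dims):
    """Weighted 1-5 score for a single tool from its six dimension scores."""
    weights = {
        "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
        "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
    }
    return sum(weights[k] * dims[k] for k in weights)

def server_quality(tool_scores, coherence):
    """Overall score: 70% tool definition quality, 30% server coherence.
    Definition quality blends 60% mean TDQS with 40% minimum TDQS, so a
    single poorly described tool drags the whole server down."""
    mean_tdqs = sum(tool_scores) / len(tool_scores)
    definition_quality = 0.6 * mean_tdqs + 0.4 * min(tool_scores)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """Letter tier from the overall score; B and above is passing."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

Note how the minimum term bites: a server with one tool at 5.0 and another at 1.0 gets a definition quality of 0.6·3.0 + 0.4·1.0 = 2.2, well below the 3.0 mean.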
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/funinii/TrendRadar'
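The same endpoint can be queried from any HTTP client. A minimal Python sketch follows: the URL pattern is taken from the curl example above, while the response body's shape is not assumed here.

```python
from urllib.request import urlopen

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner, repo):
    """Build the directory API URL for a server, e.g. funinii/TrendRadar."""
    return f"{API_BASE}/{owner}/{repo}"

# A plain GET returns the server's metadata; uncomment to hit the live API.
# with urlopen(server_url("funinii", "TrendRadar")) as resp:
#     print(resp.read().decode())

print(server_url("funinii", "TrendRadar"))
```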
If you have feedback or need assistance with the MCP directory API, please join our Discord server.