
Europe PMC Literature Search MCP Server

evaluate_articles_quality

Assess journal quality of articles in bulk by querying local cache or EasyScholar API. Designed for filtering and evaluating academic research quality in literature search results.

Instructions

Batch-evaluate the journal quality of a list of articles.

Functionality:

  • Evaluates the journal quality of every article in the list
  • Checks the local cache first; on a miss, calls the EasyScholar API
  • Returns the full article list augmented with journal-quality information

Parameters:

  • articles: required; the list of articles (from search results)
  • secret_key: optional; EasyScholar API key (can also be read from the EASYSCHOLAR_SECRET_KEY environment variable)

Return values:

  • evaluated_articles: the article list with journal-quality information attached
  • total_count: total number of articles evaluated
  • message: processing status message
  • error: error message, if any

Usage scenarios:

  • Batch evaluation of journal quality for search results
  • Filtering literature by quality
  • Assessing academic research quality
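For illustration, a hypothetical request and response for this tool, inferred from the fields listed above; the article title, message text, and metric values are made up:

```python
request = {
    "articles": [
        {"title": "Example search hit", "journal_name": "Nature Methods"},
    ],
    "secret_key": None,  # optional; may fall back to EASYSCHOLAR_SECRET_KEY
}

response = {
    "evaluated_articles": [
        {
            "title": "Example search hit",
            "journal_name": "Nature Methods",
            # Each input article comes back with a journal_quality field attached.
            "journal_quality": {
                "source": "local_cache",
                "quality_metrics": {"sciif": "47.99", "sci": "Q1"},
            },
        }
    ],
    "total_count": 1,
    "message": "evaluated 1 article(s)",
    "error": None,
}
```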

Input Schema

Name        Required  Description  Default
articles    Yes
secret_key  No

Output Schema

No output fields are documented.

Implementation Reference

  • The core handler function for the 'evaluate_articles_quality' tool. It processes a list of articles, retrieves journal quality metrics for each using the EasyScholar API (via secret_key) or local cache, and augments each article with a 'journal_quality' field containing metrics like impact factor, SCI quartile, JCI, etc.
    def evaluate_articles_quality(self, articles: list, secret_key: str | None = None):
        """Batch-evaluate the journal quality of a list of articles."""
        if not articles:
            return []

        evaluated_articles = []
        for article in articles:
            # Work on a copy so the caller's article dicts are not mutated.
            article_copy = article.copy()
            journal_name = article.get("journal_name")
            if journal_name:
                # Cache-first lookup; falls back to the EasyScholar API.
                article_copy["journal_quality"] = self.get_journal_quality(
                    journal_name, secret_key
                )
            else:
                article_copy["journal_quality"] = {
                    "journal_name": None,
                    "source": None,
                    "quality_metrics": {},
                    "error": "no journal information",
                }
            evaluated_articles.append(article_copy)

        return evaluated_articles
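A minimal usage sketch of the handler above. `_StubEvaluator` and its canned `get_journal_quality` return value are hypothetical stand-ins for the real cache/API lookup; the metric values are made up:

```python
class _StubEvaluator:
    """Hypothetical stand-in: the real get_journal_quality hits a cache or API."""

    def get_journal_quality(self, journal_name, secret_key=None):
        return {
            "journal_name": journal_name,
            "source": "local_cache",
            "quality_metrics": {"sciif": "64.8", "sci": "Q1"},  # made-up values
            "error": None,
        }

    # Same logic as the handler shown above.
    def evaluate_articles_quality(self, articles, secret_key=None):
        evaluated = []
        for article in articles or []:
            article_copy = article.copy()
            journal_name = article.get("journal_name")
            if journal_name:
                article_copy["journal_quality"] = self.get_journal_quality(
                    journal_name, secret_key
                )
            else:
                article_copy["journal_quality"] = {
                    "journal_name": None,
                    "source": None,
                    "quality_metrics": {},
                    "error": "no journal information",
                }
            evaluated.append(article_copy)
        return evaluated


articles = [
    {"title": "Example study", "journal_name": "Nature Methods"},
    {"title": "Preprint without a journal"},
]
result = _StubEvaluator().evaluate_articles_quality(articles)
```

Note that the originals in `articles` are left untouched; only the copies carry the new `journal_quality` field.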
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing important behavioral traits: the two-stage lookup process (local cache, then API), the optional secret_key parameter with an environment-variable fallback, and the return structure. It does not mention rate limits, authentication requirements beyond the key, or error-handling specifics, which keeps it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (Functionality, Parameters, Return Values, Usage Scenarios) and a front-loaded purpose. Some redundancy exists between the initial summary and the functionality section, but each sentence adds value. The formatting is efficient for the content covered.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, no annotations, 0% schema coverage, but presence of output schema, the description is complete enough. It covers purpose, parameters, return values, usage scenarios, and behavioral workflow. The output schema handles return value details, so the description appropriately focuses on context rather than repeating structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining both parameters: 'articles' as required literature list from search results, and 'secret_key' as optional API key with environment variable alternative. It doesn't specify the exact structure of article objects or key format details, but provides meaningful context beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (batch-evaluate the journal quality of articles) and the resource (each article in a list). It distinguishes itself from siblings such as 'get_journal_quality' by emphasizing batch processing and integration with search results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'Usage Scenarios' section explicitly lists three usage scenarios: batch evaluation of search results, literature quality filtering, and academic research quality assessment. This provides clear context for when to use this tool versus alternatives such as 'get_journal_quality', which appears to be a single-article variant.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
