PubMed MCP Server

by chrismannina

get_journal_metrics

Retrieve journal metrics and information from PubMed, including recent notable articles, to evaluate research impact and publication quality.

Instructions

Get metrics and information about a specific journal

Input Schema

Name                      Required  Description                      Default
journal_name              Yes       Journal name or abbreviation
include_recent_articles   No        Include recent notable articles  true
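
For orientation, here is a minimal call sketch, assuming an MCP client session from the official Python SDK (session setup omitted; the journal name is just an illustrative value):

    # Hypothetical invocation; ClientSession.call_tool is from the Python MCP SDK.
    result = await session.call_tool(
        "get_journal_metrics",
        arguments={
            "journal_name": "Nature Medicine",   # full name or abbreviation
            "include_recent_articles": True,     # optional; defaults to true
        },
    )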

Implementation Reference

  • The core handler that implements the get_journal_metrics tool: it validates the journal_name input, searches PubMed for the journal's current-year articles, computes metrics (article count, article-type distribution), optionally lists recent articles, and returns a formatted MCPResponse.
    # Excerpt from the server's ToolHandler class. Relies on
    # `from datetime import datetime` and `from typing import Any, Dict`;
    # MCPResponse, SortOrder, logger, self.pubmed_client, and self.cache are
    # defined elsewhere in the server.
    async def _handle_get_journal_metrics(self, arguments: Dict[str, Any]) -> MCPResponse:
        """Handle journal metrics request."""
        try:
            journal_name = arguments.get("journal_name", "")
            if not journal_name:
                return MCPResponse(
                    content=[{"type": "text", "text": "Journal name is required"}], is_error=True
                )
    
            include_recent_articles = arguments.get("include_recent_articles", True)
    
            # Get recent articles from the journal
            current_year = datetime.now().year
            search_result = await self.pubmed_client.search_articles(
                query=f'"{journal_name}"[Journal]',
                max_results=50,
                date_from=f"{current_year}/01/01",
                sort_order=SortOrder.PUBLICATION_DATE,
                cache=self.cache,
            )
    
            content = []
            content.append(
                {
                    "type": "text",
                    "text": f"**Journal Metrics: {journal_name}**\n\n"
                    f"Articles in {current_year}: {search_result.total_results:,}\n"
                    f"Sample Size: {search_result.returned_results}\n",
                }
            )
    
            # Analyze article types
            if search_result.articles:
                article_types = {}
                for article_data in search_result.articles:
                    # Article objects expose article_types; getattr already
                    # falls back to [] when the attribute is missing
                    types = getattr(article_data, "article_types", [])
                    for article_type in types:
                        article_types[article_type] = article_types.get(article_type, 0) + 1
    
                if article_types:
                    types_text = "**Article Types Distribution:**\n"
                    for article_type, count in sorted(
                        article_types.items(), key=lambda x: x[1], reverse=True
                    )[:5]:
                        percentage = (count / len(search_result.articles)) * 100
                        types_text += f"• {article_type}: {count} ({percentage:.1f}%)\n"
    
                    content.append({"type": "text", "text": types_text})
    
            # Show recent notable articles
            if include_recent_articles and search_result.articles:
                content.append({"type": "text", "text": "\n**Recent Articles:**\n"})
    
                for i, article_data in enumerate(search_result.articles[:5], 1):
                    article_text = self._format_article_summary(article_data, i)
                    content.append({"type": "text", "text": article_text})
    
            return MCPResponse(content=content)
    
        except Exception as e:
            logger.error(f"Error in get_journal_metrics: {e}")
            return MCPResponse(
                content=[{"type": "text", "text": f"Error: {str(e)}"}], is_error=True
            )
  • The tool schema definition in the TOOL_DEFINITIONS list, including the name, description, and inputSchema specifying journal_name as the required parameter.
    {
        "name": "get_journal_metrics",
        "description": "Get metrics and information about a specific journal",
        "inputSchema": {
            "type": "object",
            "properties": {
                "journal_name": {"type": "string", "description": "Journal name or abbreviation"},
                "include_recent_articles": {
                    "type": "boolean",
                    "default": True,
                    "description": "Include recent notable articles",
                },
            },
            "required": ["journal_name"],
        },
    },
  • The handler_map in ToolHandler.handle_tool_call that routes 'get_journal_metrics' to the _handle_get_journal_metrics method (a dispatch sketch follows this list).
    handler_map = {
        "search_pubmed": self._handle_search_pubmed,
        "get_article_details": self._handle_get_article_details,
        "search_by_author": self._handle_search_by_author,
        "find_related_articles": self._handle_find_related_articles,
        "export_citations": self._handle_export_citations,
        "search_mesh_terms": self._handle_search_mesh_terms,
        "search_by_journal": self._handle_search_by_journal,
        "get_trending_topics": self._handle_get_trending_topics,
        "analyze_research_trends": self._handle_analyze_research_trends,
        "compare_articles": self._handle_compare_articles,
        "get_journal_metrics": self._handle_get_journal_metrics,
        "advanced_search": self._handle_advanced_search,
    }
  • The get_tools method that returns the TOOL_DEFINITIONS list, registering the get_journal_metrics tool schema with the MCP protocol.
    def get_tools(self) -> List[Dict[str, Any]]:
        """Get list of available tools."""
        return TOOL_DEFINITIONS
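
Since handle_tool_call itself is not shown in this excerpt, here is a minimal sketch of how the dispatch presumably works given the handler_map above; this is an assumption about the surrounding method, not the verbatim source:

    async def handle_tool_call(self, name: str, arguments: Dict[str, Any]) -> MCPResponse:
        """Sketch: look up the handler by tool name and delegate to it."""
        handler = handler_map.get(name)  # handler_map as listed above
        if handler is None:
            return MCPResponse(
                content=[{"type": "text", "text": f"Unknown tool: {name}"}],
                is_error=True,
            )
        return await handler(arguments)
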
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves metrics and information, implying a read-only operation, but lacks details on permissions, rate limits, error handling, or what specific metrics are returned. This is inadequate for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
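
For comparison, the MCP specification defines behavioral hint annotations that a tool entry can carry; a sketch of what the definition above could declare, where the chosen values are assumptions about this tool's behavior:

    {
        "name": "get_journal_metrics",
        "description": "Get metrics and information about a specific journal",
        "annotations": {
            "readOnlyHint": True,   # only queries PubMed; changes nothing
            "openWorldHint": True,  # depends on an external service (NCBI)
        },
        # inputSchema as shown above ...
    }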

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It doesn't explain what metrics are returned, how data is formatted, or any behavioral traits. For a tool that retrieves information, this leaves significant gaps in understanding its functionality and output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
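
The handler source above does pin down the response shape, even though the description and schema do not; based on its format strings, callers receive something like the following (counts and values here are illustrative placeholders):

    MCPResponse(
        content=[
            {"type": "text", "text": "**Journal Metrics: Nature Medicine**\n\n"
                                     "Articles in 2025: 1,234\n"
                                     "Sample Size: 50\n"},
            {"type": "text", "text": "**Article Types Distribution:**\n"
                                     "• Journal Article: 30 (60.0%)\n"},
            {"type": "text", "text": "\n**Recent Articles:**\n"},
            # ...followed by up to five formatted article summaries
        ]
    )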

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting both parameters. The tool description itself adds no meaning beyond the schema, such as examples or valid formats for 'journal_name', or context for 'include_recent_articles'. A baseline score of 3 is appropriate since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
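
One way to close that gap is to put examples and constraints directly in the property descriptions; a hedged sketch (the example values are suggestions, not taken from the server):

    "journal_name": {
        "type": "string",
        "description": "Journal name or MEDLINE abbreviation as indexed in "
                       "PubMed's [Journal] field, e.g. 'Nature Medicine' or "
                       "'N Engl J Med'",
    },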

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('metrics and information about a specific journal'), making the purpose understandable. However, it doesn't differentiate from sibling tools like 'search_by_journal' or 'get_article_details', which could provide overlapping functionality, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'search_by_journal' and 'get_article_details' available, there's no indication of scenarios where this tool is preferred or excluded, leaving usage ambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
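
A possible rewrite of the description that adds the missing guidance, referencing siblings that exist in the handler_map above (the wording is a suggestion, not from the server):

    "description": (
        "Get publication metrics (current-year article count, article-type "
        "distribution, recent articles) for a specific journal. Use this to "
        "assess a journal's output; use search_by_journal to find articles "
        "matching a query within a journal, and get_article_details for a "
        "single known article."
    ),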
