get_metrics

Retrieve monthly visit and download statistics for datasets or resources on France's open data platform (data.gouv.fr) to analyze popularity, track usage trends, and understand consumption patterns.

Instructions

Get metrics (visits, downloads) for a dataset and/or a resource.

Returns monthly statistics including visits and downloads, sorted by month in descending order (most recent first). This tool is useful for analyzing the popularity and usage of datasets and resources, but is optional in the data exploration workflow.

Typical use cases:

  • Analyze which datasets/resources are most popular

  • Track usage trends over time

  • Understand data consumption patterns

Note: This is separate from the main data querying workflow. Use this after exploring datasets/resources if you need usage statistics.

Args:

  • dataset_id: Optional dataset ID to get metrics for (obtained from search_datasets or get_dataset_info)

  • resource_id: Optional resource ID to get metrics for (obtained from list_dataset_resources or get_resource_info)

  • limit: Maximum number of monthly records to return (default: 12, max: 100)

Returns: Formatted text with monthly metrics for the dataset and/or resource

Note: At least one of dataset_id or resource_id must be provided. This tool only works with the production environment (DATAGOUV_ENV=prod). The Metrics API does not have a demo/preprod environment.
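
For illustration only (this is a sketch, not part of the server's documentation): invoking the tool from the official MCP Python client over stdio might look roughly like the snippet below. The server launch command and the dataset ID are placeholders.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Placeholder launch command: adjust to however you actually start the datagouv-mcp server.
        server = StdioServerParameters(command="uv", args=["run", "datagouv-mcp"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # "example-dataset-id" is a placeholder, not a real data.gouv.fr ID.
                result = await session.call_tool(
                    "get_metrics",
                    {"dataset_id": "example-dataset-id", "limit": 6},
                )
                # The tool returns formatted text as its first content item.
                print(result.content[0].text)

    asyncio.run(main())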

Input Schema

Name          Required   Description                                              Default
dataset_id    No         Dataset ID to get metrics for                            —
resource_id   No         Resource ID to get metrics for                           —
limit         No         Maximum number of monthly records to return (max: 100)   12
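
The JSON Schema view of this input is not captured here. As a rough, inferred sketch (field names from the table above, types and the default from the Python handler, so treat the exact shape as an assumption):

    # Inferred sketch only; the schema FastMCP actually generates may differ,
    # e.g. by wrapping optional fields in anyOf/null and adding titles.
    GET_METRICS_INPUT_SCHEMA = {
        "type": "object",
        "properties": {
            "dataset_id": {"type": "string", "description": "Dataset ID to get metrics for"},
            "resource_id": {"type": "string", "description": "Resource ID to get metrics for"},
            "limit": {"type": "integer", "default": 12, "description": "Monthly records to return (1-100)"},
        },
        "required": [],
    }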

Implementation Reference

  • The main handler function for the 'get_metrics' tool. It fetches usage metrics (visits and downloads) for specified datasets and/or resources from the Metrics API, handles environment checks, fetches metadata for context, formats the data into readable tables with totals, and returns formatted text output.
    @mcp.tool()
    async def get_metrics(
        dataset_id: str | None = None,
        resource_id: str | None = None,
        limit: int = 12,
    ) -> str:
        """
        Get metrics (visits, downloads) for a dataset and/or a resource.

        Returns monthly statistics including visits and downloads, sorted by month
        in descending order (most recent first). This tool is useful for analyzing
        the popularity and usage of datasets and resources, but is optional in the
        data exploration workflow.

        Typical use cases:
        - Analyze which datasets/resources are most popular
        - Track usage trends over time
        - Understand data consumption patterns

        Note: This is separate from the main data querying workflow. Use this
        after exploring datasets/resources if you need usage statistics.

        Args:
            dataset_id: Optional dataset ID to get metrics for
                (obtained from search_datasets or get_dataset_info)
            resource_id: Optional resource ID to get metrics for
                (obtained from list_dataset_resources or get_resource_info)
            limit: Maximum number of monthly records to return (default: 12, max: 100)

        Returns:
            Formatted text with monthly metrics for the dataset and/or resource

        Note: At least one of dataset_id or resource_id must be provided.
        This tool only works with the production environment (DATAGOUV_ENV=prod).
        The Metrics API does not have a demo/preprod environment.
        """
        # Check if we're in demo environment
        current_env: str = os.getenv("DATAGOUV_ENV", "prod").strip().lower()
        if current_env == "demo":
            return (
                "Error: The Metrics API is not available in the demo environment.\n"
                "The Metrics API only exists in production. Please set DATAGOUV_ENV=prod "
                "to use this tool, or switch to production environment to access metrics data."
            )

        if not dataset_id and not resource_id:
            return "Error: At least one of dataset_id or resource_id must be provided."

        content_parts: list[str] = []
        limit = max(1, min(limit, 100))

        try:
            if dataset_id:
                # Clean and validate dataset_id
                dataset_id = str(dataset_id).strip()
                if not dataset_id:
                    return "Error: dataset_id cannot be empty."

                logger.debug(f"Fetching metrics for dataset_id: {dataset_id}")

                # Get dataset metadata for context
                try:
                    dataset_meta = await datagouv_api_client.get_dataset_metadata(
                        dataset_id
                    )
                    dataset_title = dataset_meta.get("title", "Unknown")
                    content_parts.append(f"Dataset Metrics: {dataset_title}")
                    content_parts.append(f"Dataset ID: {dataset_id}")
                    content_parts.append("")
                except Exception as e:  # noqa: BLE001
                    logger.warning(f"Could not fetch dataset metadata: {e}")
                    content_parts.append("Dataset Metrics")
                    content_parts.append(f"Dataset ID: {dataset_id}")
                    content_parts.append("")

                # Get dataset metrics
                try:
                    logger.debug(
                        f"Calling metrics_api_client.get_metrics with dataset_id: {dataset_id}"
                    )
                    metrics = await metrics_api_client.get_metrics(
                        "datasets", dataset_id, limit=limit
                    )
                    logger.debug(
                        f"Received {len(metrics) if metrics else 0} metric entries"
                    )

                    if not metrics:
                        content_parts.append("No metrics available for this dataset.")
                    else:
                        content_parts.append("Monthly Statistics:")
                        content_parts.append("-" * 60)
                        content_parts.append(
                            f"{'Month':<12} {'Visits':<15} {'Downloads':<15}"
                        )
                        content_parts.append("-" * 60)

                        total_visits = 0
                        total_downloads = 0
                        for entry in metrics:
                            month = entry.get("metric_month", "Unknown")
                            visits = entry.get("monthly_visit", 0)
                            downloads = entry.get("monthly_download_resource", 0)
                            total_visits += visits
                            total_downloads += downloads
                            content_parts.append(
                                f"{month:<12} {visits:<15,} {downloads:<15,}"
                            )

                        content_parts.append("-" * 60)
                        content_parts.append(
                            f"{'Total':<12} {total_visits:<15,} {total_downloads:<15,}"
                        )
                except Exception as e:  # noqa: BLE001
                    logger.error(f"Error fetching dataset metrics: {e}")
                    content_parts.append(f"Error fetching dataset metrics: {str(e)}")

                if resource_id:
                    content_parts.append("")
                    content_parts.append("")

            if resource_id:
                # Clean and validate resource_id
                resource_id = str(resource_id).strip()
                if not resource_id:
                    return "Error: resource_id cannot be empty."

                logger.debug(f"Fetching metrics for resource_id: {resource_id}")

                # Get resource metadata for context
                try:
                    resource_meta = await datagouv_api_client.get_resource_metadata(
                        resource_id
                    )
                    resource_title = resource_meta.get("title", "Unknown")
                    content_parts.append(f"Resource Metrics: {resource_title}")
                    content_parts.append(f"Resource ID: {resource_id}")
                    content_parts.append("")
                except Exception as e:  # noqa: BLE001
                    logger.warning(f"Could not fetch resource metadata: {e}")
                    content_parts.append("Resource Metrics")
                    content_parts.append(f"Resource ID: {resource_id}")
                    content_parts.append("")

                # Get resource metrics
                try:
                    logger.debug(
                        f"Calling metrics_api_client.get_metrics with resource_id: {resource_id}"
                    )
                    metrics = await metrics_api_client.get_metrics(
                        "resources", resource_id, limit=limit
                    )
                    logger.debug(
                        f"Received {len(metrics) if metrics else 0} metric entries"
                    )

                    if not metrics:
                        content_parts.append("No metrics available for this resource.")
                    else:
                        content_parts.append("Monthly Statistics:")
                        content_parts.append("-" * 40)
                        content_parts.append(f"{'Month':<12} {'Downloads':<15}")
                        content_parts.append("-" * 40)

                        total_downloads = 0
                        for entry in metrics:
                            month = entry.get("metric_month", "Unknown")
                            downloads = entry.get("monthly_download_resource", 0)
                            total_downloads += downloads
                            content_parts.append(f"{month:<12} {downloads:<15,}")

                        content_parts.append("-" * 40)
                        content_parts.append(f"{'Total':<12} {total_downloads:<15,}")
                except Exception as e:  # noqa: BLE001
                    logger.error(f"Error fetching resource metrics: {e}")
                    content_parts.append(f"Error fetching resource metrics: {str(e)}")

            return "\n".join(content_parts)
        except Exception as e:  # noqa: BLE001
            logger.exception("Unexpected error in get_metrics")
            return f"Error: {str(e)}"
  • The call to register_get_metrics_tool within the central register_tools function, which registers all MCP tools with the FastMCP server instance.
    register_get_metrics_tool(mcp)
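
register_tools itself is not reproduced on this page; a minimal sketch of how the call above typically sits inside it (other registrations elided, surrounding names assumed):

    from mcp.server.fastmcp import FastMCP

    from tools.get_metrics import register_get_metrics_tool


    def register_tools(mcp: FastMCP) -> None:
        """Register all MCP tools with the FastMCP server instance."""
        # ...registration calls for the other tools (search_datasets, etc.) go here...
        register_get_metrics_tool(mcp)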
  • tools/__init__.py:7-7 (registration): Import statement for the get_metrics tool's registration function in the tools package init file.
    from tools.get_metrics import register_get_metrics_tool
  • The registration wrapper function that defines and decorates the get_metrics handler with @mcp.tool().
    def register_get_metrics_tool(mcp: FastMCP) -> None:
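
Only the wrapper's signature is captured above; combined with the handler shown earlier, the full function presumably has this shape (body abridged):

    def register_get_metrics_tool(mcp: FastMCP) -> None:
        """Attach the get_metrics tool to the given FastMCP instance."""

        @mcp.tool()
        async def get_metrics(
            dataset_id: str | None = None,
            resource_id: str | None = None,
            limit: int = 12,
        ) -> str:
            ...  # full handler body as listed in the Implementation Reference above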

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/bolinocroustibat/datagouv-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.