get_vitals_summary

Retrieve Android Vitals summary data for your app, including crash and ANR rates with threshold flags, to monitor app stability and performance.

Instructions

Get combined Android Vitals: crash rate and ANR rate per version code.

Returns averages over the period with threshold flags. Thresholds: userPerceivedCrashRate > 1.09%, userPerceivedAnrRate > 0.47%.

Args:

- package_name: Package name, e.g. com.example.myapp
- days: Past days to include (default 7, max 30).
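The thresholds are quoted as percentages, but the rates in the returned data are fractions (1.09% is 0.0109). A minimal sketch of the comparison, with hypothetical names, assuming `None` means no data for that metric:

```python
# Google Play bad-behavior thresholds, expressed as fractions
CRASH_THRESHOLD = 0.0109  # 1.09% user-perceived crash rate
ANR_THRESHOLD = 0.0047    # 0.47% user-perceived ANR rate

def threshold_flags(crash_rate, anr_rate):
    """Return (exceedsCrashThreshold, exceedsAnrThreshold); None counts as not exceeding."""
    return (
        crash_rate is not None and crash_rate > CRASH_THRESHOLD,
        anr_rate is not None and anr_rate > ANR_THRESHOLD,
    )

print(threshold_flags(0.02, 0.001))  # → (True, False): 2% crashes exceeds 1.09%, ANR is healthy
```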

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| package_name | Yes | | |
| days | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • The `get_vitals_summary` function is decorated with `@mcp.tool()` and aggregates crash and ANR rates by version code, checks them against the bad-behavior thresholds, and returns a JSON summary. The published snippet is truncated after the final `return`; the `except` clause below is a reconstruction to close the open `try` block, and the actual error handling may differ. The helpers `_reporting()` and `_parse_reporting_rows()` are defined elsewhere in the server module.

    ```python
    @mcp.tool()
    def get_vitals_summary(
        package_name: str,
        days: int = 7,
    ) -> str:
        """Get combined Android Vitals: crash rate and ANR rate per version code.

        Returns averages over the period with threshold flags.
        Thresholds: userPerceivedCrashRate > 1.09%, userPerceivedAnrRate > 0.47%.

        Args:
            package_name: Package name, e.g. com.example.myapp
            days: Past days to include (default 7, max 30).
        """
        days = max(1, min(days, 30))
        try:
            crash_raw = _reporting().query_crash_rate(package_name, days)
            anr_raw = _reporting().query_anr_rate(package_name, days)

            crash_rows = _parse_reporting_rows(crash_raw.get("rows", []))
            anr_rows = _parse_reporting_rows(anr_raw.get("rows", []))

            # Aggregate by version code: average rates over the period
            def _aggregate(rows: list, rate_key: str, perceived_key: str) -> dict:
                by_version: dict = {}
                for row in rows:
                    vc = row.get("versionCode") or "unknown"
                    entry = by_version.setdefault(vc, {"values": [], "perceived": [], "users": []})
                    if isinstance(row.get(rate_key), (int, float)):
                        entry["values"].append(row[rate_key])
                    if isinstance(row.get(perceived_key), (int, float)):
                        entry["perceived"].append(row[perceived_key])
                    if isinstance(row.get("distinctUsers"), (int, float)):
                        entry["users"].append(row["distinctUsers"])

                def avg(lst: list):
                    return round(sum(lst) / len(lst), 6) if lst else None

                result = {}
                for vc, data in by_version.items():
                    result[vc] = {
                        f"avg_{rate_key}": avg(data["values"]),
                        f"avg_{perceived_key}": avg(data["perceived"]),
                        "avgDistinctUsers": avg(data["users"]),
                    }
                return result

            crash_by_vc = _aggregate(crash_rows, "crashRate", "userPerceivedCrashRate")
            anr_by_vc = _aggregate(anr_rows, "anrRate", "userPerceivedAnrRate")

            # Newest version code first; non-numeric codes sort last
            all_vcs = sorted(
                set(crash_by_vc) | set(anr_by_vc),
                key=lambda x: int(x) if str(x).isdigit() else 0,
                reverse=True,
            )

            summary = []
            for vc in all_vcs:
                entry = {"versionCode": vc}
                entry.update(crash_by_vc.get(vc, {}))
                entry.update(anr_by_vc.get(vc, {}))
                # Flag if exceeding bad-behavior thresholds
                crash_pct = entry.get("avg_userPerceivedCrashRate")
                anr_pct = entry.get("avg_userPerceivedAnrRate")
                entry["exceedsCrashThreshold"] = crash_pct is not None and crash_pct > 0.0109
                entry["exceedsAnrThreshold"] = anr_pct is not None and anr_pct > 0.0047
                summary.append(entry)

            latest = summary[0] if summary else None

            return json.dumps(
                {
                    "packageName": package_name,
                    "periodDays": days,
                    "badBehaviorThresholds": {
                        "userPerceivedCrashRate": 0.0109,
                        "userPerceivedAnrRate": 0.0047,
                    },
                    "latestVersionSummary": latest,
                    "allVersions": summary,
                },
                indent=2,
            )
        except Exception as e:  # reconstructed handler; the source's may differ
            return json.dumps({"error": str(e)})
    ```
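Because the tool returns a JSON string rather than a structured object, a caller must parse it before inspecting the flags. A hypothetical sketch, using made-up data in the documented output shape (in practice the string comes back from a `get_vitals_summary` call):

```python
import json

# Hypothetical payload mirroring the documented output shape
raw = json.dumps({
    "packageName": "com.example.myapp",
    "periodDays": 7,
    "allVersions": [
        {"versionCode": "42", "exceedsCrashThreshold": True, "exceedsAnrThreshold": False},
        {"versionCode": "41", "exceedsCrashThreshold": False, "exceedsAnrThreshold": False},
    ],
})

data = json.loads(raw)
unhealthy = [
    v["versionCode"]
    for v in data["allVersions"]
    if v["exceedsCrashThreshold"] or v["exceedsAnrThreshold"]
]
print(unhealthy)  # → ['42']
```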
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully adds critical context beyond the schema by specifying that the tool 'Returns averages over the period with threshold flags' and provides the exact threshold percentages (1.09% and 0.47%), which are essential for interpreting the output flags.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately compact with front-loaded purpose and return value information. The Args section clearly delineates parameter documentation, though the indentation format slightly differs from standard prose. Every sentence contributes value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description appropriately focuses on what the output represents ('averages,' 'threshold flags') rather than duplicating structural return value documentation. The inclusion of specific threshold values provides essential domain context for interpreting the returned data flags.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, but the Args section in the description fully compensates by providing semantic meaning for both parameters: package_name includes an example ('com.example.myapp'), and days includes constraints ('default 7, max 30'). This effectively bridges the gap where the schema fails to document requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Get') and clearly identifies the resource ('combined Android Vitals') and scope ('crash rate and ANR rate per version code'). The word 'combined' effectively distinguishes it from siblings get_crash_rate and get_anr_rate, signaling that this tool aggregates both metrics rather than returning individual metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the term 'combined,' suggesting it should be used when both crash and ANR data are needed together. However, it lacks explicit guidance on when to prefer this over the individual sibling tools (get_crash_rate, get_anr_rate) or whether this provides a higher-level summary versus detailed analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/AgiMaulana/GooglePlayConsoleMcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.