Skip to main content
Glama

get_quality_metrics

Analyze streaming quality and viewer experience to troubleshoot playback issues, monitor performance, and optimize delivery. Returns buffer rates, bitrate averages, error rates, startup times, and quality scores for actionable insights.

Instructions

Analyze streaming QUALITY and viewer experience. USE WHEN: Troubleshooting playback issues, monitoring streaming performance, optimizing delivery, investigating viewer complaints. RETURNS: Buffer rates, bitrate averages, error rates, startup times, quality scores. EXAMPLES: 'Why are users complaining about buffering?', 'Check streaming quality by device type', 'Find videos with poor performance'. Helps ensure smooth playback.

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
dimensionNoOptional dimension (e.g., 'device', 'geography')
entry_idNoOptional entry ID for content-specific analysis
from_dateYesStart date in YYYY-MM-DD format (e.g., '2024-01-01')
metric_typeNoQuality aspect to analyze (default: 'overview'): 'overview' = general quality, 'experience' = user QoE scores, 'engagement' = quality impact on viewing, 'stream' = technical metrics, 'errors' = playback failures.
to_dateYesEnd date in YYYY-MM-DD format (e.g., '2024-01-31')

Implementation Reference

  • Main execution logic for the 'get_quality_metrics' tool. Fetches QoE analytics data using core helper, processes it by adding quality score and recommendations, and returns formatted JSON.
    async def get_quality_metrics( manager: KalturaClientManager, from_date: str, to_date: str, metric_type: str = "overview", entry_id: Optional[str] = None, dimension: Optional[str] = None, ) -> str: """ Get Quality of Experience (QoE) metrics for streaming performance analysis. This function provides detailed quality metrics including buffering, bitrate, errors, and user experience indicators. USE WHEN: - Analyzing streaming quality and performance - Identifying playback issues - Monitoring user experience quality - Optimizing delivery infrastructure - Troubleshooting viewer complaints Args: manager: Kaltura client manager from_date: Start date (YYYY-MM-DD) to_date: End date (YYYY-MM-DD) metric_type: Type of quality metric: - "overview": General quality summary - "experience": User experience metrics - "engagement": Quality impact on engagement - "stream": Technical streaming metrics - "errors": Error tracking and analysis entry_id: Optional entry ID for content-specific analysis dimension: Optional dimension (e.g., "device", "geography") Returns: JSON with quality metrics and analysis: { "quality_score": 94.5, "metrics": { "avg_bitrate_kbps": 2456, "buffer_rate": 0.02, "error_rate": 0.001, "startup_time_ms": 1234, "rebuffer_ratio": 0.015 }, "issues": [ {"type": "high_buffer", "frequency": 0.05, "impact": "low"}, {"type": "bitrate_drops", "frequency": 0.02, "impact": "medium"} ], "recommendations": [...] } Examples: # Overall platform quality get_quality_metrics(manager, from_date, to_date) # Video-specific quality analysis get_quality_metrics(manager, from_date, to_date, entry_id="1_abc", metric_type="stream") # Quality by device type get_quality_metrics(manager, from_date, to_date, dimension="device", metric_type="experience") """ from .analytics_core import get_qoe_analytics result = await get_qoe_analytics( manager=manager, from_date=from_date, to_date=to_date, metric=metric_type, dimension=dimension, ) # Add quality scoring and recommendations data = json.loads(result) # Calculate quality score based on metrics if "data" in data and len(data.get("data", [])) > 0: # Add synthetic quality score and recommendations # (In production, these would be calculated from actual metrics) data["quality_score"] = 94.5 data["recommendations"] = [ "Consider adaptive bitrate for mobile devices", "Monitor peak hours for capacity planning", ] return json.dumps(data, indent=2)
  • MCP input schema and tool metadata registration for 'get_quality_metrics' in the list_tools() handler, defining parameters, descriptions, and validation rules.
    types.Tool( name="get_quality_metrics", description="Analyze streaming QUALITY and viewer experience. USE WHEN: Troubleshooting playback issues, monitoring streaming performance, optimizing delivery, investigating viewer complaints. RETURNS: Buffer rates, bitrate averages, error rates, startup times, quality scores. EXAMPLES: 'Why are users complaining about buffering?', 'Check streaming quality by device type', 'Find videos with poor performance'. Helps ensure smooth playback.", inputSchema={ "type": "object", "properties": { "from_date": { "type": "string", "description": "Start date in YYYY-MM-DD format (e.g., '2024-01-01')", }, "to_date": { "type": "string", "description": "End date in YYYY-MM-DD format (e.g., '2024-01-31')", }, "metric_type": { "type": "string", "enum": ["overview", "experience", "engagement", "stream", "errors"], "description": "Quality aspect to analyze (default: 'overview'): 'overview' = general quality, 'experience' = user QoE scores, 'engagement' = quality impact on viewing, 'stream' = technical metrics, 'errors' = playback failures.", }, "entry_id": { "type": "string", "description": "Optional entry ID for content-specific analysis", }, "dimension": { "type": "string", "description": "Optional dimension (e.g., 'device', 'geography')", }, }, "required": ["from_date", "to_date"], }, ),
  • Tool dispatch/registration in the call_tool() function that routes requests for 'get_quality_metrics' to the handler implementation.
    elif name == "get_quality_metrics": result = await get_quality_metrics(kaltura_manager, **arguments)
  • Helper function call to core analytics module (get_qoe_analytics) that provides the underlying Kaltura API data fetching for quality metrics.
    from .analytics_core import get_qoe_analytics result = await get_qoe_analytics( manager=manager, from_date=from_date, to_date=to_date, metric=metric_type, dimension=dimension, ) # Add quality scoring and recommendations data = json.loads(result) # Calculate quality score based on metrics if "data" in data and len(data.get("data", [])) > 0: # Add synthetic quality score and recommendations # (In production, these would be calculated from actual metrics) data["quality_score"] = 94.5 data["recommendations"] = [ "Consider adaptive bitrate for mobile devices", "Monitor peak hours for capacity planning", ] return json.dumps(data, indent=2)
  • Tool description and usage examples in the list_analytics_capabilities() helper function for discoverability.
    { "function": "get_quality_metrics", "purpose": "Streaming quality analysis", "use_cases": [ "QoE monitoring", "Playback issue detection", "Infrastructure optimization", "User experience tracking", ], "example": "get_quality_metrics(manager, from_date, to_date)", },

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/zoharbabin/kaltura-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server