
get_video_retention

Analyze viewer drop-off points in a video to optimize content, identify engaging moments, and improve completion rates. Provides 101 data points showing viewer retention at each percentage of the video, enabling precise comparisons between audience segments.

Instructions

Analyze WHERE viewers stop watching in a video. USE WHEN: Optimizing video content, finding boring sections, identifying engaging moments, improving completion rates. RETURNS: 101 data points (0-100%) showing viewer count at each percent of video. EXAMPLES: 'Where do viewers drop off in video 1_abc123?', 'What parts get replayed?', 'Compare retention for anonymous vs logged-in users'. Shows exact percentages where audience is lost.
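
To make the percentile axis concrete, the sketch below (illustrative, not part of the server code) shows how a percentile index maps to a timestamp for a video of known duration, mirroring the conversion the handler performs:

def percentile_to_timestamp(percentile: int, duration_seconds: int) -> str:
    """Map one of the 101 percentile points (0-100) to an MM:SS timestamp."""
    time_seconds = int((percentile / 100.0) * duration_seconds)
    return f"{time_seconds // 60:02d}:{time_seconds % 60:02d}"

# For a 5-minute (300-second) video, percentile 10 falls at 00:30
assert percentile_to_timestamp(10, 300) == "00:30"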

Input Schema

Name             | Required | Description
entry_id         | Yes      | Video to analyze (format: '1_abc123'). Get from search_entries or get_media_entry.
from_date        | No       | Start date; defaults to 30 days ago.
to_date          | No       | End date; defaults to today.
user_filter      | No       | Viewer segment: 'anonymous' (not logged in), 'registered' (logged in), 'user@email.com' (specific user), or 'cohort:students' (named group). Use to compare different audience behaviors.
compare_segments | No       | Compare the filtered segment against all viewers.
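
A representative call, shown as the arguments dict an MCP client would send (the entry ID, dates, and segment value are placeholders):

# Hypothetical arguments payload following the schema above
arguments = {
    "entry_id": "1_abc123",      # required; get from search_entries or get_media_entry
    "from_date": "2024-01-01",   # optional, YYYY-MM-DD
    "to_date": "2024-01-31",     # optional, YYYY-MM-DD
    "user_filter": "anonymous",  # optional viewer segment
    "compare_segments": False,   # optional
}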

Implementation Reference

  • The main handler function for the 'get_video_retention' tool. It fetches raw percentiles analytics data from the Kaltura API, processes it into a time-formatted retention curve with viewer and replay counts at each percentile, calculates insights (average retention, major drop-offs, replay hotspots), fetches video metadata for duration and title, and supports user filters and date ranges.
async def get_video_retention(
    manager: KalturaClientManager,
    entry_id: str,
    from_date: Optional[str] = None,
    to_date: Optional[str] = None,
    user_filter: Optional[str] = None,
    compare_segments: bool = False,
) -> str:
    """
    Analyze viewer retention throughout a video with percentile-level granularity.

    This function provides detailed retention curves showing exactly where viewers
    drop off or replay content within a single video. Returns 101 data points
    representing viewer behavior at each percent of the video duration.

    USE WHEN:
    - Analyzing where viewers stop watching within a video
    - Identifying segments that get replayed frequently
    - Optimizing video content structure and pacing
    - Comparing retention between different viewer segments
    - Understanding completion rates and engagement patterns

    Args:
        manager: Kaltura client manager
        entry_id: Video entry ID to analyze (required)
        from_date: Start date (optional, defaults to 30 days ago)
        to_date: End date (optional, defaults to today)
        user_filter: Filter by user type (optional):
            - None: All viewers (default)
            - "anonymous": Only non-logged-in viewers
            - "registered": Only logged-in viewers
            - "user@email.com": Specific user
            - "cohort:name": Named user cohort
        compare_segments: If True, compare filtered segment vs all viewers

    Returns:
        JSON with detailed retention analysis including TIME CONVERSION:
        {
            "video": {
                "id": "1_abc",
                "title": "Video Title",
                "duration_seconds": 300,
                "duration_formatted": "05:00"
            },
            "retention_data": [
                {
                    "percentile": 0,
                    "time_seconds": 0,
                    "time_formatted": "00:00",
                    "viewers": 1000,
                    "unique_users": 1000,
                    "retention_percentage": 100.0,
                    "replays": 0
                },
                {
                    "percentile": 10,
                    "time_seconds": 30,
                    "time_formatted": "00:30",
                    "viewers": 850,
                    "unique_users": 800,
                    "retention_percentage": 85.0,
                    "replays": 50
                },
                ...
            ],
            "insights": {
                "average_retention": 65.5,
                "completion_rate": 42.0,
                "fifty_percent_point": "02:30",
                "major_dropoffs": [
                    {"time": "00:30", "time_seconds": 30, "percentile": 10, "retention_loss": 15.0},
                    ...
                ],
                "replay_hotspots": [
                    {"time": "02:15", "time_seconds": 135, "percentile": 45, "replay_rate": 0.35},
                    ...
                ]
            }
        }

    Examples:
        # Basic retention analysis
        get_video_retention(manager, entry_id="1_abc123")

        # Compare anonymous vs all viewers
        get_video_retention(manager, entry_id="1_abc123", user_filter="anonymous", compare_segments=True)

        # Analyze specific user's viewing pattern
        get_video_retention(manager, entry_id="1_abc123", user_filter="john@example.com")
    """
    # Map user-friendly filters to API values
    user_ids = None
    if user_filter:
        if user_filter.lower() == "anonymous":
            user_ids = "Unknown"
        elif user_filter.lower() == "registered":
            # This requires getting all users vs anonymous
            # Note: comparison logic could be added here in future
            pass
        elif user_filter.startswith("cohort:"):
            # Handle cohort logic
            user_ids = user_filter[7:]  # Remove "cohort:" prefix
        else:
            user_ids = user_filter

    # Default date range if not provided
    if not from_date or not to_date:
        from datetime import datetime, timedelta

        end = datetime.now()
        start = end - timedelta(days=30)
        from_date = from_date or start.strftime("%Y-%m-%d")
        to_date = to_date or end.strftime("%Y-%m-%d")

    # Use the core analytics function with raw response format
    from .analytics_core import get_analytics_enhanced

    # Get raw percentiles data to avoid object creation issues
    result = await get_analytics_enhanced(
        manager=manager,
        from_date=from_date,
        to_date=to_date,
        report_type="percentiles",
        entry_id=entry_id,
        object_ids=entry_id,
        user_id=user_ids,
        limit=500,
        response_format="raw",
    )

    # Parse and enhance the result
    try:
        data = json.loads(result)

        # Get video metadata to extract duration
        try:
            from .media import get_media_entry

            video_info = await get_media_entry(manager, entry_id)
            video_data = json.loads(video_info)
            video_duration = video_data.get("duration", 0)
            video_title = video_data.get("name", "Unknown")
        except Exception:
            # Fallback for tests or when media info is not available
            # Try to determine duration from the data if we have 100 percentile
            video_duration = 300  # Default 5 minutes
            video_title = f"Video {entry_id}"

            # If we can access the raw response, try to get metadata from there
            if "kaltura_response" in data and isinstance(data["kaltura_response"], dict):
                # Sometimes duration might be in the response metadata
                if (
                    "totalCount" in data["kaltura_response"]
                    and data["kaltura_response"]["totalCount"] == 101
                ):
                    # 101 data points suggest percentiles 0-100, so we have full video coverage
                    # Default to 5 minutes if we can't determine actual duration
                    video_duration = 300

        # Create enhanced format with time conversion
        formatted_result = {
            "video": {
                "id": entry_id,
                "title": video_title,
                "duration_seconds": video_duration,
                "duration_formatted": f"{video_duration // 60:02d}:{video_duration % 60:02d}",
            },
            "date_range": {"from": from_date, "to": to_date},
            "filter": {"user_ids": user_ids or "all"},
            "retention_data": [],
        }

        # Process the Kaltura response and add time conversion
        if "kaltura_response" in data:
            kaltura_data = data["kaltura_response"]

            # Parse the CSV data with percentiles
            if "data" in kaltura_data and kaltura_data["data"]:
                # Split by newline or semicolon (Kaltura sometimes uses semicolons)
                if ";" in kaltura_data["data"] and "\n" not in kaltura_data["data"]:
                    rows = kaltura_data["data"].strip().split(";")
                else:
                    rows = kaltura_data["data"].strip().split("\n")

                # First pass: collect all data points
                raw_data_points = []
                for row in rows:
                    if row.strip():
                        # Parse percentile data (format: percentile|viewers|unique_users or CSV)
                        if "|" in row:
                            values = row.split("|")
                        else:
                            values = row.split(",")

                        if len(values) >= 3:
                            try:
                                percentile = int(values[0])
                                viewers = int(values[1])
                                unique_users = int(values[2])
                                raw_data_points.append(
                                    {
                                        "percentile": percentile,
                                        "viewers": viewers,
                                        "unique_users": unique_users,
                                    }
                                )
                            except (ValueError, TypeError):
                                continue

                # Find the maximum viewer count to use as initial reference
                # This handles cases where percentile 0 has 0 viewers
                max_viewers = max((p["viewers"] for p in raw_data_points), default=0)

                # If we have data at percentile 0 with viewers > 0, use that as initial
                # Otherwise, use the maximum viewer count as the reference point
                initial_viewers = 0
                for point in raw_data_points:
                    if point["percentile"] == 0 and point["viewers"] > 0:
                        initial_viewers = point["viewers"]
                        break

                if initial_viewers == 0:
                    # No viewers at start, use max viewers as reference
                    initial_viewers = max_viewers

                # Second pass: calculate retention percentages
                for point in raw_data_points:
                    percentile = point["percentile"]
                    viewers = point["viewers"]
                    unique_users = point["unique_users"]

                    # Calculate time position
                    time_seconds = int((percentile / 100.0) * video_duration)
                    time_formatted = f"{time_seconds // 60:02d}:{time_seconds % 60:02d}"

                    # Calculate retention percentage
                    if initial_viewers > 0:
                        retention_pct = viewers / initial_viewers * 100
                    else:
                        # If no initial viewers, show 0% retention
                        retention_pct = 0 if viewers == 0 else 100

                    formatted_result["retention_data"].append(
                        {
                            "percentile": percentile,
                            "time_seconds": time_seconds,
                            "time_formatted": time_formatted,
                            "viewers": viewers,
                            "unique_users": unique_users,
                            "retention_percentage": round(retention_pct, 2),
                            "replays": viewers - unique_users,
                        }
                    )

            # Calculate insights
            if formatted_result["retention_data"]:
                retention_values = [
                    d["retention_percentage"] for d in formatted_result["retention_data"]
                ]

                # Find major drop-offs (>5% loss in 10 seconds / ~10 percentile points)
                major_dropoffs = []
                for i in range(10, len(formatted_result["retention_data"]), 10):
                    current = formatted_result["retention_data"][i]
                    previous = formatted_result["retention_data"][i - 10]
                    drop = previous["retention_percentage"] - current["retention_percentage"]
                    if drop >= 5:
                        major_dropoffs.append(
                            {
                                "time": current["time_formatted"],
                                "time_seconds": current["time_seconds"],
                                "percentile": current["percentile"],
                                "retention_loss": round(drop, 2),
                            }
                        )

                # Find replay hotspots
                replay_hotspots = []
                for point in formatted_result["retention_data"]:
                    if point["unique_users"] > 0:
                        replay_rate = point["replays"] / point["unique_users"]
                        if replay_rate > 0.2:  # 20% replay rate threshold
                            replay_hotspots.append(
                                {
                                    "time": point["time_formatted"],
                                    "time_seconds": point["time_seconds"],
                                    "percentile": point["percentile"],
                                    "replay_rate": round(replay_rate, 2),
                                }
                            )

                formatted_result["insights"] = {
                    "average_retention": round(sum(retention_values) / len(retention_values), 2),
                    "completion_rate": round(retention_values[-1] if retention_values else 0, 2),
                    "fifty_percent_point": next(
                        (
                            d["time_formatted"]
                            for d in formatted_result["retention_data"]
                            if d["retention_percentage"] <= 50
                        ),
                        "Never",
                    ),
                    "major_dropoffs": major_dropoffs[:5],  # Top 5 drop-offs
                    "replay_hotspots": sorted(
                        replay_hotspots, key=lambda x: x["replay_rate"], reverse=True
                    )[:5],
                }

            # Keep raw response for reference
            formatted_result["kaltura_raw_response"] = kaltura_data

        elif "error" in data:
            return json.dumps(data, indent=2)

        if user_ids and compare_segments:
            formatted_result["note"] = "For segment comparison, call this function twice with different user filters"

        return json.dumps(formatted_result, indent=2)

    except Exception as e:
        # If parsing fails, return error
        return json.dumps(
            {
                "error": f"Failed to process retention data: {str(e)}",
                "video_id": entry_id,
                "filter": {"user_ids": user_ids or "all"},
            },
            indent=2,
        )
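
As a quick sanity check on the insight formulas above, here is a self-contained sketch over fabricated data points (the numbers are invented; the formulas match the handler):

# Fabricated retention points; replays = viewers - unique_users, retention is
# measured relative to the initial viewer count, as in the handler above.
points = [
    {"percentile": 0, "viewers": 1000, "unique_users": 1000},
    {"percentile": 10, "viewers": 850, "unique_users": 800},
    {"percentile": 20, "viewers": 820, "unique_users": 790},
]
initial = points[0]["viewers"]
for p in points:
    p["retention_percentage"] = round(p["viewers"] / initial * 100, 2)
    p["replays"] = p["viewers"] - p["unique_users"]

# Loss between consecutive sampled points, as in the major-dropoff scan
drop = points[0]["retention_percentage"] - points[1]["retention_percentage"]
print(drop)  # 15.0 -> flagged, since the handler treats a loss of 5 points or more as a major drop-off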
  • Tool registration in the MCP server, including the tool's name, description, and input schema. This defines the interface exposed to MCP clients.
types.Tool(
    name="get_video_retention",
    description="Analyze WHERE viewers stop watching in a video. USE WHEN: Optimizing video content, finding boring sections, identifying engaging moments, improving completion rates. RETURNS: 101 data points (0-100%) showing viewer count at each percent of video. EXAMPLES: 'Where do viewers drop off in video 1_abc123?', 'What parts get replayed?', 'Compare retention for anonymous vs logged-in users'. Shows exact percentages where audience is lost.",
    inputSchema={
        "type": "object",
        "properties": {
            "entry_id": {
                "type": "string",
                "description": "Video to analyze (required, format: '1_abc123'). Get from search_entries or get_media_entry.",
            },
            "from_date": {
                "type": "string",
                "description": "Start date (optional, defaults to 30 days ago)",
            },
            "to_date": {
                "type": "string",
                "description": "End date (optional, defaults to today)",
            },
            "user_filter": {
                "type": "string",
                "description": "Optional viewer segment: 'anonymous' (not logged in), 'registered' (logged in), 'user@email.com' (specific user), 'cohort:students' (named group). Compare different audience behaviors.",
            },
            "compare_segments": {
                "type": "boolean",
                "description": "Compare filtered segment vs all viewers",
            },
        },
        "required": ["entry_id"],
    },
),
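
For context, a tools/call request invoking this registration might look like the following (shown as a plain dict; the exact envelope depends on the MCP client library):

# Hypothetical MCP tools/call payload targeting the tool registered above
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_video_retention",
        "arguments": {"entry_id": "1_abc123", "user_filter": "registered"},
    },
}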
  • Core helper function called by get_video_retention to fetch the raw 'percentiles' report (ID 43) from the Kaltura API. It handles all analytics report types, filters, and pagination, and returns a raw response that the handler processes into retention insights.
async def get_analytics_enhanced(
    manager: KalturaClientManager,
    from_date: str,
    to_date: str,
    report_type: str = "content",
    entry_id: Optional[str] = None,
    user_id: Optional[str] = None,
    object_ids: Optional[str] = None,
    metrics: Optional[List[str]] = None,
    categories: Optional[str] = None,
    dimension: Optional[str] = None,
    interval: Optional[str] = None,
    filters: Optional[Dict[str, str]] = None,
    limit: int = 20,
    page_index: int = 1,
    order_by: Optional[str] = None,
    response_format: str = "json",
) -> str:
    """Enhanced analytics with support for all report types and advanced features.

    Args:
        manager: Kaltura client manager
        from_date: Start date (YYYY-MM-DD)
        to_date: End date (YYYY-MM-DD)
        report_type: Type of report (see REPORT_TYPE_MAP keys)
        entry_id: Optional specific entry ID
        user_id: Optional specific user ID
        object_ids: Optional comma-separated object IDs
        metrics: Requested metrics (for reference)
        categories: Category filter
        dimension: Dimension for grouping (e.g., "device", "country")
        interval: Time interval (e.g., "days", "months", "years")
        filters: Additional filters (customVar1In, countryIn, etc.)
        limit: Maximum results
        page_index: Page number for pagination
        order_by: Sort field
        response_format: "json", "csv", or "raw" (returns unprocessed API response)
    """
    # Validate dates
    date_pattern = r"^\d{4}-\d{2}-\d{2}$"
    if not re.match(date_pattern, from_date) or not re.match(date_pattern, to_date):
        return json.dumps({"error": "Invalid date format. Use YYYY-MM-DD"}, indent=2)

    # Validate entry ID if provided
    if entry_id and not validate_entry_id(entry_id):
        return json.dumps({"error": "Invalid entry ID format"}, indent=2)

    # Get report type ID
    report_type_id = REPORT_TYPE_MAP.get(report_type)
    if not report_type_id:
        return json.dumps(
            {
                "error": f"Unknown report type: {report_type}",
                "available_types": list(REPORT_TYPE_MAP.keys()),
            },
            indent=2,
        )

    # Check if object IDs are required
    if report_type in OBJECT_ID_REQUIRED_REPORTS and not (entry_id or user_id or object_ids):
        return json.dumps(
            {
                "error": f"Report type '{report_type}' requires object IDs",
                "suggestion": "Provide entry_id, user_id, or object_ids parameter",
            },
            indent=2,
        )

    # If requesting raw format and imports might fail, return early with a simpler approach
    if response_format == "raw":
        try:
            # Try the simple approach first for raw format
            client = manager.get_client()

            # Direct API call without complex objects
            start_time = int(datetime.strptime(from_date, "%Y-%m-%d").timestamp())
            end_time = int(datetime.strptime(to_date, "%Y-%m-%d").timestamp())

            # Try to get the report directly
            try:
                # Prepare object IDs
                if object_ids:
                    obj_ids = object_ids
                elif entry_id:
                    obj_ids = entry_id
                elif user_id:
                    obj_ids = user_id
                else:
                    obj_ids = None

                # Try direct call with minimal parameters
                report_result = client.report.getTable(
                    reportType=report_type_id,
                    reportInputFilter={
                        "fromDate": start_time,
                        "toDate": end_time,
                        "entryIdIn": entry_id if entry_id else None,
                        "userIds": user_id if user_id else None,
                        "categories": categories if categories else None,
                    },
                    pager={"pageSize": min(limit, 500), "pageIndex": page_index},
                    order=order_by,
                    objectIds=obj_ids,
                )

                # Return raw response
                return json.dumps(
                    {
                        "kaltura_response": {
                            "header": getattr(report_result, "header", ""),
                            "data": getattr(report_result, "data", ""),
                            "totalCount": getattr(report_result, "totalCount", 0),
                        },
                        "request_info": {
                            "report_type": report_type,
                            "report_type_id": report_type_id,
                            "from_date": from_date,
                            "to_date": to_date,
                            "entry_id": entry_id,
                            "user_id": user_id,
                        },
                    },
                    indent=2,
                )
            except Exception:
                # If direct call fails, fall through to normal processing
                pass
        except Exception:
            # If anything fails, continue with normal processing
            pass

    client = manager.get_client()

    try:
        from KalturaClient.Plugins.Core import (
            KalturaEndUserReportInputFilter,
            KalturaFilterPager,
            KalturaReportInputFilter,
            KalturaReportInterval,
        )

        # Convert dates
        start_time = int(datetime.strptime(from_date, "%Y-%m-%d").timestamp())
        end_time = int(datetime.strptime(to_date, "%Y-%m-%d").timestamp())

        # Create appropriate filter
        if report_type in END_USER_REPORTS:
            report_filter = KalturaEndUserReportInputFilter()
        else:
            report_filter = KalturaReportInputFilter()

        # Set date range
        report_filter.fromDate = start_time
        report_filter.toDate = end_time

        # Set categories if provided
        if categories:
            report_filter.categories = categories

        # Set interval if provided
        if interval:
            interval_map = {
                "days": KalturaReportInterval.DAYS,
                "months": KalturaReportInterval.MONTHS,
                "years": KalturaReportInterval.YEARS,
            }
            if interval in interval_map:
                report_filter.interval = interval_map[interval]

        # Apply additional filters
        if filters:
            for key, value in filters.items():
                if hasattr(report_filter, key):
                    setattr(report_filter, key, value)

        # Create pager
        pager = KalturaFilterPager()
        pager.pageSize = min(limit, 500)  # Allow larger pages
        pager.pageIndex = page_index

        # Prepare object IDs
        if object_ids:
            obj_ids = object_ids
        elif entry_id:
            obj_ids = entry_id
        elif user_id:
            obj_ids = user_id
        else:
            obj_ids = None

        # Get the report type enum value
        # For numeric IDs, just use the ID directly
        kaltura_report_type = report_type_id

        # Call appropriate API method
        if response_format == "raw":
            # Get raw table data without processing
            report_result = client.report.getTable(
                reportType=kaltura_report_type,
                reportInputFilter=report_filter,
                pager=pager,
                order=order_by,
                objectIds=obj_ids,
            )

            # Return raw Kaltura response with minimal wrapping
            return json.dumps(
                {
                    "kaltura_response": {
                        "header": getattr(report_result, "header", ""),
                        "data": getattr(report_result, "data", ""),
                        "totalCount": getattr(report_result, "totalCount", 0),
                    },
                    "request_info": {
                        "report_type": report_type,
                        "report_type_id": report_type_id,
                        "from_date": from_date,
                        "to_date": to_date,
                        "entry_id": entry_id,
                        "user_id": user_id,
                    },
                },
                indent=2,
            )
        elif response_format == "csv":
            # Get CSV export URL
            csv_result = client.report.getUrlForReportAsCsv(
                reportTitle=f"{REPORT_TYPE_NAMES.get(report_type, 'Report')}_{from_date}_{to_date}",
                reportText=f"Report from {from_date} to {to_date}",
                headers=",".join(metrics) if metrics else None,
                reportType=kaltura_report_type,
                reportInputFilter=report_filter,
                dimension=dimension,
                pager=pager,
                order=order_by,
                objectIds=obj_ids,
            )
            return json.dumps(
                {
                    "format": "csv",
                    "download_url": csv_result,
                    "expires_in": "300 seconds",
                    "report_type": REPORT_TYPE_NAMES.get(report_type, report_type),
                },
                indent=2,
            )
        else:
            # Get table data
            # Note: getTable doesn't support dimension parameter
            # If dimension is requested, we'll include it in metadata but cannot group by it
            report_result = client.report.getTable(
                reportType=kaltura_report_type,
                reportInputFilter=report_filter,
                pager=pager,
                order=order_by,
                objectIds=obj_ids,
            )

            # Parse results
            analytics_data = {
                "reportType": REPORT_TYPE_NAMES.get(report_type, "Analytics Report"),
                "reportTypeCode": report_type,
                "reportTypeId": report_type_id,
                "dateRange": {"from": from_date, "to": to_date},
                "filters": {
                    "categories": categories,
                    "dimension": dimension,
                    "interval": interval,
                    "objectIds": obj_ids,
                    "additionalFilters": filters,
                },
                "pagination": {
                    "pageSize": pager.pageSize,
                    "pageIndex": pager.pageIndex,
                    "totalCount": getattr(report_result, "totalCount", 0),
                },
                "headers": [],
                "data": [],
            }

            # Parse headers
            if report_result.header:
                analytics_data["headers"] = [h.strip() for h in report_result.header.split(",")]

            # Parse data with enhanced handling
            if report_result.data:
                data_rows = report_result.data.split("\n")
                for row in data_rows:
                    if row.strip():
                        # Handle different data formats
                        if ";" in row and report_type == "engagement_timeline":
                            # Special handling for timeline data
                            timeline_data = parse_timeline_data(row)
                            analytics_data["data"].append(timeline_data)
                        elif report_type in [
                            "percentiles",
                            "video_timeline",
                            "retention_curve",
                            "viewer_retention",
                            "drop_off_analysis",
                            "replay_detection",
                        ]:
                            # Special handling for PERCENTILES report (ID 43)
                            # This report uses semicolon-separated rows with pipe-separated values
                            if "|" in row:
                                values = row.split("|")
                                if len(values) >= 3:
                                    row_dict = {
                                        "percentile": convert_value(values[0]),
                                        "count_viewers": convert_value(values[1]),
                                        "unique_known_users": convert_value(values[2]),
                                    }
                                    analytics_data["data"].append(row_dict)
                            else:
                                # Fallback to standard CSV parsing if no pipes found
                                row_values = parse_csv_row(row)
                                if len(row_values) >= len(analytics_data["headers"]):
                                    row_dict = {}
                                    for i, header in enumerate(analytics_data["headers"]):
                                        if i < len(row_values):
                                            row_dict[header] = convert_value(row_values[i])
                                    analytics_data["data"].append(row_dict)
                        else:
                            # Standard CSV parsing
                            row_values = parse_csv_row(row)
                            if len(row_values) >= len(analytics_data["headers"]):
                                row_dict = {}
                                for i, header in enumerate(analytics_data["headers"]):
                                    if i < len(row_values):
                                        row_dict[header] = convert_value(row_values[i])
                                analytics_data["data"].append(row_dict)

            analytics_data["totalResults"] = len(analytics_data["data"])

            # Add note if dimension was requested but not applied
            if dimension:
                analytics_data["note"] = (
                    f"Dimension '{dimension}' was requested but grouping is not supported in table format. "
                    "Use get_analytics_graph() or response_format='graph' for dimensional analysis."
                )

            # Add summary for certain reports
            if report_type in ["partner_usage", "var_usage", "cdn_bandwidth"]:
                summary_result = client.report.getTotal(
                    reportType=kaltura_report_type,
                    reportInputFilter=report_filter,
                    objectIds=obj_ids,
                )
                if summary_result:
                    analytics_data["summary"] = parse_summary_data(summary_result)

            return json.dumps(analytics_data, indent=2)

    except ImportError as e:
        return json.dumps(
            {
                "error": "Analytics functionality not available",
                "detail": str(e),
                "suggestion": "Ensure Kaltura client has Report plugin",
            },
            indent=2,
        )
    except Exception as e:
        return json.dumps(
            {
                "error": f"Failed to retrieve analytics: {str(e)}",
                "report_type": report_type,
                "suggestion": "Check permissions and report availability",
            },
            indent=2,
        )
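
The PERCENTILES payload both functions parse can be illustrated with a fabricated sample; the row layout (pipe-separated values in semicolon- or newline-separated rows) follows the parsing logic above:

# Fabricated raw PERCENTILES data: percentile|viewers|unique_users per row
raw = "0|1000|1000;10|850|800;20|820|790"

rows = raw.strip().split(";") if ";" in raw and "\n" not in raw else raw.strip().split("\n")
parsed = []
for row in rows:
    values = row.split("|") if "|" in row else row.split(",")
    if len(values) >= 3:
        parsed.append(
            {"percentile": int(values[0]), "viewers": int(values[1]), "unique_users": int(values[2])}
        )

print(parsed[1])  # {'percentile': 10, 'viewers': 850, 'unique_users': 800}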
  • REPORT_TYPE_MAP defines 'percentiles': 43, mapping to Kaltura's PERCENTILES report used for video retention analysis. It is imported and used by the analytics functions.
REPORT_TYPE_MAP = {
    # Content Performance Reports (1-10, 34, 44)
    "content": 1,  # TOP_CONTENT
    "content_dropoff": 2,  # CONTENT_DROPOFF
    "content_interactions": 3,  # CONTENT_INTERACTIONS
    "engagement_timeline": 34,  # USER_ENGAGEMENT_TIMELINE
    "content_contributions": 7,  # CONTENT_CONTRIBUTIONS
    "content_report_reasons": 44,  # CONTENT_REPORT_REASONS
    "content_spread": 10,  # CONTENT_SPREAD
    # User Analytics Reports (11-18, 35, 40)
    "user_engagement": 11,  # USER_ENGAGEMENT
    "specific_user_engagement": 12,  # SPECIFIC_USER_ENGAGEMENT
    "user_top_content": 13,  # USER_TOP_CONTENT
    "user_content_dropoff": 14,  # USER_CONTENT_DROPOFF
    "user_content_interactions": 15,  # USER_CONTENT_INTERACTIONS
    "user_usage": 17,  # USER_USAGE
    "unique_users": 35,  # UNIQUE_USERS_PLAY
    "user_highlights": 40,  # USER_HIGHLIGHTS
    "specific_user_usage": 18,  # SPECIFIC_USER_USAGE
    # Geographic & Demographic Reports (4, 30, 36-37)
    "geographic": 4,  # MAP_OVERLAY
    "geographic_country": 36,  # MAP_OVERLAY_COUNTRY
    "geographic_region": 37,  # MAP_OVERLAY_REGION
    "geographic_city": 30,  # MAP_OVERLAY_CITY
    # Platform & Technology Reports (21-23, 32-33)
    "platforms": 21,  # PLATFORMS
    "operating_system": 22,  # OPERATING_SYSTEM
    "browsers": 23,  # BROWSERS
    "operating_system_families": 32,  # OPERATING_SYSTEM_FAMILIES
    "browsers_families": 33,  # BROWSERS_FAMILIES
    # Creator & Contributor Reports (5, 20, 38-39)
    "contributors": 5,  # TOP_CONTRIBUTORS
    "creators": 20,  # TOP_CREATORS
    "content_creator": 38,  # TOP_CONTENT_CREATOR
    "content_contributors": 39,  # TOP_CONTENT_CONTRIBUTORS
    # Distribution & Syndication Reports (6, 25, 41-42)
    "syndication": 6,  # TOP_SYNDICATION
    "playback_context": 25,  # TOP_PLAYBACK_CONTEXT
    "sources": 41,  # TOP_SOURCES
    "syndication_usage": 42,  # TOP_SYNDICATION_DISTRIBUTION
    # Usage & Infrastructure Reports (19, 26-27, 60, 64, 201)
    "partner_usage": 201,  # PARTNER_USAGE
    "var_usage": 19,  # VAR_USAGE
    "vpaas_usage": 26,  # VPAAS_USAGE
    "entry_usage": 27,  # ENTRY_USAGE
    "self_serve_usage": 60,  # SELF_SERVE_USAGE
    "cdn_bandwidth": 64,  # CDN_BANDWIDTH_USAGE
    # Interactive & Advanced Reports (43, 45-50)
    "percentiles": 43,  # PERCENTILES - Video timeline retention analysis
    "video_timeline": 43,  # Alias for PERCENTILES - clearer for LLMs
    "retention_curve": 43,  # Another alias for PERCENTILES
    "viewer_retention": 43,  # PERCENTILES - Per-video retention analysis
    "drop_off_analysis": 43,  # PERCENTILES - Where viewers stop watching
    "replay_detection": 43,  # PERCENTILES - Identify replay hotspots
    "player_interactions": 45,  # PLAYER_RELATED_INTERACTIONS
    "playback_rate": 46,  # PLAYBACK_RATE
    "interactive_video": 49,  # USER_INTERACTIVE_VIDEO
    "interactive_nodes": 50,  # INTERACTIVE_VIDEO_TOP_NODES
    # Live & Real-time Reports (48, 10001-10006)
    "live_stats": 48,  # LIVE_STATS
    "realtime_country": 10001,  # MAP_OVERLAY_COUNTRY_REALTIME
    "realtime_users": 10005,  # USERS_OVERVIEW_REALTIME
    "realtime_qos": 10006,  # QOS_OVERVIEW_REALTIME
    # Quality of Experience Reports (30001-30050)
    "qoe_overview": 30001,  # QOE_OVERVIEW
    "qoe_experience": 30002,  # QOE_EXPERIENCE
    "qoe_engagement": 30014,  # QOE_ENGAGEMENT
    "qoe_stream_quality": 30026,  # QOE_STREAM_QUALITY
    "qoe_error_tracking": 30038,  # QOE_ERROR_TRACKING
    # Business Intelligence & Webcast Reports (40001-40013)
    "webcast_highlights": 40001,  # HIGHLIGHTS_WEBCAST
    "webcast_engagement": 40011,  # ENGAGEMENT_TIMELINE_WEBCAST
    # Additional Reports
    "discovery": 51,  # DISCOVERY
    "discovery_realtime": 52,  # DISCOVERY_REALTIME
    "realtime": 53,  # REALTIME
    "peak_usage": 54,  # PEAK_USAGE
    "flavor_params_usage": 55,  # FLAVOR_PARAMS_USAGE
    "content_spread_country": 56,  # CONTENT_SPREAD_COUNTRY
    "top_contributors_country": 57,  # TOP_CONTRIBUTORS_COUNTRY
    "contribution_source": 58,  # CONTRIBUTION_SOURCE
    "vod_performance": 59,  # VOD_PERFORMANCE
}
  • Dispatch registration in the server's call_tool handler that routes MCP tool calls to the get_video_retention implementation.
elif name == "get_video_retention":
    result = await get_video_retention(kaltura_manager, **arguments)

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/zoharbabin/kaltura-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.