
Robot Framework MCP Server

by sourcefuse

create_performance_monitoring_test

Generate Robot Framework test code to monitor application performance, returning complete .robot file content for analysis without execution.

Instructions

Generate Robot Framework performance monitoring test code. Returns complete .robot file content as text - does not execute.
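The load metrics in the generated suite are simple differences over the browser's Navigation Timing snapshot (`window.performance.timing`). The same arithmetic, sketched in Python with illustrative epoch-millisecond timestamps rather than real measurements:

```python
# Sketch of the subtraction arithmetic the generated suite performs on
# a Navigation Timing snapshot. The timestamps are made up for
# illustration; a browser reports epoch milliseconds.
timing = {
    "navigationStart": 1_000,
    "domainLookupStart": 1_010,
    "domainLookupEnd": 1_030,
    "connectStart": 1_030,
    "connectEnd": 1_070,
    "requestStart": 1_070,
    "responseStart": 1_150,
    "responseEnd": 1_200,
    "domLoading": 1_210,
    "domContentLoadedEventEnd": 1_800,
    "domComplete": 2_300,
    "loadEventEnd": 2_400,
}

metrics = {
    "dns_lookup": timing["domainLookupEnd"] - timing["domainLookupStart"],
    "tcp_connect": timing["connectEnd"] - timing["connectStart"],
    "request_response": timing["responseEnd"] - timing["requestStart"],
    "dom_processing": timing["domComplete"] - timing["domLoading"],
    "load_complete": timing["loadEventEnd"] - timing["navigationStart"],
    "dom_ready": timing["domContentLoadedEventEnd"] - timing["navigationStart"],
}

print(metrics)  # load_complete -> 1400, dom_ready -> 800
```

These are the same six values the suite later checks against its thresholds and writes into the performance report.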

Input Schema

No arguments.
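Because the tool takes no parameters, its advertised input schema reduces to an empty object schema. An illustrative sketch (the exact schema FastMCP derives from the handler's empty signature may carry extra metadata such as a title):

```python
# Illustrative input schema for a zero-argument MCP tool; this is an
# assumption about the generated schema's shape, not a dump of it.
input_schema = {
    "type": "object",
    "properties": {},
}

# A call therefore carries no payload beyond the tool name.
print(input_schema)
```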

Implementation Reference

  • The handler function, decorated with @mcp.tool(), generates and returns a comprehensive Robot Framework test suite for website performance monitoring: metrics collection, threshold validation, interaction-performance checks, and report generation.
    @mcp.tool()
    def create_performance_monitoring_test() -> str:
        """Generate Robot Framework performance monitoring test code. Returns complete .robot file content as text - does not execute."""
        return """*** Settings ***
    Library    SeleniumLibrary
    Library    Collections
    Library    DateTime
    Library    OperatingSystem
    
    *** Variables ***
    ${TEST_URL}    https://example.com    # Placeholder; override with: robot --variable TEST_URL:<url>
    ${PERFORMANCE_THRESHOLD_LOAD}    3000    # 3 seconds
    ${PERFORMANCE_THRESHOLD_INTERACTIVE}    2000    # 2 seconds
    ${PERFORMANCE_THRESHOLD_PAINT}    1000    # 1 second
    
    *** Test Cases ***
    Website Performance Test
        [Documentation]    Comprehensive website performance testing
        [Tags]    performance    load-time    metrics
        
        # Start performance monitoring
        ${test_start}=    Get Current Date    result_format=epoch
        
        Open Browser    ${TEST_URL}    chrome    options=add_argument("--enable-precise-memory-info")
        
        # Wait for page to fully load
        Wait Until Page Does Not Contain    Loading    timeout=30s
        Sleep    2s    # Allow all resources to load
        
        # Collect performance metrics
        ${metrics}=    Collect Performance Metrics
        
        # Validate performance thresholds
        Validate Performance Thresholds    ${metrics}
        
        # Test page interactions performance
        Test Page Interaction Performance
        
        # Generate performance report
        Generate Performance Report    ${metrics}
        
        Close Browser
    
    *** Keywords ***
    Collect Performance Metrics
        [Documentation]    Collect comprehensive performance metrics
        
        # Navigation timing metrics
        ${navigation_timing}=    Execute Javascript    
        ...    var timing = window.performance.timing;
        ...    return {
        ...        'dns_lookup': timing.domainLookupEnd - timing.domainLookupStart,
        ...        'tcp_connect': timing.connectEnd - timing.connectStart,
        ...        'request_response': timing.responseEnd - timing.requestStart,
        ...        'dom_processing': timing.domComplete - timing.domLoading,
        ...        'load_complete': timing.loadEventEnd - timing.navigationStart,
        ...        'dom_ready': timing.domContentLoadedEventEnd - timing.navigationStart,
        ...        'time_to_first_byte': timing.responseStart - timing.navigationStart
        ...    };
        
        # Paint timing metrics (if supported)
        ${paint_timing}=    Execute Javascript    
        ...    if ('getEntriesByType' in window.performance) {
        ...        var paints = window.performance.getEntriesByType('paint');
        ...        var result = {};
        ...        paints.forEach(function(paint) {
        ...            result[paint.name.replace('-', '_')] = paint.startTime;
        ...        });
        ...        return result;
        ...    }
        ...    return {};
        
        # Memory usage (if supported)
        ${memory_info}=    Execute Javascript    
        ...    if ('memory' in window.performance) {
        ...        return {
        ...            'used_js_heap': window.performance.memory.usedJSHeapSize,
        ...            'total_js_heap': window.performance.memory.totalJSHeapSize,
        ...            'heap_limit': window.performance.memory.jsHeapSizeLimit
        ...        };
        ...    }
        ...    return {};
        
        # Resource timing
        ${resource_count}=    Execute Javascript    
        ...    if ('getEntriesByType' in window.performance) {
        ...        var resources = window.performance.getEntriesByType('resource');
        ...        var types = {};
        ...        resources.forEach(function(resource) {
        ...            var type = resource.initiatorType || 'other';
        ...            types[type] = (types[type] || 0) + 1;
        ...        });
        ...        types['total'] = resources.length;
        ...        return types;
        ...    }
        ...    return {};
        
        # Combine all metrics
        ${all_metrics}=    Create Dictionary
        Set To Dictionary    ${all_metrics}    navigation=${navigation_timing}
        Set To Dictionary    ${all_metrics}    paint=${paint_timing}
        Set To Dictionary    ${all_metrics}    memory=${memory_info}
        Set To Dictionary    ${all_metrics}    resources=${resource_count}
        
        [Return]    ${all_metrics}
    
    Validate Performance Thresholds
        [Documentation]    Validate performance against thresholds
        [Arguments]    ${metrics}
        
        ${navigation}=    Get From Dictionary    ${metrics}    navigation
        
        # Check load time
        ${load_time}=    Get From Dictionary    ${navigation}    load_complete
        Should Be True    ${load_time} < ${PERFORMANCE_THRESHOLD_LOAD}    
        ...    Page load time (${load_time}ms) exceeds threshold (${PERFORMANCE_THRESHOLD_LOAD}ms)
        
        # Check DOM ready time
        ${dom_ready}=    Get From Dictionary    ${navigation}    dom_ready
        Should Be True    ${dom_ready} < ${PERFORMANCE_THRESHOLD_INTERACTIVE}    
        ...    DOM ready time (${dom_ready}ms) exceeds threshold (${PERFORMANCE_THRESHOLD_INTERACTIVE}ms)
        
        # Check first paint time (if available)
        ${paint}=    Get From Dictionary    ${metrics}    paint
        ${has_first_paint}=    Run Keyword And Return Status    Dictionary Should Contain Key    ${paint}    first_paint
        IF    ${has_first_paint}
            ${first_paint}=    Get From Dictionary    ${paint}    first_paint
            Should Be True    ${first_paint} < ${PERFORMANCE_THRESHOLD_PAINT}    
            ...    First paint time (${first_paint}ms) exceeds threshold (${PERFORMANCE_THRESHOLD_PAINT}ms)
        END
    
    Test Page Interaction Performance
        [Documentation]    Test performance of page interactions
        
        # Test scroll performance
        ${scroll_start}=    Get Current Date    result_format=epoch
        Execute Javascript    window.scrollTo(0, document.body.scrollHeight);
        ${scroll_end}=    Get Current Date    result_format=epoch
        ${scroll_time}=    Evaluate    (${scroll_end} - ${scroll_start}) * 1000
        Sleep    0.5s    # Let rendering settle before scrolling back
        Execute Javascript    window.scrollTo(0, 0);
        
        Log    Scroll performance: ${scroll_time}ms
        Should Be True    ${scroll_time} < 500    Scroll should be smooth (< 500ms)
        
        # Test click response time (if interactive elements exist)
        ${buttons}=    Get WebElements    css=button, input[type="submit"], .btn
        ${button_count}=    Get Length    ${buttons}
        
        IF    ${button_count} > 0
            ${button}=    Get From List    ${buttons}    0
            ${click_start}=    Get Current Date    result_format=epoch
            Click Element    ${button}
            ${click_end}=    Get Current Date    result_format=epoch
            ${click_time}=    Evaluate    (${click_end} - ${click_start}) * 1000
            Sleep    0.1s    # Let any triggered handler settle
            
            Log    Click response time: ${click_time}ms
            Should Be True    ${click_time} < 200    Click response should be immediate (< 200ms)
        END
    
    Generate Performance Report
        [Documentation]    Generate detailed performance report
        [Arguments]    ${metrics}
        
        ${timestamp}=    Get Current Date    result_format=%Y-%m-%d_%H-%M-%S
        ${report_file}=    Set Variable    performance_report_${timestamp}.txt
        
        ${navigation}=    Get From Dictionary    ${metrics}    navigation
        ${resources}=    Get From Dictionary    ${metrics}    resources
        ${total_resources}=    Evaluate    $resources.get('total', 'N/A')
        
        ${report_content}=    Catenate    SEPARATOR=\n
        ...    PERFORMANCE TEST REPORT
        ...    =========================
        ...    Test URL: ${TEST_URL}
        ...    Test Time: ${timestamp}
        ...    
        ...    NAVIGATION TIMING:
        ...    - DNS Lookup: ${navigation}[dns_lookup]ms
        ...    - TCP Connect: ${navigation}[tcp_connect]ms
        ...    - Request/Response: ${navigation}[request_response]ms
        ...    - DOM Processing: ${navigation}[dom_processing]ms
        ...    - Page Load Complete: ${navigation}[load_complete]ms
        ...    - DOM Ready: ${navigation}[dom_ready]ms
        ...    
        ...    RESOURCE LOADING:
        ...    - Total Resources: ${total_resources}
        ...    
        ...    PERFORMANCE THRESHOLDS:
        ...    - Load Time Threshold: ${PERFORMANCE_THRESHOLD_LOAD}ms
        ...    - Interactive Threshold: ${PERFORMANCE_THRESHOLD_INTERACTIVE}ms
        ...    - Paint Threshold: ${PERFORMANCE_THRESHOLD_PAINT}ms
        
        Create File    ${report_file}    ${report_content}
        Log    Performance report saved to: ${report_file}
    
    Performance Comparison Test
        [Documentation]    Compare performance across multiple test runs
        [Arguments]    ${test_iterations}=3
        
        @{all_results}=    Create List
        
        ${end}=    Evaluate    int(${test_iterations}) + 1
        FOR    ${iteration}    IN RANGE    1    ${end}
            Log    Running performance test iteration ${iteration}
            
            Open Browser    ${TEST_URL}    chrome
            Sleep    2s
            
            ${metrics}=    Collect Performance Metrics
            Append To List    ${all_results}    ${metrics}
            
            Close Browser
            Sleep    5s    # Wait between tests
        END
        
        # Calculate average performance
        ${avg_metrics}=    Calculate Average Performance    ${all_results}
        Log    Average performance across ${test_iterations} runs: ${avg_metrics}
    
    Calculate Average Performance
        [Documentation]    Calculate average performance from multiple test runs
        [Arguments]    ${results_list}
        
        ${total_load}=    Set Variable    0
        ${total_dom}=    Set Variable    0
        ${count}=    Get Length    ${results_list}
        
        FOR    ${result}    IN    @{results_list}
            ${navigation}=    Get From Dictionary    ${result}    navigation
            ${load_time}=    Get From Dictionary    ${navigation}    load_complete
            ${dom_time}=    Get From Dictionary    ${navigation}    dom_ready
            
            ${total_load}=    Evaluate    ${total_load} + ${load_time}
            ${total_dom}=    Evaluate    ${total_dom} + ${dom_time}
        END
        
        ${avg_load}=    Evaluate    ${total_load} / ${count}
        ${avg_dom}=    Evaluate    ${total_dom} / ${count}
        
        ${averages}=    Create Dictionary    avg_load_time=${avg_load}    avg_dom_ready=${avg_dom}
        [Return]    ${averages}
    """
  • mcp_server.py:703 (registration)
    The @mcp.tool() decorator registers the create_performance_monitoring_test function as an MCP tool.
    @mcp.tool()
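Since the tool only returns text, the caller is responsible for persisting and executing the suite. A minimal sketch of that last mile, where `suite_text` is a short stand-in for the full string a real client would receive from the tool call, and the file name is arbitrary:

```python
# Sketch: persist the tool's output and hand it to Robot Framework.
# `suite_text` stands in for the complete .robot content returned by
# create_performance_monitoring_test(); obtaining it via an actual MCP
# tool call is out of scope here.
from pathlib import Path

suite_text = """\
*** Settings ***
Library    SeleniumLibrary
"""  # stand-in for the full generated suite

suite_path = Path("performance_monitoring.robot")
suite_path.write_text(suite_text, encoding="utf-8")

# With robotframework and SeleniumLibrary installed, the suite runs as:
#   robot --variable TEST_URL:https://your-site.example performance_monitoring.robot
print(f"wrote {suite_path} ({suite_path.stat().st_size} bytes)")
```

`${TEST_URL}` can always be overridden per run with `--variable TEST_URL:<url>`, so one generated suite can monitor multiple targets.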
