simulate_heavy_computation
Execute multiple heavy computation tasks concurrently to demonstrate parallel processing benefits for SEO data analysis operations.
Instructions
Parallelized version of `simulate_heavy_computation`. This function accepts a list of keyword-argument dictionaries and executes `simulate_heavy_computation` concurrently for each set of arguments.

Original function signature: `simulate_heavy_computation(complexity: int)`

Args:

- `kwargs_list` (`List[Dict[str, Any]]`): A list of dictionaries, where each dictionary provides the keyword arguments for a single call to `simulate_heavy_computation`.

Returns:

- `List[Any]`: A list containing the results of each call to `simulate_heavy_computation`, in the same order as the input `kwargs_list`.

Original docstring: Simulate a heavy computation task.
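The SAAGA `parallelize` decorator itself is not reproduced in this document. The following is a minimal sketch of a wrapper with the behavior described above (one concurrent call per entry in `kwargs_list`, results returned in input order); the name `parallelize_sketch` and its internals are illustrative assumptions, not the actual SAAGA implementation.

```python
import asyncio
from typing import Any, Awaitable, Callable, Dict, List


def parallelize_sketch(
    func: Callable[..., Awaitable[Any]],
) -> Callable[[List[Dict[str, Any]]], Awaitable[List[Any]]]:
    """Hypothetical stand-in for the SAAGA parallelize decorator."""

    async def wrapper(kwargs_list: List[Dict[str, Any]]) -> List[Any]:
        # One coroutine per kwargs dict; asyncio.gather preserves input order.
        return await asyncio.gather(*(func(**kwargs) for kwargs in kwargs_list))

    return wrapper
```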
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| kwargs_list | Yes | A list of dictionaries, each providing the keyword arguments for a single call to `simulate_heavy_computation` (e.g. a `complexity` value). | |
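For illustration (values are not from the source), a valid `kwargs_list` could look like the following; each entry becomes one concurrent call to `simulate_heavy_computation`, and the tool returns one result dictionary per entry, in the same order:

```python
# Illustrative input: three concurrent calls at different complexity levels.
kwargs_list = [
    {"complexity": 2},
    {"complexity": 5},
    {"complexity": 8},
]
```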
Implementation Reference
- The core handler function implementing the `simulate_heavy_computation` tool. It performs a simulated heavy-computation loop sized by the `complexity` parameter, yields control periodically for async compatibility, and returns performance metrics (a standalone usage sketch follows this list).

  ```python
  import time
  from typing import Any, Dict  # imports needed by this excerpt (module-level in the source file)


  async def simulate_heavy_computation(complexity: int = 5) -> Dict[str, Any]:
      """Simulate a heavy computation task.

      This tool demonstrates parallelization benefits by performing
      a computationally intensive task that can be parallelized.

      Args:
          complexity: Complexity level (1-10, higher = more computation)

      Returns:
          Dictionary containing computation results
      """
      if complexity < 1 or complexity > 10:
          raise ValueError("complexity must be between 1 and 10")

      start_time = time.time()

      # Simulate heavy computation
      result = 0
      iterations = complexity * 100000  # Reduced for async context

      for i in range(iterations):
          result += i * 2
          if i % 10000 == 0:
              # Yield control to allow other tasks to run
              import asyncio
              await asyncio.sleep(0.001)

      computation_time = time.time() - start_time

      return {
          "complexity": complexity,
          "iterations": iterations,
          "result": result,
          "computation_time": computation_time,
          "operations_per_second": iterations / computation_time if computation_time > 0 else 0,
      }
  ```
- `mcp_ahrefs/server/app.py:114-126` (registration): The registration code for the parallel tools, including `simulate_heavy_computation`. It applies the SAAGA decorators (`parallelize`, `tool_logger`, `exception_handler`) and registers the decorated function with the MCP server using `mcp_server.tool(name=tool_name)`.

  ```python
  for tool_func in parallel_example_tools:
      # Apply SAAGA decorator chain: exception_handler → tool_logger → parallelize
      decorated_func = exception_handler(tool_logger(parallelize(tool_func), config.__dict__))

      # Extract metadata
      tool_name = tool_func.__name__

      # Register directly with MCP
      mcp_server.tool(
          name=tool_name
      )(decorated_func)

      unified_logger.info(f"Registered parallel tool: {tool_name}")
  ```
- `mcp_ahrefs/tools/example_tools.py:173-175` (registration): The list grouping `simulate_heavy_computation` as a parallel tool; it is imported by `server/app.py` and used in the registration loop above.

  ```python
  parallel_example_tools = [
      process_batch_data,
      simulate_heavy_computation,
  ]
  ```
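As referenced in the handler bullet above, the undecorated handler can also be awaited directly for a quick standalone check. This is a sketch rather than part of the source: the import path is inferred from the file listed in the last bullet, and the printed values are illustrative.

```python
import asyncio

# Import path inferred from mcp_ahrefs/tools/example_tools.py; adjust if the layout differs.
from mcp_ahrefs.tools.example_tools import simulate_heavy_computation

metrics = asyncio.run(simulate_heavy_computation(complexity=3))
print(metrics["iterations"])              # 300000 (= 3 * 100000)
print(metrics["operations_per_second"])   # machine-dependent
```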