
simulate_heavy_computation

Run heavy computation tasks in parallel by processing multiple inputs concurrently. Pass a list of argument dictionaries; the intensive computations execute concurrently and the results are returned in the same order as the input. Designed to showcase the benefits of parallelization.

Instructions

Parallelized version of simulate_heavy_computation.

This function accepts a list of keyword argument dictionaries and executes simulate_heavy_computation concurrently for each set of arguments.

Original function signature: simulate_heavy_computation(complexity: int)

Args: kwargs_list (List[Dict[str, Any]]): A list of dictionaries, where each dictionary provides the keyword arguments for a single call to simulate_heavy_computation.

Returns: List[Any]: A list containing the results of each call to simulate_heavy_computation, in the same order as the input kwargs_list.
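Concretely, a caller supplies kwargs_list and gets back one result per dictionary, in matching order. A minimal sketch of that contract, using a stub in place of the real tool and asyncio.gather for the fan-out (both are assumptions for illustration, not taken from this page):

```python
import asyncio
from typing import Any, Dict, List

async def simulate_heavy_computation(complexity: int = 5) -> Dict[str, Any]:
    # Stub standing in for the real tool implementation below.
    await asyncio.sleep(0.001 * complexity)
    return {"complexity": complexity}

async def run_parallel(kwargs_list: List[Dict[str, Any]]) -> List[Any]:
    # One call per kwargs dict; gather preserves the input order.
    return await asyncio.gather(
        *(simulate_heavy_computation(**kw) for kw in kwargs_list)
    )

results = asyncio.run(run_parallel([{"complexity": 1}, {"complexity": 5}]))
print(results)  # [{'complexity': 1}, {'complexity': 5}]
```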

Original docstring: Simulate a heavy computation task.

This tool demonstrates parallelization benefits by performing a computationally intensive task that can be parallelized.

Args: complexity: Complexity level (1-10, higher = more computation)

Returns: Dictionary containing computation results

Input Schema

Name: kwargs_list
Required: Yes
Description: List of keyword-argument dictionaries, one per call (see Args above)
Default: (none)

Implementation Reference

  • The core handler function implementing the simulate_heavy_computation tool logic, which performs a simulated heavy computational loop based on the complexity parameter, measures execution time, and returns performance metrics.
    import asyncio
    import time
    from typing import Any, Dict

    async def simulate_heavy_computation(complexity: int = 5) -> Dict[str, Any]:
        """Simulate a heavy computation task.

        This tool demonstrates parallelization benefits by performing a
        computationally intensive task that can be parallelized.

        Args:
            complexity: Complexity level (1-10, higher = more computation)

        Returns:
            Dictionary containing computation results
        """
        if complexity < 1 or complexity > 10:
            raise ValueError("complexity must be between 1 and 10")

        start_time = time.time()

        # Simulate heavy computation
        result = 0
        iterations = complexity * 100000  # Reduced for async context
        for i in range(iterations):
            result += i * 2
            if i % 10000 == 0:
                # Yield control to allow other tasks to run
                await asyncio.sleep(0.001)

        computation_time = time.time() - start_time

        return {
            "complexity": complexity,
            "iterations": iterations,
            "result": result,
            "computation_time": computation_time,
            "operations_per_second": iterations / computation_time if computation_time > 0 else 0,
        }
  • Registration of parallel tools, including simulate_heavy_computation, by applying decorators (parallelize, tool_logger, exception_handler) and registering with mcp_server.tool.
    # Register parallel tools with SAAGA decorators
    for tool_func in parallel_example_tools:
        # Apply SAAGA decorator chain: exception_handler → tool_logger → parallelize
        decorated_func = exception_handler(
            tool_logger(parallelize(tool_func), config.__dict__)
        )

        # Extract metadata
        tool_name = tool_func.__name__

        # Register directly with MCP
        mcp_server.tool(name=tool_name)(decorated_func)

        unified_logger.info(f"Registered parallel tool: {tool_name}")

    unified_logger.info(f"Server '{mcp_server.name}' initialized with SAAGA decorators")
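The parallelize decorator itself is not reproduced on this page. A minimal sketch of what such a decorator might look like, assuming it simply fans the kwargs list out with asyncio.gather (the real SAAGA decorator may bound concurrency or handle per-call errors differently):

```python
import asyncio
import functools
from typing import Any, Awaitable, Callable, Dict, List

def parallelize(
    func: Callable[..., Awaitable[Any]]
) -> Callable[[List[Dict[str, Any]]], Awaitable[List[Any]]]:
    # Turn a single-call async tool into one that accepts a list of
    # kwargs dicts and runs all calls concurrently, preserving order.
    @functools.wraps(func)
    async def wrapper(kwargs_list: List[Dict[str, Any]]) -> List[Any]:
        return await asyncio.gather(*(func(**kw) for kw in kwargs_list))
    return wrapper

@parallelize
async def double(x: int) -> int:
    await asyncio.sleep(0)  # yield control, standing in for real async work
    return x * 2

results = asyncio.run(double([{"x": 1}, {"x": 2}, {"x": 3}]))
print(results)  # [2, 4, 6]
```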

