process_batch_data
Process multiple SEO data batches concurrently to accelerate analysis operations like text transformations and metadata generation.
Instructions
Parallelized version of process_batch_data.
This function accepts a list of keyword argument dictionaries and executes
process_batch_data concurrently for each set of arguments.
Original function signature: process_batch_data(items: List[str], operation: str = "upper")
Args:
- kwargs_list (List[Dict[str, Any]]): A list of dictionaries, where each dictionary provides the keyword arguments for a single call to process_batch_data.

Returns:
- List[Any]: A list containing the results of each call to process_batch_data, in the same order as the input kwargs_list.
Original docstring: Process a batch of data items.
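A minimal invocation sketch (assuming `process_batch_data` here refers to the decorated, parallelized tool and that the call happens inside an async context; in practice calls usually arrive through an MCP client):

```python
# Sketch only: each dict in kwargs_list becomes one concurrent call
# to the underlying handler; results come back in the same order.
kwargs_list = [
    {"items": ["seo", "title"], "operation": "upper"},
    {"items": ["Meta", "Tags"], "operation": "lower"},
]
results = await process_batch_data(kwargs_list)
assert results[0]["processed"] == ["SEO", "TITLE"]
```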
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| kwargs_list | Yes | A list of dictionaries, each providing the keyword arguments for one call to process_batch_data. | |
Implementation Reference
- `mcp_ahrefs/tools/example_tools.py:95-130` (handler): The core handler function for the `process_batch_data` tool. It processes a list of strings using the specified operation (`upper`, `lower`, or `reverse`) and returns the original and processed items with metadata; its type hints serve as the input/output schema.

```python
async def process_batch_data(items: List[str], operation: str = "upper") -> Dict[str, Any]:
    """Process a batch of data items.

    This is an example of a tool that benefits from parallelization.
    It will be automatically decorated with the parallelize decorator
    in addition to exception handling and logging.

    Args:
        items: List of strings to process
        operation: Operation to perform ('upper', 'lower', 'reverse')

    Returns:
        Processed items with metadata
    """
    # Simulate some processing time
    import asyncio
    await asyncio.sleep(0.1)

    processed_items = []
    for item in items:
        if operation == "upper":
            processed = item.upper()
        elif operation == "lower":
            processed = item.lower()
        elif operation == "reverse":
            processed = item[::-1]
        else:
            raise ValueError(f"Unknown operation: {operation}")
        processed_items.append(processed)

    return {
        "original": items,
        "processed": processed_items,
        "operation": operation,
        "timestamp": time.time()
    }
```
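For reference, calling the undecorated handler directly behaves as follows (a runnable sketch, assuming the function is importable from `mcp_ahrefs.tools.example_tools` as the file path above suggests):

```python
import asyncio

from mcp_ahrefs.tools.example_tools import process_batch_data

async def main() -> None:
    # One batch, one operation; the handler validates the operation name.
    result = await process_batch_data(["Hello", "World"], operation="reverse")
    print(result["processed"])  # ['olleH', 'dlroW']

asyncio.run(main())
```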
- `mcp_ahrefs/server/app.py:113-127` (registration): Registers every tool in `parallel_example_tools` (including `process_batch_data`) by applying the SAAGA decorator chain (`parallelize`, `tool_logger`, `exception_handler`) and calling `mcp_server.tool(name=tool_func.__name__)(decorated_func)`.

```python
# Register parallel tools with SAAGA decorators
for tool_func in parallel_example_tools:
    # Apply SAAGA decorator chain: exception_handler → tool_logger → parallelize
    decorated_func = exception_handler(tool_logger(parallelize(tool_func), config.__dict__))

    # Extract metadata
    tool_name = tool_func.__name__

    # Register directly with MCP
    mcp_server.tool(
        name=tool_name
    )(decorated_func)

    unified_logger.info(f"Registered parallel tool: {tool_name}")

unified_logger.info(f"Server '{mcp_server.name}' initialized with SAAGA decorators")
```
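The internals of the SAAGA `parallelize` decorator are not shown here. Conceptually it fans a `kwargs_list` out over the wrapped coroutine, along the lines of this illustrative sketch (not the actual SAAGA implementation):

```python
import asyncio
from typing import Any, Awaitable, Callable, Dict, List

# Illustrative only: a decorator that turns an async tool into one that
# accepts a kwargs_list and runs the calls concurrently.
def parallelize(
    func: Callable[..., Awaitable[Any]]
) -> Callable[[List[Dict[str, Any]]], Awaitable[List[Any]]]:
    async def wrapper(kwargs_list: List[Dict[str, Any]]) -> List[Any]:
        # asyncio.gather preserves input order, matching the documented contract
        return await asyncio.gather(*(func(**kwargs) for kwargs in kwargs_list))
    return wrapper
```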
- List defining the parallel tools, including `process_batch_data`; imported and used in `server/app.py` for registration.

```python
parallel_example_tools = [
    process_batch_data,
    simulate_heavy_computation
]
```