process_batch_data

Process large batches of SEO data concurrently by applying a specified operation ('upper', 'lower', or 'reverse') to each item, with parallel execution and results returned in input order.

Instructions

Parallelized version of process_batch_data.

This function accepts a list of keyword argument dictionaries and executes process_batch_data concurrently for each set of arguments.

Original function signature: process_batch_data(items: List[str], operation: str = "upper")

Args: kwargs_list (List[Dict[str, Any]]): A list of dictionaries, where each dictionary provides the keyword arguments for a single call to process_batch_data.

Returns: List[Any]: A list containing the results of each call to process_batch_data, in the same order as the input kwargs_list.
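For illustration, a minimal sketch of a kwargs_list payload (the argument names follow the original signature above; the literal values are made up):

    kwargs_list = [
        {"items": ["alpha", "beta"], "operation": "upper"},
        {"items": ["Gamma", "Delta"], "operation": "lower"},
    ]
    # Each dict drives one call to process_batch_data; the parallelized tool
    # runs both concurrently and returns the two results in this same order.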

Original docstring: Process a batch of data items.

This is an example of a tool that benefits from parallelization. It will be automatically decorated with the parallelize decorator, in addition to exception handling and logging.

Args:
    items: List of strings to process
    operation: Operation to perform ('upper', 'lower', 'reverse')

Returns:
    Processed items with metadata
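As a quick illustration of a single underlying call (a sketch; the full return shape is shown under Implementation Reference below):

    import asyncio

    # Assumes process_batch_data is importable from the server module.
    result = asyncio.run(process_batch_data(["Hello", "World"], operation="reverse"))
    print(result["processed"])  # ['olleH', 'dlroW']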

Input Schema

Name: kwargs_list
Required: Yes
Description: A list of dictionaries, each providing the keyword arguments for one call to process_batch_data.
Default: none

Implementation Reference

  • The core handler function implementing the process_batch_data tool logic: it processes a batch of strings with a specified operation (upper, lower, or reverse).
    import asyncio
    import time
    from typing import Any, Dict, List

    async def process_batch_data(items: List[str], operation: str = "upper") -> Dict[str, Any]:
        """Process a batch of data items.

        This is an example of a tool that benefits from parallelization. It will be
        automatically decorated with the parallelize decorator in addition to
        exception handling and logging.

        Args:
            items: List of strings to process
            operation: Operation to perform ('upper', 'lower', 'reverse')

        Returns:
            Processed items with metadata
        """
        # Simulate some processing time
        await asyncio.sleep(0.1)

        processed_items = []
        for item in items:
            if operation == "upper":
                processed = item.upper()
            elif operation == "lower":
                processed = item.lower()
            elif operation == "reverse":
                processed = item[::-1]
            else:
                raise ValueError(f"Unknown operation: {operation}")
            processed_items.append(processed)

        return {
            "original": items,
            "processed": processed_items,
            "operation": operation,
            "timestamp": time.time(),
        }
  • Registers process_batch_data (via the parallel_example_tools list) as an MCP tool with the SAAGA decorator chain: parallelize, tool_logger, exception_handler. A hedged sketch of the parallelize decorator appears after this list.
    # Register parallel tools with SAAGA decorators
    for tool_func in parallel_example_tools:
        # Apply SAAGA decorator chain: exception_handler → tool_logger → parallelize
        decorated_func = exception_handler(
            tool_logger(parallelize(tool_func), config.__dict__)
        )

        # Extract metadata
        tool_name = tool_func.__name__

        # Register directly with MCP
        mcp_server.tool(name=tool_name)(decorated_func)

        unified_logger.info(f"Registered parallel tool: {tool_name}")
  • Helper list defining the parallel tools, including process_batch_data, used during server registration to apply the parallelization decorator.
    parallel_example_tools = [
        process_batch_data,
        simulate_heavy_computation,
    ]
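The parallelize decorator itself is not shown on this page. As a rough sketch of the behavior the registration code above relies on (fanning a kwargs_list out to concurrent calls and returning results in input order via asyncio.gather), one possible shape, under that assumption, is:

    import asyncio
    import functools
    from typing import Any, Callable, Coroutine, Dict, List

    def parallelize(func: Callable[..., Coroutine[Any, Any, Any]]):
        """Hypothetical sketch only; not the actual SAAGA implementation."""
        @functools.wraps(func)
        async def wrapper(kwargs_list: List[Dict[str, Any]]) -> List[Any]:
            # asyncio.gather preserves argument order, so the results come
            # back in the same order as the entries of kwargs_list.
            return await asyncio.gather(*(func(**kwargs) for kwargs in kwargs_list))
        return wrapper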


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/SAGAAIDEV/mcp-ahrefs'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.