process_batch_data

Process multiple SEO data batches concurrently to accelerate analysis operations like text transformations and metadata generation.

Instructions

Parallelized version of process_batch_data.

This function accepts a list of keyword argument dictionaries and executes process_batch_data concurrently for each set of arguments.

Original function signature: process_batch_data(items: List[str], operation: str = "upper")

Args: kwargs_list (List[Dict[str, Any]]): A list of dictionaries, where each dictionary provides the keyword arguments for a single call to process_batch_data.

Returns: List[Any]: A list containing the results of each call to process_batch_data, in the same order as the input kwargs_list.
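The wrapper's interface can be illustrated with a hypothetical payload (the argument names come from the original signature; the values are invented):

```python
# Each dict in kwargs_list supplies the keyword arguments for one call to
# the underlying process_batch_data(items, operation); calls run concurrently.
kwargs_list = [
    {"items": ["Foo", "Bar"], "operation": "upper"},
    {"items": ["Baz"], "operation": "reverse"},
]

# The tool returns one result per dict, in the same order as kwargs_list.
expected_result_count = len(kwargs_list)
```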

Original docstring: Process a batch of data items.

This is an example of a tool that benefits from parallelization.
It will be automatically decorated with the parallelize decorator
in addition to exception handling and logging.

Args:
    items: List of strings to process
    operation: Operation to perform ('upper', 'lower', 'reverse')
    
Returns:
    Processed items with metadata
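The three operations named above reduce to a table lookup; a minimal standalone sketch of the same dispatch logic (a simplified illustration, not the server's actual handler):

```python
def apply_operation(items, operation):
    # Map operation names to string transforms, mirroring the
    # upper/lower/reverse branches described in the docstring.
    ops = {
        "upper": str.upper,
        "lower": str.lower,
        "reverse": lambda s: s[::-1],
    }
    if operation not in ops:
        raise ValueError(f"Unknown operation: {operation}")
    return [ops[operation](item) for item in items]

print(apply_operation(["Hello", "World"], "reverse"))  # ['olleH', 'dlroW']
```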

Input Schema

Name         Required  Description  Default
kwargs_list  Yes       (none)       (none)

Output Schema

Name    Required  Description  Default
result  Yes       (none)       (none)

Implementation Reference

  • The core handler function for the 'process_batch_data' tool. Processes a list of strings using the specified operation ('upper', 'lower', 'reverse') and returns original and processed items with metadata. Includes type hints serving as input/output schema.
    # Imports needed by this snippet
    import time
    from typing import Any, Dict, List

    async def process_batch_data(items: List[str], operation: str = "upper") -> Dict[str, Any]:
        """Process a batch of data items.
        
        This is an example of a tool that benefits from parallelization.
        It will be automatically decorated with the parallelize decorator
        in addition to exception handling and logging.
        
        Args:
            items: List of strings to process
            operation: Operation to perform ('upper', 'lower', 'reverse')
            
        Returns:
            Processed items with metadata
        """
        # Simulate some processing time
        import asyncio
        await asyncio.sleep(0.1)
        
        processed_items = []
        for item in items:
            if operation == "upper":
                processed = item.upper()
            elif operation == "lower":
                processed = item.lower()
            elif operation == "reverse":
                processed = item[::-1]
            else:
                raise ValueError(f"Unknown operation: {operation}")
            processed_items.append(processed)
        
        return {
            "original": items,
            "processed": processed_items,
            "operation": operation,
            "timestamp": time.time()
        }
  • Registers all tools in parallel_example_tools (including process_batch_data) by applying decorators (parallelize, tool_logger, exception_handler) and calling mcp_server.tool(name=tool.__name__)(decorated_func).
    # Register parallel tools with SAAGA decorators  
    for tool_func in parallel_example_tools:
        # Apply SAAGA decorator chain: exception_handler → tool_logger → parallelize
        decorated_func = exception_handler(tool_logger(parallelize(tool_func), config.__dict__))
        
        # Extract metadata
        tool_name = tool_func.__name__
        
        # Register directly with MCP
        mcp_server.tool(
            name=tool_name
        )(decorated_func)
        
        unified_logger.info(f"Registered parallel tool: {tool_name}")
    unified_logger.info(f"Server '{mcp_server.name}' initialized with SAAGA decorators")
  • List defining parallel tools including process_batch_data, imported and used in server/app.py for registration.
    parallel_example_tools = [
        process_batch_data,
        simulate_heavy_computation
    ]
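The SAAGA parallelize decorator itself is not shown on this page. A minimal sketch of what such a decorator could look like, using asyncio.gather (an assumption for illustration, not the real implementation):

```python
import asyncio
from typing import Any, Callable, Dict, List

def parallelize(func: Callable) -> Callable:
    """Turn an async tool into one that accepts a list of kwargs dicts
    and runs one call per dict concurrently."""
    async def wrapper(kwargs_list: List[Dict[str, Any]]) -> List[Any]:
        # gather preserves input order, matching the documented contract
        return list(await asyncio.gather(*(func(**kw) for kw in kwargs_list)))
    return wrapper

# Toy usage with a stand-in async function:
async def shout(items: List[str]) -> List[str]:
    return [i.upper() for i in items]

results = asyncio.run(parallelize(shout)([{"items": ["a"]}, {"items": ["b", "c"]}]))
print(results)  # [['A'], ['B', 'C']]
```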
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool is 'automatically decorated with the parallelize decorator in addition to exception handling and logging,' which adds valuable behavioral context beyond basic functionality. However, it lacks details on error handling specifics, performance characteristics, or resource usage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized but not optimally structured. It front-loads the parallelization aspect but includes redundant information like the original docstring, which could be condensed. Some sentences (e.g., about automatic decoration) earn their place, but others could be more streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (parallel processing), no annotations, and an output schema present, the description is fairly complete. It covers purpose, parameters, returns, and behavioral traits like parallelization and logging. However, it could benefit from more details on error propagation or concurrency limits to be fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description compensates well by explaining that `kwargs_list` is a list of dictionaries providing keyword arguments for calls to `process_batch_data`. It also references the original function's parameters (`items` and `operation`), adding meaning beyond the minimal schema. With only one parameter, this is above the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states this is a 'parallelized version' of `process_batch_data` that executes calls concurrently, which is a specific verb+resource combination. It distinguishes itself from the original function by emphasizing parallelization, though it doesn't explicitly differentiate from sibling tools like `simulate_heavy_computation`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when parallelization is beneficial for processing multiple data batches, as noted in the original docstring. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like `simulate_heavy_computation` or when not to use it (e.g., for single operations).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
