ShallowCodeResearch_agent_llm_processor

Process text with LLM to summarize content, extract keywords, or perform reasoning tasks using optional context for enhanced analysis.

Instructions

Wrapper for LLMProcessorAgent to process text with LLM. Returns: LLM processing result with output and metadata

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| text_input | No | The input text to process | — |
| task | No | The processing task ('summarize', 'reason', or 'extract_keywords') | summarize |
| context | No | Optional context for processing | — |
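All three parameters are optional per the schema, with `task` defaulting to `summarize`. A minimal request payload (values here are purely illustrative) might look like:

```python
# Illustrative payload for agent_llm_processor, mirroring the input schema above.
# All fields are optional; "task" defaults to "summarize" when omitted.
payload = {
    "text_input": "Transformers process tokens in parallel using self-attention.",
    "task": "extract_keywords",  # must be one of the three supported tasks
    "context": "",               # optional background information
}

assert payload["task"] in {"summarize", "reason", "extract_keywords"}
```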

Implementation Reference

  • app.py:989-1005 (registration)
    Gradio Interface registration for the agent_llm_processor tool as MCP endpoint with api_name 'agent_llm_processor_service'
    with gr.Tab("Agent: LLM Processor", scale=1):
        gr.Interface(
            fn=agent_llm_processor,
            inputs=[
                gr.Textbox(label="Text to Process", lines=12, placeholder="Enter text for the LLM…"),
                gr.Dropdown(
                    choices=["summarize", "reason", "extract_keywords"],
                    value="summarize",
                    label="LLM Task",
                ),
                gr.Textbox(label="Optional Context", lines=12, placeholder="Background info…"),
            ],
            outputs=gr.JSON(label="LLM Processed Output", height=1200),
            title="LLM Processing Agent",
            description="Use configured LLM provider for text processing tasks.",
            api_name="agent_llm_processor_service",
        )
  • app.py:750-762 (handler)
    Wrapper handler function exposed as MCP tool that delegates to LLMProcessorAgent.process
    def agent_llm_processor(text_input: str, task: str, context: str | None = None) -> dict:
        """
        Wrapper for LLMProcessorAgent to process text with LLM.
    
        Args:
            text_input (str): The input text to process
            task (str): The processing task ('summarize', 'reason', or 'extract_keywords')
            context (str | None): Optional context for processing
    
        Returns:
            dict: LLM processing result with output and metadata
        """
        return llm_processor.process(text_input, task, context)
  • Core handler logic in LLMProcessorAgent.process method that performs the actual LLM text processing
    @track_performance(operation_name="llm_processing")
    @rate_limited("nebius")
    @circuit_protected("nebius")
    def process(self, text_input: str, task: str, context: str | None = None) -> Dict[str, Any]:
        """
        Process text using LLM for summarization, reasoning, or keyword extraction.
    
        Applies the configured LLM model to process the input text according to the
        specified task type. Supports summarization for condensing content, reasoning
        for analytical tasks, and keyword extraction for identifying key terms.
    
        Args:
            text_input (str): The input text to be processed by the LLM
            task (str): The processing task ('summarize', 'reason', or 'extract_keywords')
            context (str, optional): Additional context to guide the processing
    
        Returns:
            Dict[str, Any]: A dictionary containing the processed output and metadata
                           or error information if processing fails
        """
        try:
            validate_non_empty_string(text_input, "Input text")
            validate_non_empty_string(task, "Task")
            logger.info(f"Processing text with task: {task}")
    
            task_lower = task.lower()
            if task_lower not in ["reason", "summarize", "extract_keywords"]:
                raise ValidationError(
                    f"Unsupported LLM task: {task}. Choose 'summarize', 'reason', or 'extract_keywords'."
                )
    
            prompt_text = self._build_prompt(text_input, task_lower, context)
            messages = [{"role": "user", "content": prompt_text}]
    
            logger.info(f"LLM provider is: {api_config.llm_provider}, model used: {model_config.get_model_for_provider('llm_processor', api_config.llm_provider)}")
    
            output_text = make_llm_completion(
                model=model_config.get_model_for_provider("llm_processor", api_config.llm_provider),
                messages=messages,
                temperature=app_config.llm_temperature
            )
    
            logger.info(f"LLM processing completed for task: {task}")
            return {
                "input_text": text_input,
                "task": task,
                "provided_context": context,
                "llm_processed_output": output_text,
                "llm_model_used": model_config.get_model_for_provider("llm_processor", api_config.llm_provider),
            }
    
        except (ValidationError, APIError) as e:
            logger.error(f"LLM processing failed: {str(e)}")
            return {"error": str(e), "input_text": text_input, "processed_output": None}
        except Exception as e:
            logger.error(f"Unexpected error in LLM processing: {str(e)}")
            return {"error": f"Unexpected error: {str(e)}", "input_text": text_input, "processed_output": None}
  • Input/output schema defined in the function signature and docstring for the tool handler
    def agent_llm_processor(text_input: str, task: str, context: str | None = None) -> dict:
        """
        Wrapper for LLMProcessorAgent to process text with LLM.
    
        Args:
            text_input (str): The input text to process
            task (str): The processing task ('summarize', 'reason', or 'extract_keywords')
            context (str | None): Optional context for processing
    
        Returns:
            dict: LLM processing result with output and metadata
        """
  • Helper method to build task-specific prompts for the LLM calls
        def _build_prompt(self, text_input: str, task: str, context: str = None) -> str:
            """Build the appropriate prompt based on the task."""
            prompts = {
                "reason": f"Analyze this text and provide detailed reasoning (fewer than 250 words):\n\n{text_input}",
                "summarize": f"Summarize in detail (fewer than 250 words):\n\n{text_input}",
                "extract_keywords": f"Extract key terms/entities (comma-separated) from:\n\n{text_input}"
            }
    
            prompt = prompts[task]
    
            if context:
                context_additions = {
                    "reason": f"\n\nAdditional context: {context}",
                    "summarize": f"\n\nKeep in mind this context: {context}",
                    "extract_keywords": f"\n\nFocus on this context: {context}"
                }
                prompt += context_additions[task]
    
            task_endings = {
                "reason": "\n\nReasoning:",
                "summarize": "\n\nSummary:",
                "extract_keywords": "\n\nKeywords:"
            }
            prompt += task_endings[task]
    
            return prompt
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'Returns: LLM processing result with output and metadata' which gives some output information, but doesn't describe important behavioral aspects like rate limits, authentication requirements, error conditions, processing time, or what happens with invalid inputs. For a tool that processes text with an LLM, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise at two sentences. The first sentence states the core function, and the second describes the return value. There's no wasted text or unnecessary elaboration. However, it could be slightly more front-loaded by integrating the return information into the main purpose statement.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, no annotations, and no output schema, the description is insufficiently complete. While it mentions the return includes 'output and metadata', it doesn't specify what format this takes or what the metadata contains. Given the complexity of LLM processing and the lack of structured output documentation, the description should provide more context about expected behavior and results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds no additional parameter information beyond what's in the schema. It doesn't explain parameter interactions, provide examples, or add context about how parameters affect processing. This meets the baseline of 3 when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'process text with LLM' which provides a basic purpose, but it's vague about what 'process' entails. It distinguishes from some siblings like 'citation_formatter' or 'web_search' by mentioning LLM processing, but doesn't clearly differentiate from 'question_enhancer' or 'code_generator' which might also use LLMs. The description lacks specificity about the nature of the processing beyond the wrapper function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided about when to use this tool versus alternatives. The description doesn't mention when this wrapper should be chosen over direct LLM calls or other processing tools in the sibling list. There's no context about appropriate use cases, prerequisites, or limitations that would help an agent decide between this and similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
