
ShallowCodeResearch_agent_llm_processor

Process text with an LLM to summarize content, extract keywords, or perform reasoning tasks, optionally using additional context to enhance the analysis.

Instructions

Wrapper for LLMProcessorAgent that processes text with an LLM. Returns the LLM processing result with output and metadata.

Input Schema

Name        Required  Description                                                          Default
text_input  No        The input text to process
task        No        The processing task ('summarize', 'reason', or 'extract_keywords')  summarize
context     No        Optional context for processing
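
For illustration, a call to this tool might supply arguments shaped like the following; the values are invented, while the parameter names and the three task options come from the schema above.

    {
        "text_input": "Large language models predict the next token given a context window...",
        "task": "summarize",
        "context": "Notes gathered while scoping a short literature review",
    }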

Implementation Reference

  • app.py:989-1005 (registration)
    Gradio Interface registration for the agent_llm_processor tool as MCP endpoint with api_name 'agent_llm_processor_service'
    with gr.Tab("Agent: LLM Processor", scale=1):
        gr.Interface(
            fn=agent_llm_processor,
            inputs=[
                gr.Textbox(label="Text to Process", lines=12, placeholder="Enter text for the LLM…"),
                gr.Dropdown(
                    choices=["summarize", "reason", "extract_keywords"],
                    value="summarize",
                    label="LLM Task",
                ),
                gr.Textbox(label="Optional Context", lines=12, placeholder="Background info…"),
            ],
            outputs=gr.JSON(label="LLM Processed Output", height=1200),
            title="LLM Processing Agent",
            description="Use configured LLM provider for text processing tasks.",
            api_name="agent_llm_processor_service",
        )
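    For illustration, a client-side call to this endpoint could look like the sketch below, using the gradio_client package; the server URL is an assumption (point it at wherever the app is running), and the api_name matches the registration above.

    from gradio_client import Client

    # Assumed local address of the running Gradio app; replace with the real host or Space.
    client = Client("http://127.0.0.1:7860")
    result = client.predict(
        "Paste the text you want processed here...",  # text_input
        "summarize",                                  # task
        "",                                           # optional context
        api_name="/agent_llm_processor_service",
    )
    print(result)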
  • app.py:750-762 (handler)
    Wrapper handler function exposed as MCP tool that delegates to LLMProcessorAgent.process
    def agent_llm_processor(text_input: str, task: str, context: str | None = None) -> dict:
        """
        Wrapper for LLMProcessorAgent to process text with LLM.

        Args:
            text_input (str): The input text to process
            task (str): The processing task ('summarize', 'reason', or 'extract_keywords')
            context (str | None): Optional context for processing

        Returns:
            dict: LLM processing result with output and metadata
        """
        return llm_processor.process(text_input, task, context)
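    For a quick in-process illustration (the example text is invented; the returned keys come from LLMProcessorAgent.process, shown below), the wrapper can be called directly:

    result = agent_llm_processor(
        "Retrieval-augmented generation pairs a retriever with a generator to ground answers in documents.",
        "extract_keywords",
    )
    print(result.get("llm_processed_output"))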
  • Core handler logic in LLMProcessorAgent.process method that performs the actual LLM text processing
    @track_performance(operation_name="llm_processing")
    @rate_limited("nebius")
    @circuit_protected("nebius")
    def process(self, text_input: str, task: str, context: str = None) -> Dict[str, Any]:
        """
        Process text using LLM for summarization, reasoning, or keyword extraction.

        Applies the configured LLM model to process the input text according to the
        specified task type. Supports summarization for condensing content, reasoning
        for analytical tasks, and keyword extraction for identifying key terms.

        Args:
            text_input (str): The input text to be processed by the LLM
            task (str): The processing task ('summarize', 'reason', or 'extract_keywords')
            context (str, optional): Additional context to guide the processing

        Returns:
            Dict[str, Any]: A dictionary containing the processed output and metadata
                or error information if processing fails
        """
        try:
            validate_non_empty_string(text_input, "Input text")
            validate_non_empty_string(task, "Task")

            logger.info(f"Processing text with task: {task}")

            task_lower = task.lower()
            if task_lower not in ["reason", "summarize", "extract_keywords"]:
                raise ValidationError(
                    f"Unsupported LLM task: {task}. Choose 'summarize', 'reason', or 'extract_keywords'."
                )

            prompt_text = self._build_prompt(text_input, task_lower, context)
            messages = [{"role": "user", "content": prompt_text}]

            logger.info(
                f"LLM provider is: {api_config.llm_provider}, model used: "
                f"{model_config.get_model_for_provider('llm_processor', api_config.llm_provider)}"
            )

            output_text = make_llm_completion(
                model=model_config.get_model_for_provider("llm_processor", api_config.llm_provider),
                messages=messages,
                temperature=app_config.llm_temperature
            )

            logger.info(f"LLM processing completed for task: {task}")
            return {
                "input_text": text_input,
                "task": task,
                "provided_context": context,
                "llm_processed_output": output_text,
                "llm_model_used": model_config.get_model_for_provider("llm_processor", api_config.llm_provider),
            }
        except (ValidationError, APIError) as e:
            logger.error(f"LLM processing failed: {str(e)}")
            return {"error": str(e), "input_text": text_input, "processed_output": None}
        except Exception as e:
            logger.error(f"Unexpected error in LLM processing: {str(e)}")
            return {"error": f"Unexpected error: {str(e)}", "input_text": text_input, "processed_output": None}
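    Traced from the return statements above, a successful call produces a dict shaped like the first literal below, while validation, API, or unexpected errors produce the second; all values are illustrative.

    # Success path (keys match the success return above; values are made up)
    {
        "input_text": "...",
        "task": "summarize",
        "provided_context": None,
        "llm_processed_output": "A detailed summary of the input text...",
        "llm_model_used": "provider/model-name",  # resolved via model_config at runtime
    }

    # Error path (ValidationError, APIError, or any unexpected exception)
    {
        "error": "Unsupported LLM task: translate. Choose 'summarize', 'reason', or 'extract_keywords'.",
        "input_text": "...",
        "processed_output": None,
    }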
  • Input/output schema defined in the function signature and docstring for the tool handler
    def agent_llm_processor(text_input: str, task: str, context: str | None = None) -> dict:
        """
        Wrapper for LLMProcessorAgent to process text with LLM.

        Args:
            text_input (str): The input text to process
            task (str): The processing task ('summarize', 'reason', or 'extract_keywords')
            context (str | None): Optional context for processing

        Returns:
            dict: LLM processing result with output and metadata
        """
  • Helper method to build task-specific prompts for the LLM calls
    def _build_prompt(self, text_input: str, task: str, context: str = None) -> str:
        """Build the appropriate prompt based on the task."""
        prompts = {
            "reason": f"Analyze this text and provide detailed reasoning (less than 250):\n\n{text_input} with this context {context if context else ''} for {task}",
            "summarize": f"Summarize in detail (less than 250):\n\n{text_input} with this context {context if context else ''} for {task}",
            "extract_keywords": f"Extract key terms/entities (comma-separated) from:\n\n{text_input}"
        }
        prompt = prompts[task]

        if context:
            context_additions = {
                "reason": f"\n\nAdditional context: {context}",
                "summarize": f"\n\nKeep in mind this context: {context}",
                "extract_keywords": f"\n\nFocus on this context: {context}"
            }
            prompt += context_additions[task]

        task_endings = {
            "reason": "\n\nReasoning:",
            "summarize": "\n\nSummary:",
            "extract_keywords": "\n\nKeywords:"
        }
        prompt += task_endings[task]

        return prompt
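    To make the prompt construction concrete, here is what the helper returns for a sample summarize call; the inputs are invented, agent is assumed to be an LLMProcessorAgent instance, and the resulting string is traced directly from the code above (note that the context is interpolated twice by construction).

    prompt = agent._build_prompt(
        "Transformers rely on self-attention.",  # text_input
        "summarize",                             # task (process lower-cases it beforehand)
        "Notes for a survey",                    # context
    )
    # prompt is now:
    # "Summarize in detail (less than 250):\n\n"
    # "Transformers rely on self-attention. with this context Notes for a survey for summarize"
    # "\n\nKeep in mind this context: Notes for a survey"
    # "\n\nSummary:"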
