Glama

ShallowCodeResearch_agent_question_enhancer

Enhances research questions by breaking them into focused sub-questions to improve search results and analysis depth.

Instructions

Wrapper for QuestionEnhancerAgent to provide question enhancement. Returns: Enhanced question result with sub-questions

Input Schema

Name          Required  Description                           Default
user_request  No        The original user request to enhance  (none)

Implementation Reference

  • app.py:726-736 (handler)
    The main handler function for the agent_question_enhancer tool, which wraps the QuestionEnhancerAgent's enhance_question method. This is exposed as an MCP tool via Gradio's mcp_server.
    def agent_question_enhancer(user_request: str) -> dict:
        """
        Wrapper for QuestionEnhancerAgent to provide question enhancement.
    
        Args:
            user_request (str): The original user request to enhance
    
        Returns:
            dict: Enhanced question result with sub-questions
        """
        return question_enhancer.enhance_question(user_request, num_questions=2)
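  The wrapper simply forwards to the agent with a fixed `num_questions`. A minimal sketch of that contract, using a stand-in enhancer (`QuestionEnhancerStub` is illustrative and not part of the codebase; the real agent calls an LLM):

```python
from typing import Any, Dict


class QuestionEnhancerStub:
    """Stand-in mimicking QuestionEnhancerAgent's return contract."""

    def enhance_question(self, user_request: str, num_questions: int) -> Dict[str, Any]:
        # The real agent prompts an LLM; the stub fabricates sub-questions.
        return {
            "sub_questions": [
                f"Sub-question {i + 1} about: {user_request}"
                for i in range(num_questions)
            ]
        }


question_enhancer = QuestionEnhancerStub()


def agent_question_enhancer(user_request: str) -> dict:
    """Mirror of the app.py wrapper: fixed num_questions, dict result."""
    return question_enhancer.enhance_question(user_request, num_questions=2)


result = agent_question_enhancer("How do Python asyncio event loops work?")
```

  Note that the wrapper pins `num_questions=2` while the UI text mentions 3; only the value passed here determines how many sub-questions are requested.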
  • app.py:962-977 (registration)
    Gradio Interface registration that exposes agent_question_enhancer as an MCP tool under the api_name 'agent_question_enhancer_service'; the MCP directory prefixes the tool name with 'ShallowCodeResearch'.
    with gr.Tab("Agent: Question Enhancer", scale=1):
        gr.Interface(
            fn=agent_question_enhancer,
            inputs=[
                gr.Textbox(
                    label="Original User Request",
                    lines=12,
                    placeholder="Enter your question to be split into 3 sub-questions…"
                )
            ],
            outputs=gr.JSON(label="Enhanced Sub-Questions",
            height=305),
            title="Question Enhancer Agent",
            description="Splits a single user query into 3 distinct sub-questions using Qwen models.",
            api_name="agent_question_enhancer_service",
        )
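  For illustration, the gr.JSON output pane renders the wrapper's dict result. A hypothetical example of what it shows (the sub-question wording here is invented, not taken from the codebase):

```json
{
  "sub_questions": [
    "How does CPython implement the GIL, and what are its trade-offs?",
    "Which asyncio primitives replace thread-based concurrency in Python?"
  ]
}
```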
  • Core handler logic in QuestionEnhancerAgent.enhance_question method, which performs LLM-based question splitting into sub-questions. Called by the app.py wrapper.
    @track_performance(operation_name="question_enhancement")
    @rate_limited("nebius")
    @circuit_protected("nebius")
    @cached(ttl=300)  # Cache for 5 minutes
    def enhance_question(self, user_request: str, num_questions: int) -> Dict[str, Any]:
        """
        Split a single user query into multiple distinct sub-questions for enhanced research.
    
        Takes a user's original request and uses LLM processing to break it down into
        separate sub-questions that explore different technical angles. This enables
        more comprehensive research and analysis of complex topics.
    
        Args:
            user_request (str): The original user query to be enhanced and split
            num_questions (int): The number of sub-questions to generate
    
        Returns:
            Dict[str, Any]: A dictionary containing the generated sub-questions array
                           or error information if processing fails
        """
        try:
            validate_non_empty_string(user_request, "User request")
            logger.info(f"Enhancing question: {user_request[:100]}...")
    
            prompt_text = f"""
            You are an AI assistant specialised in Python programming that must break a single user query into {num_questions} distinct, non-overlapping sub-questions.
            Each sub-question should explore a different technical angle of the original request.
            Output must be valid JSON with a top-level key "sub_questions" whose value is an array of strings—no extra keys, no extra prose.
    
            User Request: "{user_request}"
    
            Respond with exactly:
            {{
            "sub_questions": [
                "First enhanced sub-question …",
                "Second enhanced sub-question …",
                ........ more added as necessary
            ]
            }}
            """
    
            messages = [{"role": "user", "content": prompt_text}]
            response_format = {
                "type": "json_object",
                "object": {
                    "sub_questions": {
                        "type": "array",
                        "items": {"type": "string"},
                    }
                },
            }
    
            logger.info(
                "The LLM provider is: %s and the model is: %s",
                api_config.llm_provider,
                model_config.get_model_for_provider("question_enhancer", api_config.llm_provider)
            )
    
            raw_output = make_llm_completion(
                model=model_config.get_model_for_provider("question_enhancer", api_config.llm_provider),
                messages=messages,
                temperature=0.7,
                response_format=response_format
            )
    
            parsed = extract_json_from_text(raw_output)
    
            if "sub_questions" not in parsed:
                raise ValidationError("JSON does not contain a 'sub_questions' key.")
    
            sub_questions = parsed["sub_questions"]
            if not isinstance(sub_questions, list) or not all(isinstance(q, str) for q in sub_questions):
                raise ValidationError("Expected 'sub_questions' to be a list of strings.")
    
            logger.info(f"Successfully generated {len(sub_questions)} sub-questions")
            return {"sub_questions": sub_questions}
    
        except (ValidationError, APIError) as e:
            logger.error(f"Question enhancement failed: {str(e)}")
            return {"error": str(e), "sub_questions": []}
        except Exception as e:
            logger.error(f"Unexpected error in question enhancement: {str(e)}")
            return {"error": f"Unexpected error: {str(e)}", "sub_questions": []}
  • JSON schema definition for the expected LLM response, enforcing output as array of strings under 'sub_questions' key.
    response_format = {
        "type": "json_object",
        "object": {
            "sub_questions": {
                "type": "array",
                "items": {"type": "string"},
            }
        },
    }
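  The extract_json_from_text helper is not shown in this reference. A common approach, sketched here as an assumption about its behavior, is to locate the first JSON object in the raw LLM output (which may be wrapped in prose or a code fence) and parse it:

```python
import json
import re


def extract_json_from_text(text: str) -> dict:
    """Pull the first {...} block out of raw LLM output and parse it.

    Illustrative sketch: tolerates prose or code fences around the JSON,
    but assumes a single top-level object in the text.
    """
    # Greedy match from the first '{' to the last '}' captures the
    # outermost object even when it contains nested braces.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in LLM output.")
    return json.loads(match.group(0))


raw = 'Here you go:\n```json\n{"sub_questions": ["Q1", "Q2"]}\n```'
parsed = extract_json_from_text(raw)
```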
  • The full QuestionEnhancerAgent class providing the question enhancement utility used by the tool.
    class QuestionEnhancerAgent:
        """
        Agent responsible for enhancing questions into sub-questions for research.
    
        This agent takes a single user query and intelligently breaks it down into
        multiple distinct, non-overlapping sub-questions that explore different
        technical angles of the original request. It uses LLM models to enhance
        question comprehension and research depth.
        """
        
        @with_performance_tracking("question_enhancement")
        @rate_limited("nebius")
        @circuit_protected("nebius")
        @cached(ttl=300)  # Cache for 5 minutes
        def enhance_question(self, user_request: str, num_questions: int) -> Dict[str, Any]:
            """
            Split a single user query into multiple distinct sub-questions for enhanced research.
    
            Takes a user's original request and uses LLM processing to break it down into
            separate sub-questions that explore different technical angles. This enables
            more comprehensive research and analysis of complex topics.
    
            Args:
                user_request (str): The original user query to be enhanced and split
                num_questions (int): The number of sub-questions to generate
    
            Returns:
                Dict[str, Any]: A dictionary containing the generated sub-questions array
                               or error information if processing fails
            """
            try:
                validate_non_empty_string(user_request, "User request")
                logger.info(f"Enhancing question: {user_request[:100]}...")
                
                prompt_text = f"""
                You are an AI assistant specialised in Python programming that must break a single user query into {num_questions} distinct, non-overlapping sub-questions.
                Each sub-question should explore a different technical angle of the original request.
                Output must be valid JSON with a top-level key "sub_questions" whose value is an array of strings—no extra keys, no extra prose.
    
                User Request: "{user_request}"
    
                Respond with exactly:
                {{
                "sub_questions": [
                    "First enhanced sub-question …",
                    "Second enhanced sub-question …",
                    ........ more added as necessary
                ]
                }}
                """
                
                messages = [{"role": "user", "content": prompt_text}]
                response_format = {
                    "type": "json_object",
                    "object": {
                        "sub_questions": {
                            "type": "array",
                            "items": {"type": "string"},
                        }
                    },
                }
    
                logger.info(
                    "The LLM provider is: %s and the model is: %s",
                    api_config.llm_provider,
                    model_config.get_model_for_provider("question_enhancer", api_config.llm_provider)
                )
                
                raw_output = make_llm_completion(
                    model=model_config.get_model_for_provider("question_enhancer", api_config.llm_provider),
                    messages=messages,
                    temperature=0.7,
                    response_format=response_format
                )
                
                parsed = extract_json_from_text(raw_output)
                
                if "sub_questions" not in parsed:
                    raise ValidationError("JSON does not contain a 'sub_questions' key.")
                
                sub_questions = parsed["sub_questions"]
                if not isinstance(sub_questions, list) or not all(isinstance(q, str) for q in sub_questions):
                    raise ValidationError("Expected 'sub_questions' to be a list of strings.")
                
                logger.info(f"Successfully generated {len(sub_questions)} sub-questions")
                return {"sub_questions": sub_questions}
                
            except (ValidationError, APIError) as e:
                logger.error(f"Question enhancement failed: {str(e)}")
                return {"error": str(e), "sub_questions": []}
            except Exception as e:
                logger.error(f"Unexpected error in question enhancement: {str(e)}")
                return {"error": f"Unexpected error: {str(e)}", "sub_questions": []}
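  The @cached(ttl=300) decorator is project-specific and its implementation is not shown. A minimal sketch of what a TTL cache like this might look like (assumed behavior: keyed on hashable arguments, no cache-size bound, no thread safety):

```python
import functools
import time


def cached(ttl: float):
    """Memoise a function's result for `ttl` seconds, keyed on its arguments.

    Illustrative sketch of a TTL cache decorator; not the project's code.
    """
    def decorator(func):
        store = {}  # key -> (expiry_timestamp, result)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            hit = store.get(key)
            if hit is not None and hit[0] > time.monotonic():
                return hit[1]  # still fresh: serve cached result
            result = func(*args, **kwargs)
            store[key] = (time.monotonic() + ttl, result)
            return result

        return wrapper
    return decorator


calls = []


@cached(ttl=300)
def expensive(x):
    calls.append(x)
    return x * 2


expensive(3)
expensive(3)  # served from cache; the body of `expensive` runs once
```

  For repeated identical user requests within the TTL window, a cache like this avoids a second LLM round-trip entirely, which also explains why retrying the tool with the same input returns instantly.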
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool 'Returns: Enhanced question result with sub-questions,' which gives some output information, but lacks details on how the enhancement works (e.g., is it AI-based, does it modify the input, are there rate limits or authentication needs?). This is a significant gap for a tool with no annotations, as it doesn't fully describe behavioral traits beyond the basic return statement.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: one stating the wrapper purpose and another specifying the return value. It's front-loaded with the main function and avoids unnecessary details. However, it could be slightly more structured by explicitly separating purpose from output, but overall it's efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a wrapper agent for enhancement), lack of annotations, and no output schema, the description is incomplete. It mentions the return includes 'Enhanced question result with sub-questions,' but doesn't explain the format or content of these results. For a tool that processes user requests, more context on behavior, error handling, or examples would be needed to be fully helpful, especially with no structured fields to rely on.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no meaning beyond what the input schema provides. The schema has 1 parameter with 100% coverage (a 'user_request' string described as 'The original user request to enhance'), and the description doesn't elaborate on this parameter's usage, format, or constraints. With high schema coverage, the baseline is 3, as the description doesn't compensate but also doesn't detract from the schema's information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Wrapper for QuestionEnhancerAgent to provide question enhancement' which clarifies the tool's function as a wrapper that enhances questions. However, it's somewhat vague about what 'enhancement' entails and doesn't distinguish this tool from its siblings like 'ShallowCodeResearch_agent_research_request' or 'ShallowCodeResearch_agent_llm_processor' which might also process user requests. The description provides a basic purpose but lacks specificity about the enhancement mechanism.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives. It doesn't mention any specific contexts, prerequisites, or exclusions, nor does it reference sibling tools. For example, it doesn't clarify if this should be used for initial query refinement versus other processing steps, leaving the agent with no usage instructions beyond the generic purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/CodeHalwell/gradio-mcp-agent-hack'
