
ShallowCodeResearch_agent_question_enhancer

Enhances research questions by breaking them into focused sub-questions to improve search results and analysis depth.

Instructions

Wrapper for QuestionEnhancerAgent to provide question enhancement. Returns an enhanced question result containing the generated sub-questions.

Input Schema

Name          Required   Description                             Default
user_request  No         The original user request to enhance    —
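The schema defines a single optional string field. A minimal sketch of a tool-call arguments payload and the validation it implies (the example question and the `validate_payload` helper are illustrative, not part of the project):

```python
# Example arguments payload for the agent_question_enhancer tool.
# Only user_request is defined by the schema, and it is marked optional.
payload = {
    "user_request": "How do I profile memory usage in a long-running Python service?"
}

def validate_payload(args: dict) -> bool:
    """If user_request is present, it should be a non-empty string."""
    value = args.get("user_request")
    return value is None or (isinstance(value, str) and value.strip() != "")

print(validate_payload(payload))  # → True
```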

Implementation Reference

  • app.py:726-736 (handler)
    The main handler function for the agent_question_enhancer tool, which wraps the QuestionEnhancerAgent's enhance_question method. This is exposed as an MCP tool via Gradio's mcp_server.
    def agent_question_enhancer(user_request: str) -> dict:
        """
        Wrapper for QuestionEnhancerAgent to provide question enhancement.

        Args:
            user_request (str): The original user request to enhance

        Returns:
            dict: Enhanced question result with sub-questions
        """
        return question_enhancer.enhance_question(user_request, num_questions=2)
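The handler is a thin wrapper: it pins `num_questions` and delegates to the agent. That pattern can be sketched standalone with the LLM-backed agent replaced by a deterministic stub (the stub class and its output are hypothetical; only the wrapper's shape mirrors app.py):

```python
from typing import Any, Dict

class StubQuestionEnhancer:
    """Stand-in for QuestionEnhancerAgent; the real agent calls an LLM."""

    def enhance_question(self, user_request: str, num_questions: int) -> Dict[str, Any]:
        # Deterministic placeholder output in the same shape as the real agent.
        return {
            "sub_questions": [
                f"Sub-question {i + 1} derived from: {user_request}"
                for i in range(num_questions)
            ]
        }

question_enhancer = StubQuestionEnhancer()

def agent_question_enhancer(user_request: str) -> dict:
    """Wrapper mirroring app.py: fixed num_questions, dict result."""
    return question_enhancer.enhance_question(user_request, num_questions=2)

result = agent_question_enhancer("compare asyncio and threading")
print(len(result["sub_questions"]))  # → 2
```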
  • app.py:962-977 (registration)
    Gradio Interface registration that exposes agent_question_enhancer as an MCP tool with api_name 'agent_question_enhancer_service'; in MCP clients the tool name is likely prefixed with the 'ShallowCodeResearch' server name.
    with gr.Tab("Agent: Question Enhancer", scale=1):
        gr.Interface(
            fn=agent_question_enhancer,
            inputs=[
                gr.Textbox(
                    label="Original User Request",
                    lines=12,
                    placeholder="Enter your question to be split into 3 sub-questions…"
                )
            ],
            outputs=gr.JSON(label="Enhanced Sub-Questions", height=305),
            title="Question Enhancer Agent",
            description="Splits a single user query into 3 distinct sub-questions using Qwen models.",
            api_name="agent_question_enhancer_service",
        )
  • Core handler logic in QuestionEnhancerAgent.enhance_question method, which performs LLM-based question splitting into sub-questions. Called by the app.py wrapper.
    @track_performance(operation_name="question_enhancement")
    @rate_limited("nebius")
    @circuit_protected("nebius")
    @cached(ttl=300)  # Cache for 5 minutes
    def enhance_question(self, user_request: str, num_questions: int) -> Dict[str, Any]:
        """
        Split a single user query into multiple distinct sub-questions for enhanced research.

        Takes a user's original request and uses LLM processing to break it down
        into separate sub-questions that explore different technical angles.
        This enables more comprehensive research and analysis of complex topics.

        Args:
            user_request (str): The original user query to be enhanced and split
            num_questions (int): The number of sub-questions to generate

        Returns:
            Dict[str, Any]: A dictionary containing the generated sub-questions
                array or error information if processing fails
        """
        try:
            validate_non_empty_string(user_request, "User request")
            logger.info(f"Enhancing question: {user_request[:100]}...")

            prompt_text = f"""
            You are an AI assistant specialised in Python programming that must break
            a single user query into {num_questions} distinct, non-overlapping sub-questions.
            Each sub-question should explore a different technical angle of the original request.
            Output must be valid JSON with a top-level key "sub_questions" whose value is an
            array of strings—no extra keys, no extra prose.

            User Request: "{user_request}"

            Respond with exactly:
            {{
                "sub_questions": [
                    "First enhanced sub-question …",
                    "Second enhanced sub-question …",
                    ... more added as necessary
                ]
            }}
            """

            messages = [{"role": "user", "content": prompt_text}]

            response_format = {
                "type": "json_object",
                "object": {
                    "sub_questions": {
                        "type": "array",
                        "items": {"type": "string"},
                    }
                },
            }

            logger.info(
                "The LLM provider is: %s and the model is: %s",
                api_config.llm_provider,
                model_config.get_model_for_provider("question_enhancer", api_config.llm_provider)
            )

            raw_output = make_llm_completion(
                model=model_config.get_model_for_provider("question_enhancer", api_config.llm_provider),
                messages=messages,
                temperature=0.7,
                response_format=response_format
            )

            parsed = extract_json_from_text(raw_output)
            if "sub_questions" not in parsed:
                raise ValidationError("JSON does not contain a 'sub_questions' key.")

            sub_questions = parsed["sub_questions"]
            if not isinstance(sub_questions, list) or not all(isinstance(q, str) for q in sub_questions):
                raise ValidationError("Expected 'sub_questions' to be a list of strings.")

            logger.info(f"Successfully generated {len(sub_questions)} sub-questions")
            return {"sub_questions": sub_questions}

        except (ValidationError, APIError) as e:
            logger.error(f"Question enhancement failed: {str(e)}")
            return {"error": str(e), "sub_questions": []}
        except Exception as e:
            logger.error(f"Unexpected error in question enhancement: {str(e)}")
            return {"error": f"Unexpected error: {str(e)}", "sub_questions": []}
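The `extract_json_from_text` helper used above is not shown on this page. A plausible sketch of its contract (raw LLM text in, parsed dict out), assuming it simply locates the first JSON object in the output:

```python
import json
import re

def extract_json_from_text(raw_output: str) -> dict:
    """Hypothetical sketch: find the first {...} block and parse it.

    The project's real helper is not shown here; this only approximates
    its apparent contract (str in, dict out, raises on failure).
    """
    match = re.search(r"\{.*\}", raw_output, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in LLM output.")
    return json.loads(match.group(0))

raw = 'Here you go:\n{"sub_questions": ["What is X?", "How does Y work?"]}'
parsed = extract_json_from_text(raw)
print(parsed["sub_questions"][0])  # → What is X?
```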
  • JSON schema definition for the expected LLM response, enforcing output as array of strings under 'sub_questions' key.
    response_format = {
        "type": "json_object",
        "object": {
            "sub_questions": {
                "type": "array",
                "items": {"type": "string"},
            }
        },
    }
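Whatever the provider enforces via this schema, the method still re-validates the parsed result itself. That check can be reproduced in isolation (the `validate_sub_questions` helper name is illustrative; the checks mirror the ones in `enhance_question`):

```python
def validate_sub_questions(parsed: dict) -> list:
    """Mirror the in-method checks: key present, value is a list of strings."""
    if "sub_questions" not in parsed:
        raise ValueError("JSON does not contain a 'sub_questions' key.")
    sub_questions = parsed["sub_questions"]
    if not isinstance(sub_questions, list) or not all(
        isinstance(q, str) for q in sub_questions
    ):
        raise ValueError("Expected 'sub_questions' to be a list of strings.")
    return sub_questions

print(validate_sub_questions({"sub_questions": ["a", "b"]}))  # → ['a', 'b']
```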
  • The full QuestionEnhancerAgent class providing the question enhancement utility used by the tool.
    class QuestionEnhancerAgent:
        """
        Agent responsible for enhancing questions into sub-questions for research.

        This agent takes a single user query and intelligently breaks it down into
        multiple distinct, non-overlapping sub-questions that explore different
        technical angles of the original request. It uses LLM models to enhance
        question comprehension and research depth.
        """

        @with_performance_tracking("question_enhancement")
        @rate_limited("nebius")
        @circuit_protected("nebius")
        @cached(ttl=300)  # Cache for 5 minutes
        def enhance_question(self, user_request: str, num_questions: int) -> Dict[str, Any]:
            """
            Split a single user query into multiple distinct sub-questions for enhanced research.

            Takes a user's original request and uses LLM processing to break it down
            into separate sub-questions that explore different technical angles.
            This enables more comprehensive research and analysis of complex topics.

            Args:
                user_request (str): The original user query to be enhanced and split
                num_questions (int): The number of sub-questions to generate

            Returns:
                Dict[str, Any]: A dictionary containing the generated sub-questions
                    array or error information if processing fails
            """
            try:
                validate_non_empty_string(user_request, "User request")
                logger.info(f"Enhancing question: {user_request[:100]}...")

                prompt_text = f"""
                You are an AI assistant specialised in Python programming that must break
                a single user query into {num_questions} distinct, non-overlapping sub-questions.
                Each sub-question should explore a different technical angle of the original request.
                Output must be valid JSON with a top-level key "sub_questions" whose value is an
                array of strings—no extra keys, no extra prose.

                User Request: "{user_request}"

                Respond with exactly:
                {{
                    "sub_questions": [
                        "First enhanced sub-question …",
                        "Second enhanced sub-question …",
                        ... more added as necessary
                    ]
                }}
                """

                messages = [{"role": "user", "content": prompt_text}]

                response_format = {
                    "type": "json_object",
                    "object": {
                        "sub_questions": {
                            "type": "array",
                            "items": {"type": "string"},
                        }
                    },
                }

                logger.info(
                    "The LLM provider is: %s and the model is: %s",
                    api_config.llm_provider,
                    model_config.get_model_for_provider("question_enhancer", api_config.llm_provider)
                )

                raw_output = make_llm_completion(
                    model=model_config.get_model_for_provider("question_enhancer", api_config.llm_provider),
                    messages=messages,
                    temperature=0.7,
                    response_format=response_format
                )

                parsed = extract_json_from_text(raw_output)
                if "sub_questions" not in parsed:
                    raise ValidationError("JSON does not contain a 'sub_questions' key.")

                sub_questions = parsed["sub_questions"]
                if not isinstance(sub_questions, list) or not all(isinstance(q, str) for q in sub_questions):
                    raise ValidationError("Expected 'sub_questions' to be a list of strings.")

                logger.info(f"Successfully generated {len(sub_questions)} sub-questions")
                return {"sub_questions": sub_questions}

            except (ValidationError, APIError) as e:
                logger.error(f"Question enhancement failed: {str(e)}")
                return {"error": str(e), "sub_questions": []}
            except Exception as e:
                logger.error(f"Unexpected error in question enhancement: {str(e)}")
                return {"error": f"Unexpected error: {str(e)}", "sub_questions": []}
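The decorator stack wraps the LLM call with performance tracking, rate limiting, circuit breaking, and a 5-minute result cache. The project's decorator implementations are not shown on this page; a minimal sketch of just the TTL-cache layer, under that assumption:

```python
import time
from functools import wraps

def cached(ttl: float):
    """Minimal TTL-cache sketch; the project's real decorator is not shown."""
    def decorator(func):
        store = {}  # maps positional args -> (expiry_time, result)

        @wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # fresh cached result, skip the wrapped call
            result = func(*args)
            store[args] = (now + ttl, result)
            return result

        return wrapper
    return decorator

calls = []

@cached(ttl=300)
def enhance(question: str) -> dict:
    calls.append(question)  # record each real invocation
    return {"sub_questions": [f"{question} (angle 1)", f"{question} (angle 2)"]}

enhance("profiling")
enhance("profiling")  # served from cache; the function body runs once
print(len(calls))  # → 1
```

Within the TTL, repeated identical requests never reach the LLM, which is why the real agent caches for 5 minutes before rate limiting kicks in.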

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/CodeHalwell/gradio-mcp-agent-hack'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.