understand_question
Decomposes user questions to clarify intent, identify constraints, and prepare structured prompts for accurate responses.
Instructions
Produce a protocol shell to decompose a user question.

Args:
- question: The raw user ask to unpack.
- context: Optional background knowledge or situational frame.
- constraints: Explicit limits or success criteria.

Returns:
A structured prompt guiding the model to restate intent, surface constraints, and prepare clarifying questions before acting.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | The raw user ask to unpack. | |
| context | No | Optional background knowledge or situational frame. | None |
| constraints | No | Explicit limits or success criteria. | None |
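As a rough illustration, a client could invoke the tool with arguments matching this schema. The sketch below uses the official Python MCP SDK's stdio client; the server launch command and module path are assumptions, not taken from this project:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Hypothetical launch command for the server; adjust to your install.
    params = StdioServerParameters(
        command="python", args=["-m", "context_engineering_mcp.server"]
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Only "question" is required; "context" and "constraints" are optional.
            result = await session.call_tool(
                "understand_question",
                arguments={
                    "question": "Migrate our ETL jobs to Airflow",
                    "constraints": "No downtime; finish within one quarter",
                },
            )
            print(result.content[0].text)  # the rendered protocol shell


asyncio.run(main())
```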
Implementation Reference
- The core handler function for the `understand_question` tool. It is decorated with `@mcp.tool()`, validates input using Pydantic, and generates a structured reasoning protocol template.

```python
@mcp.tool()
def understand_question(
    question: str,
    context: Optional[str] = None,
    constraints: Optional[str] = None,
) -> str:
    """Produce a protocol shell to decompose a user question.

    Args:
        question: The raw user ask to unpack.
        context: Optional background knowledge or situational frame.
        constraints: Explicit limits or success criteria.

    Returns:
        A structured prompt guiding the model to restate intent, surface
        constraints, and prepare clarifying questions before acting.
    """
    # Validate input using Pydantic
    try:
        model = UnderstandQuestionInput(
            question=question, context=context, constraints=constraints
        )
    except ValidationError as e:
        return f"Input Validation Error: {e}"

    normalized_context = model.context or "<none>"
    normalized_constraints = model.constraints or "<none>"

    template = """
/reasoning.understand_question{{
    intent="Clarify the ask before solving by isolating intent, constraints, and required outputs",
    input={{
        question="{question}",
        context="{context}",
        constraints="{constraints}"
    }},
    process=[
        /intent_map{{action="Restate the core ask and target outcome"}},
        /constraints{{action="List explicit and implicit constraints"}},
        /decomposition{{action="Break request into solvable sub-goals"}},
        /risk_check{{action="Flag ambiguity or missing data"}}
    ],
    output={{
        intent="Single sentence goal statement",
        constraints="Bullet list of must-haves and guardrails",
        clarifications="Questions to close gaps before execution",
        proposed_plan="Initial steps or protocol to proceed"
    }}
}}
"""
    return template.format(
        question=model.question,
        context=normalized_context,
        constraints=normalized_constraints,
    )
```
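One detail worth noting: the template doubles every literal brace (`{{` / `}}`) so that `str.format` substitutes only the three named placeholders and leaves the protocol-shell braces intact. A tiny standalone illustration of that escaping rule:

```python
# Literal braces must be doubled when the string goes through str.format().
template = '/demo{{ input={{ question="{question}" }} }}'
print(template.format(question="How do we cut build times?"))
# -> /demo{ input={ question="How do we cut build times?" } }
```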
- Pydantic `BaseModel` defining the input schema for the `understand_question` tool, including validation rules.

```python
class UnderstandQuestionInput(BaseModel):
    question: str = Field(..., min_length=3, description="The raw user ask to unpack.")
    context: Optional[str] = Field(None, description="Optional background knowledge.")
    constraints: Optional[str] = Field(
        None, description="Explicit limits or success criteria."
    )
```
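A quick sketch of how these validation rules behave in isolation; this is standard Pydantic behavior and assumes nothing beyond the model shown above:

```python
from pydantic import ValidationError

# Valid: only "question" is required, and it clears the min_length=3 check.
ok = UnderstandQuestionInput(question="How should we shard the users table?")
assert ok.context is None and ok.constraints is None

# Invalid: a two-character question violates min_length=3, so the handler
# would return an "Input Validation Error" message instead of the template.
try:
    UnderstandQuestionInput(question="hi")
except ValidationError as exc:
    print(exc.errors()[0]["type"])  # e.g. "string_too_short" in Pydantic v2
```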
- src/context_engineering_mcp/server.py:21-22 (registration): invocation of `register_thinking_models(mcp)`, which registers the `understand_question` tool (and others) on the FastMCP server instance.

```python
# Register cognitive tools
register_thinking_models(mcp)
```
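For orientation, a minimal sketch of how that registration call might sit inside a FastMCP server entry point. `FastMCP` comes from the official Python MCP SDK; the server name and the import path for `register_thinking_models` are assumptions, not a copy of the project's server.py:

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical module path for the registration helper described above.
from context_engineering_mcp.thinking_models import register_thinking_models

mcp = FastMCP("context-engineering")

# Register cognitive tools such as understand_question on the server instance.
register_thinking_models(mcp)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default in the official Python SDK
```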