# generate_meta_prompt
Create structured meta-prompts for AI tasks by analyzing requirements and selecting appropriate reasoning frameworks to improve response quality.
## Instructions

Generates an optimized meta-prompt for the given task: it analyzes the task, selects the best reasoning framework (or uses the one specified), and produces a structured prompt designed to elicit high-quality reasoning.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| task | Yes | The task or question to create a prompt for | |
| context | No | Additional context to include in the prompt | |
| framework | No | Specific framework to use (optional, auto-selects if not provided) | |
| persist | No | Whether to log this generation for analytics | true |
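The schema above can be illustrated with a small sketch. The helper below is hypothetical (it is not part of promptcore); it only shows how a client might assemble an arguments payload, with `task` required and the other fields taking the defaults listed in the table.

```python
# Hypothetical helper mirroring the input schema above.
# 'task' is required; 'context', 'framework', and 'persist'
# fall back to the defaults from the table.
def build_tool_arguments(task, context="", framework=None, persist=True):
    """Assemble an arguments dict as an MCP client might send it."""
    if not task:
        raise ValueError("'task' is required")
    return {
        "task": task,
        "context": context,
        "framework": framework,
        "persist": persist,
    }

args = build_tool_arguments("Summarize the attached design doc")
```

Omitted optional fields simply keep their defaults, so `args["framework"]` is `None` and `args["persist"]` is `True` here.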
## Implementation Reference
- src/promptcore/main.py:89-137 (handler): the function 'generate_meta_prompt' acts as the primary handler for the MCP tool, orchestrating task analysis, meta-prompt construction, and optional persistence.
```python
def generate_meta_prompt(
    task: Annotated[str, "The task or question to create a prompt for"],
    context: Annotated[str, "Additional context to include in the prompt"] = "",
    framework: Annotated[
        Optional[str],
        "Specific framework to use (optional, auto-selects if not provided)",
    ] = None,
    persist: Annotated[bool, "Whether to log this generation for analytics"] = True,
) -> dict:
    """
    Generate an optimized meta-prompt for the given task.

    Analyzes the task, selects the best reasoning framework (or uses the
    specified one), and generates a structured prompt designed to elicit
    high-quality reasoning.
    """
    deps = get_dependencies()

    # Analyze task
    analysis = deps.selector.analyze(task, context)

    # Build prompt (use override framework if specified)
    result = deps.builder.build(
        task=task,
        context=context,
        framework_name=framework,
        analysis=analysis,
    )

    # Persist if requested
    log_id = None
    if persist:
        log_data = ReasoningLogCreate(
            task_input=task,
            context=context if context else None,
            detected_category=analysis.category.value,
            complexity_score=analysis.complexity_score,
            selected_framework=result.framework_used,
            meta_prompt_generated=result.meta_prompt,
        )
        log = deps.storage.create_log(log_data)
        log_id = str(log.id)

    return {
        "task_id": log_id,
        "framework_used": result.framework_used,
        "analysis": {
            "category": analysis.category.value,
            "complexity_score": analysis.complexity_score,
            "complexity_level": analysis.complexity_level.value,
        },
        "meta_prompt": result.meta_prompt,
    }
```

- src/promptcore/main.py:88-88 (registration): the tool is registered with the '@mcp.tool()' decorator on the 'FastMCP' server instance.

```python
@mcp.tool()
```
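The handler's control flow can be sketched in isolation with stand-in dependencies. Everything below is a simplified illustration, not the real promptcore code: 'StubSelector', 'StubBuilder', and the hard-coded category and framework names are assumptions, and persistence is omitted.

```python
from dataclasses import dataclass

# Stand-ins for the selector/builder dependencies used by the handler.
@dataclass
class Analysis:
    category: str
    complexity_score: float

@dataclass
class BuildResult:
    framework_used: str
    meta_prompt: str

class StubSelector:
    def analyze(self, task, context):
        # The real selector would classify the task; we hard-code a result.
        return Analysis(category="analysis", complexity_score=0.4)

class StubBuilder:
    def build(self, task, context, framework_name, analysis):
        # Honor an explicit framework override, otherwise "auto-select".
        name = framework_name or "chain_of_thought"
        return BuildResult(
            framework_used=name,
            meta_prompt=f"Using {name}, reason step by step about: {task}",
        )

def generate_meta_prompt_sketch(task, context="", framework=None):
    """Mirror the handler's flow: analyze, build, return a result dict."""
    analysis = StubSelector().analyze(task, context)
    result = StubBuilder().build(task, context, framework, analysis)
    return {
        "framework_used": result.framework_used,
        "analysis": {
            "category": analysis.category,
            "complexity_score": analysis.complexity_score,
        },
        "meta_prompt": result.meta_prompt,
    }

out = generate_meta_prompt_sketch("Compare two caching strategies")
```

Passing `framework="socratic"` (or any name) would bypass the auto-selection branch, matching the override behavior of the real handler.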