generate_meta_prompt

Create structured meta-prompts for AI tasks by analyzing requirements and selecting appropriate reasoning frameworks to improve response quality.

Instructions

Generate an optimized meta-prompt for the given task.

Analyzes the task, selects the best reasoning framework (or uses the specified one), and generates a structured prompt designed to elicit high-quality reasoning.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| task | Yes | The task or question to create a prompt for | — |
| context | No | Additional context to include in the prompt | `""` |
| framework | No | Specific framework to use (optional, auto-selects if not provided) | `None` |
| persist | No | Whether to log this generation for analytics | `True` |
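For illustration, here is a hypothetical set of tool-call arguments matching the schema above. Only `task` is required; the task text and context below are made up:

```python
# Hypothetical arguments for a generate_meta_prompt tool call.
# Only "task" is required; omitted fields fall back to their defaults.
arguments = {
    "task": "Compare breadth-first and depth-first search for maze solving",
    "context": "Audience: first-year CS students",  # optional extra context
    "persist": False,  # optional; skip analytics logging for this call
}
# "framework" is omitted, so the server auto-selects a reasoning framework.
```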

Implementation Reference

  • The function 'generate_meta_prompt' acts as the primary handler for the MCP tool, orchestrating task analysis, meta-prompt construction, and optional persistence.
    from typing import Annotated, Optional

    def generate_meta_prompt(
        task: Annotated[str, "The task or question to create a prompt for"],
        context: Annotated[str, "Additional context to include in the prompt"] = "",
        framework: Annotated[Optional[str], "Specific framework to use (optional, auto-selects if not provided)"] = None,
        persist: Annotated[bool, "Whether to log this generation for analytics"] = True,
    ) -> dict:
        """
        Generate an optimized meta-prompt for the given task.
        
        Analyzes the task, selects the best reasoning framework (or uses the specified one),
        and generates a structured prompt designed to elicit high-quality reasoning.
        """
        deps = get_dependencies()
        
        # Analyze task
        analysis = deps.selector.analyze(task, context)
        
        # Build prompt (use override framework if specified)
        result = deps.builder.build(
            task=task,
            context=context,
            framework_name=framework,
            analysis=analysis,
        )
        
        # Persist if requested
        log_id = None
        if persist:
            log_data = ReasoningLogCreate(
                task_input=task,
                context=context if context else None,
                detected_category=analysis.category.value,
                complexity_score=analysis.complexity_score,
                selected_framework=result.framework_used,
                meta_prompt_generated=result.meta_prompt,
            )
            log = deps.storage.create_log(log_data)
            log_id = str(log.id)
        
        return {
            "task_id": log_id,
            "framework_used": result.framework_used,
            "analysis": {
                "category": analysis.category.value,
                "complexity_score": analysis.complexity_score,
                "complexity_level": analysis.complexity_level.value,
            },
            "meta_prompt": result.meta_prompt,
        }
  • The tool is registered on the 'FastMCP' server instance by placing the '@mcp.tool()' decorator directly above the function definition:
    @mcp.tool()
    def generate_meta_prompt(...):
        ...

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/BlinkVoid/PromptSmith'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.