log_execution_feedback

Track prompt performance feedback to improve reasoning framework selection over time. Submit task IDs and execution results for optimization.

Instructions

Log feedback about how a generated prompt performed.

Used to track effectiveness and improve framework selection over time.

Input Schema

Name      Required  Description                                Default
task_id   Yes       The task ID from generate_meta_prompt
feedback  Yes       Feedback about how well the prompt worked
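
A minimal sketch of what a call to this tool looks like and the two response shapes the implementation below can return. The task_id value here is hypothetical; in practice it comes from a prior generate_meta_prompt call.

```python
# Hypothetical arguments for a log_execution_feedback call.
arguments = {
    "task_id": "task-123",  # placeholder; returned earlier by generate_meta_prompt
    "feedback": "The prompt produced a well-structured answer on the first try.",
}

# On success the tool returns:
ok = {
    "success": True,
    "message": f"Feedback logged for task {arguments['task_id']}",
}

# If the task ID is unknown it returns:
err = {
    "success": False,
    "error": f"Task ID not found: {arguments['task_id']}",
}
```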

Implementation Reference

  • The implementation of the log_execution_feedback tool: it is registered with the @mcp.tool() decorator and delegates the feedback update to the storage layer.
    from typing import Annotated  # required for the parameter annotations below

    @mcp.tool()
    def log_execution_feedback(
        task_id: Annotated[str, "The task ID from generate_meta_prompt"],
        feedback: Annotated[str, "Feedback about how well the prompt worked"],
    ) -> dict:
        """
        Log feedback about how a generated prompt performed.
        
        Used to track effectiveness and improve framework selection over time.
        """
        deps = get_dependencies()
        updated = deps.storage.update_feedback(task_id, feedback)
        
        if not updated:
            return {
                "success": False,
                "error": f"Task ID not found: {task_id}",
            }
        
        return {
            "success": True,
            "message": f"Feedback logged for task {task_id}",
        }
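
The storage backend behind deps.storage is not shown on this page. The sketch below is a hypothetical in-memory stand-in that illustrates the one contract the tool relies on: update_feedback returns True when the task exists and False otherwise.

```python
# Hypothetical in-memory storage sketch (not the server's actual backend).
class InMemoryStorage:
    def __init__(self) -> None:
        self._tasks: dict[str, dict] = {}

    def save_task(self, task_id: str, prompt: str) -> None:
        # Record a generated prompt so feedback can be attached later.
        self._tasks[task_id] = {"prompt": prompt, "feedback": None}

    def update_feedback(self, task_id: str, feedback: str) -> bool:
        # Mirror the behavior log_execution_feedback depends on:
        # False signals an unknown task ID, True signals success.
        if task_id not in self._tasks:
            return False
        self._tasks[task_id]["feedback"] = feedback
        return True

storage = InMemoryStorage()
storage.save_task("task-123", "example prompt")
```

With this contract, the tool's two return branches map directly onto the boolean: a False from update_feedback produces the "Task ID not found" error payload, and True produces the success message.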

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/BlinkVoid/PromptSmith'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.