# log_execution_feedback
Tracks how generated prompts performed so that reasoning framework selection improves over time. Callers submit the task ID from `generate_meta_prompt` together with free-form execution feedback.
## Instructions
Log feedback about how a generated prompt performed.
Used to track effectiveness and improve framework selection over time.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | The task ID from generate_meta_prompt | |
| feedback | Yes | Feedback about how well the prompt worked |
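For reference, here is a minimal sketch of a call and the two result shapes the handler can return (both taken from the implementation below); the `task_id` value is illustrative, standing in for an ID previously returned by `generate_meta_prompt`:

```python
# Illustrative arguments for a log_execution_feedback call; the task_id
# is a placeholder, not a real ID.
arguments = {
    "task_id": "task_8f3c2a",
    "feedback": "Prompt produced a correct plan but the output was over-long.",
}

# The handler returns one of two shapes (see Implementation Reference):
ok = {"success": True, "message": "Feedback logged for task task_8f3c2a"}
err = {"success": False, "error": "Task ID not found: task_8f3c2a"}
```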
## Implementation Reference
- `src/promptcore/main.py:140-162` (handler): implements the `log_execution_feedback` tool, registering it with the `@mcp.tool()` decorator and recording feedback through the storage layer.
```python
@mcp.tool()
def log_execution_feedback(
    task_id: Annotated[str, "The task ID from generate_meta_prompt"],
    feedback: Annotated[str, "Feedback about how well the prompt worked"],
) -> dict:
    """
    Log feedback about how a generated prompt performed.
    Used to track effectiveness and improve framework selection over time.
    """
    deps = get_dependencies()
    updated = deps.storage.update_feedback(task_id, feedback)
    if not updated:
        return {
            "success": False,
            "error": f"Task ID not found: {task_id}",
        }
    return {
        "success": True,
        "message": f"Feedback logged for task {task_id}",
    }
```
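The storage backend itself is not shown in this section. As a minimal sketch, assuming an in-memory store, `update_feedback` only needs to report whether the task ID was known; the class and method names here (other than `update_feedback`) are assumptions, not the real `deps.storage` implementation:

```python
# A hypothetical in-memory storage backend satisfying the update_feedback
# contract used by the handler above; the real deps.storage implementation
# lives elsewhere in promptcore, so these names are assumptions.
class InMemoryStorage:
    def __init__(self) -> None:
        self._tasks: dict[str, dict] = {}

    def save_task(self, task_id: str, record: dict) -> None:
        """Register a task created by generate_meta_prompt."""
        self._tasks[task_id] = record

    def update_feedback(self, task_id: str, feedback: str) -> bool:
        """Attach feedback to a known task; return False for unknown IDs."""
        task = self._tasks.get(task_id)
        if task is None:
            return False
        task["feedback"] = feedback
        return True
```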