velocity_end_task
Stop task timers and record outcomes to track performance against historical medians. Updates duration and status, and captures git changes for accurate productivity analysis.
Instructions
Stop a task timer started with `velocity_start_task`, record the outcome, and return the actual duration alongside a comparison to the historical median for similar tasks.
When to use: immediately after finishing or abandoning any task started with `velocity_start_task`. Always call it, even when the outcome is failed or abandoned; skipping the call leaves orphaned active rows that pollute future predictions and statistics.
Side effects: updates the task row in `~/.velocity-mcp/tasks.db` with end timestamp, duration, status, optional file/line counts, and any telemetry passed in. Shells out to `git diff --stat HEAD~1` and `git log --since` (5s timeout each) to capture diff stats; safely no-ops outside a git repo. On completed status it computes a semantic embedding for similarity matching, records a calibration residual, and, if the task belonged to a plan, seals the plan when its last active task ends.
Returns: JSON with `task_id`, `duration_seconds` (numeric), `duration_human` (formatted), `category`, `tags`, and a message that compares this run's duration to the historical median for the category+tags combination ("you were 23% faster", "right on pace", etc.). Includes a `git_diff` block with lines added/removed, files changed, and commits made during the task when a git repo is detected.
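The comparison message could be derived from the historical median along these lines. This is a hedged sketch: the 5% "on pace" band and the exact wording are assumptions, not documented thresholds of the tool.

```python
def pace_message(duration_seconds: float, median_seconds: float) -> str:
    """Compare a run's duration to the historical median for its category+tags."""
    if median_seconds <= 0:
        return "no historical median yet"
    # Positive delta means this run was faster than the median.
    delta = (median_seconds - duration_seconds) / median_seconds
    if abs(delta) <= 0.05:  # assumed tolerance band
        return "right on pace"
    pct = round(abs(delta) * 100)
    direction = "faster" if delta > 0 else "slower"
    return f"you were {pct}% {direction} than the median"
```

For example, a 77-second run against a 100-second median yields "you were 23% faster than the median".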
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | Identifier of the active task to end. Must match a `task_id` returned by an earlier `velocity_start_task` call that has not already been ended. | |
| status | Yes | Outcome of the task: "completed" (succeeded as planned), "failed" (attempted but did not produce the intended result), or "abandoned" (intentionally stopped — e.g. requirements changed mid-task). Affects whether calibration residuals and embeddings are recorded. | |
| actual_files | No | Number of files actually modified during the task. Compared against `estimated_files` from start-task to feed accuracy metrics. | |
| notes | No | Free-form context about what happened, surprises, or follow-ups. Stored as plain text for later review; does not affect predictions. | |
| tools_used | No | Names of the tools invoked during the task (e.g. ["Edit", "Bash", "Grep"]). Used for telemetry; ordering does not matter, duplicates are deduplicated. | |
| tool_call_count | No | Total number of individual tool invocations during the task — useful for diagnosing tasks that took many small steps versus few large ones. | |
| turn_count | No | Number of assistant turns (request/response cycles) the task spanned. Helps correlate task duration with conversational verbosity. | |
| retry_count | No | Number of times an operation had to be retried (e.g. a failing test re-run after a fix). Higher counts often correlate with under-estimated tasks. | |
| tests_passed_first_try | No | When tests were run as part of the task, whether they passed on the very first execution. Useful signal for code-quality dashboards. | |
| model_id | No | Identifier of the model that handled this task, if not already set at start. Required for model-segmented calibration to take effect. | |
| context_tokens | No | Approximate tokens in the context window at task end. Stored alongside the start-time value to track context growth across the task. |
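Putting the schema together, a typical call for a completed task might look like the following. All values, including the `task_id` and `model_id`, are illustrative placeholders.

```json
{
  "task_id": "task_0142",
  "status": "completed",
  "actual_files": 3,
  "notes": "Refactor took longer than expected; follow up on the flaky test.",
  "tools_used": ["Edit", "Bash", "Grep"],
  "tool_call_count": 27,
  "turn_count": 9,
  "retry_count": 1,
  "tests_passed_first_try": false,
  "model_id": "example-model-v1",
  "context_tokens": 48000
}
```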