MustafaPatharia

ProofHub MCP Server

proofhub_get_task_history

Fetch task activity history including stage changes and edits to track modifications in ProofHub projects.

Instructions

Fetch the activity history of a ProofHub task (stage changes, edits, etc.).

Input Schema

Name        Required  Description  Default
project_id  Yes       (none)       (none)
list_id     Yes       (none)       (none)
task_id     Yes       (none)       (none)

Implementation Reference

  • index.js:173-185 (registration)
    Tool registration in ListToolsRequestSchema handler defining the tool name, description, and inputSchema for proofhub_get_task_history.
    {
      name: 'proofhub_get_task_history',
      description: 'Fetch the activity history of a ProofHub task (stage changes, edits, etc.).',
      inputSchema: {
        type: 'object',
        properties: {
          project_id: { type: 'string' },
          list_id:    { type: 'string' },
          task_id:    { type: 'string' },
        },
        required: ['project_id', 'list_id', 'task_id'],
      },
    },
  • Handler function for proofhub_get_task_history. It extracts project_id, list_id, and task_id from arguments, makes a GET request to the ProofHub API history endpoint, and returns the result as formatted JSON.
    // ── proofhub_get_task_history ────────────────────────────────────────
    if (name === 'proofhub_get_task_history') {
      const { project_id, list_id, task_id } = args;
      const history = await apiGet(`/projects/${project_id}/todolists/${list_id}/tasks/${task_id}/history`);
      return {
        content: [{
          type: 'text',
          text: JSON.stringify(history, null, 2),
        }],
      };
    }
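For illustration, a tools/call invocation of this handler might carry a payload like the following. The IDs are placeholders, not real ProofHub identifiers:

```javascript
// Hypothetical tools/call request for proofhub_get_task_history.
// All ID values below are illustrative placeholders.
const request = {
  method: 'tools/call',
  params: {
    name: 'proofhub_get_task_history',
    arguments: {
      project_id: '12345',
      list_id: '67890',
      task_id: '24680',
    },
  },
};

console.log(JSON.stringify(request.params.arguments, null, 2));
```

The handler destructures exactly these three fields from `args`, so all of them must be present (they are all listed in `required`).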
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool fetches activity history and gives examples (stage changes, edits), which provides some behavioral context. However, it does not specify whether results are paginated or ordered, or whether only recent history is returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
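One low-cost way to disclose behavior is through MCP tool annotations, which the spec defines as advisory hints. A sketch of the registration extended this way — the annotations are not present in the actual server and are shown here as a suggestion:

```javascript
// Sketch: the same tool registration extended with MCP tool annotations.
// readOnlyHint and idempotentHint are standard advisory hints from the
// MCP specification; this server does not currently set them.
const toolWithAnnotations = {
  name: 'proofhub_get_task_history',
  description:
    'Fetch the activity history of a ProofHub task (stage changes, edits, etc.).',
  annotations: {
    readOnlyHint: true,   // handler only performs a GET; no side effects
    idempotentHint: true, // repeated calls with the same IDs return the same data
  },
};

console.log(Object.keys(toolWithAnnotations.annotations).join(','));
```

Because annotations are hints rather than guarantees, the description should still state read-only behavior in prose.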

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 12 words, front-loading the action and object. Every word adds value, with no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and three required parameters, the description is minimal but adequate for a simple fetch tool. It lacks explanation of the output format and parameter details, which would be needed for a tool with more complex semantics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It does not explain what project_id, list_id, or task_id represent beyond obvious task identification, and an agent may not know that list_id refers to a to-do list within the project. The examples in the description describe the output, not the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
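The simplest fix is per-parameter descriptions in the inputSchema itself. A sketch with illustrative wording — the descriptions below are suggestions, not part of the current server:

```javascript
// Sketch: the inputSchema rewritten with per-parameter descriptions,
// raising schema description coverage from 0% to 100%. Wording is
// illustrative, not taken from the actual server.
const improvedSchema = {
  type: 'object',
  properties: {
    project_id: {
      type: 'string',
      description: 'ID of the ProofHub project containing the task.',
    },
    list_id: {
      type: 'string',
      description: 'ID of the to-do list (within the project) that holds the task.',
    },
    task_id: {
      type: 'string',
      description: 'ID of the task whose activity history should be fetched.',
    },
  },
  required: ['project_id', 'list_id', 'task_id'],
};

console.log(Object.keys(improvedSchema.properties).length);
```

Each description clarifies the containment relationship (project → list → task) that the review flags as non-obvious.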

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Fetch' and the resource 'activity history of a ProofHub task', with examples of content (stage changes, edits). It distinguishes this tool from siblings like proofhub_get_task (which fetches task details) and proofhub_get_comments (which fetches comments) by focusing on history.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, such as proofhub_get_task or proofhub_get_task_with_bug_links. The description does not mention prerequisites, limitations, or context for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
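A description carrying this kind of guidance could look like the sketch below. The sibling tool names come from this review; the wording is illustrative, not the server's actual text:

```javascript
// Sketch: a description string with explicit "use X vs Y" guidance.
// proofhub_get_task and proofhub_get_comments are sibling tools named
// in this review; the phrasing here is a suggestion only.
const improvedDescription =
  'Fetch the activity history of a ProofHub task (stage changes, edits, etc.). ' +
  'Use this to audit how a task changed over time; use proofhub_get_task for ' +
  'current task details and proofhub_get_comments for discussion threads.';

console.log(improvedDescription.length > 0);
```

A single added sentence like this resolves the tool-selection ambiguity at minimal token cost.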

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/MustafaPatharia/proofhub-mcp'
