TickTick/Dida365 MCP Server

by ZH1754629545

get_tasks_by_projectId

Retrieve all tasks from a specific project in TickTick/Dida365 using the project ID to view and manage project-related tasks.

Instructions

Get all tasks belonging to a specific project by project ID. Returns a list of tasks with their basic information. Useful for viewing all tasks in a project.

Input Schema

Name        Required   Description                                                       Default
projectId   Yes        The ID of the project whose tasks you want to list (required)    (none)
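
For context, calling this tool from an MCP client might look like the sketch below. This is not taken from the server's repository: the client wiring, the launch command, and the projectId value are placeholders, and the snippet assumes the official TypeScript MCP SDK; only the tool name and the projectId argument come from the schema above.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Hypothetical wiring: the launch command and the projectId value are placeholders.
    const transport = new StdioClientTransport({ command: "node", args: ["build/index.js"] });
    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // Call the tool with the single required argument defined in the input schema.
    const result = await client.callTool({
        name: "get_tasks_by_projectId",
        arguments: { projectId: "6247ee29630c800f064fd145" },
    });
    // The handler responds with a single text content block containing the formatted task list JSON.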

Implementation Reference

  • The handler for the 'get_tasks_by_projectId' tool. It validates the projectId argument, makes an API call to the Dida365 endpoint `/project/{projectId}/data` to fetch tasks, and returns the response data as a formatted JSON text block (a sketch of the expected response payload follows this list).
    case "get_tasks_by_projectId": {
        const params: Record<string, any> = {};
        if (args.projectId) params.projectId = args.projectId;
        else throw new McpError(ErrorCode.InvalidRequest, "projectId is empty");
        const response = await dida365Api.get(`/project/${params.projectId}/data`);
    
        return {
            content: [
                {
                    type: "text",
                    text: `Task list: ${JSON.stringify(response.data, null, 2)}`,
                },
            ],
        };
    }
  • src/index.ts:156-169 (registration)
    Registration of the 'get_tasks_by_projectId' tool in the ListTools response, including its name, description, and input schema definition.
    {
        name: "get_tasks_by_projectId",
        description: "Get all tasks belonging to a specific project by project ID. Returns a list of tasks with their basic information. Useful for viewing all tasks in a project.",
        inputSchema: {
            type: "object",
            properties: {
                projectId: {
                    type: "string",
                    description: "The ID of the project whose tasks you want to list (required)",
                },
            },
            required: ["projectId"],
        },
    },
  • TypeScript interface defining the structure of Task objects, used for typing API responses in the get_tasks_by_projectId handler (the referenced ChecklistItem type is sketched after this list).
    interface Task {
        id?: string;                     // Task identifier
        projectId?: string;              // Task project id
        title?: string;                  // Task title
        isAllDay?: boolean;              // All day
        completedTime?: string;          // Task completed time in "yyyy-MM-dd'T'HH:mm:ssZ"
        content?: string;                // Task content
        desc?: string;                   // Task description of checklist
        dueDate?: string;                // Task due date time in "yyyy-MM-dd'T'HH:mm:ssZ"
        items?: ChecklistItem[];         // Subtasks of Task
        priority?: 0 | 1 | 3 | 5 | number;        // Task priority: None:0, Low:1, Medium:3, High:5
        reminders?: string[];            // List of reminder triggers
        repeatFlag?: string;             // Recurring rules of task
        sortOrder?: number;              // Task sort order
        startDate?: string;              // Start date time in "yyyy-MM-dd'T'HH:mm:ssZ"
        status?: 0 | 2 | number;                  // Task completion status: Normal: 0, Completed: 2
        timeZone?: string;               // Task timezone
    }
  • Axios client instance configured with the base URL and authorization token for the Dida365 API, used by the tool handler to make requests (a configuration sketch for these values follows this list).
    const dida365Api = axios.create({
        baseURL: DIDA365_BASE_URL,
        headers: {
            "Content-Type": "application/json",
            Authorization: DIDA365_TOKEN,
        },
    });
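
The handler above returns the raw payload of GET /project/{projectId}/data as formatted JSON text. A rough sketch of that payload's shape, assuming it matches the public TickTick/Dida365 Open API "get project with data" endpoint (an assumption, not verified against this server's output):

    // Assumed response shape for GET /project/{projectId}/data; field set drawn from the
    // public TickTick/Dida365 Open API docs rather than from this server's source.
    interface ProjectData {
        project: {            // The project record itself
            id: string;
            name: string;
        };
        tasks: Task[];        // Undone tasks in the project, matching the Task interface above
        columns?: Array<{     // Kanban columns, present when the project uses a kanban view
            id: string;
            name: string;
        }>;
    }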
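
The Task interface above references a ChecklistItem type for subtasks that is not shown in this reference. A minimal sketch, assuming the field set documented in the public Open API (hypothetical; not taken from this server's source):

    // Sketch of the ChecklistItem type referenced by Task.items; every field here is an
    // assumption based on the public TickTick/Dida365 Open API documentation.
    interface ChecklistItem {
        id?: string;                // Subtask identifier
        title?: string;             // Subtask title
        status?: 0 | 1 | number;    // Subtask completion status: Normal: 0, Completed: 1
        completedTime?: string;     // Completed time in "yyyy-MM-dd'T'HH:mm:ssZ"
        isAllDay?: boolean;         // All day
        sortOrder?: number;         // Subtask sort order
        startDate?: string;         // Start date time in "yyyy-MM-dd'T'HH:mm:ssZ"
        timeZone?: string;          // Subtask timezone
    }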
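
The Axios client above depends on DIDA365_BASE_URL and DIDA365_TOKEN, which are defined elsewhere in the server. A minimal configuration sketch, assuming the values come from environment variables and that the server targets the public Dida365 Open API base URL (both assumptions):

    // Hypothetical configuration: the environment variable names and the base URL
    // are assumptions, not taken from this server's source.
    const DIDA365_BASE_URL = process.env.DIDA365_BASE_URL ?? "https://api.dida365.com/open/v1";
    // The Open API expects an OAuth2 access token in the Authorization header.
    const DIDA365_TOKEN = `Bearer ${process.env.DIDA365_ACCESS_TOKEN ?? ""}`;
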
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns 'a list of tasks with their basic information,' which hints at read-only behavior and output format, but lacks details on permissions, rate limits, pagination, error handling, or what 'basic information' entails. For a tool with no annotations, this leaves significant gaps in understanding its operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with two sentences that directly state the tool's purpose and utility. There is no wasted text, and each sentence adds value: the first defines the action, and the second provides usage context. However, it could be slightly more structured by explicitly separating purpose from guidelines.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose and hints at output, but lacks details on behavioral traits, error cases, or integration with sibling tools. Without annotations or output schema, more context on return values and operational limits would improve completeness for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'projectId' documented as 'The ID of the project whose tasks you want to list (required).' The description adds no additional semantic context beyond what the schema provides, such as format examples or constraints. With high schema coverage, the baseline score of 3 is appropriate, as the description does not compensate but also does not detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get all tasks belonging to a specific project by project ID.' It specifies the verb ('Get'), resource ('tasks'), and scope ('belonging to a specific project'), but does not explicitly differentiate it from sibling tools like 'get_task_by_projectId_and_taskId' (which gets a single task) or 'get_projects' (which gets projects, not tasks).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context with 'Useful for viewing all tasks in a project,' suggesting it's for bulk task viewing within a project. However, it does not provide explicit guidance on when to use this tool versus alternatives like 'get_task_by_projectId_and_taskId' (for a single task) or 'complete_task' (for task completion), nor does it mention any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ZH1754629545/dida365-mcp-servers'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.