
tracker_import

Imports project data from JSON (the tracker_export format), recreating the full tracker structure with new IDs, remapping cross-references, and wrapping the whole operation in a single atomic transaction.

Instructions

Import a project from JSON (matching tracker_export format). Creates all entities with new IDs and remaps references. Uses a transaction for atomicity.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| data | Yes | Full export JSON object from tracker_export | (none) |
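The expected shape of `data` mirrors what `tracker_export` produces. The sketch below is hand-written from the fields the import handler actually reads (the real export may carry additional fields); note that `tags` and `metadata` are JSON-encoded strings, and `_original_id` / `_original_related_entity_id` carry the pre-export IDs that the importer remaps:

```typescript
// Minimal example of an import payload, based on the fields the import
// handler reads. `_original_id` values let the importer remap references.
const exportJson = {
  format_version: '1.1',
  project: {
    name: 'Demo Project',
    description: 'Imported example',
    status: 'active',
    tags: '["demo"]',      // JSON string, as stored in the DB
    metadata: '{}',
    epics: [
      {
        _original_id: 7,
        name: 'First Epic',
        status: 'planned',
        priority: 'medium',
        tasks: [
          {
            _original_id: 42,
            title: 'First Task',
            status: 'todo',
            depends_on: [],  // original task IDs, remapped on import
            subtasks: [{ title: 'Step one', status: 'todo' }],
            comments: [{ author: 'alice', content: 'Looks good' }],
          },
        ],
      },
    ],
  },
  notes: [
    {
      title: 'Kickoff note',
      content: 'Remember to review scope.',
      note_type: 'general',
      related_entity_type: 'task',
      _original_related_entity_id: 42,  // remapped to the new task ID
    },
  ],
};
```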

Implementation Reference

  • The 'handleImport' function implements the logic for 'tracker_import', including schema validation, database transaction handling, ID remapping for projects, epics, tasks, and notes, and activity logging.
    function handleImport(args: Record<string, unknown>) {
      const db = getDb();
      const data = args.data as Record<string, unknown>;
    
      const version = data.format_version as string;
      if (version !== '1.0' && version !== '1.1') {
        throw new Error(`Unsupported format version: ${version}. Expected "1.0" or "1.1".`);
      }
    
      const projectData = data.project as Record<string, unknown>;
      if (!projectData || !projectData.name) {
        throw new Error('Invalid import data: missing project or project.name');
      }
    
      const result = db.transaction(() => {
        const epicIdMap = new Map<number, number>();
        const taskIdMap = new Map<number, number>();
    
        // 1. Create project
        const project = db.prepare(
          'INSERT INTO projects (name, description, status, tags, metadata) VALUES (?, ?, ?, ?, ?) RETURNING *'
        ).get(
          projectData.name,
          projectData.description ?? null,
          projectData.status ?? 'active',
          projectData.tags ?? '[]',
          projectData.metadata ?? '{}'
        ) as Record<string, unknown>;
    
        const newProjectId = project.id as number;
        logActivity(db, 'project', newProjectId, 'created', null, null, null, `Project '${projectData.name}' imported`);
    
        // 2. Create epics and their children
        const epics = (projectData.epics as Array<Record<string, unknown>>) ?? [];
        let epicCount = 0;
        let taskCount = 0;
        let subtaskCount = 0;
        let commentCount = 0;
        let depCount = 0;
    
        // Collect deferred dependencies (need all tasks created first)
        const deferredDeps: Array<{ newTaskId: number; originalDeps: number[] }> = [];
    
        for (const epicData of epics) {
          const epic = db.prepare(
            `INSERT INTO epics (project_id, name, description, status, priority, sort_order, tags, metadata)
             VALUES (?, ?, ?, ?, ?, ?, ?, ?) RETURNING *`
          ).get(
            newProjectId,
            epicData.name,
            epicData.description ?? null,
            epicData.status ?? 'planned',
            epicData.priority ?? 'medium',
            epicData.sort_order ?? 0,
            epicData.tags ?? '[]',
            epicData.metadata ?? '{}'
          ) as Record<string, unknown>;
    
          const newEpicId = epic.id as number;
          if (epicData._original_id != null) {
            epicIdMap.set(epicData._original_id as number, newEpicId);
          }
          epicCount++;
          logActivity(db, 'epic', newEpicId, 'created', null, null, null, `Epic '${epicData.name}' imported`);
    
          // 3. Create tasks
          const tasks = (epicData.tasks as Array<Record<string, unknown>>) ?? [];
          for (const taskData of tasks) {
            const task = db.prepare(
              `INSERT INTO tasks (epic_id, title, description, status, priority, sort_order,
               assigned_to, estimated_hours, actual_hours, due_date, source_ref, tags, metadata)
               VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) RETURNING *`
            ).get(
              newEpicId,
              taskData.title,
              taskData.description ?? null,
              taskData.status ?? 'todo',
              taskData.priority ?? 'medium',
              taskData.sort_order ?? 0,
              taskData.assigned_to ?? null,
              taskData.estimated_hours ?? null,
              taskData.actual_hours ?? null,
              taskData.due_date ?? null,
              taskData.source_ref ?? null,
              taskData.tags ?? '[]',
              taskData.metadata ?? '{}'
            ) as Record<string, unknown>;
    
            const newTaskId = task.id as number;
            if (taskData._original_id != null) {
              taskIdMap.set(taskData._original_id as number, newTaskId);
            }
            taskCount++;
            logActivity(db, 'task', newTaskId, 'created', null, null, null, `Task '${taskData.title}' imported`);
    
            // Defer dependency creation
            const originalDeps = (taskData.depends_on as number[]) ?? [];
            if (originalDeps.length > 0) {
              deferredDeps.push({ newTaskId, originalDeps });
            }
    
            // 4. Create subtasks
            const subtasks = (taskData.subtasks as Array<Record<string, unknown>>) ?? [];
            for (const subtaskData of subtasks) {
              const subtask = db.prepare(
                'INSERT INTO subtasks (task_id, title, status, sort_order) VALUES (?, ?, ?, ?) RETURNING *'
              ).get(
                newTaskId,
                subtaskData.title,
                subtaskData.status ?? 'todo',
                subtaskData.sort_order ?? 0
              ) as Record<string, unknown>;
    
              subtaskCount++;
              logActivity(db, 'subtask', subtask.id as number, 'created', null, null, null, `Subtask '${subtaskData.title}' imported`);
            }
    
            // 5. Create comments
            const comments = (taskData.comments as Array<Record<string, unknown>>) ?? [];
            for (const commentData of comments) {
              db.prepare(
                'INSERT INTO comments (task_id, author, content) VALUES (?, ?, ?)'
              ).run(newTaskId, commentData.author ?? null, commentData.content);
              commentCount++;
            }
          }
        }
    
        // 6. Create dependencies with ID remapping
        const depInsert = db.prepare('INSERT INTO task_dependencies (task_id, depends_on_task_id) VALUES (?, ?)');
        for (const { newTaskId, originalDeps } of deferredDeps) {
          for (const origDepId of originalDeps) {
            const newDepId = taskIdMap.get(origDepId);
            if (newDepId != null) {
              depInsert.run(newTaskId, newDepId);
              depCount++;
            }
          }
        }
    
        // 7. Create notes with ID remapping
        const importNotes = (data.notes as Array<Record<string, unknown>>) ?? [];
        let noteCount = 0;
    
        for (const noteData of importNotes) {
          let relatedEntityType = noteData.related_entity_type as string | null;
          let relatedEntityId: number | null = null;
          const originalId = noteData._original_related_entity_id as number | null;
    
          if (relatedEntityType && originalId != null) {
            if (relatedEntityType === 'project') {
              relatedEntityId = newProjectId;
            } else if (relatedEntityType === 'epic') {
              relatedEntityId = epicIdMap.get(originalId) ?? null;
              if (relatedEntityId === null) relatedEntityType = null;
            } else if (relatedEntityType === 'task') {
              relatedEntityId = taskIdMap.get(originalId) ?? null;
              if (relatedEntityId === null) relatedEntityType = null;
            }
          }
    
          const note = db.prepare(
            `INSERT INTO notes (title, content, note_type, related_entity_type, related_entity_id, tags, metadata)
             VALUES (?, ?, ?, ?, ?, ?, ?) RETURNING *`
          ).get(
            noteData.title,
            noteData.content,
            noteData.note_type ?? 'general',
            relatedEntityType,
            relatedEntityId,
            noteData.tags ?? '[]',
            noteData.metadata ?? '{}'
          ) as Record<string, unknown>;
    
          noteCount++;
          logActivity(db, 'note', note.id as number, 'created', null, null, null, `Note '${noteData.title}' imported`);
        }
    
        return {
          message: 'Import complete.',
          project_id: newProjectId,
          project_name: projectData.name,
          counts: { epics: epicCount, tasks: taskCount, subtasks: subtaskCount, comments: commentCount, dependencies: depCount, notes: noteCount },
        };
      })();
    
      return result;
    }
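The two-phase dependency handling above (create every task first, then insert edges through the ID map) can be sketched in isolation. This is a hypothetical standalone helper, not part of the server's code, but it follows the same rule: an edge is only written when both endpoints were part of the import, so dangling dependencies are silently dropped.

```typescript
// Standalone sketch of the deferred-dependency remapping pattern.
type ExportedTask = { _original_id: number; depends_on: number[] };

function remapDependencies(
  tasks: ExportedTask[],
  allocateId: () => number
): Array<[number, number]> {
  const idMap = new Map<number, number>();
  // Phase 1: every task gets a fresh ID before any edge is written.
  for (const t of tasks) idMap.set(t._original_id, allocateId());

  // Phase 2: translate each original dependency through the map,
  // dropping edges whose target was not part of the import.
  const edges: Array<[number, number]> = [];
  for (const t of tasks) {
    const from = idMap.get(t._original_id)!;
    for (const dep of t.depends_on) {
      const to = idMap.get(dep);
      if (to !== undefined) edges.push([from, to]);
    }
  }
  return edges;
}

// Example: task 10 depends on 20, and on 99 which is absent from the export.
let next = 100;
const edges = remapDependencies(
  [
    { _original_id: 10, depends_on: [20, 99] },
    { _original_id: 20, depends_on: [] },
  ],
  () => next++
);
// edges: [[100, 101]] — the dangling dependency on 99 was dropped
```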
  • The 'tracker_import' tool is registered in the 'handlers' dictionary, which maps tool names to their respective handler functions.
    export const handlers: Record<string, ToolHandler> = {
      tracker_export: handleExport,
      tracker_import: handleImport,
    };
  • The 'tracker_import' tool definition includes its name, description, and input schema specifying the required 'data' object.
    {
      name: 'tracker_import',
      description:
        'Import a project from JSON (matching tracker_export format). Creates all entities with new IDs and remaps references. Uses a transaction for atomicity.',
      annotations: { title: 'Import Project', readOnlyHint: false, destructiveHint: false, idempotentHint: false, openWorldHint: false },
      inputSchema: {
        type: 'object',
        properties: {
          data: {
            type: 'object',
            description: 'Full export JSON object from tracker_export',
          },
        },
        required: ['data'],
      },
    },
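Note that the inputSchema only requires `data` to be an object; the deeper checks (format version, project name) happen inside the handler before the transaction starts. A hedged sketch of that layered validation as a hypothetical pre-flight helper (the function name is invented; the checks mirror the ones in `handleImport`):

```typescript
// Hypothetical pre-flight check mirroring the handler's own validation:
// the schema-level requirement (data is an object) plus the semantic
// checks handleImport performs before opening the transaction.
function validateImportArgs(args: Record<string, unknown>): string | null {
  const data = args.data;
  if (typeof data !== 'object' || data === null) {
    return 'data must be an object';          // inputSchema requirement
  }
  const d = data as Record<string, unknown>;
  const version = d.format_version;
  if (version !== '1.0' && version !== '1.1') {
    return `Unsupported format version: ${String(version)}`;
  }
  const project = d.project as Record<string, unknown> | undefined;
  if (!project || !project.name) {
    return 'missing project or project.name';
  }
  return null; // payload would pass the handler's up-front checks
}
```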
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains that the tool 'Creates all entities with new IDs and remaps references' and 'Uses a transaction for atomicity'. While annotations cover basic safety (readOnlyHint=false, destructiveHint=false), the description provides important implementation details about ID generation, reference handling, and transactional behavior that aren't captured in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded with essential information in just two sentences. Every word earns its place: the first sentence states purpose and format, the second explains key behavioral characteristics. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with good annotations, 100% schema coverage, but no output schema, the description provides strong contextual completeness. It explains the import process, ID behavior, reference remapping, and transactional atomicity. The main gap is lack of information about return values or error conditions, but given the annotations cover safety aspects, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents the single 'data' parameter. The description adds marginal value by specifying the parameter must be 'Full export JSON object from tracker_export', which provides context about the expected format but doesn't add significant semantic information beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Import a project from JSON'), identifies the resource ('project'), and distinguishes it from siblings by referencing 'tracker_export format'. It explicitly differentiates from other import/creation tools by specifying it creates entities with new IDs and remaps references.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Import a project from JSON matching tracker_export format'), but doesn't explicitly state when not to use it or name alternatives. It implies usage for importing entire projects rather than creating individual entities, but lacks explicit exclusions or comparisons to sibling tools like project_create.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
