Insforge MCP Server

bulk-upsert

Insert or update multiple records in a database table using CSV or JSON files. Specify a unique key to handle existing data efficiently.

Instructions

Bulk insert or update data from CSV or JSON file. Supports upsert operations with a unique key.
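Conceptually, an upsert keyed on a unique column updates rows whose key already exists and inserts the rest; with no key, every record is inserted. A minimal in-memory sketch of that behavior (for illustration only — the real work happens server-side):

```typescript
// Minimal in-memory sketch of upsert semantics keyed on a unique column.
type Row = Record<string, string | number>;

function upsert(table: Row[], incoming: Row[], upsertKey?: string): Row[] {
  if (!upsertKey) return [...table, ...incoming]; // no key: plain bulk insert
  const byKey = new Map<string | number, Row>(
    table.map((r) => [r[upsertKey], r] as [string | number, Row])
  );
  for (const row of incoming) {
    // existing key: merge/update; new key: insert
    byKey.set(row[upsertKey], { ...byKey.get(row[upsertKey]), ...row });
  }
  return [...byKey.values()];
}

const users: Row[] = [{ email: "a@x.com", name: "Ann" }];
const imported: Row[] = [
  { email: "a@x.com", name: "Anna" }, // matches existing key -> update
  { email: "b@x.com", name: "Ben" },  // new key -> insert
];
const result = upsert(users, imported, "email");
// result contains two rows, with the first row's name updated to "Anna"
```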

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| apiKey | No | API key for authentication (optional if provided via --api_key) | |
| table | Yes | | |
| upsertKey | No | | |
| filePath | Yes | Path to CSV or JSON file containing data to import | |
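Putting the parameters together, a call to this tool might pass arguments like the following (table name, file path, and key are hypothetical values, not ones from the source):

```typescript
// Illustrative arguments for a bulk-upsert tool call (values are hypothetical).
const args = {
  table: "users",              // required: target table
  filePath: "/tmp/users.csv",  // required: CSV or JSON file to import
  upsertKey: "email",          // optional: unique column for matching existing rows
  // apiKey omitted here: assumed to be supplied via --api_key at server startup
};
```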

Implementation Reference

  • Registration of the 'bulk-upsert' MCP tool, including name, description, Zod input schema (apiKey, table, upsertKey from schema, filePath), and handler function wrapped with usage tracking.
    server.tool(
      'bulk-upsert',
      'Bulk insert or update data from CSV or JSON file. Supports upsert operations with a unique key.',
      {
        apiKey: z
          .string()
          .optional()
          .describe('API key for authentication (optional if provided via --api_key)'),
        ...bulkUpsertRequestSchema.shape,
        filePath: z.string().describe('Path to CSV or JSON file containing data to import'),
      },
      withUsageTracking('bulk-upsert', async ({ apiKey, table, filePath, upsertKey }) => {
        try {
          const actualApiKey = getApiKey(apiKey);
          
          // Read the file
          const fileBuffer = await fs.readFile(filePath);
          const fileName = filePath.split('/').pop() || 'data.csv';
          
          // Create form data for multipart upload. FormData here appears to be the
          // `form-data` npm package (the native fetch FormData has no getHeaders()
          // and no three-argument append(name, buffer, filename)).
          const formData = new FormData();
          formData.append('file', fileBuffer, fileName);
          formData.append('table', table);
          if (upsertKey) {
            formData.append('upsertKey', upsertKey);
          }
          
          const response = await fetch(`${API_BASE_URL}/api/database/advance/bulk-upsert`, {
            method: 'POST',
            headers: {
              'x-api-key': actualApiKey,
              ...formData.getHeaders(),
            },
            body: formData,
          });
          
          const result = await handleApiResponse(response);
          
          // Format the result message
          const message = result.success
            ? `Successfully processed ${result.rowsAffected} of ${result.totalRecords} records into table "${result.table}"`
            : result.message || 'Bulk upsert operation completed';
    
          return await addBackgroundContext({
            content: [
              {
                type: 'text',
                text: formatSuccessMessage('Bulk upsert completed', {
                  message,
                  table: result.table,
                  rowsAffected: result.rowsAffected,
                  totalRecords: result.totalRecords,
                  errors: result.errors,
                }),
              },
            ],
          });
        } catch (error) {
          const errMsg = error instanceof Error ? error.message : 'Unknown error occurred';
          return {
            content: [
              {
                type: 'text',
                text: `Error performing bulk upsert: ${errMsg}`,
              },
            ],
            isError: true,
          };
        }
      })
    );
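The `withUsageTracking` wrapper itself is not shown in this reference. A higher-order wrapper of that shape typically forwards the call and records the tool name and timing around the real handler; a hypothetical sketch, not the actual implementation:

```typescript
// Hypothetical sketch of a usage-tracking wrapper: it returns a handler of the
// same shape, timing each invocation and reporting it before passing results through.
type Handler<A, R> = (args: A) => Promise<R>;

function withUsageTracking<A, R>(toolName: string, handler: Handler<A, R>): Handler<A, R> {
  return async (args: A) => {
    const start = Date.now();
    try {
      return await handler(args);
    } finally {
      // In a real server this would report to an analytics sink instead of stdout.
      console.log(`[usage] ${toolName} took ${Date.now() - start}ms`);
    }
  };
}
```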
  • The handler function for the bulk-upsert tool (shown within the registration excerpt above): it reads the input file, builds multipart form data with the file, table, and optional upsertKey, sends a POST request to the backend bulk-upsert endpoint, handles the response, and returns a formatted success or error message.
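The `handleApiResponse` helper is referenced but not included in this excerpt. A helper of that shape commonly throws on non-2xx statuses and parses the JSON body otherwise; a hypothetical sketch under that assumption:

```typescript
// Hypothetical sketch of a response helper: throw on HTTP errors, parse JSON otherwise.
// The parameter is typed structurally so the sketch works with any fetch-like Response.
async function handleApiResponse(response: {
  ok: boolean;
  status: number;
  json(): Promise<any>;
}): Promise<any> {
  if (!response.ok) {
    throw new Error(`API request failed with status ${response.status}`);
  }
  return response.json();
}
```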
  • Input parameter schema using Zod (shown within the registration excerpt above): an optional apiKey, fields spread from the imported bulkUpsertRequestSchema (likely table and upsertKey), and a required filePath pointing to a CSV or JSON data file.
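Spreading `bulkUpsertRequestSchema.shape` merges that schema's fields into the tool's input object; zod's `.shape` is just a keyed record of field schemas, so the merge semantics are plain object spread. A dependency-free sketch (strings stand in for the zod field schemas):

```typescript
// Dependency-free sketch of how `.shape` spreading composes the input schema.
// In the real code the values are zod field schemas; strings stand in here.
const bulkUpsertShape = {
  table: "string (required)",
  upsertKey: "string (optional)",
};

const inputSchema = {
  apiKey: "string (optional)",
  ...bulkUpsertShape, // contributes `table` and `upsertKey`
  filePath: "string (required)",
};
// inputSchema now has exactly four keys: apiKey, table, upsertKey, filePath
```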
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'bulk insert or update' and 'upsert operations,' which implies mutation capabilities, but doesn't address critical behaviors like authentication requirements (though the schema hints at apiKey), error handling, performance characteristics, rate limits, or what happens on conflicts. The description is too sparse for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences that directly state the tool's function and key feature. Every word earns its place, and it's front-loaded with the core purpose. No unnecessary details or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a bulk mutation tool with 4 parameters, 50% schema coverage, no annotations, and no output schema, the description is incomplete. It lacks details on authentication, error handling, return values, performance implications, and usage context. For a tool that modifies data, this is inadequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (2 of 4 parameters have descriptions). The description adds some context by mentioning 'CSV or JSON file' and 'unique key,' which loosely relates to 'filePath' and 'upsertKey' parameters, but doesn't provide detailed semantics like file format specifics, key constraints, or how upsert works. It partially compensates for the coverage gap but not fully.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Bulk insert or update data from CSV or JSON file. Supports upsert operations with a unique key.' It specifies the verb (bulk insert/update), resource (data), and format (CSV/JSON), but doesn't explicitly differentiate from siblings like 'run-raw-sql' or 'get-table-schema' which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description mentions upsert operations but doesn't specify scenarios where bulk-upsert is preferred over individual operations or other data import methods. There's no mention of prerequisites, limitations, or comparison with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
