Glama
HenkDz

PostgreSQL MCP Server

pg_import_table_data

Import JSON or CSV file data into PostgreSQL tables using specified connection parameters and file paths.

Instructions

Import data from JSON or CSV file into a table

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| connectionString | No | | |
| tableName | Yes | | |
| inputPath | Yes | absolute path to the file to import | |
| truncateFirst | No | | false |
| format | No | | json |
| delimiter | No | | |
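Based on the schema above, a tool call might supply arguments like the following (a hypothetical sketch; the connection string, table name, and path are illustrative only):

```typescript
// Hypothetical arguments for a pg_import_table_data call.
// The connection string, table name, and path are illustrative only.
const importArgs = {
  connectionString: "postgresql://user:pass@localhost:5432/mydb", // optional
  tableName: "users",
  inputPath: "/tmp/users.json", // must be an absolute path
  truncateFirst: false,         // destructive when true: empties the table first
  format: "json" as const,      // "json" (default) or "csv"
  // delimiter is only meaningful with format: "csv"; defaults to ","
};
```

Omitting `truncateFirst` and `format` falls back to the schema defaults (`false` and `json`) applied during Zod validation.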

Implementation Reference

  • Core implementation of the pg_import_table_data tool logic: reads JSON/CSV file, parses data, optionally truncates table, and inserts rows into PostgreSQL table using transactions.
    async function executeImportTableData(
      input: ImportTableDataInput,
      getConnectionString: GetConnectionStringFn
    ): Promise<{ tableName: string; rowCount: number }> {
      const resolvedConnectionString = getConnectionString(input.connectionString);
      const db = DatabaseConnection.getInstance();
      const { tableName, inputPath, truncateFirst, format, delimiter } = input;
      
      try {
        await db.connect(resolvedConnectionString);
        
        const fileContent = await fs.promises.readFile(inputPath, 'utf8');
        
        let dataToImport: Record<string, unknown>[];
        
        if (format === 'csv') {
          const csvDelimiter = delimiter || ',';
          const lines = fileContent.split('\n').filter(line => line.trim()); // Split on newlines and drop blank lines
          
          if (lines.length === 0) {
            return { tableName, rowCount: 0 };
          }
          
          const headers = lines[0].split(csvDelimiter).map(h => h.trim().replace(/^"|"$/g, '')); // Remove surrounding quotes from headers
          
          dataToImport = lines.slice(1).map(line => {
            // Basic CSV parsing, might need a more robust library for complex CSVs
            const values = line.split(csvDelimiter).map(val => val.trim().replace(/^"|"$/g, '').replace(/""/g, '"'));
            return headers.reduce((obj, header, index) => {
              obj[header] = values[index] !== undefined ? values[index] : null;
              return obj;
            }, {} as Record<string, unknown>);
          });
        } else {
          dataToImport = JSON.parse(fileContent);
        }
        
        if (!Array.isArray(dataToImport)) {
          throw new Error('Input file does not contain an array of records');
        }
        
        if (truncateFirst) {
          await db.query(`TRUNCATE TABLE "${tableName}"`); // Identifier is double-quoted; destructive: removes all existing rows
        }
        
        let importedCount = 0;
        if (dataToImport.length > 0) {
          await db.transaction(async (client: import('pg').PoolClient) => {
            for (const record of dataToImport) {
              const columns = Object.keys(record);
              if (columns.length === 0) continue; // Skip empty records
              const values = Object.values(record);
              const placeholders = values.map((_, i) => `$${i + 1}`).join(', ');
              
              const query = `
                INSERT INTO "${tableName}" (${columns.map(c => `"${c}"`).join(', ')})
                VALUES (${placeholders})
              `;
              
              await client.query(query, values);
              importedCount++;
            }
          });
        }
        
        return {
            tableName,
            rowCount: importedCount
        };
      } catch (error) {
        throw new McpError(ErrorCode.InternalError, `Failed to import data: ${error instanceof Error ? error.message : String(error)}`);
      } finally {
        await db.disconnect();
      }
    }
  • Zod input schema defining parameters for pg_import_table_data tool: connection string (optional), table name, input file path, truncate option, format (json/csv), delimiter.
    const ImportTableDataInputSchema = z.object({
      connectionString: z.string().optional(),
      tableName: z.string(),
      inputPath: z.string().describe("absolute path to the file to import"),
      truncateFirst: z.boolean().optional().default(false),
      format: z.enum(['json', 'csv']).optional().default('json'),
      delimiter: z.string().optional(),
    });
  • Tool object definition exporting the pg_import_table_data handler with name, description, schema, and execute method that validates input and calls the core handler.
    export const importTableDataTool: PostgresTool = {
      name: 'pg_import_table_data',
      description: 'Import data from JSON or CSV file into a table',
      inputSchema: ImportTableDataInputSchema,
      async execute(params: unknown, getConnectionString: GetConnectionStringFn): Promise<ToolOutput> {
        const validationResult = ImportTableDataInputSchema.safeParse(params);
        if (!validationResult.success) {
          return { content: [{ type: 'text', text: `Invalid input: ${validationResult.error.format()}` }], isError: true };
        }
        try {
          const result = await executeImportTableData(validationResult.data, getConnectionString);
          return { content: [{ type: 'text', text: `Successfully imported ${result.rowCount} rows into ${result.tableName}` }] };
        } catch (error) {
          const errorMessage = error instanceof McpError ? error.message : (error instanceof Error ? error.message : String(error));
          return { content: [{ type: 'text', text: `Error importing data: ${errorMessage}` }], isError: true };
        }
      }
    };
  • src/index.ts:254-254 (registration)
    Inclusion of importTableDataTool in the allTools array, which populates the MCP server's available tools list.
    importTableDataTool,
  • src/index.ts:22-22 (registration)
    Import statement bringing importTableDataTool into the main index file for server registration.
    import { exportTableDataTool, importTableDataTool, copyBetweenDatabasesTool } from './tools/migration.js';
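The core handler's inline comment warns that its CSV path uses basic string splitting. A self-contained re-creation of that parsing logic (not the module's actual export) shows why the comment suggests a more robust CSV library: a quoted field containing the delimiter gets split apart.

```typescript
// Minimal re-creation of the naive CSV parsing used in executeImportTableData.
// Illustrative only; the real handler reads from a file and inserts rows.
function parseCsvNaive(content: string, delimiter = ","): Record<string, unknown>[] {
  const lines = content.split("\n").filter(line => line.trim());
  if (lines.length === 0) return [];
  const headers = lines[0].split(delimiter).map(h => h.trim().replace(/^"|"$/g, ""));
  return lines.slice(1).map(line => {
    const values = line
      .split(delimiter)
      .map(v => v.trim().replace(/^"|"$/g, "").replace(/""/g, '"'));
    return headers.reduce((obj, header, index) => {
      obj[header] = values[index] !== undefined ? values[index] : null;
      return obj;
    }, {} as Record<string, unknown>);
  });
}

// Works for simple rows...
const ok = parseCsvNaive("id,name\n1,Alice");
// ...but a quoted field containing the delimiter is mis-split:
const broken = parseCsvNaive('id,name\n1,"Doe, Jane"');
```

Here `broken[0].name` comes back as `"Doe"` rather than `"Doe, Jane"`, which is exactly the class of input a dedicated CSV parser would handle.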
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the import action but fails to describe critical behaviors: whether it overwrites existing data (hinted by 'truncateFirst' parameter but not explained), authentication needs (implied by 'connectionString' but not stated), error handling, or output format. This leaves significant gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
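One concrete way to close this disclosure gap is to attach MCP tool annotations next to a fuller description. A hedged sketch follows; the property names mirror the MCP ToolAnnotations shape, but how this particular server would wire them into PostgresTool is an assumption:

```typescript
// Hypothetical annotations for pg_import_table_data. Property names follow the
// MCP ToolAnnotations shape; integration with PostgresTool is assumed, not shown.
const importTableDataAnnotations = {
  title: "Import Table Data",
  readOnlyHint: false,    // this tool writes rows
  destructiveHint: true,  // truncateFirst: true deletes all existing rows first
  idempotentHint: false,  // re-running without truncation inserts duplicate rows
  openWorldHint: false,   // operates only on the connected database
};
```

Even without annotation support, the same facts could be folded into the description string itself.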

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part ('Import data from JSON or CSV file into a table') contributes directly to understanding the tool's function, making it appropriately concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with 6 parameters, low schema coverage (17%), no annotations, and no output schema, the description is insufficient. It lacks details on behavior (e.g., data overwriting, error cases), parameter usage, and comparison to siblings, leaving the agent with inadequate context to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (17%), with only 'inputPath' documented. The description adds minimal value by mentioning JSON/CSV formats, which aligns with the 'format' enum, but doesn't explain other parameters like 'connectionString', 'truncateFirst', or 'delimiter'. It partially compensates for the coverage gap but leaves most parameters semantically unclear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Import data') and resource ('from JSON or CSV file into a table'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like pg_export_table_data or pg_copy_between_databases, which would require more specific context about when to choose import vs. other data movement operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., table must exist), compare to siblings like pg_copy_between_databases for database-to-database transfers, or specify scenarios where import is preferred over direct SQL execution via pg_execute_mutation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
