CSV to SQL: Import Spreadsheet Data to Any Database (2026 Guide)

Getting a CSV into a SQL database shouldn't require a PhD. But most guides cover only one database, ignore the fast bulk methods, or assume you want to click through a GUI.
This guide covers the actual import methods—the ones that handle 100,000+ rows without timing out. PostgreSQL's COPY. MySQL's LOAD DATA INFILE. SQL Server's BULK INSERT. Plus a universal Node.js approach when you need custom validation.
Pick your database and skip to that section. Or read through for the full picture.
Quick Reference: Which Method to Use
| Method | Database | Speed | Best For |
|---|---|---|---|
| COPY | PostgreSQL | ~730K rows/sec | Server-side imports, ETL pipelines |
| \copy | PostgreSQL | ~150K rows/sec | Remote imports, no server access |
| LOAD DATA | MySQL | ~500K rows/sec | Bulk MySQL imports |
| BULK INSERT | SQL Server | Variable | SQL Server environments |
| .import | SQLite | Fast | Local databases, dev environments |
| Node.js batch | Any | ~50K rows/sec | Custom validation, any database |
| INSERT statements | Any | Slow | Small files (under 1000 rows) |
The common mistake: Using INSERT statements for bulk imports. One INSERT per row means 10,000 database round trips for 10,000 rows. The bulk methods handle the same data in one operation.
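If you're stuck generating plain SQL anyway (tiny files, no access to bulk tooling), at least batch rows into multi-row INSERT statements. A minimal sketch of the difference, using example values against the users table from this guide:
```sql
-- One round trip per row: 10,000 rows means 10,000 network hops
INSERT INTO users (name, email) VALUES ('Ada Lovelace', 'ada@example.com');
INSERT INTO users (name, email) VALUES ('Grace Hopper', 'grace@example.com');

-- One round trip for the whole batch
-- (still far slower than COPY/LOAD DATA, but fine for small files)
INSERT INTO users (name, email) VALUES
    ('Ada Lovelace', 'ada@example.com'),
    ('Grace Hopper', 'grace@example.com'),
    ('Margaret Hamilton', 'margaret@example.com');
```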
PostgreSQL: COPY Command
PostgreSQL's COPY is fast. Really fast. In our testing on a standard M1 Mac, server-side COPY processes around 730,000 rows per second. Client-side \copy hits about 150,000 rows per second.
Server-Side COPY (Fastest)
Use this when the CSV file is accessible from the PostgreSQL server's filesystem:
-- Basic import
COPY users (name, email, created_at)
FROM '/data/users.csv'
WITH (FORMAT CSV, HEADER true);
-- With more options
COPY users (name, email, role, created_at)
FROM '/data/users.csv'
WITH (
    FORMAT CSV,
    HEADER true,
    DELIMITER ',',
    NULL '',
    ENCODING 'UTF8'
);
The catch: The file must be readable by the postgres system user. For Docker setups, mount the directory. For remote servers, you'll need to upload the file first.
Client-Side \copy (For Remote Files)
When you can't put the file on the server, use \copy in psql. Same syntax, different execution:
psql -d mydb -c "\copy users(name, email, created_at) FROM './users.csv' WITH (FORMAT CSV, HEADER true)"
This streams data from your local machine through the psql client. Slower than server-side COPY, but works anywhere you can connect.
Handling Errors
COPY is all-or-nothing by default. One bad row kills the entire import. Fix this with a staging table approach:
-- Create a staging table with text columns so malformed values can't abort the load
-- (a LIKE copy of the real table would reject bad types and constraint violations)
CREATE TEMP TABLE users_staging (
    name text,
    email text,
    created_at text
);
-- Import to staging
\copy users_staging FROM './users.csv' WITH (FORMAT CSV, HEADER true)
-- Insert valid rows to production, casting as you go
INSERT INTO users (name, email, created_at)
SELECT name, email, created_at::timestamptz -- adjust the cast to your column type
FROM users_staging
WHERE email ~ '^[^@]+@[^@]+\.[^@]+$' -- Basic email validation
AND name IS NOT NULL;
-- Check what was skipped
SELECT * FROM users_staging
WHERE email !~ '^[^@]+@[^@]+\.[^@]+$'
OR name IS NULL;
For a deep dive on PostgreSQL imports, see our PostgreSQL CSV Import guide with benchmarks and troubleshooting.
MySQL: LOAD DATA INFILE
MySQL's LOAD DATA INFILE is the equivalent of PostgreSQL's COPY. Fast bulk loading, handles large files efficiently.
Basic Import
LOAD DATA INFILE '/var/lib/mysql-files/users.csv'
INTO TABLE users
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS
(name, email, @created)
SET created_at = STR_TO_DATE(@created, '%Y-%m-%d');
Note: MySQL 8.0+ restricts LOAD DATA INFILE to the secure_file_priv directory by default. Check yours:
SHOW VARIABLES LIKE 'secure_file_priv';
LOCAL Variant (From Client)
When the file is on your local machine, not the server:
LOAD DATA LOCAL INFILE './users.csv'
INTO TABLE users
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
You'll need local_infile=1 in your MySQL client and server configuration. Some hosted MySQL services disable this for security.
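If you control the server, you can check and flip the server-side setting from a SQL session. A quick sketch (requires SUPER or SYSTEM_VARIABLES_ADMIN; the client still needs local_infile enabled on its end, e.g. mysql --local-infile=1):
```sql
-- Is LOAD DATA LOCAL currently allowed on the server?
SHOW GLOBAL VARIABLES LIKE 'local_infile';

-- Enable it (privileged; some managed MySQL services block this entirely)
SET GLOBAL local_infile = 1;
```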
Handling Windows Line Endings
CSVs from Excel often have \r\n line endings:
LOAD DATA INFILE '/path/to/file.csv'
INTO TABLE users
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n' -- Windows-style
IGNORE 1 ROWS;
Character Encoding Issues
Excel exports often use Windows-1252 encoding, not UTF-8:
LOAD DATA INFILE '/path/to/file.csv'
INTO TABLE users
CHARACTER SET latin1 -- MySQL's latin1 is Windows-1252 (cp1252)
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
SQL Server: BULK INSERT
SQL Server's BULK INSERT handles large CSV imports. The syntax is more verbose than PostgreSQL or MySQL, but it works.
Basic Import
BULK INSERT users
FROM 'C:\data\users.csv'
WITH (
    FORMAT = 'CSV',
    FIRSTROW = 2,          -- Skip header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK                -- Lock entire table for speed
);
With Format File (For Complex Mappings)
When your CSV columns don't match your table columns:
<!-- users.fmt -->
<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="100"/>
    <FIELD ID="2" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="255"/>
    <FIELD ID="3" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="50"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="name" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="2" NAME="email" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="3" NAME="role" xsi:type="SQLVARYCHAR"/>
  </ROW>
</BCPFORMAT>
BULK INSERT users
FROM 'C:\data\users.csv'
WITH (FORMATFILE = 'C:\data\users.fmt');
SSMS Import Wizard
For one-off imports, SQL Server Management Studio has a GUI:
- Right-click database → Tasks → Import Flat File
- Select your CSV
- Review column mappings
- Execute
It works for small files. For anything automated or over a few thousand rows, script it with BULK INSERT.
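If you do script it and want the load to survive a few malformed rows, BULK INSERT can route rejects to an error file instead of aborting the whole statement. A sketch for a simple, unquoted CSV (the paths and error limit are placeholders):
```sql
BULK INSERT users
FROM 'C:\data\users.csv'
WITH (
    FIRSTROW = 2,                           -- Skip header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    MAXERRORS = 50,                         -- Keep going past up to 50 bad rows
    ERRORFILE = 'C:\data\users_errors.log', -- Rejected rows are written here
    TABLOCK
);
```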
SQLite: .import Command
SQLite handles CSV imports through the .import command in the sqlite3 CLI. Simple and fast for local development.
# In sqlite3 shell
.mode csv
.import users.csv users
# Or one-liner
sqlite3 database.db ".mode csv" ".import users.csv users"
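One thing to watch: if the target table doesn't exist yet, .import creates it from the CSV header row with every column typed TEXT and no constraints. Create the table yourself first if you want real types or constraints. An illustrative schema:
```sql
-- Define the table up front so the import gets proper constraints,
-- instead of the all-TEXT table .import would generate
CREATE TABLE IF NOT EXISTS users (
    name       TEXT NOT NULL,
    email      TEXT UNIQUE,
    role       TEXT DEFAULT 'user',
    created_at TEXT
);
```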
Handling Headers
By default, .import treats the first row as data. To skip it:
sqlite3 database.db <<EOF
.mode csv
.import '|tail -n +2 users.csv' users
EOF
Python Alternative
For programmatic access, Python's sqlite3 module works well:
import csv
import sqlite3
conn = sqlite3.connect('database.db')
cursor = conn.cursor()
with open('users.csv', 'r') as f:
    reader = csv.DictReader(f)
    for row in reader:
        cursor.execute(
            'INSERT INTO users (name, email, role) VALUES (?, ?, ?)',
            (row['name'], row['email'], row.get('role', 'user'))
        )
conn.commit()
conn.close()
Universal Approach: Node.js
When you need custom validation, transformation, or the same code to work across databases, use Node.js with batch inserts.
import { parse } from 'csv-parse';
import { createReadStream } from 'fs';
import { Pool } from 'pg'; // Swap for mysql2, mssql, better-sqlite3
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
async function importCSV(filePath: string) {
  const records: Array<{ name: string; email: string; role: string }> = [];
  const parser = createReadStream(filePath).pipe(
    parse({
      columns: true,
      skip_empty_lines: true,
      trim: true,
    })
  );
  for await (const record of parser) {
    // Validate/transform here
    if (!record.email?.includes('@')) continue;
    records.push({
      name: record.name?.trim() || 'Unknown',
      email: record.email.toLowerCase(),
      role: record.role || 'user',
    });
    // Batch insert every 1000 rows
    if (records.length >= 1000) {
      await batchInsert(records);
      records.length = 0;
    }
  }
  // Insert remaining
  if (records.length > 0) {
    await batchInsert(records);
  }
}
async function batchInsert(records: Array<{ name: string; email: string; role: string }>) {
  const values: unknown[] = [];
  const placeholders = records.map((r, i) => {
    const offset = i * 3;
    values.push(r.name, r.email, r.role);
    return `($${offset + 1}, $${offset + 2}, $${offset + 3})`;
  });
  await pool.query(
    `INSERT INTO users (name, email, role) VALUES ${placeholders.join(', ')}
     ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name`,
    values
  );
}
This approach is slower than native bulk methods (~50K rows/sec vs 500K+), but gives you:
- Custom validation logic
- Row-level error handling
- Same code for any database (just swap the client library)
- No file system access requirements
Online Converter Tools
For quick one-off conversions, online tools work:
- convertcsv.com/csv-to-sql.htm - Generates INSERT statements
- tableconvert.com/csv-to-sql - Similar, with more format options
Limitations:
- Most cap at a few MB
- Generate INSERTs, not bulk commands (slow for large imports)
- No validation or error handling
- Don't use for sensitive data
Use these for quick tests. For production data, use the methods above.
Choosing the Right Method
Use native bulk commands (COPY/LOAD DATA/BULK INSERT) when:
- Importing to a single known database
- File is accessible from server or client
- Speed matters
- You don't need custom validation
Use Node.js/Python batch approach when:
- Importing to multiple database types
- Need custom validation or transformation
- Can't access native bulk commands (hosted databases)
- Need detailed per-row error handling
Use GUI tools when:
- One-off imports by non-developers
- Exploring data before committing to a structure
- You don't mind clicking through menus
Common Problems and Fixes
"Permission denied" on COPY
PostgreSQL's COPY FROM requires the postgres system user to read the file:
chmod 644 /data/users.csv
chown postgres:postgres /data/users.csv
Or use \copy from psql instead.
Encoding errors
Files from Excel are often Windows-1252, not UTF-8:
# Convert before importing
iconv -f WINDOWS-1252 -t UTF-8 input.csv > output.csv
Or specify encoding in your import command.
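For PostgreSQL, that means telling COPY what the file's encoding is so the server converts it during the load. A sketch (WIN1252 is PostgreSQL's name for Windows-1252):
```sql
COPY users (name, email, created_at)
FROM '/data/users.csv'
WITH (FORMAT CSV, HEADER true, ENCODING 'WIN1252');
```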
Dates in wrong format
Excel loves MM/DD/YYYY. Databases want YYYY-MM-DD. Convert in your import:
PostgreSQL:
SET datestyle = 'MDY';
\copy users FROM './data.csv' ...
MySQL:
SET created_at = STR_TO_DATE(@date_col, '%m/%d/%Y')
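In PostgreSQL, if you'd rather not touch the session datestyle, you can also land the date as plain text in a staging table and convert it explicitly on the way in. A sketch using hypothetical staging names:
```sql
-- raw_date is a text column in a staging table (names are illustrative)
INSERT INTO users (name, email, created_at)
SELECT name, email, to_date(raw_date, 'MM/DD/YYYY')
FROM users_staging;
```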
Too many columns
Your CSV has more columns than your table:
PostgreSQL: Specify columns explicitly:
COPY users (name, email) FROM ...
MySQL: Use variable syntax:
LOAD DATA ... (name, email, @ignored, @also_ignored)
When Building This Yourself Isn't Worth It
If your import needs are simple—same CSV format every time, developers doing the import—the methods above work great.
But if you need:
- Users to map their columns to your schema
- Validation rules that change frequently
- Support for multiple CSV formats hitting the same endpoint
- A nice UI instead of command-line tools
Then you're building an entire data import product, not just running a COPY command.
ImportCSV handles the messy parts—column mapping UI, validation, error handling—so you receive clean data via webhook. Might be worth a look before spending a week on import infrastructure.
FAQ
Which bulk method is fastest?
PostgreSQL COPY is typically fastest (700K+ rows/sec on modern hardware). MySQL LOAD DATA is close behind. SQL Server BULK INSERT varies more based on configuration. All are orders of magnitude faster than individual INSERTs.
Can I use these methods with hosted databases?
Depends. PostgreSQL's \copy works with any connection. MySQL LOCAL INFILE needs server configuration that many hosts disable. For RDS, Aurora, Cloud SQL—check their specific documentation. When in doubt, the Node.js batch approach works everywhere.
How do I handle CSVs with inconsistent columns?
Two options: preprocess the CSV to standardize columns, or use the Node.js approach where you can handle missing/extra columns in code. Native bulk commands expect consistent structure.
What about transactions?
PostgreSQL COPY and MySQL LOAD DATA run in a transaction by default—all-or-nothing. For partial success (import good rows, skip bad ones), use a staging table or the programmatic approach.
Wrap-up
CSV imports shouldn't slow you down. ImportCSV aims to fit into your workflow, whether you're building data import flows, handling customer uploads, or processing large datasets.
If that sounds like the kind of tooling you want to use, try ImportCSV.