Supabase bulk upload: import large CSV files without timeouts

Supabase is great. Its CSV import isn't.
If you've tried uploading a large CSV through the Supabase dashboard, you've probably hit one of these walls: the 100MB file size limit, browser crashes mid-upload, or timeouts that leave you guessing whether any data made it through.
This post covers why Supabase bulk uploads fail, what's happening under the hood, and how to reliably import large CSV files into your Supabase database.
Why Supabase CSV imports time out
The dashboard CSV import works fine for small files. Once you cross into hundreds of thousands of rows, three limitations collide.
The 100MB dashboard limit
Supabase documentation states it directly:
"Supabase dashboard provides a user-friendly way to import data. However, for very large datasets, this method may not be the most efficient choice, given the size limit is 100MB."
Files larger than 100MB won't upload through the dashboard. No warning, no partial upload. It fails.
Short default timeouts
Supabase applies different statement timeouts based on the role making the request:
| Role | Default Timeout |
|---|---|
| anon | 3 seconds |
| authenticated | 8 seconds |
| service_role | None (inherits 8s from authenticator) |
| postgres | None (capped at 2 minutes globally) |
Dashboard and client queries have a maximum configurable timeout of 60 seconds. A bulk import with hundreds of thousands of rows will exceed this, even if your file is under 100MB.
COPY command restrictions
PostgreSQL's COPY command is the fastest way to bulk load data. On hosted Supabase, you can't use it.
A Supabase team member confirmed in a GitHub discussion:
"You are not Superuser with hosted Supabase. You have a lot of privileges, but not all... You don't have the ability to do Copy [from file]."
This means the fastest PostgreSQL bulk loading method is unavailable unless you connect directly via psql.
How slow are bulk inserts without COPY?
Benchmark data from TigerData shows the performance difference:
| Method | 1M Rows | 25M Rows | 50M Rows | 100M Rows |
|---|---|---|---|---|
| COPY | 4.3 sec | 73 sec | 166 sec | 316 sec |
| Batch INSERT (20K rows) | 32.5 sec | 566 sec | 1,208 sec | 2,653 sec |
| Single INSERT | 1,067 sec | 23,964 sec | 47,976 sec | 94,623 sec |
In these benchmarks, COPY is roughly 8x faster than batched inserts and 250-300x faster than single-row inserts.
When you're forced to use INSERT statements (which Supabase's client libraries do), bulk uploads take significantly longer and are more likely to hit timeout limits.
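To see why, here's a minimal sketch of the naive approach with supabase-js: every parsed row goes into one `insert()` call. The table name `contacts` and the credential placeholders are made up for illustration.

```typescript
import { createClient } from '@supabase/supabase-js';

// Placeholder project URL and key; substitute your own.
const supabase = createClient(
  'https://your-project.supabase.co',
  'YOUR_SUPABASE_KEY'
);

// rows: everything parsed from the CSV, e.g. [{ name: '...', email: '...' }, ...]
async function naiveImport(rows: Record<string, unknown>[]) {
  // A single INSERT carrying every row has to finish inside one statement
  // timeout (3-8 seconds by default), so large files rarely make it through.
  const { error } = await supabase.from('contacts').insert(rows);
  if (error) throw error;
}
```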
Row-level security makes it worse
If your table has RLS enabled, bulk inserts slow down further. Supabase's documentation on RLS performance shows the impact:
| RLS Pattern | Execution Time (100K rows) |
|---|---|
| `auth.uid() = user_id` (no index) | 171ms |
| `auth.uid() = user_id` (indexed) | Less than 0.1ms |
| Complex RLS with table joins | 9,000-178,000ms |
| `(select auth.uid()) = user_id` (wrapped) | 9ms |
Complex RLS policies can slow bulk inserts by 100-1000x. For large imports, either add indexes on columns used in RLS policies, use the service_role key to bypass RLS, or temporarily disable RLS during import.
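If you take the service_role route, keep the key strictly server-side. A minimal sketch, assuming supabase-js and environment variables named SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY (names chosen here for illustration):

```typescript
import { createClient } from '@supabase/supabase-js';

// Server-side only: the service_role key bypasses RLS entirely,
// so per-row policy checks no longer add to insert time.
// Never ship this key to the browser.
const admin = createClient(
  process.env.SUPABASE_URL!,              // e.g. https://your-project.supabase.co
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Use `admin` (not the anon client) for the bulk insert itself.
```

Requests made through this client skip RLS evaluation, which removes the per-row policy overhead shown in the table above.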
Real developers hitting these limits
This isn't theoretical. Here's what developers encounter:
One Reddit user trying to import 65 million rows (~1.5 GB) reported:
- psql connection "got stuck and timed out"
- Split files into 93MB chunks, but dashboard "crashes browser"
- psql COPY command "hangs and never completes"
Another developer with a 6GB CSV file:
"I'm trying to upload a ~6GB CSV to a new table in Supabase and it seems like it starts breaking around 250-450k records."
A developer using Edge Functions for bulk insert hit CPU limits:
- Uploading ~50,000 rows (~4MB) from Storage
- Using Supabase JS `insert()`/`upsert()` methods: "CPU Time exceeded" error
- Cannot use COPY command due to superuser restrictions
Supabase's recommended approach
Supabase documentation recommends these steps for large imports:
- Back up your data
- Increase statement timeouts
- Estimate required disk size
- Disable triggers temporarily
- Rebuild indices after import completes
The recommended tools are pgloader (for migrations), psql with COPY (requires direct connection), or the Supabase API with a warning: "When importing data via the Supabase API, it is advisable to refrain from bulk imports."
For many developers, especially those building with AI coding tools or no-code platforms, this manual process isn't practical.
The solution: chunked uploads with validation
The reliable way to bulk upload to Supabase is chunking: break your large file into smaller batches, insert each batch within the timeout window, and handle errors per-batch instead of failing the entire import.
A batch size of 500-1,000 rows works well in practice: each insert stays comfortably under the timeout limit without generating an excessive number of round trips.
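Here's a minimal sketch of that pattern with supabase-js, assuming a server-side client and a placeholder table called contacts; the rows are whatever you've already parsed from the CSV.

```typescript
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

const BATCH_SIZE = 500; // 500-1,000 rows keeps each statement well inside the timeout

async function chunkedImport(rows: Record<string, unknown>[]) {
  const failedBatches: { startRow: number; message: string }[] = [];

  for (let i = 0; i < rows.length; i += BATCH_SIZE) {
    const batch = rows.slice(i, i + BATCH_SIZE);

    // Each batch is its own INSERT, so one bad batch doesn't sink the whole import.
    const { error } = await supabase.from('contacts').insert(batch);

    if (error) {
      failedBatches.push({ startRow: i, message: error.message });
    }
  }

  // Retry these or surface them to the user instead of restarting from scratch.
  return failedBatches;
}
```

Error handling happens per batch: the return value tells you which row ranges failed, so a retry only touches those rows.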
Comparison: Supabase dashboard vs ImportCSV
| Feature | Supabase Dashboard | ImportCSV |
|---|---|---|
| File size limit | 100MB | Unlimited (chunked) |
| Column mapping | Headers must match exactly | AI-assisted mapping |
| Data validation | None (data goes straight to table) | Custom validation rules |
| End-user facing | No (admin only) | Yes (embeddable) |
| Large file handling | Times out | Chunked uploads |
| Error handling | Import fails silently | Visual inline fixes |
How to connect ImportCSV to Supabase
ImportCSV handles chunking, validation, and error recovery automatically. Here's how to set it up.
Step 1: Get your Supabase credentials
From your Supabase dashboard, copy:
- Project URL (Settings > API)
- Service role key (for server-side imports that bypass RLS)
Step 2: Add Supabase as a destination
In ImportCSV, create a new destination and select Supabase. Paste your project URL and service role key.
Step 3: Create an importer with your table schema
Define the columns that map to your Supabase table. ImportCSV's AI-assisted mapping handles messy spreadsheets where column headers don't match your schema exactly.
Step 4: Embed the importer in your app
```tsx
import { CSVImporter } from '@importcsv/react';

function DataImportPage() {
  return (
    <CSVImporter
      importerKey="YOUR_IMPORTER_KEY"
      onComplete={(result) => {
        console.log(`Imported ${result.rowCount} rows to Supabase`);
      }}
      onError={(error) => {
        console.error('Import failed:', error.message);
      }}
    />
  );
}

export default DataImportPage;
```

The component handles file parsing, validation, column mapping, and chunked uploads to your Supabase table. Users see validation errors inline and can fix them before the data hits your database.
Comparison: manual chunking vs ImportCSV
| Task | Manual Approach | With ImportCSV |
|---|---|---|
| Split large files | Python script to split CSV into chunks | Automatic |
| Column mapping | Require exact header matches or write mapping code | AI maps columns, user confirms |
| Validation | Write validation logic, handle edge cases | Declarative rules in dashboard |
| Error handling | Parse error messages, show to user, retry | Visual inline fixes |
| Timeout management | Calculate batch sizes, implement retry logic | Handled automatically |
When to use each approach
Use Supabase dashboard when:
- Files are under 100MB
- You're doing a one-time admin import
- Column headers already match your table schema
- You don't need validation
Use psql COPY when:
- You have direct database access
- Files are very large (1GB+)
- You're comfortable with command line tools
- This is a one-time migration, not a recurring process
Use ImportCSV when:
- End users need to upload CSVs to your app
- Files may exceed 100MB or 250K+ rows
- Column headers vary and need mapping
- You need validation before data hits your table
- You want error handling without custom code
Get started
Stop fighting timeouts. Connect ImportCSV to your Supabase project in 2 clicks.
Start free - no credit card required.
Wrap-up
CSV imports shouldn't slow you down. ImportCSV is built to fit into your workflow, whether you're building data import flows, handling customer uploads, or processing large datasets.
If that sounds like the kind of tooling you want, try ImportCSV.