January 15, 2026

Supabase bulk upload: import large CSV files without timeouts

Supabase is great. Its CSV import isn't.

If you've tried uploading a large CSV through the Supabase dashboard, you've probably hit one of these walls: the 100MB file size limit, browser crashes mid-upload, or timeouts that leave you guessing whether any data made it through.

This post covers why Supabase bulk uploads fail, what's happening under the hood, and how to reliably import large CSV files into your Supabase database.

Why Supabase CSV imports time out

The dashboard CSV import works fine for small files. Once you cross into hundreds of thousands of rows, three limitations collide.

The 100MB dashboard limit

Supabase documentation states it directly:

"Supabase dashboard provides a user-friendly way to import data. However, for very large datasets, this method may not be the most efficient choice, given the size limit is 100MB."

Files larger than 100MB won't upload through the dashboard. No warning, no partial upload. It fails.

Short default timeouts

Supabase applies different statement timeouts based on the role making the request:

| Role | Default Timeout |
| --- | --- |
| anon | 3 seconds |
| authenticated | 8 seconds |
| service_role | None (inherits 8s from authenticator) |
| postgres | None (capped at 2 minutes globally) |

Dashboard and client queries have a maximum configurable timeout of 60 seconds. A bulk import with hundreds of thousands of rows will exceed this, even if your file is under 100MB.
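
If you control the connection, you can raise these limits before a large import. A minimal SQL sketch to run from the SQL editor (the role names are real Supabase roles, but the values are illustrative; role-level settings only take effect on new connections):

-- Raise the timeout for API requests made with the authenticated role
alter role authenticated set statement_timeout = '60s';

-- Or raise it just for the current session over a direct connection
set statement_timeout = '10min';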

COPY command restrictions

PostgreSQL's COPY command is the fastest way to bulk load data. On hosted Supabase, you can't use it.

A Supabase team member confirmed in a GitHub discussion:

"You are not Superuser with hosted Supabase. You have a lot of privileges, but not all... You don't have the ability to do Copy [from file]."

This means the fastest PostgreSQL bulk loading method is unavailable unless you connect directly via psql.
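
What you can still do is run psql's client-side \copy over a direct connection: psql reads the file locally and streams it to the server, so no superuser privilege is needed. A minimal sketch, assuming a table named orders and a local orders.csv with a header row (run it on a single line, inside a psql session opened with the direct connection string from your dashboard):

\copy orders (id, customer_id, total) from 'orders.csv' with (format csv, header true)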

How slow are bulk inserts without COPY?

Benchmark data from TigerData shows the performance difference:

| Method | 1M Rows | 25M Rows | 50M Rows | 100M Rows |
| --- | --- | --- | --- | --- |
| COPY | 4.3 sec | 73 sec | 166 sec | 316 sec |
| Batch INSERT (20K rows) | 32.5 sec | 566 sec | 1,208 sec | 2,653 sec |
| Single INSERT | 1,067 sec | 23,964 sec | 47,976 sec | 94,623 sec |

COPY is approximately 8x faster than batched inserts and roughly 250-300x faster than single-row inserts.

When you're forced to use INSERT statements (which Supabase's client libraries do), bulk uploads take significantly longer and are more likely to hit timeout limits.
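
For context, "batch INSERT" in that benchmark means packing many rows into one statement instead of issuing one statement per row. A minimal SQL sketch with an illustrative orders table:

-- One statement per row: one round trip (and usually one transaction) each
insert into orders (id, total) values (1, 19.99);
insert into orders (id, total) values (2, 4.50);

-- Batched: hundreds or thousands of rows per statement
insert into orders (id, total) values
  (1, 19.99),
  (2, 4.50),
  (3, 12.00);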

Row-level security makes it worse

If your table has RLS enabled, bulk inserts slow down further. Supabase's documentation on RLS performance shows the impact:

| RLS Pattern | Execution Time (100K rows) |
| --- | --- |
| auth.uid() = user_id (no index) | 171ms |
| auth.uid() = user_id (indexed) | Less than 0.1ms |
| Complex RLS with table joins | 9,000-178,000ms |
| (select auth.uid()) = user_id (wrapped) | 9ms |

Complex RLS policies can slow bulk inserts by 100-1000x. For large imports, either add indexes on columns used in RLS policies, use the service_role key to bypass RLS, or temporarily disable RLS during import.
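
Two of the cheapest fixes, indexing the policy column and wrapping auth.uid() in a sub-select, are one-liners in the SQL editor. A minimal sketch, assuming a table named orders with a user_id column (the policy and index names are illustrative):

-- Index the column your RLS policy filters on
create index if not exists orders_user_id_idx on orders (user_id);

-- Wrap auth.uid() in a sub-select so Postgres evaluates it once per statement, not once per row
create policy "Users can insert their own orders"
  on orders for insert
  with check ((select auth.uid()) = user_id);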

Real developers hitting these limits

This isn't theoretical. Here's what developers encounter:

One Reddit user trying to import 65 million rows (~1.5 GB) reported:

  • psql connection "got stuck and timed out"
  • Split files into 93MB chunks, but dashboard "crashes browser"
  • psql COPY command "hangs and never completes"

Another developer with a 6GB CSV file:

"I'm trying to upload a ~6GB CSV to a new table in Supabase and it seems like it starts breaking around 250-450k records."

A developer using Edge Functions for bulk insert hit CPU limits:

  • Uploading ~50,000 rows (~4MB) from Storage
  • Using Supabase JS insert()/upsert() methods: "CPU Time exceeded" error
  • Cannot use COPY command due to superuser restrictions

Supabase documentation recommends these steps for large imports:

  1. Back up your data
  2. Increase statement timeouts
  3. Estimate required disk size
  4. Disable triggers temporarily
  5. Rebuild indices after import completes

The recommended tools are pgloader (for migrations), psql with COPY (requires direct connection), or the Supabase API with a warning: "When importing data via the Supabase API, it is advisable to refrain from bulk imports."
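
Steps 2, 4, and 5 from that list translate to a handful of SQL statements over a direct connection. A minimal sketch, assuming a table named orders (disable trigger user only affects user-defined triggers; internally generated constraint triggers can't be disabled without superuser anyway):

-- Step 2: raise the timeout for this session before starting the import
set statement_timeout = '30min';

-- Step 4: disable user-defined triggers on the target table during the import
alter table orders disable trigger user;
-- ... run the import ...
alter table orders enable trigger user;

-- Step 5: rebuild the table's indexes once the import completes
reindex table orders;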

For many developers, especially those building with AI coding tools or no-code platforms, this manual process isn't practical.

The solution: chunked uploads with validation

The reliable way to bulk upload to Supabase is chunking: break your large file into smaller batches, insert each batch within the timeout window, and handle errors per-batch instead of failing the entire import.

A batch size of 500-1,000 rows works well in practice: each insert stays comfortably under the timeout limit while keeping the number of round trips manageable.
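
If you'd rather wire this up yourself with supabase-js, the pattern looks like the sketch below. It's a minimal TypeScript example, assuming a contacts table, an already-parsed array of row objects, and a server-side environment where the service role key is safe to use (table, column, and variable names are illustrative):

import { createClient } from '@supabase/supabase-js';

// Server-side only: the service role key bypasses RLS.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

type ContactRow = { email: string; full_name: string };

// Insert rows in batches so each request stays well under the statement timeout.
async function bulkInsert(rows: ContactRow[], batchSize = 1000) {
  for (let start = 0; start < rows.length; start += batchSize) {
    const batch = rows.slice(start, start + batchSize);
    const { error } = await supabase.from('contacts').insert(batch);

    if (error) {
      // Fail (or retry) per batch instead of losing the whole import.
      throw new Error(`Batch starting at row ${start} failed: ${error.message}`);
    }
  }
}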

Comparison: Supabase dashboard vs ImportCSV

| Feature | Supabase Dashboard | ImportCSV |
| --- | --- | --- |
| File size limit | 100MB | Unlimited (chunked) |
| Column mapping | Headers must match exactly | AI-assisted mapping |
| Data validation | None (data goes straight to table) | Custom validation rules |
| End-user facing | No (admin only) | Yes (embeddable) |
| Large file handling | Times out | Chunked uploads |
| Error handling | Import fails silently | Visual inline fixes |

How to connect ImportCSV to Supabase

ImportCSV handles chunking, validation, and error recovery automatically. Here's how to set it up.

Step 1: Get your Supabase credentials

From your Supabase dashboard, copy:

  • Project URL (Settings > API)
  • Service role key (for server-side imports that bypass RLS)

Step 2: Add Supabase as a destination

In ImportCSV, create a new destination and select Supabase. Paste your project URL and service role key.

Step 3: Create an importer with your table schema

Define the columns that map to your Supabase table. ImportCSV's AI-assisted mapping handles messy spreadsheets where column headers don't match your schema exactly.

Step 4: Embed the importer in your app

import { CSVImporter } from '@importcsv/react';

function DataImportPage() {
  return (
    // importerKey identifies the importer (and its Supabase destination)
    // configured in the ImportCSV dashboard.
    <CSVImporter
      importerKey="YOUR_IMPORTER_KEY"
      onComplete={(result) => {
        // Called once the chunked upload to the destination table completes.
        console.log(`Imported ${result.rowCount} rows to Supabase`);
      }}
      onError={(error) => {
        // Called if parsing, validation, or the upload fails.
        console.error('Import failed:', error.message);
      }}
    />
  );
}

export default DataImportPage;

The component handles file parsing, validation, column mapping, and chunked uploads to your Supabase table. Users see validation errors inline and can fix them before the data hits your database.

Comparison: manual chunking vs ImportCSV

| Task | Manual Approach | With ImportCSV |
| --- | --- | --- |
| Split large files | Python script to split CSV into chunks | Automatic |
| Column mapping | Require exact header matches or write mapping code | AI maps columns, user confirms |
| Validation | Write validation logic, handle edge cases | Declarative rules in dashboard |
| Error handling | Parse error messages, show to user, retry | Visual inline fixes |
| Timeout management | Calculate batch sizes, implement retry logic | Handled automatically |

When to use each approach

Use Supabase dashboard when:

  • Files are under 100MB
  • You're doing a one-time admin import
  • Column headers already match your table schema
  • You don't need validation

Use psql COPY when:

  • You have direct database access
  • Files are very large (1GB+)
  • You're comfortable with command line tools
  • This is a one-time migration, not a recurring process

Use ImportCSV when:

  • End users need to upload CSVs to your app
  • Files may exceed 100MB or 250K+ rows
  • Column headers vary and need mapping
  • You need validation before data hits your table
  • You want error handling without custom code

Get started

Stop fighting timeouts. Connect ImportCSV to your Supabase project in minutes.

Start free - no credit card required.


Wrap-up

CSV imports shouldn't slow you down. ImportCSV is built to fit into your workflow, whether you're building data import flows, handling customer uploads, or processing large datasets.

If that sounds like the kind of tooling you want to use, try ImportCSV.