
# COPY

Bulk import data from files or export data to files. Supports CSV, text, Parquet, ORC, and DWRF formats, with local and S3 paths.

`COPY` has two forms: `COPY ... FROM` imports data from a file into a table, and `COPY ... TO` exports a table or query result to a file.
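Pieced together from the parameters and examples below, the two forms look roughly like this (a sketch, not a formal grammar; square brackets mark optional clauses):

```sql
-- Import: load a file into a table
COPY table_name [(column_name [, ...])]
    FROM 'path'
    [WITH (option [, ...])]
    [WHERE condition];

-- Export: write a table or query result to a file
COPY { table_name | (select_statement) }
    TO 'path'
    [WITH (option [, ...])];
```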

## Parameters

| Parameter | Description |
| --- | --- |
| `table` | Table to import into or export from |
| `column` | Optional subset of columns (import only) |
| `path` | File path (local or S3) |
| `select_statement` | A query whose results are exported (export only) |
| `WHERE condition` | Filter rows during import |
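The `column` parameter has no example further down, so here is a hedged sketch of importing a subset of columns (it assumes `movies` has `title` and `year` columns, as in the later examples):

```sql
-- Import only title and year; other columns receive their defaults
COPY movies (title, year) FROM '/data/movies.csv' WITH (FORMAT CSV, HEADER TRUE);
```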

## Path

Local paths are relative to the server:

```
'/path/to/file.csv'
```

S3 paths use the `s3://` scheme:

```
's3://bucket/path/to/file.csv'
```

## Options

### Common options

| Option | Values | Default | Description |
| --- | --- | --- | --- |
| `FORMAT` | `CSV`, `TEXT`, `PARQUET`, `ORC`, `DWRF` | `TEXT` | File format |
| `PROGRESS` | `TRUE` / `FALSE` | `FALSE` | Show progress notices during COPY |
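None of the later examples turn on `PROGRESS`, so as an illustrative sketch (reusing the `movies` table from the Examples section):

```sql
-- Emit progress notices while the import runs
COPY movies FROM '/data/movies.csv' WITH (FORMAT CSV, HEADER TRUE, PROGRESS TRUE);
```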

### CSV format

| Option | Values | Default | Description |
| --- | --- | --- | --- |
| `HEADER` | `TRUE` / `FALSE` | `FALSE` | First line is a header row |
| `DELIMITER` | single character | `,` | Column delimiter |
| `ESCAPE` | single character | `"` | Escape character |
| `NULL` | string | `''` (empty string) | String representing NULL |
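A hedged sketch combining the CSV options above (it assumes a semicolon-delimited input file in which NULL is spelled `NA`; the file name is illustrative):

```sql
-- Semicolon-delimited CSV with 'NA' standing in for NULL
COPY movies FROM '/data/movies_eu.csv'
WITH (FORMAT CSV, HEADER TRUE, DELIMITER ';', NULL 'NA');
```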

### Text format

| Option | Values | Default | Description |
| --- | --- | --- | --- |
| `HEADER` | `TRUE` / `FALSE` | `FALSE` | First line is a header row |
| `DELIMITER` | single character | `\t` | Column delimiter |
| `ESCAPE` | single character | `\` | Escape character |
| `NULL` | string | `\N` | String representing NULL |
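With the text defaults above, a matching input file is tab-delimited with `\N` for NULL, so a minimal import only needs the format (illustrative sketch; the file name is an assumption):

```sql
-- Tab-delimited input with NULLs written as \N (the TEXT defaults)
COPY movies FROM '/data/movies.tsv' WITH (FORMAT TEXT);
```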

### Parquet / ORC / DWRF formats

No additional options. The schema is inferred from the table definition.

### Error handling options

| Option | Values | Default | Description |
| --- | --- | --- | --- |
| `ON_ERROR` | `STOP`, `IGNORE` | `STOP` | How to handle malformed rows |
| `REJECT_LIMIT` | integer | `0` | Max rows to skip (requires `ON_ERROR IGNORE`) |
| `LOG_VERBOSITY` | `SILENT`, `DEFAULT`, `VERBOSE` | `DEFAULT` | Logging level for COPY output |
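A sketch combining the three options above, tolerating up to 100 malformed rows and logging details for each skipped row (the limit of 100 is an illustrative value, not a recommendation):

```sql
-- Skip up to 100 bad rows instead of aborting, with verbose logging
COPY movies FROM '/data/movies.csv'
WITH (FORMAT CSV, HEADER TRUE, ON_ERROR IGNORE, REJECT_LIMIT 100, LOG_VERBOSITY VERBOSE);
```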

### S3 options

| Option | Description |
| --- | --- |
| `S3_ACCESS_KEY` | AWS access key ID |
| `S3_SECRET_KEY` | AWS secret access key |
| `S3_IAM_ROLE` | IAM role ARN (alternative to access/secret keys) |
| `S3_USE_INSTANCE_CREDENTIALS` | Use EC2 instance credentials (`TRUE` / `FALSE`) |
| `S3_ENDPOINT` | S3-compatible endpoint URL |
| `S3_REGION` | AWS region |
| `S3_PATH_STYLE_ACCESS` | Use path-style URLs (`TRUE` / `FALSE`, default `TRUE`) |
| `S3_SSL_ENABLED` | Enable SSL (`TRUE` / `FALSE`, default `FALSE`) |
> **Note:** `S3_ACCESS_KEY`/`S3_SECRET_KEY` and `S3_IAM_ROLE` cannot be used together.
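Since key-based and role-based authentication cannot be mixed, a role-based import would look like this sketch (the role ARN is a placeholder, not a real resource):

```sql
-- Authenticate via an IAM role instead of access/secret keys
COPY movies FROM 's3://my-bucket/data/movies.csv' WITH (
    FORMAT CSV,
    HEADER TRUE,
    S3_IAM_ROLE 'arn:aws:iam::123456789012:role/my-copy-role',
    S3_REGION 'us-east-1'
);
```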

## Examples

### Import CSV

```sql
COPY movies FROM '/data/movies.csv' WITH (FORMAT CSV, HEADER TRUE);
```

### Import with psql (client-side path)

```sql
\copy movies FROM 'movies.csv' WITH (FORMAT CSV, HEADER TRUE);
```

### Import with filtering

```sql
COPY movies FROM '/data/movies.csv' WITH (FORMAT CSV, HEADER TRUE)
WHERE year > 2010;
```

### Import Parquet

```sql
COPY movies FROM '/data/movies.parquet' WITH (FORMAT PARQUET);
```

### Export to CSV

```sql
COPY (SELECT title, year, rating FROM movies WHERE rating > 8.0)
TO '/data/top_rated.csv' WITH (FORMAT CSV, HEADER TRUE);
```

### Import from S3

```sql
COPY movies FROM 's3://my-bucket/data/movies.csv' WITH (
    FORMAT CSV,
    HEADER TRUE,
    S3_ACCESS_KEY 'your-access-key',
    S3_SECRET_KEY 'your-secret-key',
    S3_REGION 'us-east-1'
);
```

### Export Parquet to S3

```sql
COPY movies TO 's3://my-bucket/data/movies.parquet' WITH (
    FORMAT PARQUET,
    S3_ACCESS_KEY 'your-access-key',
    S3_SECRET_KEY 'your-secret-key',
    S3_ENDPOINT 'https://s3.us-east-1.amazonaws.com'
);
```
> **Tip:** Use `WITH (HELP)` to see all available COPY options and their defaults:

```sql
COPY movies FROM '/data/movies.csv' WITH (HELP);
```

## See also