COPY
Bulk import data from files or export data to files. Supports CSV, text, Parquet, ORC, and DWRF formats, with local and S3 paths.
COPY FROM (import)
COPY TO (export)
Parameters
| Parameter | Description |
|---|---|
| table | Table to import into or export from |
| column | Optional subset of columns (import only) |
| path | File path — local or S3 |
| select_statement | A query whose results are exported (export only) |
| WHERE condition | Filter rows during import |
Path
Local paths are resolved on the server's filesystem:
'/path/to/file.csv'
S3 paths use the s3:// scheme:
's3://bucket/path/to/file.csv'
Options
Common options
| Option | Values | Default | Description |
|---|---|---|---|
| FORMAT | CSV, TEXT, PARQUET, ORC, DWRF | TEXT | File format |
| PROGRESS | TRUE / FALSE | FALSE | Show progress notices during COPY |
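For example, a long-running load can report progress as it goes (an illustrative sketch using the movies table from the examples below):

```sql
-- Emit periodic progress notices while loading a large file
COPY movies FROM '/data/movies.csv' WITH (FORMAT CSV, HEADER TRUE, PROGRESS TRUE);
```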
CSV format
| Option | Values | Default | Description |
|---|---|---|---|
| HEADER | TRUE / FALSE | FALSE | First line is a header row |
| DELIMITER | single character | , | Column delimiter |
| ESCAPE | single character | " | Escape character |
| NULL | string | '' | String representing NULL |
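These options combine freely. As a sketch, a semicolon-delimited file that encodes missing values as NA could be loaded like this (the file path is hypothetical; the movies table is from the examples below):

```sql
-- Semicolon-delimited CSV where the literal string NA means NULL
COPY movies FROM '/data/movies_eu.csv' WITH (
    FORMAT CSV,
    HEADER TRUE,
    DELIMITER ';',
    NULL 'NA'
);
```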
Text format
| Option | Values | Default | Description |
|---|---|---|---|
| HEADER | TRUE / FALSE | FALSE | First line is a header row |
| DELIMITER | single character | \t | Column delimiter |
| ESCAPE | single character | \ | Escape character |
| NULL | string | \N | String representing NULL |
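For instance, a text-format export that writes NULLs as an explicit marker might look like this (the path and NULL string are illustrative):

```sql
-- Tab-delimited text export; NULLs are written as the string NULL
COPY movies TO '/data/movies.txt' WITH (FORMAT TEXT, NULL 'NULL');
```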
Parquet / ORC / DWRF formats
No additional options. Schema is inferred from the table definition.
Error handling options
| Option | Values | Default | Description |
|---|---|---|---|
| ON_ERROR | STOP, IGNORE | STOP | How to handle malformed rows |
| REJECT_LIMIT | integer | 0 | Max rows to skip (requires ON_ERROR IGNORE) |
| LOG_VERBOSITY | SILENT, DEFAULT, VERBOSE | DEFAULT | Logging level for COPY output |
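For example, to load a file that may contain a few malformed rows, skipping up to 100 of them and logging each skip (the limit is an illustrative value):

```sql
-- Skip up to 100 bad rows instead of aborting; log each rejected row
COPY movies FROM '/data/movies.csv' WITH (
    FORMAT CSV,
    HEADER TRUE,
    ON_ERROR IGNORE,
    REJECT_LIMIT 100,
    LOG_VERBOSITY VERBOSE
);
```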
S3 options
| Option | Description |
|---|---|
| S3_ACCESS_KEY | AWS access key ID |
| S3_SECRET_KEY | AWS secret access key |
| S3_IAM_ROLE | IAM role ARN (alternative to access/secret keys) |
| S3_USE_INSTANCE_CREDENTIALS | Use EC2 instance credentials (TRUE / FALSE) |
| S3_ENDPOINT | S3-compatible endpoint URL |
| S3_REGION | AWS region |
| S3_PATH_STYLE_ACCESS | Use path-style URLs (TRUE / FALSE, default TRUE) |
| S3_SSL_ENABLED | Enable SSL (TRUE / FALSE, default FALSE) |
Note: S3_ACCESS_KEY/S3_SECRET_KEY and S3_IAM_ROLE cannot be used together.
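As an alternative to static keys, an import can authenticate with an IAM role; a sketch (the role ARN and bucket path are placeholders):

```sql
-- Authenticate via an IAM role instead of access/secret keys
COPY movies FROM 's3://my-bucket/data/movies.csv' WITH (
    FORMAT CSV,
    HEADER TRUE,
    S3_IAM_ROLE 'arn:aws:iam::123456789012:role/my-copy-role',
    S3_REGION 'us-east-1'
);
```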
Examples
Import CSV
```sql
COPY movies FROM '/data/movies.csv' WITH (FORMAT CSV, HEADER TRUE);
```
Import with psql (client-side path)
```sql
\copy movies FROM 'movies.csv' WITH (FORMAT CSV, HEADER TRUE)
```
Import with filtering
```sql
COPY movies FROM '/data/movies.csv' WITH (FORMAT CSV, HEADER TRUE)
WHERE year > 2010;
```
Import Parquet
```sql
COPY movies FROM '/data/movies.parquet' WITH (FORMAT PARQUET);
```
Export to CSV
```sql
COPY (SELECT title, year, rating FROM movies WHERE rating > 8.0)
TO '/data/top_rated.csv' WITH (FORMAT CSV, HEADER TRUE);
```
Import from S3
```sql
COPY movies FROM 's3://my-bucket/data/movies.csv' WITH (
    FORMAT CSV,
    HEADER TRUE,
    S3_ACCESS_KEY 'your-access-key',
    S3_SECRET_KEY 'your-secret-key',
    S3_REGION 'us-east-1'
);
```
Export Parquet to S3
```sql
COPY movies TO 's3://my-bucket/data/movies.parquet' WITH (
    FORMAT PARQUET,
    S3_ACCESS_KEY 'your-access-key',
    S3_SECRET_KEY 'your-secret-key',
    S3_ENDPOINT 'https://s3.us-east-1.amazonaws.com'
);
```
Tip: use WITH (HELP) to see all available COPY options and their defaults:
```sql
COPY movies FROM '/data/movies.csv' WITH (HELP);
```