FAQ on PostgreSQL Backup

PostgreSQL Backup Privileges

How to grant enough permissions to your Postgres user to create a backup.

If you are facing issues backing up your PostgreSQL database, ensure that your database user has enough permissions to actually back up the database.

You may use the following queries to grant read access on your target database:

GRANT CONNECT ON DATABASE "your_database_name" TO postgres_user;
GRANT SELECT ON ALL TABLES IN SCHEMA your_schema_name TO postgres_user;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA your_schema_name TO postgres_user;
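If the user still cannot read the tables after these grants, it may also be missing USAGE on the schema, which SELECT requires. A sketch of the extra grants (the schema and user names are the same placeholders used above):

```sql
-- Assumption: your_schema_name and postgres_user match the names used above.
-- SELECT on a table only works if the user also has USAGE on its schema:
GRANT USAGE ON SCHEMA your_schema_name TO postgres_user;

-- Optional: make the read grant apply to tables created in the future too.
ALTER DEFAULT PRIVILEGES IN SCHEMA your_schema_name
    GRANT SELECT ON TABLES TO postgres_user;
```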
 

How to exclude tables from a PostgreSQL database backup?

To exclude certain tables from a PostgreSQL database backup, go to the Advanced section of your Backup page and fill in the "Custom Dump Flags" field as shown below.

--exclude-table-data table_one --exclude-table-data table_two
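For reference, these flags map directly onto a manual pg_dump invocation. This sketch only prints the resulting command rather than running it (the database and table names are placeholders):

```shell
# Build the custom dump flags; table names are placeholders.
FLAGS="--exclude-table-data table_one --exclude-table-data table_two"

# Print the pg_dump command you would run manually with these flags.
echo "pg_dump $FLAGS your_database_name"
```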

Errors

pg_dump: error: Error message from server: SSL SYSCALL error: EOF detected

This error can occur when your database server is low on resources, particularly RAM.

If you cannot upgrade your database to a more powerful one, refer to https://www.roelpeters.be/error-ssl-syscall-error-eof-detected/ for more information.

PostgreSQL ERROR - canceling statement due to conflict with recovery.

How to solve PostgreSQL ERROR - canceling statement due to conflict with recovery

If you see error messages like the following when backing up PostgreSQL, you most likely need to adjust your database settings so that backups running against a replica/slave can complete.

pg_dump: error: Dumping the contents of table "some_table" failed: PQgetResult() failed.

pg_dump: error: Error message from server: ERROR: canceling statement due to conflict with recovery DETAIL: User was holding a relation lock for too long.

pg_dump: error: The command was: COPY public.some_table (id, created_at, ...) TO stdout;

Explanation of the issue

If you have a replica/slave configuration and run your backups against the replica/slave, a long-running read query can conflict with changes streaming in from the master node: the rows being read may have been updated or removed on the master, and the replica cancels the query so it can keep applying those changes.

Proposed solution

Edit postgresql.conf on the replica node and set the following two values:

max_standby_archive_delay = 900s
max_standby_streaming_delay = 900s

This way, queries on the slave/replica that run for less than 900 seconds won't be cancelled, giving the backup more time to complete.

  • max_standby_streaming_delay: sets the maximum delay before canceling queries when a hot standby server is processing streamed WAL data
  • max_standby_archive_delay: sets the maximum delay before canceling queries when a hot standby server is processing archived WAL data
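Both settings can be applied without restarting the server. A sketch of reloading and verifying them on the replica (assumes you can run psql as a superuser there):

```sql
-- Reload postgresql.conf on the replica; both max_standby_* settings
-- take effect on reload, no restart needed.
SELECT pg_reload_conf();

-- Confirm the new values are active:
SHOW max_standby_archive_delay;
SHOW max_standby_streaming_delay;
```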
 
 
 

Last updated on August 4, 2021