I have the following settings in my custom postgres configuration file:
    wal_buffers = 16MB
    wal_log_hints = on        # wal_level
    min_wal_size = 1GB
    max_wal_size = 10GB
    #wal_level = replica
    max_wal_senders = 3
    wal_keep_segments = 15
Yet, when large updates are running, I'm seeing several hundred WAL files accumulate before cleanup occurs. Is there something I can look at to make these clear out more quickly?
The parameters you mention mostly come into play when the cluster is part of a replication setup. Bulk DML operations always cause a spike in WAL generation, and the files take some time to clean up depending on the cluster configuration.
Are you running a standalone postgres cluster, or is it a master-slave setup with an archive_command and/or replication slots in place?
Master/slave with replication. This is currently our lab, but we're testing for our production systems, which will use the same configuration. Running an update script against it generates a significant number of WAL files. I'm wondering whether, for production, I'll have to expand my WAL storage drive significantly, given that this one masking script is enough to crash the server.
Below are the areas you can explore to troubleshoot high WAL generation and files that fail to clean up.
1. Since you mention this is a master-slave setup, verify that your archive_command is working properly. A failing archive_command keeps WAL files on the master.
2. If replication slots have been configured, make sure none of them are inactive; an inactive slot pins WAL files indefinitely.
3. Consider lowering wal_keep_segments, since it forces that many WAL segments to be retained in pg_xlog; if you have a replication slot in place and/or a reliably working archive_command, a large value is redundant.
4. WAL accumulation keeps growing under circumstances such as heavy load, a failing archive_command, or a high wal_keep_segments setting.
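Points 1 and 2 can be checked on the master with the standard monitoring views (`pg_stat_archiver` is available from 9.4 onward); this is a sketch of the kind of queries to run:

```sql
-- Is archiving keeping up? A growing failed_count suggests a broken archive_command.
SELECT archived_count, last_archived_wal,
       failed_count, last_failed_wal
FROM pg_stat_archiver;

-- Any inactive replication slots pinning WAL?
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE NOT active;
```

If the second query returns rows, dropping the unused slot with `pg_drop_replication_slot()` lets the retained WAL be recycled.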
1. We always recommend a separate mount point (disk) for the pg_xlog (WAL) directory, symlinked into the cluster data directory.
2. Keeping pg_xlog on its own mount makes disk space issues easier to manage: a WAL spike cannot fill the filesystem holding the data directory.
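The relocation in point 1 can be sketched as follows; the paths here are stand-ins (temporary directories), not real cluster paths, and on a real system you must stop PostgreSQL before moving the directory:

```shell
# Sketch only: relocate pg_xlog onto its own mount and symlink it back.
# PGDATA and WAL_MOUNT default to temp dirs here; substitute real paths,
# and stop the PostgreSQL server first.
PGDATA=${PGDATA:-$(mktemp -d)}          # stand-in for the data directory
WAL_MOUNT=${WAL_MOUNT:-$(mktemp -d)}    # stand-in for the dedicated WAL disk
mkdir -p "$PGDATA/pg_xlog"

mv "$PGDATA/pg_xlog" "$WAL_MOUNT/pg_xlog"
ln -s "$WAL_MOUNT/pg_xlog" "$PGDATA/pg_xlog"

# Verify the symlink resolves to a directory before restarting the server.
[ -d "$PGDATA/pg_xlog/" ] && echo "pg_xlog relocated"
```

After the symlink is in place, PostgreSQL writes WAL through it transparently; no configuration change is needed.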
Hope this helps.