In our shop, we have a PostgreSQL instance running many (small) databases. pg_dumps are taken nightly and pushed to an enterprise tape storage solution. I'm new at managing Postgres, and I worry about the scenario of one database getting corrupted and being forced to restore all databases (the corrupted one, and all the others) to a point in time. Are there other recommended strategies that allow backup/recovery on a more granular scale (instead of all databases on the server)?
Thank you for your time and consideration!
pg_dump is a tool for taking logical backups, which offer their own advantages but can't be used for point-in-time recovery (PITR). In Postgres, the only method for doing a PITR is to take a file-system-level backup of all the database files and apply the WAL archives on top of it up to the desired point, which can be defined as part of recovery.conf. Details can be found at https://www.postgresql.org/docs/9.6/static/continuous-archiving.html
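For reference, a minimal continuous-archiving setup looks roughly like this. The archive directory path is a placeholder for your own storage location, and the target timestamp is only an example; the parameter names themselves (`wal_level`, `archive_mode`, `archive_command`, `restore_command`, `recovery_target_time`) are the standard 9.6 settings:

```
# postgresql.conf -- enable WAL archiving
wal_level = replica
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'   # %p = path of the WAL file, %f = its file name

# recovery.conf -- placed in the data directory before starting recovery
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2017-06-01 03:00:00'    # stop WAL replay at this point in time
```

Note that recovery is cluster-wide: WAL replay applies to the whole instance, which is why a PITR cannot restore just one database on the server.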
For taking a file-system-level backup of the cluster you can use pg_basebackup, which is shipped as part of the Postgres distribution. Its usage is explained in detail at https://www.postgresql.org/docs/9.6/static/app-pgbasebackup.html
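A typical invocation might look like the following sketch. The host, user, and output directory are assumptions for illustration; the flags (`-Fp` plain format, `-X stream` to include the WAL needed for consistency, `-P` for progress reporting) are standard pg_basebackup options:

```
# Take a plain-format base backup of the whole cluster (all databases),
# streaming the WAL generated during the backup so the copy is consistent.
pg_basebackup -h localhost -U postgres \
    -D /mnt/base_backups/$(date +%F) \
    -Fp -X stream -P
```

Note that pg_basebackup always copies the entire cluster; there is no per-database option.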
PS: I could not find the version of Postgres you are using, so the reference links are for 9.6. However, the same pages contain links at the top to switch to the documentation for older versions.
Hope this helps.
Thank you for the quick reply and references. My apologies, we are indeed using pg_basebackup. I think you answered my question: with pg_basebackup, if a restore is needed for one database on the server, all databases will be restored to that point in time. I may do a little more study and follow up on this thread. Thanks again.