EDB Postgres Upgrade Methodology and Best Practices

 

Every new release of the EDB Postgres Database includes exciting new features, and to take advantage of them we recommend upgrading to the latest EDB Postgres version. In addition, many upgrades coincide with other changes to the database setup, including moving to newly purchased server hardware, migrating to a different storage architecture, changing the database character set, or even moving to a completely different operating system. As a result, the upgrade process can be long and complex.


Because of the stringent high-availability requirements for databases that store mission-critical data, database upgrades have become more challenging. Unfortunately, some essential events do require application downtime, including modifying hardware or database software, upgrading applications, applying software patches, and migrating to different computing architectures. Because such events are not caused by system or data failures, they are classified as planned outages.


This best practice document explains how organizations can upgrade or migrate from older versions of the EDB Postgres Database to the latest version, with or without downtime. The primary goal of this document is to raise awareness of a solution that eliminates downtime during a planned outage for database upgrades; a secondary goal is to explain how to minimize the total time required by the entire upgrade process.


Please refer to the attached document, on the right-hand side under Attachments, for the full upgrade methodology and best practices.

 

Version history
Revision #: 2 of 2
Last update: 05-21-2019 06:54 AM

Comments

Hi,

Can you share any queries on statistics collector metrics, and the threshold values to watch, in PostgreSQL for the following views?

checkpointer (pg_stat_bgwriter)

pg_stat_database

pg_stat_user_tables

 

Hi Somu,

You can get a description of the columns that each of these views exposes with the \d+ meta-command and build your monitoring queries from there. As for checkpoint threshold values, these vary from one environment to another, so there is no single number that fits every system; a rough starting query for the checkpointer is sketched below, followed by the pg_stat_database example.
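As a sketch only (assuming the usual pg_stat_bgwriter columns such as checkpoints_timed and checkpoints_req, which are present in the PostgreSQL/EPAS versions shown here), a query like the following shows how often checkpoints are being forced by write volume rather than by checkpoint_timeout. The percentage column name is just an illustrative label, not an official metric:

-- Checkpointer / background writer counters from pg_stat_bgwriter.
-- A large share of requested (non-timed) checkpoints usually means the
-- WAL size settings are small relative to the write load.
SELECT checkpoints_timed,
       checkpoints_req,
       round(100.0 * checkpoints_req
             / NULLIF(checkpoints_timed + checkpoints_req, 0), 2) AS pct_req_checkpoints,
       buffers_checkpoint,
       buffers_clean,
       maxwritten_clean,
       buffers_backend,
       buffers_backend_fsync,
       stats_reset
  FROM pg_stat_bgwriter;

The \d+ output below shows the full set of columns that pg_stat_database exposes: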

bench1=# \d+ pg_stat_database;
View "pg_catalog.pg_stat_database"
Column | Type | Modifiers | Storage | Description
-----------------+--------------------------+-----------+---------+-------------
datid | oid | | plain |
datname | name | | plain |
numbackends | integer | | plain |
xact_commit | bigint | | plain |
xact_rollback | bigint | | plain |
blks_read | bigint | | plain |
blks_hit | bigint | | plain |
blks_icache_hit | bigint | | plain |
tup_returned | bigint | | plain |
tup_fetched | bigint | | plain |
tup_inserted | bigint | | plain |
tup_updated | bigint | | plain |
tup_deleted | bigint | | plain |
conflicts | bigint | | plain |
temp_files | bigint | | plain |
temp_bytes | bigint | | plain |
deadlocks | bigint | | plain |
blk_read_time | double precision | | plain |
blk_write_time | double precision | | plain |
stats_reset | timestamp with time zone | | plain |
View definition:
SELECT d.oid AS datid,
d.datname,
pg_stat_get_db_numbackends(d.oid) AS numbackends,
pg_stat_get_db_xact_commit(d.oid) AS xact_commit,
pg_stat_get_db_xact_rollback(d.oid) AS xact_rollback,
pg_stat_get_db_blocks_fetched(d.oid) - pg_stat_get_db_blocks_hit(d.oid) - pg_stat_get_db_blocks_icache_hit(d.oid) AS blks_read,
pg_stat_get_db_blocks_hit(d.oid) AS blks_hit,
pg_stat_get_db_blocks_icache_hit(d.oid) AS blks_icache_hit,
pg_stat_get_db_tuples_returned(d.oid) AS tup_returned,
pg_stat_get_db_tuples_fetched(d.oid) AS tup_fetched,
pg_stat_get_db_tuples_inserted(d.oid) AS tup_inserted,
pg_stat_get_db_tuples_updated(d.oid) AS tup_updated,
pg_stat_get_db_tuples_deleted(d.oid) AS tup_deleted,
pg_stat_get_db_conflict_all(d.oid) AS conflicts,
pg_stat_get_db_temp_files(d.oid) AS temp_files,
pg_stat_get_db_temp_bytes(d.oid) AS temp_bytes,
pg_stat_get_db_deadlocks(d.oid) AS deadlocks,
pg_stat_get_db_blk_read_time(d.oid) AS blk_read_time,
pg_stat_get_db_blk_write_time(d.oid) AS blk_write_time,
pg_stat_get_db_stat_reset_time(d.oid) AS stats_reset
FROM pg_database d;

bench1=# SELECT d.oid AS datid,
bench1-# d.datname,
bench1-# pg_stat_get_db_numbackends(d.oid) AS numbackends,
bench1-# pg_stat_get_db_xact_commit(d.oid) AS xact_commit,
bench1-# pg_stat_get_db_xact_rollback(d.oid) AS xact_rollback,
bench1-# pg_stat_get_db_blocks_fetched(d.oid) - pg_stat_get_db_blocks_hit(d.oid) - pg_stat_get_db_blocks_icache_hit(d.oid) AS blks_read,
bench1-# pg_stat_get_db_blocks_hit(d.oid) AS blks_hit,
bench1-# pg_stat_get_db_blocks_icache_hit(d.oid) AS blks_icache_hit,
bench1-# pg_stat_get_db_tuples_returned(d.oid) AS tup_returned,
bench1-# pg_stat_get_db_tuples_fetched(d.oid) AS tup_fetched,
bench1-# pg_stat_get_db_tuples_inserted(d.oid) AS tup_inserted,
bench1-# pg_stat_get_db_tuples_updated(d.oid) AS tup_updated,
bench1-# pg_stat_get_db_tuples_deleted(d.oid) AS tup_deleted,
bench1-# pg_stat_get_db_conflict_all(d.oid) AS conflicts,
bench1-# pg_stat_get_db_temp_files(d.oid) AS temp_files,
bench1-# pg_stat_get_db_temp_bytes(d.oid) AS temp_bytes,
bench1-# pg_stat_get_db_deadlocks(d.oid) AS deadlocks,
bench1-# pg_stat_get_db_blk_read_time(d.oid) AS blk_read_time,
bench1-# pg_stat_get_db_blk_write_time(d.oid) AS blk_write_time,
bench1-# pg_stat_get_db_stat_reset_time(d.oid) AS stats_reset
bench1-# FROM pg_database d;
 datid |  datname  | numbackends | xact_commit | xact_rollback | blks_read | blks_hit | blks_icache_hit | tup_returned | tup_fetched | tup_inserted | tup_updated | tup_deleted | conflicts | temp_files | temp_bytes | deadlocks | blk_read_time | blk_write_time |           stats_reset
-------+-----------+-------------+-------------+---------------+-----------+----------+-----------------+--------------+-------------+--------------+-------------+-------------+-----------+------------+------------+-----------+---------------+----------------+----------------------------------
 13712 | postgres  |           0 |        3930 |             0 |       459 |   164712 |               0 |      2910670 |       24986 |            0 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 | 2019-07-12 15:14:28.74219+05:30
 16644 | test      |           0 |        3918 |             0 |       426 |   161896 |               0 |      2554888 |       23636 |            0 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 | 2019-07-12 15:14:30.579058+05:30
     1 | template1 |           0 |           0 |             0 |         0 |        0 |               0 |            0 |           0 |            0 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 |
 13711 | template0 |           0 |           0 |             0 |         0 |        0 |               0 |            0 |           0 |            0 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 |
 16645 | bench1    |           1 |        3933 |             0 |       524 |   165237 |               0 |      2617854 |       25212 |            0 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 | 2019-07-12 15:14:32.579712+05:30
 16646 | bench2    |           0 |        3925 |             0 |       463 |   163409 |               0 |      2617772 |       24292 |            0 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 | 2019-07-12 15:14:34.683886+05:30
(6 rows)
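To turn the raw counters above into a metric you can actually watch, one common derived value is the buffer cache hit ratio per database. This is only a sketch; what counts as an acceptable ratio depends on the workload, as noted earlier:

-- Buffer cache hit ratio per database, derived from pg_stat_database.
-- Thresholds are workload dependent; treat any target (for example,
-- "above 90% for OLTP") as a rule of thumb rather than a hard limit.
SELECT datname,
       blks_read,
       blks_hit,
       round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS cache_hit_pct,
       deadlocks,
       temp_files,
       temp_bytes
  FROM pg_stat_database
 WHERE datname NOT IN ('template0', 'template1');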

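For pg_stat_user_tables, the counters most often watched are the dead-tuple and scan statistics. The sketch below simply surfaces the tables with the highest share of dead tuples, together with their last vacuum and analyze times; any threshold you alert on should be tuned against your own autovacuum behaviour rather than taken from this example:

-- Tables with the highest share of dead tuples, from pg_stat_user_tables.
-- Rows are ordered by the dead-tuple percentage; the LIMIT is arbitrary.
SELECT schemaname,
       relname,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct,
       last_vacuum,
       last_autovacuum,
       last_analyze,
       last_autoanalyze
  FROM pg_stat_user_tables
 WHERE n_dead_tup > 0
 ORDER BY dead_pct DESC
 LIMIT 20;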