EDB has recently released a major version of Pgpool II and also released updates for all the back branches. This warrants an update to the EDB Pgpool II compatibility matrix that I published last year. EDB has also made changes to Pgpool II in order to support the protocol extensions added to the EPAS server for Oracle compatibility.
Please note that we have released EDB Pgpool II 4.0.2 and Pgpool II 3.7.7 for EPAS 11, rather than waiting up to a year to release the latest major version of Pgpool II as we have done in the past.
The compatibility table below shows the versions of Pgpool II supported with each EPAS server release. Although any version of Pgpool II can work with any version of the database server if compiled accordingly, Pgpool II has a build-time dependency on the database server (see the build example after the table).
EPAS Version    Pgpool II Version
11              4.0.2, 3.7.7
10              3.6.14
9.6             3.6.14, 3.5.18
9.5             3.4.21
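For example, compiling community Pgpool II against an EPAS installation means pointing the build at the EPAS client libraries; the installation prefix below is illustrative, not a required path:

./configure --with-pgsql=/usr/edb/as11
make
make install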
The Pgpool II community releases a major version every year and minor versions several times a year. In the last few releases, a lot of emphasis was placed on improving the existing functionality of Pgpool II, making it more stable, robust, and able to handle complex real-world scenarios. The features that have gone through a major overhaul include the watchdog, which provides high availability for Pgpool II instances; Pgpool II performance with extended queries; quorum-aware backend node failover; and other usability features. As a result of the work done in the last few years, Pgpool II is increasingly stable, resilient, and ready to handle real workloads.
Before I list the use cases where EDB recommends the usage of Pgpool II, the following is a brief description of the recommended Pgpool II features.
1- Connection Pooling
Pgpool II saves connections to the PostgreSQL servers and reuses a connection whenever a new connection with the same properties (i.e., username, database, protocol version) is requested. This reduces connection overhead and improves the system's overall throughput.
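As a rough sketch, connection pooling is governed by a handful of pgpool.conf parameters; the values below are illustrative and should be tuned to the workload:

connection_cache = on      # reuse backend connections instead of opening new ones
num_init_children = 32     # pgpool child processes accepting client connections
max_pool = 4               # cached backend connections per child (per user/database pair)
child_life_time = 300      # recycle an idle child process after 300 seconds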
2- Load Balancing
If you have a Postgres cluster set up with streaming replication with a primary and multiple read-only nodes, Pgpool II provides read scalability by distributing read queries across the read-only nodes and sending write queries to the primary node.
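A minimal pgpool.conf sketch for one primary and one standby; the hostnames and weights are illustrative (a higher weight sends that node a larger share of the read queries):

load_balance_mode = on
backend_hostname0 = 'primary.example.com'    # node 0: primary, receives all writes
backend_port0 = 5432
backend_weight0 = 1
backend_hostname1 = 'standby.example.com'    # node 1: read-only standby
backend_port1 = 5432
backend_weight1 = 2                          # gets twice node 0's share of reads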
3- Watchdog HA
The Pgpool II watchdog provides high availability between Pgpool II nodes. If the primary Pgpool II node goes down, one of the standby Pgpool II nodes is automatically promoted to primary and client queries are automatically shifted to it.
The Pgpool II watchdog handles split-brain scenarios using quorum.
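A sketch of the watchdog settings on one of two Pgpool II nodes (pgpool.conf, 3.x/4.0-era parameter names); the hostnames, virtual IP, and ports are illustrative:

use_watchdog = on
wd_hostname = 'pgpool1.example.com'              # this pgpool II node
wd_port = 9000
delegate_IP = '192.168.10.100'                   # virtual IP that follows the active pgpool node
heartbeat_destination0 = 'pgpool2.example.com'   # peer node for heartbeat lifecheck
heartbeat_destination_port0 = 9694
other_pgpool_hostname0 = 'pgpool2.example.com'   # peer pgpool II node to watch
other_pgpool_port0 = 9999
other_wd_port0 = 9000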
4- Backend Node Failover
Pgpool II provides the functionality to automatically manage backend node failover, and it also supports quorum awareness. EDB recommends EFM for database failover; in scenarios where EFM cannot be used, Pgpool II can also be used to manage failover of database nodes.
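A sketch of the relevant pgpool.conf settings; the failover script path is a placeholder, and the %-placeholders shown are the standard substitutions Pgpool II makes when running the command:

health_check_period = 10      # seconds between backend health checks (0 disables)
health_check_timeout = 20
health_check_user = 'pgpool'
# %d = failed node id, %h = failed host, %m = new master node id, %H = new master host
failover_command = '/etc/pgpool-II/failover.sh %d %h %m %H'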
Pgpool II supports other features, like replication, which are currently not part of EDB's Pgpool II support policy.
1- Pgpool II load balancing can be used when the user is looking for read scalability. The Pgpool II load balancing feature will distribute read queries across the standby nodes while write queries are sent to the primary node. How the load should be divided across nodes is configurable in Pgpool II.
2- The Pgpool II watchdog feature is recommended if the user has deployed multiple Pgpool II nodes. The watchdog feature will provide HA between Pgpool II nodes by automatically switching queries to a Pgpool II standby when the primary Pgpool II node goes down. The Pgpool II instances can be deployed on the same machine as the client application or the database server, or on a separate machine; the architecture depends on the user workload.
3- The Pgpool II connection pooling feature can be deployed when the user has several client connections generating incoming traffic; the connection pooling feature reuses existing connections from the pool instead of spawning a new connection for every client.
4- In use cases where the user needs both read scalability and HA, the pgpool II load balancing feature and watchdog feature can be used together to provide the desired functionality.
5- In cases where EFM cannot be used to provide database node failover, the Pgpool II backend node failover feature can be used to manage backend node failover.
One question that comes up very often is: which version of Pgpool II is supported with which database server? I would like to address this question in this post. For a bit of background, Pgpool II is a middleware product that sits between the client and the database server in both PostgreSQL and EDB Postgres Advanced Server architectures and provides functionality like connection pooling, load balancing, high availability, etc. While Pgpool II is a community product, EnterpriseDB (EDB) ships its own version of Pgpool II because of the proprietary enhancements added to the community Pgpool. These changes support the EDB protocol extensions like stored procedure support, Redwood-style sequences, etc.

The table below shows the version of EDB Pgpool II along with the corresponding EDB Postgres Advanced Server (EPAS) version it is shipped with. This means the EPAS server and Pgpool II are supported and tested in the combinations given below. The latest version of community Pgpool II is 3.7; EDB Pgpool II 3.7 has not been released yet.

EPAS Version    Pgpool II Version
10              3.6.9
9.6             3.5.13, 3.6.19
9.5             3.4.16
9.4             3.3.20, 3.4.16
9.3             3.3.20

In addition to the Pgpool II main application, EDB also provides the installation for the Pgpool II extensions. The Pgpool II extensions are installed as part of the database server. The table below provides the version of the Pgpool II extensions and the EPAS version each supports. Please note that the Pgpool II extension versions below are the same in terms of functionality.

EPAS Version    Pgpool II extension version
10              3.6.9
9.6             3.5.13
9.5             3.4.16
9.4             3.3.20, 3.4.16

Please note that the minor versions will change when a minor version update is applied.
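For illustration, once the extension packages are installed on the database server, the extensions are registered per database with CREATE EXTENSION; installing pgpool_recovery in template1 and running as a superuser are conventions assumed here, not requirements stated above:

-- in template1 (or the target database), as a superuser:
CREATE EXTENSION pgpool_recovery;   -- online recovery support functions
CREATE EXTENSION pgpool_adm;        -- pgpool administration functions callable via SQL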
BART 2.2 Sneak Peek
The EDB Postgres Backup and Recovery Tool development team is super busy developing features for the next major release and I wanted to give everyone a sneak peek into the exciting new capabilities to be delivered as part of BART 2.2.
The last major release of BART was published in June 2017 and we are already well underway in developing exciting new features for BART 2.2. The beta is expected by March 2018.
The highlight of BART 2.0 was block-level incremental backup, a groundbreaking feature, as no other Postgres-based backup and recovery solution provides block-level incremental backup. There are solutions that provide file-level incremental backup, but they come nowhere near the performance and reliability of block-level incremental backup.
The features under development for BART 2.2 are also groundbreaking in their own right and I will provide a short summary of each feature in this blog. More details will follow once the release is out as, after all, this is only a sneak peek. ;-)
1. Parallel full backup
All previous releases of BART used pg_basebackup as the underlying tool for performing a full backup of the database cluster. pg_basebackup is a great tool, and it helped immensely in delivering stable releases of BART without many invasive changes to support full backup functionality. However, it also has some limitations, the largest being that it doesn't offer any parallelism while taking a backup, which makes it slower than other Postgres-based backup solutions that offer parallel backups.
BART 2.2 will no longer use pg_basebackup for taking a full backup. Instead, it will use its own custom approach to perform full backup. The approach that BART 2.2 uses for implementing full backup is analogous to the approach used by BART for implementing block level incremental backup. The BART incremental backup approach uses a WAL scanner to identify the changed blocks and then fetches those changed blocks from the database server to form the incremental backup. In a nutshell, the full backup approach implemented by BART 2.2 will use the incremental backup approach but instead of getting the changed blocks, it will copy all the data files from the database server.
BART 2.2 will allow the user to specify "thread_count" in the bart.cfg file. The thread_count setting launches the specified number of worker threads to perform the file copy for a full backup. The approach is so far looking very promising, and benchmarking done in the labs is showing very positive results.
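As a sketch of where the setting would live, here is a minimal bart.cfg global section; the host, path, and thread count are illustrative values, not recommendations. A reasonable starting point is to match thread_count to the parallel I/O the storage and network can sustain.

[BART]
bart_host = bartuser@192.168.2.22
backup_path = /home/bartuser/backups
thread_count = 6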
"In the best case scenario, BART 2.2 with 12 parallel threads took around 35 minutes to backup 1TB of database. That's more then 220% faster then pg_basebackup which took just over 114 minutes on the same environment."
The above benchmark was performed on a dedicated AWS instance with the BARTHOST and the database server running on different machines. Both the BART 2.2 and pg_basebackup tests were done on the same environment to get an apples-to-apples comparison. In the case of BART 2.2, parallel processing with 12 threads produced a significant performance gain.
2. Parallel restore for Incremental backup
A lingering area for improvement in the BART 2.0 release was that restoring a BART incremental backup provided pretty much the same performance as restoring a full backup and replaying the WAL files. The BART 2.2 release will provide parallelism when restoring an incremental backup. This feature requires BART to be running on the restore host, although that requirement was already added in the BART 2.0 release for restoring incremental backups. The parallel incremental backup restore feature will allow users to specify "workers" in the BART RESTORE command; this can be done by passing the "--workers value" or "-w value" switch. BART will spawn the specified number of worker threads to perform the incremental backup restore.
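For illustration, a parallel incremental restore might be invoked as follows; the server name and backup ID are taken from the examples later in this post, and the restore path is a placeholder:

./bart RESTORE -s ppas96_34 -i 1494509492492 -p /opt/restore_path -w 4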
Prior to this feature, we restored an incremental backup by copying all of the .blk and .cbm files to the restore host, then we ran "bart APPLY-INCREMENTAL" on the restore host to splice the .blk files into the relfiles.
With this feature, we don't copy the .blk files to the restore host; instead, we stream them from the BART host to the restore host. For example, if you specify that you want 4 worker processes, we fire up 4 "receiver" processes on the restore host and 4 "streamer" processes on the BART host. The stdout of each streamer is connected to the stdin of a receiver process. The streamer and the receiver both work from the same .cbm file. When the receiver gets to the point where it needs a .blk file, it reads those modified blocks (not the bitmaps, but the actual 8K blocks themselves) from stdin; since the streamer process is reading the same .cbm file in the same order, it sends those blocks to stdout in the same sequence.
The advantage of this mechanism is that we never write the .blk files to disk on the restore host. So we cut the disk I/O in half on the restore host, and we require much less disk space there, since we don't have to write the .blk files to disk before we splice the blocks into the relfiles.
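As a conceptual analogy only (bart-streamer and bart-receiver below are hypothetical names, not actual BART binaries), the mechanism resembles piping blocks across hosts so nothing is staged on disk:

# hypothetical sketch: one streamer/receiver pair; BART starts N such pairs
bart-streamer --cbm backup.cbm --backup-dir /bart/backups/1494508985643 \
  | ssh restore-host "bart-receiver --cbm backup.cbm --target /opt/restore_path"
# both sides walk the same .cbm file in the same order, so the receiver can
# splice each 8K block from stdin straight into the relation files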
3. Parallel Compression
The backup compression provided by BART 2.0 and prior releases was client-level compression. The user would use the -z switch with the BACKUP command to enable compression and provide a compression level from 1..9 using the -c switch. BART 2.2 will use the same interface but make the compression operations parallel in order to improve the overall time for a compressed backup. The thread_count parameter specified in bart.cfg spawns the specified number of processes for performing the backup in parallel, and the same set of processes will be used for the compression operation in order to provide parallel compressed backups.
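Assuming the BART 2.0 interface carries over as described, a parallel compressed backup would be taken with the same switches; the parallelism comes from thread_count in bart.cfg rather than a new command-line option (server name and level are illustrative):

./bart BACKUP -s ppas96_34 -z -c 5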
So this is pretty much the sneak peek for BART 2.2 at this point. More on this later, stay tuned....
4. BART 2.3 Sneak Peek
While the features for BART 2.3 are still in the planning phase, I will mention one feature already scheduled for BART 2.3. We are calling the feature Flexible Location for now.
The bart.cfg file contains a parameter named "backup_path" which specifies the path where backups are stored. This is a global parameter in the default bart.cfg file, and the user can only give a path on the local machine running as the BARTHOST. backup_path doesn't support a remote location, which is the limitation driving the Flexible Location feature. The goal of the Flexible Location feature is to give users the ability to store their backups in a location other than the BARTHOST.
The Flexible Location feature will allow the user to specify a remote host and a path on the remote host where backups should be kept. Right now the backup location is global, which means backups of all the servers are stored in the global location. With the Flexible Location feature, the user can continue to use a global backup location or specify a backup location for each backup server individually.
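Since this feature is still in planning, the snippet below is purely hypothetical; remote_host and a per-server backup_path override are my guesses at the shape of the configuration, not announced syntax:

[BART]
backup_path = /home/bartuser/backups

[PPAS96_34]
remote_host = backupuser@backup-store.example.com
backup_path = /mnt/fast_storage/ppas96_34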
All the BART functionality (i.e., MANAGE, DELETE, RESTORE, etc.) will continue to work when BART allows flexible backup locations.
The user will have the flexibility to attach a high-speed drive at the remote backup location or have a high-speed network connection between the database server and its backup location. This feature is also steering BART in the direction of supporting other storage devices and other backup vendors that support the XBSA standard. The next major release after BART 2.2 will support other backup vendors and other storage devices.
Ahsan Hadi is Senior Director, Product Development, at EnterpriseDB.
(Originally published 6/21/2017) Data retention has become a greater challenge for database administrators as volumes have exploded with new digital applications and data from mobile, web, and machine sources. The sheer volume of data can limit the number of backups DBAs can perform, increasing the importance of creating defined data retention policies in the database platform. Enterprises typically take backups (full or incremental) on a daily basis (many take them more frequently), causing the number of backups for the database servers to grow rapidly. Retention policies are introduced to mark the older backups obsolete so they can be removed from the backup server to free up storage.

To give DBAs greater precision in controlling backups, EnterpriseDB® (EDB™) has created multiple important and differentiating Retention Management features for its EDB Postgres™ Backup and Recovery Tool. EDB Postgres Backup and Recovery Tool implements retention policies and point-in-time recovery requirements for large-scale Postgres deployments and is part of the Management Suite of tools that is integrated into the EDB Postgres™ Platform.

What follows is an exploration of how retention management is achieved for incremental backups, as well as the different modes supported for retention management in the context of full backups. It is important to note that the same mechanisms apply to incremental backups with minor differences, as explained toward the end of the blog. First, to understand how EDB Postgres Backup and Recovery Tool addresses block-level incremental backup and how the feature works, read my blog, Data Backup Made Easy. The block-level incremental backup feature provides users the ability to back up only the blocks that have changed since the last full or incremental backup, making the backup process much faster and the backups much smaller. Further insight may be gained by watching the following videos: EDB Postgres Backup and Recovery Tool Incremental Backup Part 1, Intro and Part 2, Demo.

The Retention Management feature in EDB Postgres Backup and Recovery Tool provides the ability to define policies for database server(s) in order to retain the backups for a given server based on the specified retention policy. A retention policy can also be configured globally, in which case the policy applies to all of the database servers configured in the EDB Postgres Backup and Recovery Tool configuration file. (More details on that below.) EDB Postgres Backup and Recovery Tool supports the following two modes of backup retention.

Recovery Window

A recovery window is a period of time that starts with the current time and extends backward to the point of recoverability. The recovery window can be:

A number of days
A number of weeks
A number of months

For example, if you have set the recovery window to seven days for a backup server, then EDB Postgres Backup and Recovery Tool must retain the backups (and associated write-ahead logs (WALs)) needed to recover to any point between the current time and seven days in the past. EDB Postgres Backup and Recovery Tool ensures that the user will have a backup (and associated WALs) for the given server in case of a failure; backups no longer needed to restore within that window are marked obsolete, thus satisfying the recovery window retention policy. In order to explain this mode further, consider an example of a database server called "westwing_prod".
The recovery window for this server is defined in the configuration file as the following:

retention_policy = 7 DAYS

The backups for the "westwing_prod" server were generated on the following dates, at an interval of five days:

May 15
May 10
May 5
Apr 30

If the current date is May 18 and the recovery window is set to seven days, EDB Postgres Backup and Recovery Tool needs to retain the May 10 backup in order to be able to restore to any point up to seven days back (counting backward from the current time). The user can perform PITR (point-in-time recovery) and recover data from up to seven days before the current time. Note that a more recent backup taken on May 15 exists inside the recovery window, but EDB Postgres Backup and Recovery Tool will also retain the May 10 backup so it can restore up to 7 days back. The backups older than that (and their associated WAL files) will be marked obsolete and may be deleted using the BART MANAGE command by passing -d or --delete-obsolete. (More on this later.)

Backup Redundancy

A backup redundancy retention policy specifies the number of backups to retain for a given server. The recovery window is the recommended approach, but a redundancy-based policy is sometimes preferred because the number of backups retained as part of a recovery window is not constant; a redundancy-based policy lets the user specify an exact number of backups. For example:

retention_policy = 2 BACKUPS

EDB Postgres Backup and Recovery Tool will ensure that the two most recent backups are always retained for the given server. Consider a case where backups are five days apart: if the EDB Postgres Backup and Recovery Tool manage command is executed on May 18 with the above retention policy, EDB Postgres Backup and Recovery Tool will retain the May 15 and May 10 backups (and the associated WALs). Backups:

May 15
May 10
May 5
Apr 30

Configuration

Now that we have examined the two modes for Retention Management supported by EDB Postgres Backup and Recovery Tool, I will briefly discuss how these are configured. It is a simple process: the user specifies the retention mode in the bart.cfg file. The retention mode can be specified globally, in which case the policy applies to all the backup servers in the bart.cfg file, or a policy can be applied to an individual server. In the bart.cfg sample below, I have configured redundancy mode retention for the ppas96_34 database server, and recovery window retention for the PG_96 database server. Retention can be specified at the server level or in the global section ([BART] in the sample below), in which case it applies to all the servers; a value specified at the server level takes precedence over the value specified in the global section.

[BART]
backup_path = /home/ec2-user/backups
logfile = /tmp/bart.log
scanner_logfile = /tmp/bart_scanner.log
[PG_96]
host = 127.0.0.1
port = 5432
user = postgres
cluster_owner = postgres
retention_policy = 7 Days
description = "Postgres server"
[PPAS96_34]
host = 172.25.1.34
port = 5444
user = edb
cluster_owner = ec2-user
description = "EPAS server"
retention_policy = 1 BACKUPS
allow_incremental_backups = enabled

That's it! The retention policy is good to go. Next, in order to invoke the retention policy, the user needs to run the "bart MANAGE" command or make the command part of a cron job so it runs at a specified interval. The help for the "bart MANAGE" command looks like this:

[ec2-user@ip-172-25-1-34 bin]$ ./bart MANAGE --help
./bart: backup and recovery tool
./bart MANAGE [OPTION]...
-h, --help Show this help message and exit
-s, --server Name of the server or 'all' to specify all servers
-i, --backupid ID or Name of the backup or specify 'all'.
(only valid with -c)
-l, --list-obsolete List backups marked obsolete
-d, --delete-obsolete Delete backups marked obsolete
-c, --change-status Change the status of a backup to (keep or nokeep)
-n, --dry-run Do everything except no changes are made
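
For instance, to pin a specific backup so MANAGE never deletes it, or to release it again later, the -c switch is combined with -s and -i (the backup ID is one used later in this post; KEEP/NOKEEP is explained in more detail below):

./bart MANAGE -s ppas96_34 -i 1494509223047 -c keep
./bart MANAGE -s ppas96_34 -i 1494509223047 -c nokeep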
Here I will run the bart MANAGE command for the ppas96_34 server:

[ec2-user@ip-172-25-1-34 bin]$ ./bart manage -s ppas96_34
INFO: processing server 'ppas96_34', backup '1494509492492'
INFO: processing server 'ppas96_34', backup '1494509223047'
INFO: marking backup '1494509223047' as obsolete
INFO: 2 WAL file(s) marked obsolete
INFO: processing server 'ppas96_34', backup '1494508985643'
INFO: marking backup '1494508985643' as obsolete
INFO: 4 incremental(s) of backup '1494508985643' will be marked obsolete
INFO: marking incremental backup '1494509186211' as obsolete
INFO: marking incremental backup '1494509169195' as obsolete
INFO: marking incremental backup '1494509089030' as obsolete
INFO: marking incremental backup '1494509064743' as obsolete
INFO: 12 WAL file(s) marked obsolete
INFO: 1 Unused WAL file(s) present
INFO: 2 Unused file(s) (WALs included) present, use 'MANAGE -l' for the list

The backup status for ppas96_34 after running the bart MANAGE command can be listed with:

[ec2-user@ip-172-25-1-34 bin]$ ./bart show-backups

As its output shows, the bart MANAGE command has marked all the full backups and the corresponding incremental backups (except one full backup and its corresponding incrementals) as obsolete. I will explain how this works for incremental backups in a bit more detail later.

Setting Up KEEP/NOKEEP

Once a backup is marked obsolete by the bart MANAGE command, you can delete it by passing --delete-obsolete (or -d) to the bart MANAGE command. Before you decide to delete the obsolete backups, you can also list them using the -l switch:

[ec2-user@ip-172-25-1-34 bin]$ ./bart manage -s ppas96_34 -l
SERVER NAME: ppas96_34
BACKUP ID: 1494509223047
BACKUP STATUS: obsolete
BACKUP TIME: 2017-05-11 13:27:03 UTC
BACKUP SIZE: 61.74 MB
WAL FILE(s): 2
...........

Once EDB Postgres Backup and Recovery Tool has marked a backup as obsolete (based on your retention policy), you may later decide that you want to keep the backup. You can change the backup status using the bart MANAGE command:

-c, --change-status Change the status of a backup to (keep or nokeep)

You must provide the -s server-name and -i backup-id options to specify the server and backup. Specify 'all' for these switches to change the status of all obsolete backups of all servers. If you have changed the status to KEEP, the bart MANAGE command won't delete the backup when it deletes obsolete backups. Once you have marked a backup as KEEP and later decide that you don't want to keep it, the status can be changed back with the same switch: specify NOKEEP and appropriate values for -s server-name and -i backup-id. The next run of the bart MANAGE command will mark those backups obsolete again, and they can be deleted as explained above.

Retention Management for Incremental Backups

So far we have examined Retention Management in the context of full backups. While the same policies also apply to incremental backups, there are some subtle but important differences. As mentioned earlier, EDB Postgres Backup and Recovery Tool supports block-level incremental backup. An incremental backup is a backup of those blocks that have changed since the last parent backup. The parent backup can be a full backup or it can be an incremental backup. The bart show-backups command shows the active full and incremental backups; the parent for a full backup is none, and the parent for an incremental backup is the backup ID of its parent (which, again, may be a full or incremental backup). Note that when you create an incremental backup, you specify the ID (or name) of the parent backup.

Deleting obsolete backups with -d also removes the associated incremental backups and WAL files, as the following run shows:

-d, --delete-obsolete Delete backups marked obsolete
[ec2-user@ip-172-25-1-34 bin]$ ./bart manage -s ppas96_34 -d
INFO: removing all obsolete backups of server 'ppas96_34'
INFO: removing obsolete backup '1494509223047'
INFO: 2 WAL file(s) will be removed
INFO: removing WAL file '000000010000000B00000071'
INFO: removing WAL file '000000010000000B00000070'
INFO: removing obsolete backup '1494508985643'
INFO: 12 WAL file(s) will be removed
INFO: 4 incremental(s) of backup '1494508985643' will be removed
INFO: removing obsolete incremental backup '1494509186211'
INFO: removing obsolete incremental backup '1494509169195'
INFO: removing obsolete incremental backup '1494509089030'
INFO: removing obsolete incremental backup '1494509064743'
INFO: removing WAL file '000000010000000B0000006F'
INFO: removing WAL file '000000010000000B0000006E'
INFO: removing WAL file '000000010000000B0000006D'
INFO: removing WAL file '000000010000000B0000006C'
INFO: removing WAL file '000000010000000B0000006B'
INFO: removing WAL file '000000010000000B0000006A'
INFO: removing WAL file '000000010000000B00000069'
INFO: removing WAL file '000000010000000B00000068'
INFO: removing WAL file '000000010000000B00000067'
INFO: removing WAL file '000000010000000B00000066'
INFO: removing WAL file '000000010000000B00000065'
INFO: removing WAL file '000000010000000B00000064'
INFO: 2 Unused file(s) will be removed
INFO: removing (unused) file '000000010000000B00000063'
INFO: removing (unused) file
'000000010000000B630000280000000B64000000.mbm'

When the EDB Postgres Backup and Recovery Tool manage command is executed for the ppas96_34 server with the retention policy set to "1 BACKUPS" (a redundancy-based policy), EDB Postgres Backup and Recovery Tool marks the older full backups, and their associated child backups, obsolete, leaving one full backup and its associated child backups as per the retention policy. It is important to note that the retention policy always applies to full backups; the child backups and WALs associated with an obsolete full backup are marked obsolete along with it. As you can see above, after running the bart MANAGE command for the ppas96_34 server, one full backup and all children of that full backup are retained; the other full backups and their associated child backups are removed as obsolete.

To reiterate the point, the retention policy is always applied to full backups. A full backup and its associated incremental backups are treated as a single unit: when a full backup is marked obsolete by a retention policy, the associated incremental backups are also marked obsolete. Changing the backup status to KEEP/NOKEEP follows the same approach; the status change is applied to the full backup and, in turn, applies to all associated incremental backups as well.

The addition of Retention Management to EDB Postgres Backup and Recovery Tool provides DBAs with greater precision in plotting their overall data management strategies. As data volumes continue to soar, such features will become increasingly important. For information on the EDB Postgres Platform and the Management Suite, which includes EDB Postgres Backup and Recovery Tool, contact firstname.lastname@example.org.
3/30/2017 Middleware can have a significant impact on the operation and performance of the database system. Pgpool II is a middleware product that sits between the PostgreSQL server and database clients and is developed and maintained by a committed open source Postgres community that includes EnterpriseDB employees. A major version of pgpool II is released every year, and the latest is pgpool II 3.6. Released in November 2016, pgpool II 3.6 enhances failover capabilities, increases performance, and adds new abilities for setting configuration parameters. There was also a series of bug fixes, which are outlined on the pgpool II wiki. As a major contributor to the pgpool II project, I encourage users to play around with the new functionality and provide feedback. In the meantime, I will explore here the most important new features in pgpool II 3.6. For a complete list of the features and capabilities in pgpool II 3.6, see the documentation.

Failover Management

The feature that excites me the most in this release is the improvement to failover behavior. High availability is a core requirement for OLTP workloads. In fact, the need to maintain connectivity without disruption is perhaps the key requirement, as the ability to continually serve customers is crucial for most businesses. Pgpool II 3.6 improves the failover experience by making the failover behavior more robust and flexible.

The pgpool II show_nodes command shows the primary and slave nodes to which a client is connected via pgpool II. When a new client connects, pgpool II establishes connections with the primary node and one of the slave nodes using the "weighted random sampling" method. Depending on the load balancing configuration, all write queries for the client are sent to the master node and read queries are load balanced to the slave node. Previously, when one of the slave nodes died due to a network or other issue, the failover process was triggered, causing all client sessions to disconnect, even for clients that weren't connected to the slave node that died. Additionally, the health check would try to reconnect to the failed node, and new connections to pgpool II weren't entertained until the health check was complete.

The enhancements in pgpool II 3.6 address these drawbacks in the failover mechanism. Prior to this release, users had to wrestle with all clients being disconnected during failover and new clients being unable to connect while the health check was in progress. Now, if a client is not connected to the slave node that died, the client connection will continue to work without disruption. As mentioned above, the pgpool II show_nodes command can be used to see the primary and secondary nodes used by the current client session. The issue with the failed node can be addressed in parallel, and the node can be added back to the pgpool II cluster using the PCP interface. Additionally, new clients will be permitted access rather than being denied until the health check operation is complete; prior to these enhancements, new client connections were denied until the health check retries were done. Please note that this functionality is only available when the cluster is configured with streaming replication. Pgpool II's own replication mode is not a highly used feature, as most users rely on Postgres streaming replication for replication across nodes.
The failover enhancements should improve the experience for users who have deployed pgpool II as a high availability solution. There will likely be additional improvements in this area in the next set of pgpool II releases.

Performance Enhancements

Performance is of paramount importance for middleware products, as they sit alongside the complex components in the software stack and must not hold them back. Because middleware adds another layer between the client and the database server, its overhead must be minimal to keep up with the other components in the stack. In the release prior to pgpool II 3.6, major performance improvements were added for queries using the extended query protocol. This was addressed in the blog about the new features in the pgpool II 3.5 release, A Little Respect for Pgpool.

Pgpool II 3.6 also enhances performance significantly, this time for large data transfers. Prior to 3.6, when pgpool II flushed data to the network, it issued a write() for every row of data (every "Data Row" message) sent to the frontend. For example, if 10,000 rows had to be transferred, 10,000 write() calls were issued, which is quite expensive for the system. Since a "Command Complete" message is sent after the row data anyway, it is enough to issue a single write() together with the command complete message. With pgpool II 3.6, write() is called only once, with the command complete message. This eliminates the overhead of calling write() for every row transferred, and some unnecessary flushing in handling the command complete message was removed as well. Testing showed performance increases of 47% to 62% in some cases. However, the change will not affect performance for workloads that transfer only a few rows, because those rows must be flushed to the network anyway.

Flexible Configuration

The ability to set pgpool II configuration parameters on the fly was an important missing piece of functionality. Postgres provides the SET/RESET commands that let the user change configuration parameter settings within a session; SET only allows changing parameters that don't require a server restart. Similarly, pgpool II 3.6 has added the PGPOOL SHOW, PGPOOL SET, and PGPOOL RESET commands that let the user change session-level pgpool II configuration parameters, with PGPOOL SHOW displaying the current values. The following pgpool II parameters can be changed using the PGPOOL SET/RESET commands, as the short sketch after this list illustrates:

log_statement
log_per_node_statement
check_temp_table
check_unlogged_table
allow_sql_comments
client_idle_limit
log_error_verbosity
client_min_messages
log_min_messages
client_idle_limit_in_recovery
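To illustrate, here is a minimal sketch of these commands in a psql session connected through pgpool II; the parameter and value are arbitrary examples:

PGPOOL SET client_idle_limit TO 120;   -- session-level change, no restart needed
PGPOOL SHOW client_idle_limit;         -- display the current session value
PGPOOL RESET client_idle_limit;        -- revert to the value from pgpool.conf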
Pgpool II 3.6 also takes care of a very important limitation of pgpool II. Prior to 3.6, the pg_terminate_backend(PID) function would trigger the failover operation even when that was not intended by the user. This happened because the server returned the same error message as it would when the postmaster died, and pgpool II interpreted that as a call for triggering the failover operation. To address this, pgpool II 3.6 now retrieves the PID of the backend intended to be terminated by the pg_terminate_backend call; when the PID is passed as a constant to pg_terminate_backend(), pgpool II executes the command without triggering the failover operation. Please note that the enhancement to pg_terminate_backend handling only works for the simple query protocol; it doesn't work for prepared queries, which use the extended query protocol.

Like all previous major releases of pgpool II, the pgpool II 3.6 release imports the parser changes of the latest PostgreSQL release. The parser changes for PostgreSQL 9.6 were imported as part of pgpool II 3.6, so the new syntax introduced in 9.6 is also recognized by pgpool II. If a particular piece of syntax is not recognized by pgpool II, it is simply sent to the master node instead of being load balanced or otherwise processed.

What's Coming in Pgpool II 3.7

It is too soon to identify the exact enhancements that will be part of the next major release of pgpool II, but development will continue to focus on the stability of pgpool II, and it will address critical as well as non-critical bugs in the product. In terms of new functionality, we in the pgpool development community will look at making further improvements to the failover mechanism and supporting more authentication methods. For updates on pgpool II 3.7 development as well as bug fixes, keep an eye on the community wiki set up for that purpose.

Ahsan Hadi is Senior Director, Product Development, at EnterpriseDB.