Upgrade an Existing EDB Failover Manager 3.1 Cluster to EFM 3.2, Explained in 5 Steps.


The new EDB Failover Manager (EFM) 3.2 includes exciting new features such as better integration with Pgpool and other load balancers through script hooks, a configurable minimum severity level for notification scripts, and improved node information logging that makes agent logs more readable. To take advantage of these and other new features, you will often need to upgrade an existing EFM cluster from an older version to EFM 3.2.
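For example, once a cluster is running 3.2, the new load-balancer script hooks can be wired into the <efm-clustername>.properties file. The script paths below are purely illustrative placeholders (not shipped defaults) for attach/detach scripts you would write for your own Pgpool or load-balancer setup:

	# Hypothetical scripts run when EFM attaches/detaches a node at the load balancer
	script.load.balancer.attach=/usr/local/bin/pgpool_attach.sh
	script.load.balancer.detach=/usr/local/bin/pgpool_detach.sh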

 

Upgrading an existing EFM cluster to the new version is quite easy. EFM 3.2 provides an "upgrade-conf" option in the "efm" utility. The "upgrade-conf" option reads the existing <efm-clustername>.properties and <efm-clustername>.nodes configuration files and creates new versions of them with the latest changes applied. Using those upgraded configuration files, the EFM 3.2 service can be started as the last step of the upgrade process.

This article covers the EFM cluster upgrade procedure. It does not cover the configuration of EFM or streaming replication.

 

Here are the high-level steps to upgrade an existing EFM 3.1 cluster to EFM 3.2:

  1. At this point, we assume the existing EFM 3.1 cluster is configured and the agents on all nodes are up and running.
  2. Install EFM 3.2 binaries on each node of the EFM cluster.
  3. Run the EFM 3.2 "efm upgrade-conf" command on each node in the EFM cluster.
  4. Stop the EFM 3.1 service/agents on all nodes.
  5. Start the EFM 3.2 service/agents on all nodes.

Now, let's see these steps in action.

 

Step 1. Check the existing EFM 3.1 cluster status.

[root@masterdb ~]# /usr/edb/efm-3.1/bin/efm cluster-status efm
Cluster Status: efm
	Agent Type  Address              Agent  DB       VIP
	-----------------------------------------------------------------------
	Standby     172.31.32.122        UP     UP
	Standby     172.31.34.34         UP     UP
	Master      172.31.41.249        UP     UP
Allowed node host list:
	172.31.34.34 172.31.32.122 172.31.41.249
Membership coordinator: 172.31.34.34
Standby priority host list:
	172.31.34.34 172.31.32.122
Promote Status:
	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.31.41.249        0/190000D0
	Standby     172.31.34.34         0/190000D0
	Standby     172.31.32.122        0/190000D0
	Standby database(s) in sync with master. It is safe to promote.

From the efm cluster-status output above, we can see that there is one master and two standbys in the EFM cluster.

 

Step 2. Install the latest EFM 3.2 binaries on the master and both standby nodes. EFM binaries are distributed as RPMs; to download them, follow the instructions mentioned here.

          yum install edb-efm32
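To confirm that the 3.2 package has been installed alongside the existing 3.1 package rather than replacing it, a quick RPM query on each node is enough; both the 3.1 and 3.2 EFM packages should be listed:

	rpm -qa | grep -i efm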

Step 3. After installing the new EFM 3.2 binaries on all three nodes, invoke the "/usr/edb/efm-3.2/bin/efm upgrade-conf" option to upgrade the existing <clustername>.properties & <clustername>.nodes configuration files and create new versions of them.

efm "upgrade-conf" description and usage syntax:

# /usr/edb/efm-3.2/bin/efm --help
		<trimmed other options>
		upgrade-conf
			Will create a 3.2 compatible .properties and .nodes file based on existing files.
			Must be run with root privileges for default configuration.
			Full command: efm upgrade-conf <cluster name>
			To upgrade files from a non-sudo configuration, include the -source switch to
			specify the path to the files. The new files will be written to the directory from
			which the command is invoked, and will be owned by the user executing the command.
			Full command: efm upgrade-conf <cluster name> -source <directory>
Syntax:
	/usr/edb/efm-3.2/bin/efm upgrade-conf <efm-cluster-name>
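If the configuration files were created in a non-sudo setup and live outside /etc/edb, the -source switch shown in the help output can be used instead; the directory below is only an example path:

	/usr/edb/efm-3.2/bin/efm upgrade-conf <efm-cluster-name> -source /home/efm/config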

Execute "efm upgrade-conf" command on each node of efm cluster.

	[root@masterdb efm-3.2]# /usr/edb/efm-3.2/bin/efm upgrade-conf efm
	Checking directory /etc/edb/efm-3.1
	Processing efm.properties file
	The following properties were added in addition to those in previous installed version:
		virtualIp.interface
		script.load.balancer.attach
		script.load.balancer.detach
		lock.dir
		log.dir
	Checking directory /etc/edb/efm-3.1
	Processing efm.nodes file
	Upgrade of files is finished. The owner and group for properties and nodes files have been set as 'efm'.

After executing the command, we can see from the directory listings below that a new set of configuration files has been created under the EFM 3.2 configuration directory.

[root@master ]# ls -lrth /etc/edb/efm-3.1/
-rw-r--r--. 1 efm efm   180 Jun 18 19:00 efm.nodes
-rw-r--r--. 1 efm efm 16087 Jun 17 08:07 efm.properties
-rw-r--r--. 1 efm efm   139 Feb 22 02:20 efm.nodes.in
-rw-r--r--. 1 efm efm 15771 Feb 22 02:20 efm.properties.in

[root@master ]# ls -lrth /etc/edb/efm-3.2/
-rw-r--r--. 1 root root 18K Jul 23 06:43 efm.properties.in
-rw-r--r--. 1 root root 139 Jul 23 06:43 efm.nodes.in
-rw-r--r--. 1 efm  efm  18K Oct  5 13:34 efm.properties
-rw-r--r--. 1 efm  efm  195 Oct  5 13:34 efm.nodes
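Before switching services, it can be useful to review exactly what changed between the two versions. A simple diff of the old and new properties files highlights the newly added parameters reported by upgrade-conf above:

	diff /etc/edb/efm-3.1/efm.properties /etc/edb/efm-3.2/efm.properties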

Step 4. Stop the EFM 3.1 agents/service on all the nodes using either of the following two methods:

Method 1: Using OS service/unit script
	systemctl stop efm-3.1.service
Method 2: Using EFM utility "stop-cluster" option
	/usr/edb/efm-3.1/bin/efm stop-cluster <efm-cluster-name>

The recommended method is the "stop-cluster" option of the efm utility, since it stops the agents on all nodes with a single command.

[root@master ]# /usr/edb/efm-3.1/bin/efm stop-cluster efm
Stop cluster command sent to 3 nodes.
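Whichever method is used, it is worth confirming that no 3.1 agent is still running on any node before starting the new version, for example:

	systemctl status efm-3.1.service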

Step 5. Now, start the EFM 3.2 agents/service on all the nodes.

[root@master ~]# systemctl start efm-3.2.service
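If the 3.1 service was enabled to start at boot, remember to switch that over as well so the old agent does not come back after a reboot:

	systemctl disable efm-3.1.service
	systemctl enable efm-3.2.service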

After starting the EFM agents, we can check the cluster status using the latest binaries.

[root@masterdb ~]# /usr/edb/efm-3.2/bin/efm cluster-status efm
Cluster Status: efm
	Agent Type  Address              Agent  DB       VIP
	-----------------------------------------------------------------------
	Standby     172.31.32.122        UP     UP
	Standby     172.31.34.34         UP     UP
	Master      172.31.41.249        UP     UP
Allowed node host list:
	172.31.34.34 172.31.32.122 172.31.41.249
Membership coordinator: 172.31.34.34
Standby priority host list:
	172.31.34.34 172.31.32.122
Promote Status:
	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.31.41.249        0/190001B0
	Standby     172.31.34.34         0/190001B0
	Standby     172.31.32.122        0/190001B0
	Standby database(s) in sync with master. It is safe to promote.

That's all! We have completed the EFM cluster upgrade in five simple and easy steps.

 

--Raghav