Organizations are deploying Postgres for an increasingly wide array of important applications. As a result, operations teams, developers, and DBAs need a deeper understanding of the tools and options for maintaining high availability, which was the focus of this online workshop.
Join EnterpriseDB’s Marc Linster, SVP of Product Development, and Bobby Bissett, Java Architect, to gain the knowledge you need to maintain a mission-critical database with minimal downtime.
Presenters: Marc Linster and Bobby Bissett
Date: May 9th, 11 AM - 12 PM ET
We hope you had a chance to join us for the webinar! If not, don't worry.
Some great questions were asked during the session. Here are a few:
Did you just insert a value into a slave during the demo?
It probably looked like it, but no. I had two tabs open in that window, which probably wasn't very clear once I started switching between them. In one, I was connected to the VIP address, which followed the master around. In the other tab, I connected directly to a standby to show that it was actually streaming from the master. With that connection, attempting to insert data gives:
edb=# insert into simple values (123);
ERROR: cannot execute INSERT in a read-only transaction
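Another quick way to tell whether the session you're in is on the master or a standby is Postgres's built-in pg_is_in_recovery() function, which returns true on a streaming standby and false on the master:

```
edb=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)
```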
In a cloud environment where we're paying for CPU cycles and/or I/Os, in very rough terms what kind of workload would we expect to see on a witness node?
Very roughly: the smallest machine I have running now is a virtual machine in an OpenStack environment with 2 vCPUs and 4 GB of RAM, and it's way overpowered for an EFM witness node. CPU usage in 'top' alternates between 0.0% and 0.3%. The process runs with 128 MB of memory, but that could be reduced a great deal if desired. All it does is send a small heartbeat TCP message every few seconds and respond to heartbeat messages from the other agents, so the network usage is a constant small stream (when someone runs the 'efm cluster-status' command, each agent responds with roughly one line of text). If you want to test this with minimal setup, a cluster consisting of nothing but witness nodes will run fine, so you can try it without installing any database servers.
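For context, a witness is just another EFM agent whose node is flagged as a witness in its properties file. A minimal sketch of the relevant settings follows; the property names are from EFM's efm.properties file as I recall them, the values are hypothetical, and your EFM version's documentation is the authoritative reference:

```
# efm.properties fragment for a witness node (hypothetical values)
# Address and port this agent uses for heartbeat traffic with the other agents:
bind.address=10.0.0.5:7800
# Marks this node as a witness: it runs no database and only monitors and votes.
is.witness=true
```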
If a VIP is not used, what are the alternative solutions?
Without a VIP, how does the application connect to the database after a standby is promoted to master?
That's more of an architecture question about how to set up your application/database infrastructure. With "bare metal" servers, you could use a VIP, use a load balancer, set up your applications with a multi-host DB connector, reconfigure applications directly after a failover, or probably other options. On cloud providers, using AWS as an example, the typical way is to use an EIP, but you could also use an elastic load balancer, the multi-host DB connector, etc. EFM doesn't prescribe any particular way to connect your applications to your master (and possibly standby) database; it has features intended to work with whatever you choose. What I typically advise people is: "Forget EFM for a minute; how do you *want* your applications to find your database(s)?" Once that's decided, they can work out how it aligns with EFM's use of an IPv4 or IPv6 address to handle the transition.
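To illustrate the multi-host DB connector option mentioned above: the PostgreSQL JDBC driver accepts a comma-separated list of hosts and a targetServerType parameter, so the driver itself finds whichever listed host is currently accepting writes. A small sketch, with hypothetical host and database names (check your driver version's documentation for the supported targetServerType values):

```java
public class MultiHostUrl {
    public static void main(String[] args) {
        // List both the original master and the standby; with
        // targetServerType=primary the driver tries each host in turn
        // and connects only to the one currently accepting writes.
        String url = "jdbc:postgresql://db1.example.com:5432,db2.example.com:5432/edb"
                + "?targetServerType=primary";
        System.out.println(url);
        // In an application you would pass this URL to
        // DriverManager.getConnection(url, user, password).
    }
}
```

After a failover, the same URL keeps working: the driver simply ends up connecting to the newly promoted host, with no application reconfiguration needed.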