Installing and configuring PACEMAKER as a Cluster Manager – CentOS7

How to install and configure Pacemaker

In this post, I will try to document the steps needed to install and configure Pacemaker as a cluster manager on CentOS 7. Combine this post with the one about DRBD to get yourself a fully automated HA cluster.

Now, we need a cluster manager to make sure we can automatically transition between nodes and switch the master when necessary without manual intervention.

Let’s set up Pacemaker as our cluster manager.

Try the installation without the steps below first; if that does not work, you can disable some security settings and try again, as described in the four steps below:

Disable SELinux and firewalld

  • vim /etc/selinux/config and set SELINUX=disabled (/etc/sysconfig/selinux links to this file)
  • setenforce 0 to switch the running system to permissive mode without a reboot
  • systemctl stop firewalld
  • systemctl disable firewalld
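You can check the result with getenforce; after setenforce 0 it should report Permissive (Disabled only takes effect after a reboot):

$getenforce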

$yum install pacemaker pcs fence-agents-all -y

After the installation you can find the user account hacluster inside /etc/passwd. This account will be used while configuring the cluster.

On both nodes:

Set the password for this account

$passwd hacluster

Start and enable the pcsd service

$systemctl start pcsd

$systemctl enable pcsd

Add entries to the /etc/hosts file on all nodes so that the node names resolve.

Example:
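For instance, assuming the two nodes are node1 and node2 with addresses on the same 192.168.5.0/24 network that the virtual IP below lives on (the addresses here are placeholders, use your own):

192.168.5.101   node1
192.168.5.102   node2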

Now authorize the hacluster user for the nodes in the cluster:

$pcs cluster auth node1 node2 -u hacluster

Now create the cluster:

$pcs cluster setup --name mycluster node1 node2

Once the cluster is created, check the contents of /etc/corosync/corosync.conf

$cat /etc/corosync/corosync.conf -> observe the same file on the secondary node.

Once we authenticate and create the cluster, this config file is distributed to the secondary node automatically by pcs, so both nodes end up with the same configuration.
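The generated file looks roughly like this (a trimmed sketch; the exact contents depend on the pcs/corosync versions and your node names):

totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}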

Now start and enable the cluster:

$pcs cluster start --all

$pcs cluster enable --all

The cluster will be started on all nodes.

$pcs status  -> will display the status of our cluster

Now the cluster is configured. We need to define the resources for the cluster.

In this example we will configure two resources:

  • a virtual IP
  • httpd service

The two nodes have their own separate IPs, but we will also configure a virtual IP. If one node goes down, the other will keep serving requests on this same virtual IP.

To create a resource for virtual IP:

$pcs resource create VirtIP IPaddr2 ip=192.168.5.150 cidr_netmask=24 op monitor interval=30s

To create a resource for apache server:

$pcs resource create Httpd apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
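Note that the apache resource agent monitors httpd through its status page, so the status handler should be enabled on both nodes. A minimal snippet, for example in /etc/httpd/conf.d/status.conf (the file name is just a convention):

<Location /server-status>
    SetHandler server-status
    Require local
</Location>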

Now check the status:

$pcs status

$pcs status resources

Keep the Httpd resource on the same node that holds the virtual IP:

$pcs constraint colocation add Httpd with VirtIP INFINITY

For this two-node test setup we also disable fencing (there is no STONITH device), tell the cluster to keep running when quorum is lost, and make resources stay on the node they are running on:

$pcs property set stonith-enabled=false

$pcs property set no-quorum-policy=ignore

$pcs property set default-resource-stickiness=INFINITY

$pcs status

$pcs status resources

$ip a  -> see the assigned virtual IP on the active node

Now, to test the failover, add separate index files to both nodes under /var/www/html.
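For example (run one of these on each node; the content is arbitrary, it only needs to differ so you can tell which node answered):

$echo "served from node1" > /var/www/html/index.html
$echo "served from node2" > /var/www/html/index.html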

Type the virtual IP address in your browser; the index page from the primary node will be rendered.

$pcs cluster stop node1

Now the index file with the different content from the secondary node will be rendered, showing that Pacemaker and Corosync are doing their jobs.

Of course, this is just a test case to actually see that node2 takes over when node1 is down, which is why we use separate index files. When Pacemaker works together with DRBD we would not create separate HTML files; the same files would be replicated, and the secondary node would serve the same content if the primary node goes down.
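As a rough sketch of that combination (assuming a DRBD resource named r0 backing /dev/drbd0 with an ext4 filesystem mounted on /var/www/html, as in the DRBD post; the names and paths are placeholders), the extra resources and constraints would look something like this:

$pcs resource create WebData ocf:linbit:drbd drbd_resource=r0 op monitor interval=30s
$pcs resource master WebDataClone WebData master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
$pcs resource create WebFS Filesystem device=/dev/drbd0 directory=/var/www/html fstype=ext4
$pcs constraint colocation add WebFS with WebDataClone INFINITY with-rsc-role=Master
$pcs constraint order promote WebDataClone then start WebFS
$pcs constraint colocation add Httpd with WebFS INFINITY
$pcs constraint order WebFS then Httpd

That way Pacemaker promotes DRBD, mounts the replicated filesystem, and only then starts Apache on whichever node is active.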

video ref: https://www.youtube.com/watch?v=PPIwnz2aXbI&t=10s

 

Hope this helps.
Good Luck,
Serdar