Building a Nextcloud HA Cluster, Part 5: Initialize Corosync/Pacemaker


So far we haven't achieved much…
we have only created a few mount points and the necessary DRBD devices.

To put it all together, we now have to create the basic cluster configuration.

Please do yourself a favor and don't skip the logging function, even if you generate the cluster configuration with a tool such as YaST or the Red Hat framework.
Please read the explanatory note at the end of this chapter.

Also think about log rotation: those cluster log files grow very large if you don't rotate them.

So, first of all, we put a logging function in place, on both nodes.

We create the logrotate configuration:

vi /etc/logrotate.d/cluster

The configuration tells logrotate that the log should be rotated once a week.
Five log files should be kept as compressed archives.
The new log file should be created with permissions 660, owned by the user
hacluster and the group haclient.

/var/log/cluster/corosync.log {
    weekly
    rotate 5
    compress
    create 660 hacluster haclient
}

Create SSH keys on both nodes.

Add your keys to the /root/.ssh/authorized_keys file on the partner node.
Ensure that the partner nodes are able to log in to each other.
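A minimal sketch of the key exchange (the hostname suseclusternode1 is taken from the status output later in this chapter; adjust it to your own nodes and repeat in the other direction):

```shell
# Generate a key pair for root, without a passphrase (cluster-internal use).
ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_ed25519

# Append the public key to the partner node's authorized_keys.
ssh-copy-id -i /root/.ssh/id_ed25519.pub root@suseclusternode1

# Verify that a passwordless login works.
ssh root@suseclusternode1 hostname
```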

Create the corosync key on one node:

# cd /etc/corosync

# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.

Copy the generated key to the same directory on the partner node

and chmod the file to 0400.
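For example (corosync-keygen writes its key to /etc/corosync/authkey; suseclusternode1 again stands for your partner node):

```shell
# Copy the key to the partner node, preserving the timestamp,
# then restrict it to root read-only on both sides.
scp -p /etc/corosync/authkey root@suseclusternode1:/etc/corosync/authkey
ssh root@suseclusternode1 'chmod 0400 /etc/corosync/authkey'
chmod 0400 /etc/corosync/authkey
```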

If you look at the configuration file, you will notice that the cluster has a name; it is called webcluster01.
The node IDs are also set as parameters.
If you don't set the node IDs in the configuration file, Corosync/Pacemaker will generate them on its own, which is highly undesirable because you have no control over the assigned values.

totem {
    version: 2
    secauth: off
    cluster_name: webcluster01
    transport: udp

    interface {
        ringnumber: 0
        mcastport: 5405
        ttl: 1
    }
}

nodelist {
    node {
        nodeid: 1
    }

    node {
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 0
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
Log rotation test.

Rotate the log manually:

:~ # logrotate -vf /etc/logrotate.d/cluster
reading config file /etc/logrotate.d/cluster

Handling 1 logs

rotating pattern: /var/log/cluster/corosync.log forced from command line (5 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/cluster/corosync.log
log needs rotating
rotating log /var/log/cluster/corosync.log, log->rotateCount is 5
dateext suffix '-20170822'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
previous log /var/log/cluster/corosync.log.1 does not exist
renaming /var/log/cluster/corosync.log.5.gz to /var/log/cluster/corosync.log.6.gz (rotatecount 5, logstart 1, i 5),
old log /var/log/cluster/corosync.log.5.
copying /var/log/cluster/corosync.log to /var/log/cluster/corosync.log.1
truncating /var/log/cluster/corosync.log

Starting Up the Cluster Services
If you followed all the steps, you are now able to get the cluster up and running.

Start the Pacemaker and the Corosync service on ONE node only!
It might be a good idea to start with the node that is the current DRBD master.

systemctl start pacemaker.service
systemctl start corosync.service

After a short while, crm status (or crm_mon) should show output comparable to this:

Stack: corosync
Current DC: suseclusternode0 (version 1.1.15-5.1-e174ec8) - partition WITHOUT quorum
Last updated: Wed Aug 23 12:28:18 2017
Last change: Wed Aug 23 12:27:35 2017 by hacluster via crmd on suseclusternode0

1 node configured
0 resources configured

Online: [ suseclusternode0 ]

It's absolutely necessary that you don't ignore this: you will face a split brain
if you start the services on both nodes at the same time.

Before you go further, set several properties on the first node to make sure the second node is able to join:

crm configure property no-quorum-policy=ignore
crm configure property stonith-enabled=false
crm configure property default-resource-stickiness=100
crm configure property stonith-action=poweroff

On the second node, you join the now-existing one-node cluster. 🙂

systemctl start pacemaker.service
systemctl start corosync.service

After a while the cluster monitor should present you with comparable output:

Stack: corosync
Current DC: suseclusternode1 (version 1.1.15-5.1-e174ec8) - partition with quorum
Last updated: Wed Aug 23 12:15:42 2017
Last change: Wed Aug 23 12:15:40 2017 by root via cibadmin on suseclusternode1

2 nodes configured
0 resources configured

Online: [ suseclusternode0 suseclusternode1 ]

Once you are done and the cluster is up and running…
reboot your nodes to prepare for the next steps.
After the reboot your file systems will be unmounted and DRBD will be down.

Explanatory note:
There are several distribution-specific tools on SUSE and on CentOS/Red Hat that can generate all of this.
But I have noticed that those tools don't generate the configuration the way I want it, so in general I do it this way.

Don't get me wrong: those tools, especially on SUSE, are well designed and they make sense. To some degree it's a question of personal taste; I prefer doing it this way.
There is nothing wrong with working with the distribution toolsets.

But the way described in this document also works on all other distributions.
