
I already wrote how to configure a basic High Availability Ubuntu cluster. The steps to set up a basic cluster are detailed in the previous post, so please read it if you don't know how to get the cluster up and running. The same conventions are used here.
One of the topics I didn't cover in the old post was "application replication/synchronization between the nodes". Now it's time to show you how to keep files in sync between cluster nodes using DRBD. DRBD is a powerful component of the Linux kernel, designed to keep data on node volumes in sync over TCP/IP. In this post we will set up a clustered freeradius service that syncs the /etc/freeradius/clients.conf file between the nodes.
DRBD handles data replication at the block device level; if you want to read more about DRBD capabilities and functionality, have a look here (version 8.4 is the one included in Ubuntu 16.04 LTS, the version tested in this post).
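If you want to double-check which DRBD version your kernel module provides and which drbd-utils package is available in the repositories, a couple of optional sanity checks:

PRIMARY/SECONDARY# modinfo drbd | grep -i ^version
PRIMARY/SECONDARY# apt-cache policy drbd-utils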
For the test I added a small (50MB) dedicated Virtual Hard Disk (VHD) to my VirtualBox nodes; these devices are mapped as /dev/sdb on both nodes.
PRIMARY/SECONDARY# ls -l /dev/sdb
brw-rw---- 1 root disk 8, 16 mar 7 17:59 /dev/sdb
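In case you prefer to add the extra disk from the host command line instead of the VirtualBox GUI, a rough sketch (the VM name node1, the disk file name and the "SATA" controller name are placeholders, adjust them to your setup and repeat for the second node):

HOST$ VBoxManage createmedium disk --filename node1-drbd.vdi --size 50
HOST$ VBoxManage storageattach node1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium node1-drbd.vdi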
Let's start with DRBD installation and configuration on both nodes (PRIMARY / SECONDARY). Please be aware that each node must be able to resolve both hostnames and that both nodes must be kept in time sync (NTP suggested).
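If DNS does not already resolve the node names, a minimal way to satisfy the requirement is a static /etc/hosts entry on each node plus an NTP client (the IP addresses below are placeholders):

PRIMARY/SECONDARY# echo "192.168.56.11 PRIMARY" >> /etc/hosts
PRIMARY/SECONDARY# echo "192.168.56.12 SECONDARY" >> /etc/hosts
PRIMARY/SECONDARY# apt-get install ntp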
PRIMARY/SECONDARY# apt-get install drbd-utils

file: /etc/drbd.conf
[...]
resource freeradius {                        --> this is the DRBD resource name
    options {
        on-no-data-accessible suspend-io;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "yoursharedsecrethere";    --> place your unique shared secret here
    }
    on PRIMARY {
        device /dev/drbd0;                   --> the DRBD device that will be created
        disk /dev/sdb;                       --> the original block device that DRBD syncs
        address PRIMARY_IP:7788;
        meta-disk internal;
    }
    on SECONDARY {
        device /dev/drbd0;
        disk /dev/sdb;
        address SECONDARY_IP:7788;
        meta-disk internal;
    }
}
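Before creating the metadata you can optionally sanity-check the configuration on both nodes; drbdadm will complain if the file cannot be parsed:

PRIMARY/SECONDARY# drbdadm dump freeradius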
On both nodes we create the DRBD metadata (md) for the resource and bring it up.
PRIMARY/SECONDARY# drbdadm create-md freeradius
[...]
==> This might destroy existing data! <==
Do you want to proceed?
[need to type 'yes' to confirm] yes
[...]
New drbd meta data block successfully created.

PRIMARY/SECONDARY# drbdadm up freeradius
We can check the status. We expect both devices connected as Secondary with data Inconsistent. That's OK!
PRIMARY/SECONDARY# drbd-overview
0:freeradius/0 Connected Secondary/Secondary Inconsistent/Inconsistent
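The same state is also exposed by the kernel through /proc/drbd, if you prefer the raw view (the layout is slightly different but shows the same connection state and roles):

PRIMARY/SECONDARY# cat /proc/drbd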
Now, on one of the nodes (PRIMARY), we force the primary role for DRBD.
PRIMARY# drbdadm -- --overwrite-data-of-peer primary freeradius
PRIMARY# drbd-overview
0:freeradius/0 SyncSource Primary/Secondary UpToDate/Inconsistent
[===>................] sync'ed: 23.1% (41024/51160)K
Check the sync status on both nodes until everything is OK (minutes, hours… it depends on the volume size).
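If you don't want to re-run the command by hand, something like this keeps the output refreshed every few seconds:

PRIMARY# watch -n 5 drbd-overview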
PRIMARY# drbd-overview
0:freeradius/0 Connected Primary/Secondary UpToDate/UpToDate

SECONDARY# drbd-overview
0:freeradius/0 Connected Secondary/Primary UpToDate/UpToDate
From the PRIMARY node we can set up a filesystem on our logical DRBD device /dev/drbd0.
PRIMARY# mkfs.ext4 /dev/drbd0
We copy the /etc/freeradius/clients.conf file onto the DRBD logical device /dev/drbd0. This device will be mounted on the active node of the cluster at /opt/freeradius/ (the directory must be created on both nodes).
PRIMARY/SECONDARY# mkdir /opt/freeradius/
PRIMARY# mount /dev/drbd0 /opt/freeradius/
PRIMARY# cp /etc/freeradius/clients.conf /opt/freeradius/
PRIMARY# umount /dev/drbd0
Now we are ready to integrate DRBD into Corosync/Pacemaker, so that crm (the Cluster Resource Manager) can handle, on our behalf, the mount/umount of the DRBD resource in sync with the start/stop of the freeradius and VIP services. Most of the concepts, commands and considerations are the same as in my previous post.
DRBD will be managed as an OCF resource, so first of all we need to disable the drbd service at boot time. We also need to make sure that /dev/drbd0 is not mounted and that the DRBD device is not in use on either node.
PRIMARY/SECONDARY# systemctl disable drbd
PRIMARY/SECONDARY# umount /dev/drbd0
PRIMARY/SECONDARY# drbdadm down freeradius
PRIMARY/SECONDARY# drbd-overview
0:freeradius/0 Unconfigured . .
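An optional double check that the drbd service will not come back at the next boot (it should report disabled):

PRIMARY/SECONDARY# systemctl is-enabled drbd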
We prepare our local freeradius services to use the shared, DRBD-synced clients.conf file.
On both nodes we create a symlink to that file; of course on the active node of the cluster the link will be valid, while on the standby node it will be broken (the DRBD device is not mounted there).
PRIMARY/SECONDARY# mv /etc/freeradius/clients.conf /etc/freeradius/clients.conf.orig
PRIMARY/SECONDARY# ln -s /opt/freeradius/clients.conf /etc/freeradius/
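A quick sanity check of the link; on the node where /opt/freeradius/ is not mounted the target will be missing, so the symlink will show up as dangling:

PRIMARY/SECONDARY# ls -l /etc/freeradius/clients.conf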
We proceed with the resource configuration (the base setup is described here).
PRIMARY# crm configure

// define a DRBD OCF resource for freeradius
crm(live)configure# primitive drbd_freeradius ocf:linbit:drbd \
  params drbd_resource="freeradius" \
  op monitor interval="15s"

// define a master/slave resource that ensures only one Master role is assigned to the DRBD freeradius resource
crm(live)configure# ms ms_drbd_freeradius drbd_freeradius \
  meta master-max="1" master-node-max="1" \
  clone-max="2" clone-node-max="1" \
  notify="true"

// define a Filesystem OCF resource to mount the DRBD disk /dev/drbd0 on /opt/freeradius/
crm(live)configure# primitive fs_freeradius ocf:heartbeat:Filesystem \
  params device="/dev/drbd0" directory="/opt/freeradius/" fstype="ext4"

// define an IPaddr2 OCF resource for the freeradius VIP
crm(live)configure# primitive ip_freeradius ocf:heartbeat:IPaddr2 \
  params ip="VIP" nic="eth0"

// define an LSB resource for freeradius
crm(live)configure# primitive freeradiusd lsb:freeradius

// define a group with the filesystem, VIP and freeradius service
crm(live)configure# group freeradius_group fs_freeradius ip_freeradius freeradiusd

// define a colocation to ensure that the group resources and the DRBD Master run together
crm(live)configure# colocation freeradius_on_drbd inf: freeradius_group ms_drbd_freeradius:Master

// define a resource order to ensure DRBD is promoted first, then the group resources are started
crm(live)configure# order freeradius_after_drbd inf: ms_drbd_freeradius:promote freeradius_group:start

// apply configuration
crm(live)configure# commit
crm(live)configure# exit
Now your cluster is up and running and your clients.conf will stay in sync.
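To verify where the resources are running and to try a manual switchover, a quick sketch using the crm shell (node names are the PRIMARY/SECONDARY placeholders used throughout the post):

PRIMARY# crm status
// move everything to the other node, then re-enable the node
PRIMARY# crm node standby PRIMARY
PRIMARY# crm node online PRIMARY

After the standby command the DRBD Master role, the filesystem, the VIP and freeradius should all move to SECONDARY, and the clients.conf symlink should resolve there.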
Enjoy!
Hi all,
is the HA setup shown above suited to IMAP servers too, and the related Maildir?
Hi, the HA config is independent of the service. You can configure freeradius, nginx (see the other comment) or other services.
Giovanni
See the comment on the other post for the nginx config; for IMAP it's more or less the same.
https://scubarda.wordpress.com/2016/10/30/configure-linux-high-availability-cluster-in-ubuntu-with-corosync/
hi,
I want to make an HA cluster with Ubuntu running a LAMP application.
Is it OK if I use this configuration? Will it also sync the MySQL data?
Thanks
Hi,
to sync 2 MySQL instances you need to configure a MySQL cluster, due to possible DB issues during HA switching.
I think the best thing to do is to have Apache and MySQL (configured as a cluster) active on both nodes at the same time, with the VIP announced from the active node.
You can sync the Apache dirs with DRBD to keep user sessions consistent in case of a cluster switch.
I've never tried it, but I think this is the way.
Giovanni
this is the first howto that really works for me. thank you! (Ubuntu 20.04)