{"id":4236,"date":"2019-07-29T15:23:23","date_gmt":"2019-07-29T18:23:23","guid":{"rendered":"https:\/\/www.dbarj.com.br\/2019\/07\/deploying-a-highly-available-mysql-cluster-with-drbd-on-oci\/"},"modified":"2019-08-26T16:48:38","modified_gmt":"2019-08-26T19:48:38","slug":"deploying-a-highly-available-mysql-cluster-with-drbd-on-oci","status":"publish","type":"post","link":"https:\/\/www.dbarj.com.br\/pt-br\/2019\/07\/deploying-a-highly-available-mysql-cluster-with-drbd-on-oci\/","title":{"rendered":"Deploying a highly available MySQL Cluster with DRBD on OCI"},"content":{"rendered":"<p>This tutorial walks you through the process of deploying a MySQL database to Oracle Cloud Infrastructure (<strong>OCI<\/strong>) by using Distributed Replicated Block Device (<strong>DRBD<\/strong>). DRBD is a distributed replicated storage system for the Linux platform.<\/p>\n<p><strong>PS: This post was based in many other articles that I&#8217;ve read over internet and I adapted them for OCI. To avoid having to write from scratch those beautiful definitions of the tools I use here, many statements were simply copied and pasted from those articles. 
My main source was this one from Google Cloud: <a href=\"https:\/\/cloud.google.com\/solutions\/deploying-highly-available-mysql-cluster-with-drbd-on-compute-engine\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/cloud.google.com\/solutions\/deploying-highly-available-mysql-cluster-with-drbd-on-compute-engine<\/a>.<\/strong><\/p>\n<p>The following diagram describes the proposed architecture:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"762\" height=\"530\" class=\"alignnone size-full wp-image-4220 \" src=\"https:\/\/www.dbarj.com.br\/wp-content\/uploads\/2019\/07\/img_5d38e74c9ac5c.png\" alt=\"\" srcset=\"https:\/\/www.dbarj.com.br\/wp-content\/uploads\/2019\/07\/img_5d38e74c9ac5c.png 762w, https:\/\/www.dbarj.com.br\/wp-content\/uploads\/2019\/07\/img_5d38e74c9ac5c-300x209.png 300w, https:\/\/www.dbarj.com.br\/wp-content\/uploads\/2019\/07\/img_5d38e74c9ac5c-720x500.png 720w\" sizes=\"auto, (max-width: 762px) 100vw, 762px\" \/><\/p>\n<p>In this first article, I will focus only on building the <strong>HA solution (AD-1)<\/strong>, where we will have the MySQL Cluster working in Active\/Passive mode with a Quorum Server to avoid DRBD Split-brains. In a second article, I will describe the steps to build the DR solution (AD-2).<\/p>\n<p>Note that we are placing all HA resources in the same AD for better network throughput and lower latency. However, each server will be placed in a <strong>different Fault Domain<\/strong> to ensure high availability. The Floating IP will be a secondary IP on the primary VNIC of the compute, moved automatically using OCI-CLI. I&#8217;m proposing this over a Load Balancer architecture to reduce costs and complexity. 
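<\/p>\n<p>As a sketch of what this IP move looks like (a hypothetical example; the VNIC OCID below is a placeholder, and the real call will be wired into the cluster later in this article), OCI-CLI can reassign a secondary private IP with a single command:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\"># Assign the floating IP to the target node's VNIC.\r\n# --unassign-if-already-assigned moves it even if it is still attached to the old VNIC.\r\n$ oci network vnic assign-private-ip \\\r\n--vnic-id ocid1.vnic.oc1.iad.xxx \\\r\n--ip-address 10.100.2.10 \\\r\n--unassign-if-already-assigned<\/pre>\n<p>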
Also, as this is not an Active\/Active model, an LB is not really necessary.<\/p>\n<p><strong>This article uses the following tools:<\/strong><\/p>\n<ul>\n<li>Oracle Cloud Resources (Instances, Subnets, VCN, etc)<\/li>\n<li>DRBD<\/li>\n<li>Pacemaker<\/li>\n<li>Corosync Cluster Engine<\/li>\n<li>Oracle Linux 7<\/li>\n<li>MySQL 5.7<\/li>\n<li>OCI-CLI (Oracle Command Line Interface)<\/li>\n<\/ul>\n<h2>Why a Quorum Server?<\/h2>\n<p>In a cluster, each node votes for the node that should be the active node\u2014that is, the one that runs MySQL. In a two-node cluster, it takes only one vote to determine the active node. In such a case, the cluster behavior might lead to <a href=\"https:\/\/wikipedia.org\/wiki\/Split-brain_(computing)\" target=\"external\" rel=\"noopener noreferrer\">split-brain<\/a> issues or downtime. Split-brain issues occur when both nodes take control because only one vote is needed in a two-node scenario. Downtime occurs when the node that shuts down is the one configured to always be the primary in case of connectivity loss. If the two nodes lose connectivity with each other, there&#8217;s a risk that more than one cluster node assumes it&#8217;s the active node.<\/p>\n<h2>How does the Cluster work?<\/h2>\n<p>Pacemaker is a cluster resource manager. Corosync is a cluster communication and participation package that&#8217;s used by Pacemaker. In this tutorial, you use DRBD to replicate the MySQL disk from the primary instance to the passive instance. In order for clients to connect to the MySQL cluster, you also deploy an internal virtual IP.<\/p>\n<p>You deploy a DRBD cluster on three compute instances. You install MySQL on two of the instances, which serve as your primary and standby instances. The third instance serves as a DRBD quorum device.<\/p>\n<p>Adding a quorum device prevents this situation. A quorum device serves as an arbiter, where its only job is to cast a vote. 
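<\/p>\n<p>To make the vote math concrete, here is a toy shell illustration (not part of the cluster stack) of why three voters survive a split between the two MySQL nodes:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\"># Majority threshold for 3 voters: floor(3\/2)+1 = 2\r\nnodes=3\r\nrequired=$(( nodes \/ 2 + 1 ))\r\n# During a split, each MySQL node counts itself plus, at most, the quorum\r\n# device; only the side that still reaches the arbiter gets 2 votes.\r\nreachable=2\r\n[ \"$reachable\" -ge \"$required\" ] &amp;&amp; echo \"quorum: reached\" || echo \"quorum: lost\"<\/pre>\n<p>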
This way, in a situation where the <code>Active<\/code> and <code>Passive<\/code> instances cannot communicate, this quorum device node can communicate with one of the two instances and a majority can still be reached.<\/p>\n<h2>Let&#8217;s Start<\/h2>\n<p>Before we begin, some important notes:<\/p>\n<ol>\n<li><strong>I like SELinux<\/strong>. I prefer to add exceptions or adapt it, but never to disable it, especially when the software supports it.<\/li>\n<li><strong>I like iptables and firewalld<\/strong>. Same reasons as above.<\/li>\n<li>People usually don&#8217;t like to waste time setting them up and just disable those amazing security tools. If you also don&#8217;t know how to deal with them, disable them at your own risk. In this article I will describe how to build everything with them <span style=\"color: #008000;\"><strong>enabled<\/strong><\/span>.<\/li>\n<\/ol>\n<p><span style=\"color: #ff0000;\"><strong>PS: Important!<br \/>\n<\/strong><\/span><span style=\"color: #ff0000;\"><strong>When you read:<\/strong><\/span><\/p>\n<ul>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>[root@ALL]$<\/strong>\u00a0 \u00a0 &#8211; Means commands that must be executed on the Quorum Server and MySQL Nodes 1 and 2.<\/span><\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>[root@NODES]$<\/strong>\u00a0 &#8211; Means commands that must be executed on MySQL Nodes 1 and 2.<\/span><\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>[root@NODE_1]$<\/strong> &#8211; Means commands that must be executed on MySQL Node 1 only.<\/span><\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>[root@NODE_2]$<\/strong> &#8211; Means commands that must be executed on MySQL Node 2 only.<\/span><\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>[root@QUORUM]$<\/strong> &#8211; Means commands that must be executed on the Quorum Server 
only.<\/span><\/li>\n<\/ul>\n<p>The following variables must always be declared during the setup, whenever you reboot your instance or sudo to another user:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">### FOLLOWING VARIABLES MUST ALWAYS BE DECLARED\r\n\r\nINST_MYSQL_N1_IP=10.100.2.11\r\nINST_MYSQL_N2_IP=10.100.2.12\r\nINST_QUORUM_IP=10.100.2.13\r\n\r\n# VIRTUAL IP\r\nTARGET_VIP=10.100.2.10\r\n\r\nINST_MYSQL_N1_HOST=rj-mysql-node-1\r\nINST_MYSQL_N2_HOST=rj-mysql-node-2\r\nINST_QUORUM_HOST=rj-mysql-quorum<\/pre>\n<p>Note that I don&#8217;t call one node &#8220;primary&#8221; and the other &#8220;standby&#8221;, as these roles can change very dynamically. So I prefer to number them.<\/p>\n<h3>Building your OCI servers<\/h3>\n<p>The first step is to build your servers. I like to use OCI-CLI for agility, but you may also use the web interface. Just adapt the compartment-id, subnet-id, your ssh public key and the display name. For image-id, I&#8217;ve used the latest one for Linux 7.6. 
The shape can be the minimum one, or higher, depending on your workload.<\/p>\n<p><strong>Note that each compute must be placed in a different Fault Domain.<\/strong><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">$ oci compute instance launch \\\r\n--availability-domain FfLG:US-ASHBURN-AD-1 \\\r\n--compartment-id ocid1.compartment.oc1..xxx \\\r\n--shape VM.Standard.E2.1 \\\r\n--display-name RJ_MYSQL_NODE_1 \\\r\n--image-id ocid1.image.oc1.iad.xxx \\\r\n--metadata '{ \"ssh_authorized_keys\": \"ssh-rsa xxx\" }' \\\r\n--subnet-id ocid1.subnet.oc1.iad.xxx \\\r\n--wait-for-state RUNNING \\\r\n--assign-public-ip false \\\r\n--private-ip ${INST_MYSQL_N1_IP} \\\r\n--fault-domain FAULT-DOMAIN-1 \\\r\n--hostname-label ${INST_MYSQL_N1_HOST}\r\n\r\n$ oci compute instance launch \\\r\n--availability-domain FfLG:US-ASHBURN-AD-1 \\\r\n--compartment-id ocid1.compartment.oc1..xxx \\\r\n--shape VM.Standard.E2.1 \\\r\n--display-name RJ_MYSQL_NODE_2 \\\r\n--image-id ocid1.image.oc1.iad.xxx \\\r\n--metadata '{ \"ssh_authorized_keys\": \"ssh-rsa xxx\" }' \\\r\n--subnet-id ocid1.subnet.oc1.iad.xxx \\\r\n--wait-for-state RUNNING \\\r\n--assign-public-ip false \\\r\n--private-ip ${INST_MYSQL_N2_IP} \\\r\n--fault-domain FAULT-DOMAIN-2 \\\r\n--hostname-label ${INST_MYSQL_N2_HOST}\r\n\r\n$ oci compute instance launch \\\r\n--availability-domain FfLG:US-ASHBURN-AD-1 \\\r\n--compartment-id ocid1.compartment.oc1..xxx \\\r\n--shape VM.Standard.E2.1 \\\r\n--display-name RJ_MYSQL_QUORUM \\\r\n--image-id ocid1.image.oc1.iad.xxx \\\r\n--metadata '{ \"ssh_authorized_keys\": \"ssh-rsa xxx\" }' \\\r\n--subnet-id ocid1.subnet.oc1.iad.xxx \\\r\n--wait-for-state RUNNING \\\r\n--assign-public-ip false \\\r\n--private-ip ${INST_QUORUM_IP} \\\r\n--fault-domain FAULT-DOMAIN-3 \\\r\n--hostname-label ${INST_QUORUM_HOST}<\/pre>\n<p>After the computes are created, attach an external block volume to Node 1 and to Node 2. They should have the same size. 
Finally, run your <span style=\"font-family: 'courier new', courier, monospace;\">iscsiadm<\/span> commands and check whether your nodes can detect them:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@rj-mysql-node-1 ~]$ lsblk\r\nNAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT\r\nsdb       8:16   0   50G  0 disk\r\nsda       8:0    0 46.6G  0 disk\r\n\u251c\u2500sda2    8:2    0    8G  0 part [SWAP]\r\n\u251c\u2500sda3    8:3    0 38.4G  0 part \/\r\n\u2514\u2500sda1    8:1    0  200M  0 part \/boot\/efi\r\n\r\n[root@rj-mysql-node-2 ~]$ lsblk\r\nNAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT\r\nsdb       8:16   0   50G  0 disk\r\nsda       8:0    0 46.6G  0 disk\r\n\u251c\u2500sda2    8:2    0    8G  0 part [SWAP]\r\n\u251c\u2500sda3    8:3    0 38.4G  0 part \/\r\n\u2514\u2500sda1    8:1    0  200M  0 part \/boot\/efi<\/pre>\n<p>Connect to all nodes and add the hostname entries of every server to each one&#8217;s <span style=\"font-family: 'courier new', courier, monospace;\">\/etc\/hosts<\/span>:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ cat \/etc\/hosts\r\n127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\r\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\r\n10.100.2.11 rj-mysql-node-1.demohubrjtestsu.demohub.oraclevcn.com rj-mysql-node-1\r\n10.100.2.12 rj-mysql-node-2.demohubrjtestsu.demohub.oraclevcn.com rj-mysql-node-2\r\n10.100.2.13 rj-mysql-quorum.demohubrjtestsu.demohub.oraclevcn.com rj-mysql-quorum<\/pre>\n<p>We are all set. 
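<\/p>\n<p>Optionally, run a quick sanity check (hostnames as defined above; note that ICMP must be allowed by your VCN security list for ping to work) to confirm each node resolves and reaches the others:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ for h in rj-mysql-node-1 rj-mysql-node-2 rj-mysql-quorum; do ping -c1 -W2 $h &amp;&gt; \/dev\/null &amp;&amp; echo \"$h OK\" || echo \"$h unreachable\"; done<\/pre>\n<p>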
Let&#8217;s begin.<\/p>\n<h3>Installing MySQL<\/h3>\n<p>Connect to Node 1 and Node 2, then download and install the latest Community edition.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODES]$ wget http:\/\/repo.mysql.com\/yum\/mysql-5.7-community\/el\/7\/x86_64\/mysql-community-client-5.7.26-1.el7.x86_64.rpm\r\n[root@NODES]$ wget http:\/\/repo.mysql.com\/yum\/mysql-5.7-community\/el\/7\/x86_64\/mysql-community-common-5.7.26-1.el7.x86_64.rpm\r\n[root@NODES]$ wget http:\/\/repo.mysql.com\/yum\/mysql-5.7-community\/el\/7\/x86_64\/mysql-community-libs-5.7.26-1.el7.x86_64.rpm\r\n[root@NODES]$ wget http:\/\/repo.mysql.com\/yum\/mysql-5.7-community\/el\/7\/x86_64\/mysql-community-libs-compat-5.7.26-1.el7.x86_64.rpm\r\n[root@NODES]$ wget http:\/\/repo.mysql.com\/yum\/mysql-5.7-community\/el\/7\/x86_64\/mysql-community-server-5.7.26-1.el7.x86_64.rpm\r\n\r\n[root@NODES]$ yum -y localinstall mysql-community-*<\/pre>\n<p>Disable the service (Pacemaker will manage it) and open the firewall port.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">[root@NODES]$ systemctl stop mysqld\r\n[root@NODES]$ systemctl disable mysqld\r\n\r\n[root@NODES]$ firewall-cmd --permanent --add-service=mysql\r\n[root@NODES]$ firewall-cmd --reload<\/pre>\n<h3>Installing DRBD<\/h3>\n<p>DRBD 9 is still not available in the yum repo (as of the time of this article), so we will deploy it manually. 
DRBD 9 is a requirement, as it also has the quorum capability at the DRBD layer (not only in Corosync), protecting our cluster from some odd cascade scenarios described here: <a href=\"https:\/\/docs.linbit.com\/docs\/users-guide-9.0\/#s-configuring-quorum\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/docs.linbit.com\/docs\/users-guide-9.0\/#s-configuring-quorum<\/a><\/p>\n<p>Run the commands below on Node 1, Node 2 and the Quorum Server.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\"># Build the folder structure\r\n[root@ALL]$ cd; mkdir -p rpmbuild\/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}\r\n# Getting RPM Build\r\n[root@ALL]$ yum -y install rpm-build<\/pre>\n<p>First, compiling DRBD version 9:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ yum -y install kernel-uek-devel kernel-devel\r\n[root@ALL]$ wget http:\/\/www.linbit.com\/downloads\/drbd\/9.0\/drbd-9.0.19-1.tar.gz\r\n[root@ALL]$ tar zxvf drbd-9.0.19-1.tar.gz\r\n[root@ALL]$ cd drbd-9.0.19-1\/\r\n[root@ALL]$ make kmp-rpm\r\n[root@ALL]$ cd<\/pre>\n<p>Now, compiling drbd-utils:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ yum -y install flex po4a gcc-c++ automake libxslt docbook-style-xsl\r\n[root@ALL]$ wget http:\/\/www.linbit.com\/downloads\/drbd\/utils\/drbd-utils-9.10.0.tar.gz\r\n[root@ALL]$ tar zxvf drbd-utils-9.10.0.tar.gz\r\n[root@ALL]$ cd drbd-utils-9.10.0\/<\/pre>\n<p>Note: We need to add &#8216;<span style=\"font-family: 'courier new', courier, monospace;\">%undefine with_sbinsymlinks<\/span>&#8217; after &#8216;<span style=\"font-family: 'courier new', courier, monospace;\">%bcond_without sbinsymlinks<\/span>&#8217; to avoid the self-conflicting file errors on <span style=\"font-family: 'courier new', courier, monospace;\">\/usr\/sbin\/drbdadm<\/span> (and two other files).<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ sed -i '\/%bcond_without sbinsymlinks\/a 
%undefine with_sbinsymlinks' drbd.spec.in\r\n[root@ALL]$ .\/configure\r\n[root@ALL]$ make rpm\r\n[root@ALL]$ cd<\/pre>\n<p>Now installing only the required packages.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ yum -y localinstall \/root\/rpmbuild\/RPMS\/x86_64\/drbd-utils-9.10.0-1.el7.x86_64.rpm\r\n[root@ALL]$ yum -y localinstall \/root\/rpmbuild\/RPMS\/x86_64\/drbd-bash-completion-9.10.0-1.el7.x86_64.rpm\r\n[root@ALL]$ yum -y localinstall \/root\/rpmbuild\/RPMS\/x86_64\/drbd-pacemaker-9.10.0-1.el7.x86_64.rpm\r\n[root@ALL]$ yum -y localinstall \/root\/rpmbuild\/RPMS\/x86_64\/kmod-drbd-9.0.19_4.14.35_1902.3.1.el7uek.x86_64-1.x86_64.rpm\r\n[root@ALL]$ yum -y localinstall \/root\/rpmbuild\/RPMS\/x86_64\/drbd-udev-9.10.0-1.el7.x86_64.rpm<\/pre>\n<p>As some kernel modules were replaced, to make sure everything is fine, run a reboot. Don&#8217;t forget to reload instance IPs and Hosts variables after connecting again.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ reboot\r\n<\/pre>\n<h3>Configuring DRBD<\/h3>\n<p>Disable DRBD if enabled by default.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ systemctl list-unit-files | grep enabled | grep drbd\r\n[root@ALL]$ systemctl disable drbd<\/pre>\n<p>Open firewall rules for the cluster HA.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ firewall-cmd --permanent --add-port=7788\/tcp\r\n[root@ALL]$ firewall-cmd --permanent --add-service=high-availability\r\n[root@ALL]$ firewall-cmd --reload<\/pre>\n<p>Changing <span style=\"font-family: 'courier new', courier, monospace;\">global_common.conf<\/span> and adding r0 (the main disk resource):<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ cp -avn \/etc\/drbd.d\/global_common.conf \/etc\/drbd.d\/global_common.conf.orig\r\n[root@ALL]$ cat &lt;&lt;EOF &gt; \/etc\/drbd.d\/global_common.conf\r\nglobal {\r\n    
usage-count no;\r\n}\r\ncommon {\r\n    protocol C;\r\n    options {\r\n        quorum majority;\r\n        auto-promote no;\r\n    }\r\n}\r\nEOF\r\n\r\n[root@ALL]$ cat &lt;&lt;EOF &gt; \/etc\/drbd.d\/r0.res\r\nresource r0 {\r\n    meta-disk internal;\r\n    device \/dev\/drbd0;\r\n    net {\r\n        allow-two-primaries no;\r\n        after-sb-0pri discard-zero-changes;\r\n        after-sb-1pri discard-secondary;\r\n        after-sb-2pri disconnect;\r\n        rr-conflict disconnect;\r\n    }\r\n    on ${INST_QUORUM_HOST} {\r\n        node-id 0;\r\n        disk none;\r\n        address ${INST_QUORUM_IP}:7788;\r\n    }\r\n    on ${INST_MYSQL_N1_HOST} {\r\n        node-id 1;\r\n        disk \/dev\/sdb;\r\n        address ${INST_MYSQL_N1_IP}:7788;\r\n    }\r\n    on ${INST_MYSQL_N2_HOST} {\r\n        node-id 2;\r\n        disk \/dev\/sdb;\r\n        address ${INST_MYSQL_N2_IP}:7788;\r\n    }\r\n    connection-mesh {\r\n        hosts ${INST_MYSQL_N1_HOST} ${INST_MYSQL_N2_HOST} ${INST_QUORUM_HOST};\r\n    }\r\n    handlers {\r\n        quorum-lost \"echo b &gt; \/proc\/sysrq-trigger\";\r\n    }\r\n}\r\nEOF<\/pre>\n<p>Allowing DRBD on SELinux:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ semanage permissive -a drbd_t<\/pre>\n<p>Let the service auto-start on Quorum server only. 
On the other nodes, it will be managed by Pacemaker:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@QUORUM]$ systemctl enable drbd<\/pre>\n<p>Prepare disks on Node 1 and Node 2:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODES]$ mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard \/dev\/sdb\r\n[root@NODES]$ dd if=\/dev\/zero of=\/dev\/sdb bs=1k count=1024\r\n[root@NODES]$ drbdadm create-md r0<\/pre>\n<p>Bring the resource up on all of them:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ drbdadm up r0<\/pre>\n<p>Tell Node 1 that it holds the primary data:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ drbdadm primary r0 --force\r\n[root@NODE_1]$ drbdadm -- --overwrite-data-of-peer primary r0\r\n[root@NODE_1]$ mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard \/dev\/drbd0<\/pre>\n<p>You may check the status by running the commands below:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@ALL]$ drbdmon\r\n[root@ALL]$ cat \/sys\/kernel\/debug\/drbd\/resources\/r0\/connections\/*\/0\/proc_drbd<\/pre>\n<h3><strong>Configuring MySQL<\/strong><\/h3>\n<p>We are going to keep the MySQL database, tmp files and config in <span style=\"font-family: 'courier new', courier, monospace;\">\/u01\/<\/span>. 
Let&#8217;s prepare it on both nodes:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">[root@NODES]$ mkdir \/u01\r\n[root@NODES]$ semanage fcontext -a -e \/var\/lib\/mysql \/u01\/mysql\r\n[root@NODES]$ semanage fcontext -a -t tmp_t \/u01\/tmp\r\n[root@NODES]$ semanage fcontext -a -t mysqld_etc_t \/u01\/my.cnf<\/pre>\n<p>Let&#8217;s remove the old config and database files and point to the new ones (which we will create later):<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODES]$ rm -rf \/var\/lib\/mysql\r\n[root@NODES]$ ln -s \/u01\/mysql \/var\/lib\/mysql\r\n\r\n[root@NODES]$ rm -f \/etc\/my.cnf\r\n[root@NODES]$ ln -s \/u01\/my.cnf \/etc\/my.cnf<\/pre>\n<p>Now on Node 1 (our current primary), mount the disk and create the folders:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ mount -o discard,defaults \/dev\/drbd0 \/u01\r\n\r\n[root@NODE_1]$ chown mysql: \/u01\r\n[root@NODE_1]$ mkdir \/u01\/tmp\r\n[root@NODE_1]$ chmod 1777 \/u01\/tmp\r\n[root@NODE_1]$ restorecon -v \/u01\/tmp\r\n\r\n[root@NODE_1]$ mkdir \/u01\/mysql\r\n[root@NODE_1]$ chown mysql: \/u01\/mysql\r\n\r\n[root@NODE_1]$ cat &lt;&lt;EOF &gt; \/u01\/my.cnf\r\n[mysqld]\r\nbind-address = 0.0.0.0  # You may want to listen at localhost at the beginning\r\ndatadir = \/u01\/mysql\r\ntmpdir = \/u01\/tmp\r\nsocket=\/var\/run\/mysqld\/mysql.sock\r\nuser = mysql\r\nsymbolic-links=0\r\nlog-error=\/var\/log\/mysqld.log\r\npid-file=\/var\/run\/mysqld\/mysqld.pid\r\nEOF\r\n\r\n[root@NODE_1]$ chown mysql: \/u01\/my.cnf\r\n[root@NODE_1]$ restorecon -v \/u01\/my.cnf<\/pre>\n<p>Start and stop the service to create the initial files. 
If you have any problem starting it, check for &#8216;success=no&#8217; in audit.log and also the mysqld.log for issues.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ systemctl start mysqld\r\n[root@NODE_1]$ ls -la \/u01\/mysql\/\r\n[root@NODE_1]$ systemctl stop mysqld<\/pre>\n<h3><strong>Installing and Configuring Pacemaker, Corosync and PCSD<\/strong><\/h3>\n<p>On both nodes, install the following tools:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODES]$ yum -y install pcs pacemaker corosync\r\n[root@NODES]$ systemctl enable pcsd\r\n[root@NODES]$ systemctl enable pacemaker\r\n[root@NODES]$ systemctl enable corosync<\/pre>\n<p>Set a common password for the hacluster user on the node servers:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODES]$ echo 'hsh918hs8fah89fh198fh' | passwd --stdin hacluster\r\n<\/pre>\n<p>On Node 1, create the corosync key and share it with Node 2:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ corosync-keygen -l\r\n[root@NODE_1]$ cp -av \/etc\/corosync\/authkey \/home\/opc\/authkey\r\n[root@NODE_1]$ chown opc: \/home\/opc\/authkey\r\n\r\n[YOUR_HOST]$ scp -p -3 opc@${INST_MYSQL_N1_IP}:\/home\/opc\/authkey opc@${INST_MYSQL_N2_IP}:\/home\/opc\/authkey\r\n\r\n[root@NODE_1]$ rm -f \/home\/opc\/authkey\r\n\r\n[root@NODE_2]$ mv \/home\/opc\/authkey \/etc\/corosync\/authkey\r\n[root@NODE_2]$ chown root: \/etc\/corosync\/authkey<\/pre>\n<p>Set <span style=\"font-family: 'courier new', courier, monospace;\">corosync.conf<\/span> for both nodes. 
On Node 2, we need to adapt <span style=\"font-family: 'courier new', courier, monospace;\">bindnetaddr<\/span> to its IP address.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">[root@NODES]$ cat &lt;&lt;EOF  &gt; \/etc\/corosync\/corosync.conf\r\ntotem {\r\n    version: 2\r\n    cluster_name: mysql_cluster\r\n    transport: udpu\r\n    interface {\r\n        ringnumber: 0\r\n        bindnetaddr: ${INST_MYSQL_N1_IP}\r\n        broadcast: yes\r\n        mcastport: 5405\r\n    }\r\n}\r\nquorum {\r\n    provider: corosync_votequorum\r\n    two_node: 1\r\n}\r\nnodelist {\r\n    node {\r\n        ring0_addr: ${INST_MYSQL_N1_HOST}\r\n        name: ${INST_MYSQL_N1_HOST}\r\n        nodeid: 1\r\n    }\r\n    node {\r\n        ring0_addr: ${INST_MYSQL_N2_HOST}\r\n        name: ${INST_MYSQL_N2_HOST}\r\n        nodeid: 2\r\n    }\r\n}\r\nlogging {\r\n    to_logfile: yes\r\n    logfile: \/var\/log\/corosync\/corosync.log\r\n    timestamp: on\r\n}\r\nEOF\r\n\r\n# On Node 2 only\r\n[root@NODE_2]$ sed -i \"s\/${INST_MYSQL_N1_IP}\/${INST_MYSQL_N2_IP}\/\" \/etc\/corosync\/corosync.conf<\/pre>\n<p>On both nodes, adding Pacemaker to corosync:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODES]$ mkdir \/etc\/corosync\/service.d\/\r\n[root@NODES]$ cat &lt;&lt;EOF  &gt; \/etc\/corosync\/service.d\/pcmk\r\nservice {\r\n    name: pacemaker\r\n    ver: 1\r\n}\r\nEOF<\/pre>\n<p>Adding some corosync variables to the config file:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">[root@NODES]$ cp -avn \/etc\/sysconfig\/corosync \/etc\/sysconfig\/corosync.orig\r\n[root@NODES]$ cat &lt;&lt;EOF &gt;&gt; \/etc\/sysconfig\/corosync\r\n\r\n# Path to corosync.conf\r\nCOROSYNC_MAIN_CONFIG_FILE=\/etc\/corosync\/corosync.conf\r\n# Path to authfile\r\nCOROSYNC_TOTEM_AUTHKEY_FILE=\/etc\/corosync\/authkey\r\n# Enable service by default\r\nSTART=yes\r\nEOF<\/pre>\n<p>Create a folder for corosync logs, start the service and check 
if you can see both members:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">[root@NODES]$ mkdir \/var\/log\/corosync\/\r\n[root@NODES]$ systemctl start corosync\r\n[root@NODES]$ corosync-cmapctl | grep members<\/pre>\n<p>Create the Pacemaker alert script below to have better logging and automatic cleanup of your resources in case of timed-out stop actions:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">[root@NODES]$ cat &lt;&lt; 'EOF'  &gt; \/var\/lib\/pacemaker\/drbd_cleanup.sh\r\n#!\/bin\/sh\r\nif [ -z \"$CRM_alert_version\" ]; then\r\n    echo \"$0 must be run by Pacemaker version 1.1.15 or later\"\r\n    exit 0\r\nfi\r\n\r\ntstamp=\"$CRM_alert_timestamp: \"\r\n\r\ncase $CRM_alert_kind in\r\n    resource)\r\n        if [ \"${CRM_alert_interval}\" = \"0\" ]; then\r\n            CRM_alert_interval=\"\"\r\n        else\r\n            CRM_alert_interval=\" (${CRM_alert_interval})\"\r\n        fi\r\n\r\n        if [ \"${CRM_alert_target_rc}\" = \"0\" ]; then\r\n            CRM_alert_target_rc=\"\"\r\n        else\r\n            CRM_alert_target_rc=\" (target: ${CRM_alert_target_rc})\"\r\n        fi\r\n\r\n        case ${CRM_alert_desc} in\r\n            Cancelled) ;;\r\n            *)\r\n                echo \"${tstamp}Resource operation \"${CRM_alert_task}${CRM_alert_interval}\" for \"${CRM_alert_rsc}\" on \"${CRM_alert_node}\": ${CRM_alert_desc}${CRM_alert_target_rc}\" &gt;&gt; \"${CRM_alert_recipient}\"\r\n                if [ \"${CRM_alert_task}\" = \"stop\" ] &amp;&amp; [ \"${CRM_alert_desc}\" = \"Timed Out\" ]; then\r\n                    echo \"Executing recovery...\" &gt;&gt; \"${CRM_alert_recipient}\"\r\n                    pcs resource cleanup ${CRM_alert_rsc}\r\n                fi\r\n                ;;\r\n        esac\r\n        ;;\r\n    *)\r\n        echo \"${tstamp}Unhandled $CRM_alert_kind alert\" &gt;&gt; \"${CRM_alert_recipient}\"\r\n        env | grep CRM_alert &gt;&gt; \"${CRM_alert_recipient}\"\r\n        
;;\r\nesac\r\nEOF\r\n\r\n[root@NODES]$ chmod 0755 \/var\/lib\/pacemaker\/drbd_cleanup.sh\r\n[root@NODES]$ touch \/var\/log\/pacemaker_drbd_file.log\r\n[root@NODES]$ chown hacluster:haclient \/var\/log\/pacemaker_drbd_file.log<\/pre>\n<h3><strong>Configuring PCSD<\/strong><\/h3>\n<p>Ensure the PCSD service is started on both node servers:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">[root@NODES]$ systemctl start pcsd.service<\/pre>\n<p>Now let&#8217;s set up our cluster resources. We will do all the setup from Node 1 only. First, from Node 1, authenticate on both nodes:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ pcs cluster auth --name mysql_cluster ${INST_MYSQL_N1_HOST} ${INST_MYSQL_N2_HOST} -u hacluster\r\n<\/pre>\n<p>Now it&#8217;s time to set up our resources. What the code below does:<\/p>\n<ul>\n<li>The DRBD service will be named <strong>mysql_drbd<\/strong>.<\/li>\n<li>The \/u01 mount point will be named <strong>mystore_fs<\/strong>.<\/li>\n<li>The MySQL database will be named <strong>mysql_database<\/strong>.<\/li>\n<li>The DRBD resource is defined as multi-state (Master\/Slave).<\/li>\n<li><strong>mystore_fs<\/strong> can only start if mysql_drbd is Master on that node.<\/li>\n<li>The start order is defined as <strong>mysql_drbd<\/strong> -&gt; <strong>mystore_fs<\/strong> -&gt; <strong>mysql_database<\/strong><\/li>\n<\/ul>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ pcs cluster start --all\r\n[root@NODE_1]$ pcs status\r\n[root@NODE_1]$ pcs cluster cib mysql_cfg\r\n[root@NODE_1]$ pcs -f mysql_cfg property set stonith-enabled=false\r\n[root@NODE_1]$ pcs -f mysql_cfg property set no-quorum-policy=stop\r\n[root@NODE_1]$ pcs -f mysql_cfg resource defaults resource-stickiness=200\r\n[root@NODE_1]$ pcs -f mysql_cfg resource create mysql_drbd ocf:linbit:drbd \\\r\n    drbd_resource=r0 \\\r\n    op monitor role=Master interval=29 timeout=20 \\\r\n    op monitor role=Slave 
interval=31 timeout=20 \\\r\n    op start timeout=120 \\\r\n    op stop timeout=60\r\n[root@NODE_1]$ pcs -f mysql_cfg resource master mysql_primary mysql_drbd \\\r\n    master-max=1 master-node-max=1 \\\r\n    clone-max=2 clone-node-max=1 \\\r\n    notify=true\r\n[root@NODE_1]$ pcs -f mysql_cfg resource create mystore_fs Filesystem \\\r\n    device=\"\/dev\/drbd0\" \\\r\n    directory=\"\/u01\" \\\r\n    fstype=\"ext4\"\r\n[root@NODE_1]$ pcs -f mysql_cfg constraint colocation add mystore_fs with mysql_primary INFINITY with-rsc-role=Master\r\n[root@NODE_1]$ pcs -f mysql_cfg constraint order promote mysql_primary then start mystore_fs\r\n[root@NODE_1]$ pcs -f mysql_cfg resource create mysql_database ocf:heartbeat:mysql \\\r\n    binary=\"\/usr\/sbin\/mysqld\" \\\r\n    config=\"\/u01\/my.cnf\" \\\r\n    datadir=\"\/u01\/mysql\" \\\r\n    pid=\"\/var\/run\/mysqld\/mysql.pid\" \\\r\n    socket=\"\/var\/run\/mysqld\/mysql.sock\" \\\r\n    additional_parameters=\"--bind-address=0.0.0.0\" \\\r\n    op start timeout=60s \\\r\n    op stop timeout=60s \\\r\n    op monitor interval=20s timeout=30s\r\n[root@NODE_1]$ pcs -f mysql_cfg constraint colocation add mysql_database with mystore_fs INFINITY\r\n[root@NODE_1]$ pcs -f mysql_cfg constraint order mystore_fs then mysql_database\r\n[root@NODE_1]$ pcs -f mysql_cfg alert create id=drbd_cleanup_file description=\"Monitor DRBD events and perform post cleanup\" path=\/var\/lib\/pacemaker\/drbd_cleanup.sh\r\n[root@NODE_1]$ pcs -f mysql_cfg alert recipient add drbd_cleanup_file id=logfile value=\/var\/log\/pacemaker_drbd_file.log\r\n[root@NODE_1]$ pcs cluster cib-push mysql_cfg\r\n[root@NODE_1]$ pcs status<\/pre>\n<p>Now check if mysql database has started. 
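<\/p>\n<p>A quick way to verify (no MySQL credentials needed; we just confirm the resource is running and the port is listening):<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ pcs status resources\r\n[root@NODE_1]$ ss -ltn | grep 3306<\/pre>\n<p>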
If it fails to do so, check for &#8216;success=no&#8217; in <span style=\"font-family: 'courier new', courier, monospace;\">audit.log<\/span> and also the <span style=\"font-family: 'courier new', courier, monospace;\">mysqld.log<\/span> for issues.<\/p>\n<h3><strong>Configuring OCI-CLI<\/strong><\/h3>\n<p>The OCI-CLI utility is included in this build and will be responsible for moving the secondary IP from one VNIC to the other. The user account that is going to execute oci-cli is <span style=\"font-family: 'courier new', courier, monospace;\">hacluster<\/span>. So we are going to deploy it in <span style=\"font-family: 'courier new', courier, monospace;\">\/home\/oracle-cli\/<\/span> and give the proper permissions:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODES]$ mkdir \/home\/oracle-cli\/\r\n[root@NODES]$ chown root: \/home\/oracle-cli\/\r\n[root@NODES]$ chmod 755 \/home\/oracle-cli\/\r\n\r\n[root@NODES]$ wget https:\/\/raw.githubusercontent.com\/oracle\/oci-cli\/master\/scripts\/install\/install.sh\r\n[root@NODES]$ bash install.sh --accept-all-defaults --exec-dir \/home\/oracle-cli\/bin\/ --install-dir \/home\/oracle-cli\/lib\/\r\n\r\n[root@NODES]$ rm -f install.sh\r\n[root@NODES]$ rm -rf \/root\/bin\/oci-cli-scripts\r\n\r\n[root@NODES]$ mkdir \/home\/oracle-cli\/.oci\r\n[root@NODES]$ chown hacluster:haclient \/home\/oracle-cli\/.oci\r\n[root@NODES]$ chmod 700 \/home\/oracle-cli\/.oci<\/pre>\n<p>Now it&#8217;s time to configure it. There are two ways you can make oci-cli calls to the tenancy:<\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Option A &#8211; Creating a new User in your OCI Tenancy\u00a0<\/strong><\/span><\/p>\n<p>One option is to create a user to perform this API activity with the minimum required privileges.<\/p>\n<ol>\n<li>Create a new user in your OCI tenancy. 
Eg: <strong>rj-mysql-user-change-vip<\/strong><\/li>\n<li>Create a new group and add the user to it. Eg: <strong>rj-mysql-group-change-vip<\/strong><\/li>\n<li>Create a new policy with only the minimal privileges required. Eg: <strong>rj-mysql-policy-change-vip<\/strong>\n<ul>\n<li><span style=\"font-family: 'courier new', courier, monospace;\">Allow group <strong>rj-mysql-group-change-vip<\/strong> to use private-ips in compartment <strong>ABC<\/strong><\/span><\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\">Allow group <strong>rj-mysql-group-change-vip<\/strong> to use vnics in compartment <strong>ABC<\/strong><\/span><\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>After the user is set up, proceed with the oci-cli configuration:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ sudo -u hacluster \/home\/oracle-cli\/bin\/oci setup config\r\n\r\n# Answers:\r\n# \/home\/oracle-cli\/.oci\/config\r\n# ocid1.user.oc1..xxx (Your user OCID)\r\n# ocid1.tenancy.oc1..xxx (Your tenancy OCID)\r\n# us-ashburn-1 (Your region)\r\n# Y\r\n# \/home\/oracle-cli\/.oci\r\n# oci_api_key\r\n# f1k10k0fk10k1f (Create a random passphrase, just to give extra security to your private key)\r\n# f1k10k0fk10k1f (Retype it)\r\n# Y\r\n<\/pre>\n<p>Finally, copy and paste the generated public key <span style=\"font-family: 'courier new', courier, monospace;\">\/home\/oracle-cli\/.oci\/oci_api_key_public.pem<\/span> in the <strong>API Keys<\/strong> section of your new user.<\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Option B &#8211; Creating a Dynamic Group in your OCI Tenancy<\/strong><\/span><\/p>\n<p>The second option is to authorize your compute instances to make direct REST calls to OCI using dynamic groups.<\/p>\n<ol>\n<li>Create a new dynamic group in your OCI tenancy with the following rules. 
Eg: <strong>rj-mysql-dyngroup-change-vip<\/strong>\n<ul>\n<li>\n<div class=\"listing-cell\"><span style=\"font-family: 'courier new', courier, monospace;\">instance.id = &#8216;ocid1.instance.oc1.iad.xxxx&#8217; (replace with your Node 1 OCID)<\/span><\/div>\n<\/li>\n<li>\n<div class=\"listing-cell\"><span style=\"font-family: 'courier new', courier, monospace;\">instance.id = &#8216;ocid1.instance.oc1.iad.yyyy&#8217; (replace with your Node 2 OCID)<\/span><\/div>\n<\/li>\n<\/ul>\n<\/li>\n<li>Create a new policy with only the minimal privileges required. Eg: <strong>rj-mysql-policy-change-vip<\/strong>\n<ul>\n<li><span style=\"font-family: 'courier new', courier, monospace;\">Allow dynamic-group <strong>rj-mysql-dyngroup-change-vip<\/strong> to use private-ips in compartment <strong>ABC<\/strong><\/span><\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\">Allow dynamic-group <strong>rj-mysql-dyngroup-change-vip<\/strong> to use vnics in compartment <strong>ABC<\/strong><\/span><\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>After the dynamic group is set up, proceed with the oci-cli configuration (adapt with your tenancy OCID):<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">[root@NODE_1]$ cat &lt;&lt; EOF &gt; \/home\/oracle-cli\/.oci\/config\r\n[DEFAULT]\r\ntenancy=ocid1.tenancy.oc1..xxx\r\nregion=us-ashburn-1\r\nEOF\r\n\r\n[root@NODE_1]$ chmod 600 \/home\/oracle-cli\/.oci\/config\r\n[root@NODE_1]$ chown -R hacluster:haclient \/home\/oracle-cli\/.oci\/<\/pre>\n<h3>OCI-CLI Test<\/h3>\n<p>Now that you have decided on either Option A or Option B, let&#8217;s proceed with the last configuration steps.<\/p>\n<p>Copy the .oci config folder to the other node:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ cp -ar \/home\/oracle-cli\/.oci\/ \/home\/opc\/.oci\/\r\n[root@NODE_1]$ chown -R opc: \/home\/opc\/.oci\/\r\n\r\n[YOUR_HOST]$ scp -r -p -3 
opc@${INST_MYSQL_N1_IP}:\/home\/opc\/.oci\/ opc@${INST_MYSQL_N2_IP}:\/home\/opc\/.oci\/\r\n\r\n[root@NODE_1]$ rm -rf \/home\/opc\/.oci\/\r\n\r\n[root@NODE_2]$ mv \/home\/opc\/.oci\/ \/home\/oracle-cli\/\r\n[root@NODE_2]$ chown -R hacluster:haclient \/home\/oracle-cli\/.oci\/<\/pre>\n<p>Finally, create a script to perform the Virtual IP move and give it the proper permissions on both nodes:<\/p>\n<p>PS: Change the <strong>node1vnic<\/strong> and <strong>node2vnic<\/strong> variables to the OCIDs of the primary VNICs of your nodes.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODES]$ cat &lt;&lt; EOF  &gt; \/home\/oracle-cli\/move_secip.sh\r\n#!\/bin\/sh\r\n##### OCI vNIC variables\r\nocibin=\"\/home\/oracle-cli\/bin\/oci\"\r\nconfigfile=\"\/home\/oracle-cli\/.oci\/config\"\r\nserver=\"\\$(hostname -s)\"\r\nnode1vnic=\"ocid1.vnic.oc1.iad.xxxx\"\r\nnode2vnic=\"ocid1.vnic.oc1.iad.yyyy\"\r\nvnicip=\"${TARGET_VIP}\"\r\n##### OCI\/IPaddr Integration\r\nif [ \"\\${server}\" = \"${INST_MYSQL_N1_HOST}\" ]\r\nthen\r\n   \\${ocibin} --config-file \\${configfile} network vnic assign-private-ip --unassign-if-already-assigned --vnic-id \\${node1vnic} --ip-address \\${vnicip}\r\nelse\r\n   \\${ocibin} --config-file \\${configfile} network vnic assign-private-ip --unassign-if-already-assigned --vnic-id \\${node2vnic} --ip-address \\${vnicip}\r\nfi\r\nEOF\r\n\r\n[root@NODES]$ chmod 700 \/home\/oracle-cli\/move_secip.sh\r\n[root@NODES]$ chown hacluster:haclient \/home\/oracle-cli\/move_secip.sh<\/pre>\n<p>Before testing this script, note that if you followed this article and granted only the minimum required privileges in your policy (<span style=\"font-family: 'courier new', courier, monospace;\">use private-ips and use vnics<\/span>), those privileges will allow you to move the Private IP from one node to the other, but not to create the private IP initially. 
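<\/p>\n<p>If you prefer the CLI over the web console for that initial one-off assignment, a privileged profile (not the restricted one configured above; the OCID below is a placeholder) could run the same call the script uses:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[YOUR_HOST]$ oci network vnic assign-private-ip --vnic-id ocid1.vnic.oc1.iad.xxxx --ip-address ${TARGET_VIP}<\/pre>\n<p>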
So, first assign the IP manually to any of the nodes using the OCI web console with a privileged user. Then you can test that the IP moves:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ \/home\/oracle-cli\/move_secip.sh # Will move Virtual IP to Node 1\r\n[root@NODE_2]$ \/home\/oracle-cli\/move_secip.sh # Will move Virtual IP to Node 2<\/pre>\n<h3>Adding VIP script to PCSD<\/h3>\n<p>Our last step before testing is adding this script call to our configuration.<\/p>\n<p>First we create another script on both nodes that will call our <span style=\"font-family: 'courier new', courier, monospace;\">move_secip.sh<\/span> under specific conditions:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODES]$ cat &lt;&lt; 'EOF'  &gt; \/var\/lib\/pacemaker\/ip_move.sh\r\n#!\/bin\/sh\r\nif [ -z \"$CRM_alert_version\" ]; then\r\n    echo \"$0 must be run by Pacemaker version 1.1.15 or later\"\r\n    exit 0\r\nfi\r\n\r\nif [ \"${CRM_alert_kind}\" = \"resource\" -a \"${CRM_alert_target_rc}\" = \"0\" -a \"${CRM_alert_task}\" = \"start\" -a \"${CRM_alert_rsc}\" = \"mysql_VIP\" ]\r\nthen\r\n    tstamp=\"$CRM_alert_timestamp: \"\r\n    echo \"${tstamp}Moving IP\" &gt;&gt; \"${CRM_alert_recipient}\"\r\n    \/home\/oracle-cli\/move_secip.sh &gt;&gt; \"${CRM_alert_recipient}\" 2&gt;&gt; \"${CRM_alert_recipient}\"\r\nfi\r\nEOF\r\n\r\n[root@NODES]$ chmod 0755 \/var\/lib\/pacemaker\/ip_move.sh\r\n\r\n[root@NODES]$ touch \/var\/log\/pacemaker_ip_move.log\r\n[root@NODES]$ chown hacluster:haclient \/var\/log\/pacemaker_ip_move.log<\/pre>\n<p>Finally, connect to Node 1 and add <strong>ip_move.sh<\/strong> as a triggered alert:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ pcs cluster cib mysql_cfg\r\n[root@NODE_1]$ pcs -f mysql_cfg resource create mysql_VIP ocf:heartbeat:IPaddr2 ip=${TARGET_VIP} cidr_netmask=28 op monitor interval=20s\r\n[root@NODE_1]$ pcs -f mysql_cfg alert create 
id=ip_move description=\"Move IP address using oci-cli\" path=\/var\/lib\/pacemaker\/ip_move.sh\r\n[root@NODE_1]$ pcs -f mysql_cfg alert recipient add ip_move id=logfile_ip_move value=\/var\/log\/pacemaker_ip_move.log\r\n[root@NODE_1]$ pcs -f mysql_cfg constraint colocation add mysql_VIP with mysql_database INFINITY\r\n[root@NODE_1]$ pcs -f mysql_cfg constraint order mysql_database then mysql_VIP\r\n[root@NODE_1]$ pcs cluster cib-push mysql_cfg<\/pre>\n<h3>Testing the HA architecture<\/h3>\n<p>Our last step is to test everything we have built so far.<\/p>\n<p>You can stop and start the cluster on each node and watch the resources move:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[root@NODE_1]$ pcs cluster stop\r\n[root@NODE_1]$ pcs cluster start\r\n[root@NODE_2]$ pcs cluster stop\r\n[root@NODE_2]$ pcs cluster start<\/pre>\n<p>Or you can trigger a resource move:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">[ANY_NODE]$ pcs resource move --master mysql_primary rj-mysql-node-1\r\n[ANY_NODE]$ pcs resource clear mysql_primary<\/pre>\n<p>Below are the failover timings I measured with the setup above.<\/p>\n<h3>Failure Scenarios<\/h3>\n<p>Time to test the auto-failover. 
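<\/p>\n<p>While each scenario below runs, you can watch the failover progress from a surviving node (<span style=\"font-family: 'courier new', courier, monospace;\">drbdadm status<\/span> is the DRBD 9 status command):<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\">[ANY_NODE]$ watch -n1 pcs status\r\n[ANY_NODE]$ drbdadm status r0<\/pre>\n<p>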
The following commands were executed in each test:<\/p>\n<ul>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>Soft Stop<\/strong> -&gt; $ reboot (services were disabled so they would not auto-start)<\/span><\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>Hard Stop<\/strong> -&gt; $ echo b &gt; \/proc\/sysrq-trigger (services were disabled so they would not auto-start)<\/span><\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>Network Isolation<\/strong>:<\/span>\n<ul>\n<li><span style=\"font-family: 'courier new', courier, monospace;\">$ iptables -I INPUT 1 -s 10.100.2.X\/32 -j DROP; <\/span><span style=\"font-family: 'courier new', courier, monospace;\">iptables -I INPUT 1 -d 10.100.2.X\/32 -j DROP;<\/span><\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\">$ firewall-cmd --reload (to clear the two rules above and re-establish communication)<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>For every scenario, the starting configuration is:<\/p>\n<ul>\n<li><strong>Node 1 is <span style=\"color: #339966;\">Active Primary<\/span>.<\/strong><\/li>\n<li><strong>Node 2 is <span style=\"color: #ffcc00;\">Passive Primary<\/span>.<\/strong><\/li>\n<\/ul>\n<ol>\n<li><strong><span style=\"font-family: 'courier new', courier, monospace;\">Node 2 soft stop -&gt;\u00a0<\/span><\/strong><strong><span style=\"color: #339966;\">Nothing changes.<\/span><\/strong> Node 2 will tell node 1 it is leaving the cluster, and node 1 will still be the primary with no delay, as it still has quorum.<\/li>\n<li><strong><span style=\"font-family: 'courier new', courier, monospace;\">Node 2 hard stop -&gt;\u00a0<\/span><\/strong><span style=\"color: #0000ff;\"><strong>IO in node 1 freezes for about 10 secs.<\/strong><\/span> Node 2 <span style=\"text-decoration: underline;\"><strong>won&#8217;t<\/strong><\/span> tell node 1 it is shutting down, so node 1 will wait a while until it declares node 2 is 
not in sync anymore.<\/li>\n<li><strong><span style=\"font-family: 'courier new', courier, monospace;\">Node 1 soft stop -&gt;\u00a0<\/span><\/strong><span style=\"color: #0000ff;\"><strong>Node 2 will be our new Primary with all services up after 20 secs.<\/strong><\/span> Node 1 will tell node 2 it is leaving the cluster, and node 2 will be converted into your new primary.<\/li>\n<li><strong><span style=\"font-family: 'courier new', courier, monospace;\">Node 1 hard stop -&gt;\u00a0<\/span><\/strong><span style=\"color: #0000ff;\"><strong>Node 2 will be our new Primary with all services up after 50 secs.<\/strong><\/span> Node 1 <span style=\"text-decoration: underline;\"><strong>won&#8217;t<\/strong><\/span> tell node 2 it is shutting down, and node 2 will be converted into primary after a timeout, as it now has quorum.<\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>Quorum Server soft\/hard stop -&gt; <\/strong><\/span><strong><span style=\"color: #339966;\">Nothing changes. <\/span><\/strong>Node 1 will continue to be your primary and node 2, the standby.<\/li>\n<li><strong><span style=\"font-family: 'courier new', courier, monospace;\">Node 1 gets isolated -&gt;\u00a0<\/span><\/strong><span style=\"color: #0000ff;\"><strong>Transactions in node 1 will be suspended. Node 2 will be our new Primary with all services up after 20 secs.<\/strong><\/span> As node 1 can talk to neither node 2 nor the tiebreaker node, it loses the quorum to continue the operation and will be rebooted. 
Node 2 will be our new primary after about 20 secs.<\/li>\n<li><strong><span style=\"font-family: 'courier new', courier, monospace;\">Node 2 gets isolated -&gt;\u00a0<\/span><\/strong>Same as &#8220;<strong><span style=\"font-family: 'courier new', courier, monospace;\">Node 2 hard stop&#8221;.<\/span><\/strong><\/li>\n<li><strong><span style=\"font-family: 'courier new', courier, monospace;\">Quorum Server gets isolated -&gt;\u00a0<\/span><\/strong>Same as &#8220;<span style=\"font-family: 'courier new', courier, monospace;\"><strong>Quorum Server hard stop<\/strong><\/span><strong><span style=\"font-family: 'courier new', courier, monospace;\">&#8221;.<\/span><\/strong><\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>Node 1 and node 2 connection is broken -&gt;\u00a0<\/strong><\/span><span style=\"color: #0000ff;\"><strong>IO in node 1 freezes for about 10 secs.<\/strong><\/span> Both nodes will try to get quorum to become the new primary. However, the tiebreaker server will tell Node 2 it is Outdated. PCS will try to start the service on Node 2, but DRBD won&#8217;t allow it with the Outdated flag.<\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>Node 2 and Quorum Server connection is broken -&gt;<\/strong> <\/span><strong><span style=\"color: #339966;\">Nothing changes. <\/span><\/strong>Node 1 will continue to be your primary and node 2, the standby.<\/li>\n<li><span style=\"font-family: 'courier new', courier, monospace;\"><strong>Node 1 and Quorum Server connection is broken -&gt; <\/strong><\/span><strong><span style=\"color: #339966;\">Nothing changes. 
<\/span><\/strong>Node 1 will continue to be your primary and node 2, the standby.<\/li>\n<\/ol>\n<p>Imagine now this scenario:<\/p>\n<ul>\n<li>A is Node 1 &#8211; Primary<\/li>\n<li>B is Node 2 &#8211; Standby<\/li>\n<li>C is the Quorum Server<\/li>\n<\/ul>\n<p><img decoding=\"async\" src=\"https:\/\/docs.linbit.com\/ug-src\/users-guide-9.0\/images\/quorum-tiebreaker-disconnect-case2a.png\" alt=\"quorum tiebreaker disconnect case2a\" \/><\/p>\n<p id=\"ZpjVdCi\">Node 2 suddenly gets isolated. In this case, the tiebreaker node forms a partition with the primary node. The primary therefore keeps quorum, while the secondary becomes outdated. <strong>Note that the secondary compute doesn&#8217;t know it is outdated. Its state will still be &#8220;<span style=\"color: #ff0000;\">UpToDate<\/span>&#8221;, but regardless it cannot be promoted to primary because it lacks quorum.<\/strong><\/p>\n<p>The application is still running on the primary. However, after running in production for a while, node 1 suddenly gets isolated, and a bit later node 2 rejoins the cluster. Note that node 2 would now have quorum and, with the status <strong>UpToDate<\/strong>, it could become primary again. This would result in data loss. However, this can&#8217;t happen because <strong>a node that has lost quorum cannot regain quorum by connecting to a diskless node<\/strong>. Thus, in this case, no node has quorum and the cluster halts. We are safe. =]<\/p>\n<p><code>rj-mysql-node-2 kernel: drbd r0: 1 of 2 nodes visible, need 2 for quorum<br \/>\nrj-mysql-node-2 kernel: drbd r0: State change failed: No quorum<br \/>\nrj-mysql-node-2 kernel: drbd r0: Failed: role( Secondary -&gt; Primary )<br \/>\nrj-mysql-node-2 drbd(mysql_drbd)[15728]: ERROR: r0: Called drbdadm -c \/etc\/drbd.conf primary r0<br \/>\nrj-mysql-node-2 drbd(mysql_drbd)[15728]: ERROR: r0: Exit code 11<\/code><\/p>\n<p>And the scenario above is the main reason I&#8217;m using DRBD 9 instead of DRBD 8 for my cluster. 
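<\/p>\n<p>This protection comes from the DRBD 9 quorum options in the resource configuration; a minimal sketch (option names from the drbd.conf v9 man page; your resource file will contain more settings):<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">resource r0 {\r\n    options {\r\n        quorum majority;\r\n        on-no-quorum io-error;\r\n    }\r\n    # volumes, hosts and connection sections omitted\r\n}<\/pre>\n<p>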
With DRBD 9, my tiebreaker server acts as a diskless DRBD quorum node. The Corosync quorum device doesn&#8217;t have the kind of intelligence that the DRBD quorum has, and wouldn&#8217;t let us avoid this odd scenario.<\/p>\n<p>Hope you enjoyed. Soon I will write another article on how to add the MySQL Replica DR into this configuration.<\/p>\n<p>Some useful links:<\/p>\n<ul>\n<li><a href=\"https:\/\/docs.linbit.com\/docs\/users-guide-9.0\/#s-configuring-quorum\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/docs.linbit.com\/docs\/users-guide-9.0\/#s-configuring-quorum<\/a><\/li>\n<li><a href=\"https:\/\/www.tecmint.com\/setup-drbd-storage-replication-on-centos-7\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.tecmint.com\/setup-drbd-storage-replication-on-centos-7\/<\/a><\/li>\n<li><a href=\"https:\/\/docs.linbit.com\/man\/v9\/drbd-conf-5\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/docs.linbit.com\/man\/v9\/drbd-conf-5\/<\/a><\/li>\n<li><a href=\"https:\/\/www.linbit.com\/en\/cheap-votes-drbd-diskless-quorum\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.linbit.com\/en\/cheap-votes-drbd-diskless-quorum\/<\/a><\/li>\n<li><a href=\"https:\/\/docs.linbit.com\/docs\/users-guide-8.3\/p-performance\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/docs.linbit.com\/docs\/users-guide-8.3\/p-performance\/<\/a><\/li>\n<li><a href=\"https:\/\/www.lisenet.com\/2016\/activepassive-mysql-high-availability-pacemaker-cluster-with-drbd-on-centos-7\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.lisenet.com\/2016\/activepassive-mysql-high-availability-pacemaker-cluster-with-drbd-on-centos-7\/<\/a><\/li>\n<\/ul>\n<b>Have you enjoyed? 
Please leave a comment or give a \ud83d\udc4d!<\/b>","protected":false}}