Oracle 19c RAC on Google Cloud Platform

If you are looking for a "quick and dirty" installation guide for Oracle® Database 19c and Oracle Real Application Clusters suitable for training, development and test purposes, this is the right place to start!

In this series of articles we will go through the installation steps for a single node of an Oracle Database 19c RAC on a CentOS 7 VM hosted on Google Cloud Platform.

Some considerations: please do not follow these instructions to implement your production environment. On the one hand, you can certainly take advantage of the Google Cloud Free Tier offer (3-month free trial with $300 credit) to play around and test Oracle Database and RAC; on the other hand, Oracle Database does not appear to be fully supported on GCP. This can be deduced from the official Oracle document Licensing Oracle Software in the Cloud Computing Environment, where both Microsoft Azure and Amazon AWS are clearly listed, whereas there is no trace of Google Cloud Platform.

Well, let's start with Part 1: Oracle RAC prerequisites.

1) Go to https://console.cloud.google.com/ and create a new project; here we named it Oracle19cRAC. Because Oracle Clusterware and the related Grid Infrastructure use a private network link to coordinate and synchronise the nodes of the cluster, we need to create a new VPC network that we will use later to create an additional network card for the private node interconnect. Its settings are listed below, followed by an equivalent gcloud sketch.


Name: oracle-rac-priv-sub-network

IP address range: 192.168.0.0/16

Gateway: 192.168.0.1
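
If you prefer the command line, an equivalent sketch with the gcloud CLI (run from Cloud Shell or your workstation) could look like the following. The network name oracle-rac-priv-network is an assumption, since only the subnet name appears above; adapt project, region and names to your setup.

$ gcloud compute networks create oracle-rac-priv-network --subnet-mode=custom   # custom-mode VPC for the private interconnect (assumed name)
$ gcloud compute networks subnets create oracle-rac-priv-sub-network \
    --network=oracle-rac-priv-network \
    --range=192.168.0.0/16 \
    --region=us-east1                                                            # region matching the us-east1-b zone used later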

2) Go to the Compute Engine menu and create a new VM instance; here we named it oracle-rac. Press the Create button.

  • Please select at least n1-standard-1 as "Machine type" and CentOS 7 as the boot image, as described in the first image below.

  • You need two network interfaces: a public one (nic0 in the default VPC network) and a private one (nic1) bound to the private network created in the previous step.

  • Create 3 additional persistent disks (an equivalent gcloud sketch follows this list):

      1. oracle-rac-voting1: 10 GB

      2. oracle-rac-fra1: 30 GB

      3. oracle-rac-data1: 50 GB
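
For reference, a hedged gcloud sketch of creating and attaching these disks from Cloud Shell; the sizes and names match the list above, while the zone us-east1-b is simply the one we use in this article:

$ gcloud compute disks create oracle-rac-voting1 --size=10GB --zone=us-east1-b
$ gcloud compute disks create oracle-rac-fra1    --size=30GB --zone=us-east1-b
$ gcloud compute disks create oracle-rac-data1   --size=50GB --zone=us-east1-b
$ gcloud compute instances attach-disk oracle-rac --disk=oracle-rac-voting1 --zone=us-east1-b
$ gcloud compute instances attach-disk oracle-rac --disk=oracle-rac-fra1    --zone=us-east1-b
$ gcloud compute instances attach-disk oracle-rac --disk=oracle-rac-data1   --zone=us-east1-b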

3) Connect to the brand new VM oracle-rac via SSH in the GCP console, then switch to root and perform the following system tasks to enable the next Oracle installation steps.


# sudo su -
# yum update

Disable SELinux: edit /etc/selinux/config and set SELINUX=disabled.
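
If you prefer a non-interactive approach, a small sketch that should achieve the same result with standard tools (verify the file afterwards):

# setenforce 0                                                      # switch to permissive mode immediately (until reboot)
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config      # make the change permanent
# grep ^SELINUX= /etc/selinux/config
SELINUX=disabled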

Check that the firewall is stopped and disabled:


# systemctl stop firewalld.service
# systemctl disable firewalld.service
# systemctl status firewalld
● firewalld.service
   Loaded: masked (/dev/null; bad)
   Active: inactive (dead)

4) Enable the chrony daemon: edit /etc/chrony.conf and allow your public and private IP ranges, in our case:


# Allow NTP client access from local network.
allow 192.168.0.0/16
allow 10.142.0.0/16

Restart, enable and check chrony:


# systemctl restart chronyd.service
# systemctl enable chronyd.service
# systemctl status chronyd
# chronyc -a sources
210 Number of sources = 2
MS Name/IP address           Stratum Poll Reach LastRx Last sample
===============================================================================
^* metadata.google.internal        2    8   377     86   -257us[ -253us] +/-  514us
^- propjet.latt.net                3    7   377    339   +353us[ +358us] +/-   79ms


5) Install xRDP in order to use the Oracle Installer GUI later in a remote desktop session:

# rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
# yum groupinstall "GNOME Desktop" "Graphical Administration Tools"
# yum -y install xrdp tigervnc-server
# systemctl start xrdp.service

Check that xRDP is running (as mentioned above, firewalld has been disabled), then enable it at boot and reboot:

# netstat -antup | grep xrdp
# systemctl enable xrdp.service
# reboot
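
Note that disabling firewalld only affects the OS-level firewall; to reach xRDP from your workstation the GCP VPC firewall must also allow TCP port 3389. This rule is not part of the original walkthrough, but a hedged sketch from Cloud Shell could be:

$ gcloud compute firewall-rules create allow-rdp \
    --network=default \
    --allow=tcp:3389 \
    --source-ranges=YOUR_WORKSTATION_IP/32    # restrict to your own public IP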

6) Install the prerequisite RPM for the Red Hat Enterprise Linux "family", and therefore for CentOS too!

This package avoids having to set all the kernel parameters manually, as was required on older Oracle versions.

The official instructions can be found here, in the Installing Oracle Database RPM Manually section.

# curl -o oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm \
  https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
# yum -y localinstall oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
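
As a quick sanity check, not part of the original article, you can spot-check a few of the kernel parameters and shell limits the preinstall package configures (exact file names and values may differ slightly on your system):

# sysctl fs.aio-max-nr fs.file-max kernel.sem                       # should now match Oracle's documented minimums
# cat /etc/security/limits.d/oracle-database-preinstall-19c.conf    # shell limits created for the oracle user (name may vary)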

7) Oracle ASM support RPM (Oracle ASMLib)

The following packages let you create and manage ASM (Automatic Storage Management) disks. You can find the official download page here.

# wget https://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el7.x86_64.rpm
# wget https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracleasm-support-2.1.11-2.el7.x86_64.rpm
# yum -y localinstall oracleasmlib-2.0.12-1.el7.x86_64.rpm
# yum -y localinstall oracleasm-support-2.1.11-2.el7.x86_64.rpm
# reboot
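
The actual ASM disk creation on the three persistent disks is covered later in this series; purely as a hedged preview of what these packages provide, the typical oracleasm workflow looks like the sketch below. Device names such as /dev/sdb1 are examples only, and the oracleasm kernel module must be available for your kernel.

# oracleasm configure -i                   # interactive setup: owner=grid, group=asmadmin, load on boot
# oracleasm init                           # load the driver and mount /dev/oracleasm
# oracleasm createdisk DATA1 /dev/sdb1     # stamp an example partition as ASM disk DATA1
# oracleasm listdisks
DATA1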

8) Create and configure the oracle and grid users

Best practices suggest creating two dedicated OS users for Oracle Grid and Database administration: grid and oracle.


The oracle user has already been created by the oracle-database-preinstall RPM installed at point 6.
# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

Create additional OS groups on both servers.
# groupadd -g 54333 asmdba
# groupadd -g 54334 asmoper
# groupadd -g 54335 asmadmin
# useradd -m -u 54341 -g oinstall -G dba,asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash grid
# passwd oracle
# passwd grid

Add the oracle user to the asmdba group and check both users:
# usermod -a -G asmdba oracle
# id oracle
# id grid
Create the directories in which the Oracle software will be installed.
# mkdir -p /oracle/grid/19.3.0/grid_home
# mkdir -p /oracle/grid/gridbase/
# mkdir -p /oracle/db/19.3.0/db_home
# chown -R oracle.oinstall /oracle/
# chown -R grid.oinstall /oracle/grid/
# chmod -R 775 /oracle/
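
Just to be sure, a quick ownership check; the expected result simply reflects the chown/chmod commands above:

# ls -ld /oracle /oracle/db /oracle/grid
# expected: /oracle and /oracle/db owned by oracle:oinstall, /oracle/grid owned by grid:oinstall, all with mode 775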
Now configure the bash profile of the oracle and grid users.
# su - oracle
# vi .bash_profile
and add the following lines at the end of the file:
# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=oracle-rac
export ORACLE_UNQNAME=ORA19C
export ORACLE_BASE=/oracle/db/19.3.0
export DB_HOME=$ORACLE_BASE/db_home
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=ORA19C1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
# su - grid
# vi .bash_profile
and add the following lines at the end of the file:
# Grid Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=oracle-rac
export ORACLE_BASE=/oracle/grid/gridbase
export ORACLE_HOME=/oracle/grid/19.3.0/grid_home
export GRID_BASE=/oracle/grid/gridbase
export GRID_HOME=/oracle/grid/19.3.0/grid_home
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
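
A quick way to confirm both profiles are picked up; the output below simply reflects the exports above:

# su - oracle -c 'echo $ORACLE_HOME'
/oracle/db/19.3.0/db_home
# su - grid -c 'echo $ORACLE_HOME'
/oracle/grid/19.3.0/grid_home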

9) Configure the /etc/hosts file to resolve the minimum set of IPs needed by Oracle Grid

Find the public and private IPs assigned when we created our Compute Engine VM instance at point 2. Please consider that the public IPs highlighted below depend on the region/zone where your VM is located, in our case us-east1-b.

# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:01:0a:8e:00:04 brd ff:ff:ff:ff:ff:ff
    inet 10.142.0.4/16 brd 10.142.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.142.0.8/16 brd 10.142.255.255 scope global secondary eth0:1
       valid_lft forever preferred_lft forever
    inet 10.142.0.5/16 brd 10.142.255.255 scope global secondary eth0:2
       valid_lft forever preferred_lft forever
    inet6 fe80::4001:aff:fe8e:4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:01:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/32 brd 192.168.0.2 scope global dynamic eth1
       valid_lft 85894sec preferred_lft 85894sec
    inet 169.254.12.146/19 brd 169.254.31.255 scope global eth1:1
       valid_lft forever preferred_lft forever
    inet6 fe80::4001:c0ff:fea8:2/64 scope link
       valid_lft forever preferred_lft forever

Edit your /etc/hosts and add the following lines, paying attention to replace our public IPs in the 10.142.0.0/16 subnet with your own values. Best practices and production Oracle installations require defining 3 SCAN IPs, but for our experiment one IP will be enough to start the Clusterware services.


10.142.0.4   oracle-rac        oracle-rac.localdomain        <- replace with an IP in your VPC network (usually "default")
10.142.0.5   oracle-rac-vip    oracle-rac-vip.localdomain    <- replace with an IP in your VPC network (usually "default")
10.142.0.8   oracle-rac-scan   oracle-rac-scan.localdomain   <- replace with an IP in your VPC network (usually "default")
192.168.0.2  oracle-rac-priv   oracle-rac-priv.localdomain   <- the private IP from the subnet defined at point 1 (eth1 above)
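
To verify that the new entries resolve as expected (the output mirrors the lines just added):

# getent hosts oracle-rac oracle-rac-vip oracle-rac-scan oracle-rac-priv
10.142.0.4      oracle-rac oracle-rac.localdomain
10.142.0.5      oracle-rac-vip oracle-rac-vip.localdomain
10.142.0.8      oracle-rac-scan oracle-rac-scan.localdomain
192.168.0.2     oracle-rac-priv oracle-rac-priv.localdomain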

10) DNS Configuration

If you do not want to get a warning at the last step of the installation, you can add your aliases to your DNS server. You can also configure the DNS information of the nodes in the /etc/resolv.conf file.
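
As a reference only (not part of the original setup), a minimal /etc/resolv.conf could look like the sketch below; on GCP the metadata server at 169.254.169.254 normally serves internal DNS, so replace it only if you run your own DNS server:

# /etc/resolv.conf
search localdomain
nameserver 169.254.169.254    # GCP internal DNS (metadata server); replace with your own DNS server if you manage one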


OK folks! Let's have a cup of coffee before going on to Oracle 19c RAC on Google Cloud Platform PART 2.