VMware and Oracle together – A brave World – Oracle Cloud Days, 2020

 

 

The announcement on Sept 16th, 2019 by Larry Ellison, CEO of Oracle Corporation, at the Oracle OpenWorld 2019 keynote heralded a new era in the VMware and Oracle partnership. The centerpiece of this relationship is a new service offered by Oracle called Oracle Cloud VMware Solution.

This service will enable customers to run VMware Cloud Foundation on Oracle’s Generation 2 Cloud infrastructure. But equally important for the thousands of customers running Oracle software on VMware today in their data centers, Oracle and VMware have entered into a new mutual support agreement for currently supported Oracle products running on VMware environments.

More details can be found here

 

Subsequently, on Oct 9th, 2019, the second part of this strategic alliance culminated in a new agreement for enhanced support for Oracle products running in VMware environments on Oracle-supported computing environments. Oracle Corporation has now changed its landmark Metalink/MyOracleSupport.com note 249212.1 to describe that mutual support.

More details can be found here

 

 

 

VMworld 2019, EMEA  – Oracle Cloud VMware Solution (OCVS) session

 

As an offshoot of the above landmark agreement, Oracle actively participated in VMworld 2019 EMEA with a large booth presence.

 

 

 

 

Scott Macy, Sr. Principal Product Manager, Oracle, and Faisal Hasan, Principal Product Manager, Oracle, presented the breakout session “VMware on Gen2 Cloud Infrastructure: Meet OCVS (HBI5204BES)” on Nov 6th, 2019.

 

The session’s video recording can be found here

 

 

 

 

Oracle Cloud Days, Dallas, Feb 7th, 2020

 

As part of the synergy between VMware and Oracle, VMware was the diamond sponsor of the Oracle Cloud Days, Dallas on Feb 7th, 2020.

 

 

https://eventreg.oracle.com/profile/web/index.cfm?PKwebID=0x677769abcd

 

Ajay Patel, SVP and GM, Cloud Provider Software, VMware, and Edward Screven, Chief Corporate Architect, Oracle, also presented a keynote during the event; its essence is aptly captured in the slide below.

 

 

 

 

In addition, VMware had a booth at the event where we were able to interact with customers and talk in depth about running Oracle workloads on VMware vSphere and the joint offering between the two companies.

 

 

 

 

Oracle Cloud Days, Boston, Mar 10th, 2020

 

The next Oracle Cloud Day event is scheduled for March 10th, 2020 in Boston. VMware will be the diamond sponsor for that event as well, along with having a booth presence, talking to customers and further cementing this relationship.

Geoff Thompson, Vice President, Global Cloud Provider Sales, VMware, will deliver a joint VMware and Oracle keynote with Oracle.

 

 

https://eventreg.oracle.com/profile/web/index.cfm?PKwebID=0x677731abcd

 

Finally, the Oracle Collaborate event will take place the week of April 20th at Mandalay Bay in Las Vegas. “Collaborate” is the world’s biggest combined Oracle user group conference, now owned by the “Quest Oracle Community”, and includes the Independent Oracle Users Group (IOUG), the Oracle Applications Users Group (OAUG), and the JD Edwards and PeopleSoft user groups. About 10,000–12,000 attendees are expected.

VMware has been sponsoring this event for over a decade, but there is now a renewed sense of focus since we will be promoting both VMware Cloud on AWS and the Oracle Cloud VMware Solution.

 

 

Collaterals

 

All Oracle on vSphere collateral, including white papers, best practices, deployment guides, and workload characterization guides for Oracle on VMware vSphere, VMware vSAN, and VMware Cloud on AWS, can be found at the URLs below:

 

Oracle on VMware Collateral – One Stop Shop

http://www.vmw.re/oracle-one-stop-shop

https://blogs.vmware.com/apps/2017/01/oracle-vmware-collateral-one-stop-shop.html

 

This blog was released by Sudhir Balasubramanian

Posted in Uncategorized

VMware and Rocky Mountain Oracle User Group (RMOUG) – Training Days, Feb 18th, 2020

 

 

 

RMOUG is one of the largest Oracle User groups in North America.  The RMOUG’s Training Days is the premier (and largest) grass-roots Oracle user training event in the U.S., offering an incredible variety of technical and functional educational sessions over three days from world-renowned experts.

https://rmoug.org/training/

 

 

 

VMware Workshop

 

VMware and the Rocky Mountain Oracle User Group (RMOUG) collaborated to offer an “Oracle Workloads on VMware” workshop during RMOUG’s Training Days on Feb 18th, 2020.

The workshop titled “Architecting & Implementing Oracle Workloads on VMware—Including Hybrid Clouds” was held on Feb 18th, 2020 from 9:00 am – 11:00 am.

 

 

https://events.bizzabo.com/TD2020/page/1433921/workshops

 

The workshop was delivered by Sudhir Balasubramanian, Staff Solution Architect and Oracle Practice Lead, VMware.

The workshop was then followed by an Oracle Licensing on VMware Platform discussion led by Dean Bolton, LicenseFortress.

The workshop was very well received by the Oracle audience. We will be offering similar workshops at the other Oracle User group events as well.

 

 

In addition, VMware had a shared booth presence with LicenseFortress at the event, where we were able to speak to customers about running their Oracle workloads on the VMware vSphere platform.

 

 

 

Workshop Abstract

 

On a mission to arm yourself with the latest knowledge and skills needed to master application virtualization? Then this workshop is for you. Get prepared to lead the virtualization effort in your organization, with instructor-led demos designed to put you in the ranks of the IT elite. This class will provide attendees with the opportunity to learn the essential skills necessary to run Oracle implementations on VMware vSphere.

This technical workshop will deliver practical, instructor-led, live training and incorporate the recommended design and configuration practices for architecting Oracle Databases on VMware vSphere infrastructure. We’ll cover a lot of ground, such as Real Applications Clusters, Automatic Storage Management, vSAN and NSX, as well as Oracle running on hybrid clouds, including VMware Cloud on AWS, Azure VMware Solution, and Oracle Cloud VMware Solution.

Plus, we’ll discuss Oracle licensing options and strategies for both on-premises and hybrid clouds.

 

https://events.bizzabo.com/TD2020/page/1433921/workshops

 

 

Collaterals

 

All Oracle on vSphere collateral, including white papers, best practices, deployment guides, and workload characterization guides for Oracle on VMware vSphere, VMware vSAN, and VMware Cloud on AWS, can be found at the URLs below:

 

Oracle on VMware Collateral – One Stop Shop

http://www.vmw.re/oracle-one-stop-shop

https://blogs.vmware.com/apps/2017/01/oracle-vmware-collateral-one-stop-shop.html

 

This blog was released by Sudhir Balasubramanian


VMware and DOAG German Oracle User Group – January 23rd – 24th, 2020

 

 

 

 

The DOAG German Oracle User Group was founded in 1988. Since then, DOAG has been the network of the German-speaking Oracle community, encouraging the exchange of knowledge on Oracle’s products and helping users to meet their daily challenges.

 

https://community.oracle.com/groups/doag

 

 

 

VMware Workshop

 

The VMware Experts Program (Oracle Branch) and the DOAG German Oracle User Group (DOAG) collaborated to offer a NOON2NOON “Oracle on VMware” workshop on January 23 – 24, 2020 in Munich. It was aimed at database administrators and developers as well as decision makers close to technology.

 

 

 

 

Martin Klier (Oracle ACE Director), Johannes Ahrends (Oracle ACE), and Axel vom Stein were the key organizers of the NOON2NOON DOAG event. They are part of the VMware Experts Program, Oracle Branch.

The workshop was delivered by Sudhir Balasubramanian and Valentin Bondzio of VMware, using the VMware world-class lab “VSLAB” to create and run all lab exercises.

 

 

The workshop started on Jan 23rd at 12 noon and ended on Jan 24th at 4pm. As part of this workshop, attendees were taken through core VMware concepts, followed by intensive hands-on labs.

 

 

 

VMware Solutions Lab

 

The VMware Solutions Lab, founded in 2016, has a charter to bring together the many hardware, software, and implementation partners in the greater VMware ecosystem in an externally accessible lab environment.

Constant requests for collaboration from VMware partners in all disciplines accentuated the need for a unique location and process to streamline the submissions, selection and subsequent creation of co-branded collateral including white papers and videos.

The Solutions Lab is located outside the VMware network and is accessible via the Internet, facilitating collaboration between VMware and partners on joint solutions.

 

 

 

Workshop Abstract

 

DOAG 2020 NOON2NOON, January 23 – 24, 2020, Munich

 

https://www.doag.org/de/eventdetails?tx_doagevents_single%5Bid%5D=587261

 

Oracle and VMware – Love at first sight?

At the practice-oriented DOAG 2020 Noon2Noon from January 23rd to 24th in Munich, we offer you the opportunity to work under neutral expert guidance and thus get the most out of both the database and the virtual environment.

VMware experts Sudhir Balasubramanian and Valentin Bondzio will walk attendees through core VMware concepts, followed by intensive hands-on labs as part of this workshop.

The unusual format is an event delivered by technicians for technicians. It is aimed at database administrators and developers as well as decision makers close to technology. From noon to noon we keep the slide battles to a minimum. Noon2Noon stands for fun: the setting and the program are relaxing, and the technology is up to date!

 

Agenda:

Even though there are always discussions about licensing or support for Oracle on VMware, it is a common combination. Oracle database administrators are often forced to accept configurations from infrastructure peers that may be less than optimal for the database. The goal is to gain deeper knowledge of Oracle databases running on VMware, to help infrastructure colleagues determine the optimal setup for the guests. Of course, we will also address the topic of licensing.

 

Noon:

After a short introduction to the day and a hearty lunch, we will start with the topic of licensing, or: “Prenuptial agreement or community of property?”

After that, we’ll get down to business: First, we’ll look at vCenter and its components – don’t worry, we don’t want you to become a VMware expert – some background should be enough. Then we start with the setup.

 

Mid:

In the evening, networking with delicious food is on the agenda. During the exchange of experiences with our experts there are no stupid questions. Before the day is over, we’ll take a look at snapshots and whether (and how) they are suitable for Oracle backups.

 

Noon:

Fresh and lively, we start the second day with the topic of performance, or: “a working marriage”. You should pay attention to this so that, even after years, the two components still work harmoniously together.

Although the event officially ends with lunch, our supervisors will be on site until at least 4 p.m. and will support you in everything you want to try out.

 

 

Collaterals

 

All Oracle on vSphere collateral, including white papers, best practices, deployment guides, and workload characterization guides for Oracle on VMware vSphere, VMware vSAN, and VMware Cloud on AWS, can be found at the URLs below:

 

Oracle on VMware Collateral – One Stop Shop

http://www.vmw.re/oracle-one-stop-shop

https://blogs.vmware.com/apps/2017/01/oracle-vmware-collateral-one-stop-shop.html

 

This blog was released by Sudhir Balasubramanian


Oracle RAC storage migration from non-vSAN to vSAN 6.7 P01 – Through Thick to Thin

Storage migration of an Oracle RAC from non-vSAN Storage to vSAN 6.7 P01 – No remediation required for shared disks anymore 

 

This blog is a follow up of the previous blog post “Oracle RAC on vSAN 6.7 P01 – No more Eager Zero Thick requirement for shared vmdk’s”.

This blog describes the steps to storage vMotion an Oracle RAC cluster with shared EZT vmdk’s from non-vSAN storage (FC / NAS / iSCSI) to vSAN 6.7 P01 storage, and highlights the fact that, apart from putting the multi-writer attribute back on the shared vmdk’s, there is no need for any further reconfiguration of the shared vmdk’s.

 

 

 

 

 

Oracle RAC on VMware platform – Concepts

 

By default, in a VM, simultaneous multi-writer “protection” is enabled for all vmdk files, i.e., every VM has exclusive access to its vmdk files. So, in order for multiple VMs to access shared vmdk’s simultaneously, the multi-writer protection needs to be disabled on those disks. The use case here is Oracle RAC.

More details can be found in KB 1034165 and KB 2121181.
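As an illustration of what this setting looks like under the covers, KB 1034165 describes the multi-writer flag as a per-disk entry in the VM’s .vmx configuration file. The sketch below is hypothetical (the disk file name and SCSI position are invented for illustration):

```
scsi2:0.present = "TRUE"
scsi2:0.fileName = "rac19c1_shared_1.vmdk"
scsi2:0.sharing = "multi-writer"
```

In current vSphere Client versions the same setting is exposed per disk as the ‘Sharing’ dropdown in the VM’s Edit Settings dialog.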

 

 

 

Oracle RAC on VMware vSAN prior vSAN 6.7 P01

 

KB 2121181 provides more details on how to set the multi-writer option to allow VMs to share vmdk’s on VMware vSAN.

There are two requirements for sharing disks for an Oracle RAC cluster on VMware vSAN prior to vSAN 6.7 P01:

  • the shared disk(s) have to be Eager Zero Thick provisioned
  • the shared disk(s) need to have the multi-writer attribute set

In vSAN, the Eager Zero Thick attribute can be controlled by a rule in the VM Storage Policy called ‘Object Space Reservation’, which needs to be set to 100% and pre-allocates all of the object’s components on disk.

More details can be found in KB 2121181.

 

 

 

 

 

VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001)  – release date DEC 5, 2019

 

VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001), released on Dec 5, 2019, addresses a key limitation when provisioning shared disks with the multi-writer attribute for an Oracle RAC cluster.

 

The earlier limitation was that Oracle RAC shared vmdk’s needed to be Eager Zero Thick for the multi-writer attribute to be enabled and the clustering software to work. This patch resolves the issue: starting with VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001), one can use thin-provisioned vmdk’s with the multi-writer attribute for an Oracle RAC cluster.

More details can be found here. The text relevant to Oracle RAC cluster is shown as below.

 

PR 2407141: You cannot provision virtual disks to be shared in multi-writer mode as eager zeroed thick-provisioned by using the vSphere Client

A shared virtual disk on a vSAN datastore for use in multi-writer mode, such as for Oracle RAC, must be eager zeroed thick-provisioned. However, the vSphere Client does not allow you to provision the virtual disk as eager zeroed thick-provisioned.

This issue is resolved in this release. You can share any type of virtual disks on the vSAN datastore in multi-writer mode.

 

Now there is only one requirement for sharing disks for an Oracle RAC cluster on VMware vSAN 6.7 P01:

  • the shared disk(s) need to have the multi-writer attribute set

 

What VMware vSAN 6.7 P01 means for customers:

  • Customers no longer have to provision storage up-front when provisioning space for Oracle RAC shared storage on VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001)
  • The RAC shared vmdk’s can be thin provisioned to start with and can consume space gradually, as and when needed.

 

 

 

Oracle RAC on non-vSAN storage (FC / NAS / iSCSI)

 

On non-vSAN storage (FC / NAS / iSCSI), when using the multi-writer mode, the virtual disk must be Eager Zeroed Thick (EZT); it cannot be zeroed thick or thin provisioned.

There are two requirements for sharing disks for an Oracle RAC cluster on non-vSAN storage (FC / NAS / iSCSI):

  • the shared disk(s) have to be Eager Zero Thick provisioned
  • the shared disk(s) need to have the multi-writer attribute set

This is described in detail in “Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag (1034165)”.
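When remediation to EZT is required (for example, after moving a thin or lazy-zeroed shared disk onto VMFS), the conversion can be done in place from the ESXi shell with vmkfstools. A minimal sketch, assuming a hypothetical datastore path and that the owning VMs are powered off:

```
# Thin -> eager zeroed thick (datastore path is hypothetical):
vmkfstools --inflatedisk /vmfs/volumes/VMFS6-DS01/rac19c1/rac19c1_2.vmdk

# Lazy zeroed thick -> eager zeroed thick:
vmkfstools --eagerzero /vmfs/volumes/VMFS6-DS01/rac19c1/rac19c1_2.vmdk
```

Either way, the multi-writer attribute still has to be set on the remediated disk afterwards.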

 

 

 

 

 

Key points to take away from this blog

 

Migrating Oracle RAC storage from non-vSAN storage (FC / NAS / iSCSI) to:

  • VMware vSAN prior to vSAN 6.7 P01
    • Further reconfiguration of the shared vmdk’s is needed, i.e., remediation of the shared disks from LZT (lazy zeroed thick) to EZT
    • Add the multi-writer attribute to the shared vmdk’s
  • VMware vSAN 6.7 P01
    • No further reconfiguration of the shared vmdk’s is needed
    • The only step is to add the multi-writer attribute to the shared vmdk’s

This blog focuses on validating a test case: storage migrating a 2-node Oracle RAC from an All-Flash FC storage array with VMFS6 datastores to VMware vSAN 6.7 P01.

The reverse process, i.e., storage migrating the 2-node Oracle RAC from VMware vSAN 6.7 P01 back to the All-Flash FC storage array with VMFS6 datastores, follows the same steps as the test case, except that further reconfiguration of the shared vmdk’s is needed: remediation of the shared disks from LZT to EZT.

 

 

 

 

 

Setup – Oracle RAC on VMware ESXi 6.7.0, build 15160138

 

The lab setup is as below:

  • 4-node VMware vSphere cluster, VMware ESXi 6.7.0, build 15160138
  • The vSphere cluster was connected to an All-Flash FC storage array with VMFS6 datastores
  • Oracle RAC storage was provisioned from the All-Flash FC storage array with VMFS6 datastores
  • 2-node 19c Oracle RAC cluster on Oracle Linux Server release 7.6
  • Database shared storage on Oracle ASM using Oracle ASMLIB
  • A 4-node VMware vSAN 6.7 P01 cluster was also created on the same ESXi nodes
    • Each node has 28 CPUs and 384 GB memory
    • Each node has 2 vSAN disk groups
    • Each disk group has 1 x 750 GB for cache and 1 x 1.75 TB for capacity, all drives NVMe
    • A new VM Storage Policy, ‘Oracle RAC vSAN – OSR 0’, was created

 

The vSAN Storage Policy “Oracle RAC vSAN – OSR 0” is as below:

  • Failures to tolerate – 1 failure – RAID-1 (Mirroring)
  • Object space reservation – Thin provisioning

 

 

In this Oracle RAC cluster:

  • Each RAC VM has 8 vCPUs and 96 GB RAM
  • Both run 19c instances with Grid Infrastructure and RDBMS binaries with the multi-tenancy option (1 PDB)
  • 1 ASM disk group DATA_DG with 1 ASM disk of 1 TB, EZT with the multi-writer flag (MWF)

 

RAC VM ‘rac19c1’

 

 

RAC VM ‘rac19c2’

 

 

Details of RAC VM ‘rac19c1’ non-shared vmdk’s and the shared vmdk with the multi-writer attribute are shown below.

  • Hard Disk 1 is the operating system
  • Hard Disk 2 is for Grid Infrastructure and RDBMS binaries
  • Hard Disk 3 is the shared storage for the RAC cluster. Observe the shared vmdk at SCSI 2:0 of size 1 TB

 

 

 

Details of RAC VM ‘rac19c2’ non-shared vmdk’s and the shared vmdk with the multi-writer attribute are shown below.

  • Hard Disk 1 is the operating system
  • Hard Disk 2 is for Grid Infrastructure and RDBMS binaries
  • Hard Disk 3 is the shared storage for the RAC cluster. Observe that ‘rac19c2’ points to the ‘rac19c1’ shared vmdk at SCSI 2:0 of size 1 TB

 

 

 

 

All cluster services and resources are up.

 

 

[root@rac19c1 ~]# /u01/app/19.0.0/grid/bin/crsctl stat res -t
——————————————————————————–
Name Target State Server State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.LISTENER.lsnr
ONLINE ONLINE rac19c1 STABLE
ONLINE ONLINE rac19c2 STABLE
ora.chad
ONLINE ONLINE rac19c1 STABLE
ONLINE ONLINE rac19c2 STABLE
ora.net1.network
ONLINE ONLINE rac19c1 STABLE
ONLINE ONLINE rac19c2 STABLE
ora.ons
ONLINE ONLINE rac19c1 STABLE
ONLINE ONLINE rac19c2 STABLE
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE rac19c1 STABLE
2 ONLINE ONLINE rac19c2 STABLE
3 ONLINE OFFLINE STABLE
ora.DATA_DG.dg(ora.asmgroup)
1 ONLINE ONLINE rac19c1 STABLE
2 ONLINE ONLINE rac19c2 STABLE
3 OFFLINE OFFLINE STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac19c2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac19c1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac19c1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE rac19c1 169.254.2.146 192.16
8.140.154,STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE rac19c1 Started,STABLE
2 ONLINE ONLINE rac19c2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE rac19c1 STABLE
2 ONLINE ONLINE rac19c2 STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE rac19c1 STABLE
ora.mgmtdb
1 ONLINE ONLINE rac19c1 Open,STABLE
ora.qosmserver
1 ONLINE ONLINE rac19c1 STABLE
ora.rac19c.db
1 ONLINE ONLINE rac19c1 Open,HOME=/u01/app/o
racle/product/19.0.0
/dbhome_1,STABLE
2 ONLINE ONLINE rac19c2 Open,HOME=/u01/app/o
racle/product/19.0.0
/dbhome_1,STABLE
ora.rac19c1.vip
1 ONLINE ONLINE rac19c1 STABLE
ora.rac19c2.vip
1 ONLINE ONLINE rac19c2 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac19c2 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac19c1 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac19c1 STABLE
——————————————————————————–
[root@rac19c1 ~]#

 

 

 

 

Test Suite Setup

 

Two test cases were performed against the RAC cluster, storage migrating the 2-node Oracle RAC from:

  • the All-Flash FC storage array with VMFS6 datastores to VMware vSAN 6.7 P01
  • VMware vSAN 6.7 P01 back to the All-Flash FC storage array with VMFS6 datastores

 

Test Case 1: Storage migrate the 2-node Oracle RAC from the All-Flash FC storage array with VMFS6 datastores to VMware vSAN 6.7 P01

One of the limitations of shared vmdk’s with the multi-writer attribute is that Storage vMotion is disallowed. This has been documented in both KB 1034165 and KB 2121181.

 

 

 

Owing to this limitation, the Oracle RAC storage migration has to be performed offline:

1) Stop all RAC cluster services on RAC VMs ‘rac19c1’ and ‘rac19c2’
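Step 1 can be run from any RAC node as root; a minimal sketch using the Grid Infrastructure home from this setup:

```
# Stop the clusterware stack on all nodes at once
/u01/app/19.0.0/grid/bin/crsctl stop cluster -all

# Then, on each node, stop the high-availability services before powering off
/u01/app/19.0.0/grid/bin/crsctl stop crs
```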

2) Power off RAC VM’s ‘rac19c1’ and ‘rac19c2’

3) On RAC VM ‘rac19c2’, delete the shared 1 TB vmdk. Do NOT check ‘delete files from datastore’. Click OK

 

 

Now RAC VM ‘rac19c2’ has two non-shared vmdk’s.

4) Storage vMotion RAC VM ‘rac19c2’ to vSAN 6.7 P01 using Storage Policy ‘vSAN Default Storage Policy’

 

 

6) RAC VM ‘rac19c2’ is now on vSAN 6.7 P01 using the Storage Policy ‘vSAN Default Storage Policy’

 

 

7) On RAC VM ‘rac19c1’, remove the multi-writer attribute from the shared 1 TB vmdk. Do NOT delete the disk or check ‘delete files from datastore’. Click OK

 

 

 

8) Storage vMotion RAC VM ‘rac19c1’ to vSAN 6.7 P01 using the Storage Policies:

  • ‘vSAN Default Storage Policy’ for Hard Disk 1 (OS) and Hard Disk 2 (Oracle binaries). You can also choose to use the vSAN Storage Policy ‘Oracle RAC vSAN – OSR 0’.
  • ‘Oracle RAC vSAN – OSR 0’ for Hard Disk 3, i.e., the shared vmdk

 

 

 

Click Finish

 

9) RAC VM ‘rac19c1’ is now on vSAN 6.7 P01 using the Storage Policies:

  • ‘vSAN Default Storage Policy’ for Hard Disk 1 (OS) and Hard Disk 2 (Oracle binaries)
  • ‘Oracle RAC vSAN – OSR 0’ for Hard Disk 3, i.e., the shared vmdk

 

 

 

The Storage Policy ‘Oracle RAC vSAN – OSR 0’ shows that RAC VM ‘rac19c1’ is compliant as well.

 

 

RAC VM ‘rac19c2’ is now on vSAN 6.7 P01 using the Storage Policy:

  • ‘vSAN Default Storage Policy’ for Hard Disk 1 (OS) and Hard Disk 2 (Oracle binaries)

 

 

 

 

10) On RAC VM ‘rac19c1’, reapply the multi-writer attribute to the shared vmdk

 

 

 

11) On RAC VM ‘rac19c2’, add the same shared vmdk at the same SCSI X:Y position as in VM ‘rac19c1’, with the multi-writer attribute, using the ‘add existing hard disk’ option.

Ensure that the Storage Policy for the shared vmdk is set to ‘Oracle RAC vSAN – OSR 0’.

 

 

 

RAC VM ‘rac19c2’ is now on vSAN 6.7 P01 using the Storage Policies:

  • ‘vSAN Default Storage Policy’ for Hard Disk 1 (OS) and Hard Disk 2 (Oracle binaries)
  • ‘Oracle RAC vSAN – OSR 0’ for Hard Disk 3, i.e., the shared vmdk

 

 

 

 

 

12) Power on RAC VMs ‘rac19c1’ and ‘rac19c2’ and check the cluster services. All cluster services are up.

 

[root@rac19c1 ~]# /u01/app/19.0.0/grid/bin/crsctl stat res -t
——————————————————————————–
Name Target State Server State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.LISTENER.lsnr
ONLINE ONLINE rac19c1 STABLE
ONLINE ONLINE rac19c2 STABLE
ora.chad
ONLINE ONLINE rac19c1 STABLE
ONLINE ONLINE rac19c2 STABLE
ora.net1.network
ONLINE ONLINE rac19c1 STABLE
ONLINE ONLINE rac19c2 STABLE
ora.ons
ONLINE ONLINE rac19c1 STABLE
ONLINE ONLINE rac19c2 STABLE
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE rac19c1 STABLE
2 ONLINE ONLINE rac19c2 STABLE
3 ONLINE OFFLINE STABLE
ora.DATA_DG.dg(ora.asmgroup)
1 ONLINE ONLINE rac19c1 STABLE
2 ONLINE ONLINE rac19c2 STABLE
3 OFFLINE OFFLINE STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac19c1 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac19c2 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac19c2 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE rac19c2 169.254.26.87 192.16
8.140.155,STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE rac19c1 Started,STABLE
2 ONLINE ONLINE rac19c2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE rac19c1 STABLE
2 ONLINE ONLINE rac19c2 STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE rac19c2 STABLE
ora.mgmtdb
1 ONLINE ONLINE rac19c2 Open,STABLE
ora.qosmserver
1 ONLINE ONLINE rac19c2 STABLE
ora.rac19c.db
1 ONLINE ONLINE rac19c1 Open,HOME=/u01/app/o
racle/product/19.0.0
/dbhome_1,STABLE
2 ONLINE ONLINE rac19c2 Open,HOME=/u01/app/o
racle/product/19.0.0
/dbhome_1,STABLE
ora.rac19c1.vip
1 ONLINE ONLINE rac19c1 STABLE
ora.rac19c2.vip
1 ONLINE ONLINE rac19c2 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac19c1 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac19c2 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac19c2 STABLE
——————————————————————————–
[root@rac19c1 ~]#
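In addition to crsctl, a couple of quick checks can confirm that the ASM disk group and the database came through the migration intact; a sketch using the disk group and database names from this setup (run as the grid/oracle user):

```
# Confirm the DATA_DG disk group is mounted and shows the expected capacity
asmcmd lsdg DATA_DG

# Confirm both database instances are running
srvctl status database -db rac19c
```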

 

 

 

 

Summary

 

This blog focused on validating a test case: storage migrating a 2-node Oracle RAC from an All-Flash FC storage array with VMFS6 datastores to VMware vSAN 6.7 P01.

The reverse process, i.e., storage migrating the 2-node Oracle RAC from VMware vSAN 6.7 P01 back to the All-Flash FC storage array with VMFS6 datastores, follows the same steps, except that further reconfiguration of the shared vmdk’s is needed: remediation of the shared disks from LZT to EZT.

Migrating Oracle RAC storage from non-vSAN storage (FC / NAS / iSCSI) to:

  • VMware vSAN prior to vSAN 6.7 P01
    • Further reconfiguration of the shared vmdk’s is needed, i.e., remediation of the shared disks from LZT (lazy zeroed thick) to EZT
    • Add the multi-writer attribute to the shared vmdk’s
  • VMware vSAN 6.7 P01
    • No further reconfiguration of the shared vmdk’s is needed
    • The only step is to add the multi-writer attribute to the shared vmdk’s

 

All Oracle on vSphere collateral, including white papers, best practices, deployment guides, and workload characterization guides for Oracle on VMware vSphere, VMware vSAN, and VMware Cloud on AWS, can be found at the URLs below:

 

Oracle on VMware Collateral – One Stop Shop

http://www.vmw.re/oracle-one-stop-shop


Oracle RAC on vSAN 6.7 P01 – No more Eager Zero Thick requirement for shared vmdk’s

Oracle RAC on vSAN 6.7 P01 – No more Eager Zero Thick requirement for shared vmdk’s

 

 

 

 

Read my lips – “No More Eager Zero Thick (EZT) requirement for Oracle RAC multi-writer disks starting VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001)”

 

This blog addresses an important update for running Oracle RAC on VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001).

This applies as well to other clustered applications running on vSAN that use multi-writer disks for clustering purposes, e.g., Oracle RAC, Red Hat Clustering, Veritas InfoScale, etc.

 

 

Oracle RAC on VMware platform – Concepts

 

By default, in a VM, simultaneous multi-writer “protection” is enabled for all vmdk files, i.e., every VM has exclusive access to its vmdk files. So, in order for multiple VMs to access shared vmdk’s simultaneously, the multi-writer protection needs to be disabled on those disks. The use case here is Oracle RAC.

 

 

 

Oracle RAC on VMware vSAN prior vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001)

 

KB 2121181 provides more details on how to set the multi-writer option to allow VMs to share vmdk’s on VMware vSAN.

There are two requirements for sharing disks for an Oracle RAC cluster on VMware vSAN prior to vSAN 6.7 P01:

  • the shared disk(s) have to be Eager Zero Thick provisioned
  • the shared disk(s) need to have the multi-writer attribute set

In vSAN, the Eager Zero Thick attribute can be controlled by a rule in the VM Storage Policy called ‘Object Space Reservation’, which needs to be set to 100% and pre-allocates all of the object’s components on disk.

More details can be found in KB 2121181.

 

 

 

 

VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001)  – release date DEC 5, 2019

 

VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001), released on Dec 5, 2019, addresses a key limitation when provisioning shared disks with the multi-writer attribute for an Oracle RAC cluster.

The earlier limitation was that Oracle RAC shared vmdk’s needed to be Eager Zero Thick for the multi-writer attribute to be enabled and the clustering software to work. This patch resolves the issue: starting with VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001), one can use thin-provisioned vmdk’s with the multi-writer attribute for an Oracle RAC cluster.

More details can be found here. The text relevant to Oracle RAC cluster is shown as below.

 

PR 2407141: You cannot provision virtual disks to be shared in multi-writer mode as eager zeroed thick-provisioned by using the vSphere Client

A shared virtual disk on a vSAN datastore for use in multi-writer mode, such as for Oracle RAC, must be eager zeroed thick-provisioned. However, the vSphere Client does not allow you to provision the virtual disk as eager zeroed thick-provisioned.  This issue is resolved in this release. You can share any type of virtual disks on the vSAN datastore in multi-writer mode.

 

Now there is only one requirement for sharing disks for an Oracle RAC cluster on VMware vSAN 6.7 P01:

  • the shared disk(s) need to have the multi-writer attribute set

 

What this means for customers, with VMware vSAN 6.7 P01:

  • Customers no longer have to provision storage up-front when provisioning space for Oracle RAC shared storage on VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001)
  • RAC shared vmdk’s can be thin provisioned to start with and can consume space gradually, as and when needed.

 

The build numbers and versions of VMware vSAN can be found in KB 2150753.

 

 

 

Key points to take away from this blog

 

  • Prior to VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001), Oracle RAC on vSAN requires
    • shared VMDKs to be Eager Zero Thick provisioned (OSR=100)
    • shared VMDKs to have the multi-writer attribute enabled
  • Starting with VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001), Oracle RAC on vSAN does NOT require shared VMDKs to be Eager Zero Thick provisioned (OSR=100) for multi-writer mode to be enabled
    • shared VMDKs can be thin provisioned (OSR=0)
    • shared VMDKs still need the multi-writer attribute enabled
  • For existing Oracle RAC deployments, with the VMware vSAN 6.7 P01 patch
    • a Storage Policy change of a shared vmdk from OSR=100 to OSR=0 (and vice versa, if needed) can be performed ONLINE for existing RAC clusters without any RAC downtime

 

This blog focuses on validating the various test cases, which were run on:

  • 4 node VMware vSAN Clusters 6.7 build-14766710
  • 2 Node 18c Oracle RAC cluster on Oracle Linux Server release 7.6
  • Database Shared storage on Oracle ASM using Oracle ASMLIB

 

Details of the testing can be found below.

 

 

 

 

Setup – Oracle RAC on VMware vSAN 6.7 P01

 

The VMware vSAN (ESXi 6.7 Patch Release ESXi670-201912001) setup is as below:

  • 4-node VMware vSAN 6.7 cluster
  • Each node has 28 CPUs and 384 GB memory
  • Each node has 2 vSAN disk groups
  • Each disk group has 1 x 750 GB for cache and 1 x 1.75 TB for capacity, all drives NVMe
  • 2-node 18c Oracle RAC cluster on Oracle Linux Server release 7.6
  • Database shared storage on Oracle ASM using Oracle ASMLIB
  • Current shared vmdk Storage Policy – ‘Oracle RAC vSAN Storage Policy’
  • A new VM Storage Policy, ‘Oracle RAC vSAN – OSR 0’, was created

 

The storage policies are shown below. Two vSAN storage policies were created: “Oracle RAC vSAN Storage Policy” and “Oracle RAC vSAN – OSR 0”.

  • “Oracle RAC vSAN Storage Policy” was the existing storage policy for the Oracle RAC shared vmdk’s
  • “Oracle RAC vSAN – OSR 0” was the new storage policy for the Oracle RAC shared vmdk’s

 

  • Oracle RAC vSAN Storage Policy
    • Failures to tolerate – 1 failure – RAID-1 (Mirroring)
    • Object space reservation – Thick provisioning

 

 

 

 

  • Oracle RAC vSAN – OSR 0
    • Failures to tolerate – 1 failure – RAID-1 (Mirroring)
    • Object space reservation – Thin provisioning

 

 

 

An existing 2-node 18c Oracle RAC cluster was cloned, producing RAC VMs ‘clone_pvrdmarac1’ and ‘clone_pvrdmarac2’ for this test case. The operating system was Oracle Linux Server release 7.6.

 

This RAC cluster had shared storage on Oracle ASM using Oracle ASMLIB. The shared disks for the RAC shared vmdk’s were provisioned with OSR=100 as per KB 2121181.

 

In this RAC cluster

  • each RAC VM has 8 vCPUs and 96 GB RAM
  • both nodes run 18c instances, with Grid Infrastructure and RDBMS binaries and the multitenancy option (1 PDB)
  • 1 ASM disk group DATA_DG with one 2 TB ASM disk, Eager Zero Thick provisioned with the multi-writer flag
  • the ASM disk group was initially created with storage policy ‘Oracle RAC vSAN Storage Policy’

 

 

 

 

 

Details of the RAC VM ‘clone_pvrdmarac1’ shared vmdk, with the multi-writer attribute and its Storage Policy, are shown below. Observe the shared vmdk at SCSI 2:0, 2 TB in size.

 

 

 

 

Details of the RAC VM ‘clone_pvrdmarac2’ shared vmdk, with the multi-writer attribute and its Storage Policy, are shown below. Observe that it points to the ‘clone_pvrdmarac1’ shared vmdk at SCSI 2:0, 2 TB in size.

 

 

 

 

Looking at the actual space consumed for RAC VM ‘clone_pvrdmarac1’, the shared vmdk shows up as 4 TB because the VM storage policy had

  • OSR=100 , so Eager Zero Thick provisioned
  • Failures to tolerate (FTT) set to ‘1 failure – RAID-1 (Mirroring)’ , so 2 copies
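The arithmetic behind the 4 TB figure can be sketched with a short calculation. This is our own illustration, not VMware tooling: the helper name is hypothetical, and vSAN witness/metadata overhead is ignored.

```python
def raw_capacity_tb(logical_tb, copies=2, osr_pct=100, written_tb=None):
    """Approximate TB reserved on the vSAN datastore for one vmdk.

    copies=2 models FTT=1 with RAID-1 mirroring (two full data copies).
    OSR=100 reserves the full logical size; OSR=0 reserves only written blocks.
    """
    reserved = logical_tb if osr_pct == 100 else (written_tb or 0)
    return reserved * copies

# 2 TB shared vmdk, OSR=100, FTT=1 mirror -> 4 TB reserved, as observed above
print(raw_capacity_tb(2))                              # 4
# Same vmdk with OSR=0 and, say, 0.5 TB actually written -> only 1 TB consumed
print(raw_capacity_tb(2, osr_pct=0, written_tb=0.5))   # 1.0
```

This is why switching the policy to OSR=0 shrinks the datastore footprint while the guest continues to see the full 2 TB disk.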

 

 

All cluster services and resources are up.

 

[root@prdmarac1 ~]# /u01/app/18.0.0/grid/bin/crsctl stat res -t
——————————————————————————–
Name Target State Server State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE prdmarac1 STABLE
ONLINE ONLINE prdmarac2 STABLE
ora.DATA_DG.dg
ONLINE ONLINE prdmarac1 STABLE
ONLINE OFFLINE prdmarac2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE prdmarac1 STABLE
ONLINE ONLINE prdmarac2 STABLE
ora.chad
ONLINE OFFLINE prdmarac1 STABLE
ONLINE OFFLINE prdmarac2 STABLE
ora.net1.network
ONLINE ONLINE prdmarac1 STABLE
ONLINE ONLINE prdmarac2 STABLE
ora.ons
ONLINE ONLINE prdmarac1 STABLE
ONLINE ONLINE prdmarac2 STABLE
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE prdmarac2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE prdmarac1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE prdmarac1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE prdmarac1 169.254.5.190 192.16
8.140.161,STABLE
ora.asm
1 ONLINE ONLINE prdmarac1 Started,STABLE
2 ONLINE OFFLINE Instance Shutdown,ST
ABLE
3 ONLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE prdmarac1 STABLE
ora.mgmtdb
1 ONLINE OFFLINE STABLE
ora.prdmarac.db
1 ONLINE ONLINE prdmarac2 Open,HOME=/u01/app/o
racle/product/18.0.0
/dbhome_1,STABLE
2 ONLINE ONLINE prdmarac1 Open,HOME=/u01/app/o
racle/product/18.0.0
/dbhome_1,STABLE
ora.prdmarac1.vip
1 ONLINE ONLINE prdmarac1 STABLE
ora.prdmarac2.vip
1 ONLINE ONLINE prdmarac2 STABLE
ora.qosmserver
1 ONLINE ONLINE prdmarac1 STABLE
ora.scan1.vip
1 ONLINE ONLINE prdmarac2 STABLE
ora.scan2.vip
1 ONLINE ONLINE prdmarac1 STABLE
ora.scan3.vip
1 ONLINE ONLINE prdmarac1 STABLE
——————————————————————————–
[root@prdmarac1 ~]#

 

 

 

Test Suite Setup

 

Four test cases were performed against the RAC cluster. The test cases are

  • Using the existing ASM Disk Group (DATA_DG)
    • change the Storage Policy of the shared vmdk’s from OSR=100 to OSR=0 online, without any RAC downtime
    • change the Storage Policy of the shared vmdk’s from OSR=0 back to OSR=100 online, without any RAC downtime
    • repeat the above 2 tests for consistency’s sake and check the DB and OS error logs

 

SLOB Test Suite 2.4.2 was used with a run time of 15 minutes and 128 threads:

  • UPDATE_PCT=30
  • SCALE=10G
  • THINK_TM_FREQUENCY=0
  • RUN_TIME=900
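For reference, these parameters map directly onto SLOB’s slob.conf file; a minimal fragment (our illustration, with all other slob.conf settings left at their defaults) might look like:

```
UPDATE_PCT=30
SCALE=10G
THINK_TM_FREQUENCY=0
RUN_TIME=900
```

The 128-thread run would then be launched via SLOB’s runit.sh; the exact invocation varies by SLOB version.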

 

All test cases were performed and the results captured. Only Test Case 1 is published as part of this blog; the other test cases had exactly the same outcome and are therefore not included.

 

 

 

 

Test Case 1: Online change of Storage Policy for shared vmdk with no RAC downtime

 

 

As part of this test case, while the Oracle RAC was running, the Storage Policy of the shared vmdk was changed ONLINE without any downtime from ‘Oracle RAC vSAN Storage Policy’ to ‘Oracle RAC vSAN – OSR 0’.

 

1) Start SLOB load generator for 15 minutes against the RAC database with 128 threads against the SCAN to load balance between the 2 RAC instances

 

2) Edit settings for RAC VM ‘clone_pvrdmarac1’.

 

 

 

 

 

3) Change the storage policy from ‘Oracle RAC vSAN Storage Policy’ to ‘Oracle RAC vSAN – OSR 0’, i.e. from OSR=100 to OSR=0

 

 

 

 

 

4) Edit settings for RAC VM ‘clone_pvrdmarac2’ and change the shared vmdk Storage Policy from ‘Oracle RAC vSAN Storage Policy’ to ‘Oracle RAC vSAN – OSR 0’, i.e. from OSR=100 to OSR=0

 

 

 

 

 

5) Check both RAC VM’s Storage Policy Compliance

 

 

 

 

6) In the Datastore view, if we check the RAC VM ‘clone_pvrdmarac1’ shared vmdk size, the Allocated Size is 2 TB but the actual size on the vSAN datastore is < 2 TB. This indicates that the shared vmdk is indeed thin provisioned, not thick provisioned.

 

7) The OS still reports the ASM disks at their original size, i.e. 2 TB

 

[root@prdmarac1 ~]# fdisk -lu /dev/sda

Disk /dev/sda: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xcafbc3d9

Device Boot Start End Blocks Id System
/dev/sda1 2048 4294967294 2147482623+ 83 Linux
[root@prdmarac1 ~]#

[root@prdmarac2 ~]# fdisk -lu /dev/sda

Disk /dev/sda: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xcafbc3d9

Device Boot Start End Blocks Id System
/dev/sda1 2048 4294967294 2147482623+ 83 Linux
[root@prdmarac2 ~]#
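As a quick sanity check (ours, not part of the original test output), the fdisk figures above are internally consistent: 4294967296 sectors of 512 bytes is exactly 2 TiB.

```python
# Verify the fdisk numbers: sector count times sector size equals the reported byte size.
sectors = 4294967296          # from the fdisk output above
sector_bytes = 512
total_bytes = sectors * sector_bytes

print(total_bytes)            # 2199023255552, matching fdisk's byte count
print(total_bytes / 2**40)    # 2.0 -> exactly 2 TiB
```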

 

 

 

8) No errors were observed in the database alert log files or the OS /var/log/messages file on either RAC instance

 

 

 

A summary of the test matrix, along with the outcomes of the various test cases, is shown below.

 

 

 

 

 

Summary

 

  • Prior to VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001), Oracle RAC on vSAN required the shared VMDKs to be Eager Zero Thick provisioned (OSR=100) for multi-writer mode to be enabled
  • Starting with VMware vSAN 6.7 P01 (ESXi 6.7 Patch Release ESXi670-201912001), Oracle RAC on vSAN does NOT require the shared VMDKs to be Eager Zero Thick provisioned (OSR=100) for multi-writer mode to be enabled
    • shared VMDKs can be thin provisioned (OSR=0) with multi-writer mode enabled
  • For existing Oracle RAC deployments
    • the Storage Policy of a shared vmdk can be changed ONLINE from OSR=100 to OSR=0 (and vice versa if needed) without any Oracle RAC downtime
  • All of the above test cases were performed and the results captured. Only Test Case 1 is published as part of this blog; the other test cases had exactly the same outcome and are therefore not included.

 

All Oracle on vSphere white papers, including Oracle on VMware vSphere / VMware vSAN / VMware Cloud on AWS best practices, deployment guides, and the workload characterization guide, can be found at the URL below.

 

Oracle on VMware Collateral – One Stop Shop

http://www.vmw.re/oracle-one-stop-shop


VMworld Europe 2019 Virtualizing Applications Track Highlights – Oracle

Oracle Workshop

 

Architecting and Implementing Oracle Workloads on VMware Technologies [BCA2171TE]

 

Thursday, November 7th

 

Lead Instructor – Sudhir Balasubramanian @vRacDba

 

On a mission to arm yourself with the latest knowledge and skills needed to master Oracle workload virtualization? This VMworld workshop will prepare you to lead the virtualization effort in your organization, with instructor-led demos and in-depth course work designed to put you in the ranks of the IT elite. The class will give attendees the opportunity to learn the essential skills necessary to run Oracle implementations on VMware vSphere. Best practices and optimal approaches to the deployment, operation and management of Oracle database and application software will be presented by VMware vExpert and Oracle ACE Sudhir Balasubramanian, who will be joined by other VMware and industry experts. This technical workshop will exceed the standard breakout session format by delivering “real-life,” instructor-led, live training and incorporating the recommended design and configuration practices for architecting business-critical databases on VMware vSphere infrastructure. Subjects such as Real Application Clusters, Automatic Storage Management, vSAN and NSX, Oracle with vSphere hardware accelerators (PVRDMA and Persistent Memory), as well as Oracle running on VMware Cloud on AWS, will be covered in depth.

VMworld Europe this year has a lot to offer everyone interested in running Oracle workloads efficiently and effectively on VMware vSphere – check out these events and be sure to add them to your schedule.

See you at VMworld 2019 Europe


A Lean, Mean, Fighting Machine – Oracle RAC on VMware Cloud on AWS (SDDC 1.8v2)

Introducing … the new lean, mean fighting machine: Oracle RAC on VMware Cloud on AWS SDDC 1.8v2. All muscle, no fat!!

 

 

No more need to reserve all the storage upfront for installing and running Oracle RAC on VMware Cloud on AWS, starting with 1.8v2!!

 

 

“To be precise” – no more Eager Zero Thick disks needed for Oracle RAC shared vmdk’s on VMware Cloud on AWS starting with 1.8v2. Read on.

 

 

 

 

 

Oracle RAC on VMware vSAN

 

By default, the simultaneous multi-writer “protection” is enabled for all vmdk files, i.e. every VM has exclusive access to its own vmdk files. So, in order for multiple VMs to access a shared vmdk simultaneously, the multi-writer protection needs to be disabled. The use case here is Oracle RAC.

KB 2121181 provides more details on how to set the multi-writer option to allow VMs to share vmdk’s on VMware vSAN.

The requirement for shared disks with the multi-writer flag set in a RAC environment is that the shared disk has to be Eager Zero Thick provisioned.

In vSAN, the Eager Zero Thick attribute can be controlled by a rule in the VM Storage Policy called ‘Object Space Reservation’, which needs to be set to 100%; this pre-allocates all of an object’s components on disk.

More details can be found in KB 2121181.

 

 

 

 

Oracle RAC on VMware Cloud on AWS

 

VMware Cloud on AWS is an on-demand service that enables customers to run applications across vSphere-based cloud environments with access to a broad range of AWS services. Powered by VMware Cloud Foundation, this service integrates vSphere, vSAN and NSX along with VMware vCenter management, and is optimized to run on dedicated, elastic, bare-metal AWS infrastructure. ESXi hosts in VMware Cloud on AWS reside in an AWS availability Zone (AZ) and are protected by vSphere HA.

The ‘Oracle Workloads and VMware Cloud on AWS: Deployment, Migration, and Configuration’ guide can be found here.

 

 

 

 

What’s New Sept 24th, 2019 (SDDC Version 1.8v2)

 

As per the updated release notes, starting with VMware Cloud on AWS SDDC version 1.8v2, vSAN datastores in VMware Cloud on AWS now support multi-writer mode on thin-provisioned VMDKs.

Previously, prior to SDDC version 1.8v2, vSAN required VMDKs to be Eager Zero Thick provisioned for multi-writer mode to be enabled. This change enables deployment of workloads such as Oracle RAC with thin-provisioned, multi-writer shared VMDKs.

More details about this change can be found here.

What this means is that customers no longer have to provision storage up front when allocating space for Oracle RAC shared storage on VMware Cloud on AWS SDDC version 1.8v2.

The RAC shared vmdk’s can be thin provisioned to start with and consume space gradually as needed.

 

 

 

 

Key points to take away from this blog

 

This blog focuses on validating the various test cases, which were run on

  • 6-node stretched cluster for VMware Cloud on AWS, SDDC version 1.8v2
  • 2 Node 18c Oracle RAC cluster on Oracle Linux Server release 7.6
  • Database Shared storage on Oracle ASM using Oracle ASMLIB

 

 

Key points to take away

  • Prior to SDDC version 1.8v2, Oracle RAC on VMware Cloud on AWS required the shared VMDKs to be Eager Zero Thick provisioned (OSR=100) for multi-writer mode to be enabled
  • Starting with SDDC version 1.8v2, Oracle RAC on VMware Cloud on AWS does NOT require the shared VMDKs to be Eager Zero Thick provisioned (OSR=100) for multi-writer mode to be enabled
    • shared VMDKs can be thin provisioned (OSR=0) with multi-writer mode enabled
  • For existing Oracle RAC deployments
    • the Storage Policy of a shared vmdk can be changed ONLINE from OSR=100 to OSR=0 (and vice versa if needed) without any Oracle RAC downtime

 

 

 

Setup – Oracle RAC on VMware Cloud on AWS SDDC Version 1.8v2

 

The VMware Cloud on AWS SDDC Version is 1.8v2 as shown below.

 

 

 

 

Setup is a 6-node i3 VMware Cloud on AWS SDDC cluster stretched across 2 Availability Zones (AZs), ‘us-west-2a’ and ‘us-west-2b’.

 

 

 

Each ESXi server in the i3 VMware Cloud on AWS SDDC has 2 sockets, 18 cores per socket with 512GB RAM. Details of the i3 SDDC cluster type can be found here.

 

 

 

An existing 2-node 18c Oracle RAC cluster with RAC VMs ‘rac18c1’ and ‘rac18c2’, running Oracle Linux Server release 7.6, was chosen for the test cases.

This RAC cluster had shared storage on Oracle ASM using Oracle ASMLIB. The shared disks for the RAC shared vmdk’s were provisioned with OSR=100 as per KB 2121181.

 

 

 

 

As part of creating any Oracle RAC cluster on vSAN, the first step is to create a VM Storage Policy that will be applied to the virtual disks used as the cluster’s shared storage.

In this example we had 2 VM Storage Policies

  • ‘Oracle RAC VMC AWS Policy – OSR 100 RAID 1’
  • ‘Oracle RAC VMC AWS Policy – OSR 0 RAID 1’

 

The VM storage policy ‘Oracle RAC VMC AWS Policy – OSR 100 RAID 1’ is shown as below. The rules include

  • Failures to tolerate (FTT) set to 1 failure – RAID-1 (Mirroring)
  • Number of disk stripes per object set to 1
  • Object Space Reservation (OSR) set to Thick provisioning

 

 

 

The VM storage policy ‘Oracle RAC VMC AWS Policy – OSR 0 RAID 1’ is shown as below. The rules include

  • Failures to tolerate (FTT) set to 1 failure – RAID-1 (Mirroring)
  • Number of disk stripes per object set to 1
  • Object Space Reservation (OSR) set to Thin provisioning

 

 

 

Currently both RAC VMs have shared vmdk’s with VM storage policy ‘Oracle RAC VMC AWS Policy – OSR 100 RAID 1’ and are compliant with the storage policy.

 

 

 

 

Details of RAC VM ‘rac18c1’ are shown below.

 

 

 

RAC VM ‘rac18c1’ has 8 vCPUs and 32 GB RAM, with 3 vmdk’s.

  • Non-shared vmdk’s
    • SCSI 0:0 rac18c1.vmdk of size 60G [ Operating System ]
    • SCSI 0:1 rac18c1_1.vmdk of size 60G [ Oracle Grid and RDBMS Binaries ]
  • Shared vmdk
    • SCSI 2:0 rac18c1_3.vmdk of size 3TB [ Database ]

 

 

Details of the 3TB shared vmdk can be seen below. Notice that the shared vmdk is provisioned with VM storage policy ‘Oracle RAC VMC AWS Policy – OSR 100 RAID 1’.

 

 

 

Details of RAC VM ‘rac18c2’ are shown below. Both VMs are identical from a CPU and memory perspective.

 

 

RAC VM ‘rac18c2’ has 8 vCPUs and 32 GB RAM, with 3 vmdk’s.

  • Non-shared vmdk’s
    • SCSI 0:0 rac18c2.vmdk of size 60G [ Operating System ]
    • SCSI 0:1 rac18c2_1.vmdk of size 60G [ Oracle Grid and RDBMS Binaries ]
  • Shared vmdk
    • SCSI 2:0 rac18c1_3.vmdk of size 3TB [ Database ]

 

Details of the 3TB shared vmdk can be seen below. Notice that RAC VM ‘rac18c2’ points to RAC VM ‘rac18c1’ shared vmdk which has been provisioned with VM storage policy ‘Oracle RAC VMC AWS Policy – OSR 100 RAID 1’.

 

 

 

The operating system ‘fdisk’ command reveals that the OS sees a 3 TB disk on both VMs.

 

 

[root@rac18c1 ~]# fdisk -lu /dev/sdd
Disk /dev/sdd: 3298.5 GB, 3298534883328 bytes, 6442450944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4f8c7d76
Device Boot Start End Blocks Id System
/dev/sdd1 2048 4294967294 2147482623+ 83 Linux
[root@rac18c1 ~]#

 

[root@rac18c2 ~]# fdisk -lu /dev/sdd
Disk /dev/sdd: 3298.5 GB, 3298534883328 bytes, 6442450944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4f8c7d76
Device Boot Start End Blocks Id System
/dev/sdd1 2048 4294967294 2147482623+ 83 Linux
[root@rac18c2 ~]#

 

 

Looking at the actual space consumed for RAC VM ‘rac18c1’, the shared vmdk ‘rac18c1_3.vmdk’ shows up as 6 TB because the VM storage policy had

  • OSR=100 , so Eager Zero Thick provisioned
  • Failures to tolerate (FTT) set to ‘1 failure – RAID-1 (Mirroring)’ , so 2 copies

 

 

 

 

The actual space consumed for RAC VM ‘rac18c2’ is shown as below.

 

 

 

All cluster services and resources are up.

 

[root@rac18c1 ~]# /u01/app/18.0.0/grid/bin/crsctl stat res -t
——————————————————————————–
Name Target State Server State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac18c1 STABLE
ONLINE ONLINE rac18c2 STABLE
ora.DATA_DG.dg
ONLINE ONLINE rac18c1 STABLE
ONLINE ONLINE rac18c2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac18c1 STABLE
ONLINE ONLINE rac18c2 STABLE
ora.chad
ONLINE ONLINE rac18c1 STABLE
ONLINE ONLINE rac18c2 STABLE
ora.net1.network
ONLINE ONLINE rac18c1 STABLE
ONLINE ONLINE rac18c2 STABLE
ora.ons
ONLINE ONLINE rac18c1 STABLE
ONLINE ONLINE rac18c2 STABLE
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac18c2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac18c1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac18c1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE rac18c1 169.254.10.195 192.1
68.140.207,STABLE
ora.asm
1 ONLINE ONLINE rac18c1 Started,STABLE
2 ONLINE ONLINE rac18c2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE rac18c1 STABLE
ora.mgmtdb
1 ONLINE ONLINE rac18c1 Open,STABLE
ora.qosmserver
1 ONLINE ONLINE rac18c1 STABLE
ora.rac18c.db
1 ONLINE ONLINE rac18c1 Open,HOME=/u01/app/o
racle/product/18.0.0
/dbhome_1,STABLE
2 ONLINE ONLINE rac18c2 Open,HOME=/u01/app/o
racle/product/18.0.0
/dbhome_1,STABLE
ora.rac18c1.vip
1 ONLINE ONLINE rac18c1 STABLE
ora.rac18c2.vip
1 ONLINE ONLINE rac18c2 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac18c2 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac18c1 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac18c1 STABLE
——————————————————————————–
[root@rac18c1 ~]#

 

 

Database Details

  • The recommendation is to have multiple ASM disk groups on multiple PVSCSI controllers, as per the Oracle best practices guide
  • The RAC database has 1 ASM disk group ‘DATA_DG’ with one 3 TB ASM disk, created for the sake of simplicity to show all the test cases
    • Actual ASM disk provisioned size = 3 TB
    • Database space used < 3 TB
  • Tablespace ‘SLOB’ of size 1 TB was created on the ASM disk group ‘DATA_DG’ and loaded with SLOB data
  • ASM disk group ‘DATA_DG’ also houses all the other components of the database and the RAC cluster, including OCR, VOTE, redo log files, data files etc.

 

 

  

 

Test Suite Setup

 

Key points to be kept in mind as part of the test cases

  • Prior to SDDC Version 1.8v2, Oracle RAC on VMware Cloud on AWS required the shared VMDKs to be Eager Zero Thick provisioned (OSR=100) for multi-writer mode to be enabled
  • Starting SDDC Version 1.8v2, Oracle RAC on VMware Cloud on AWS does NOT require the shared VMDKs to be Eager Zero Thick provisioned (OSR=100) for multi-writer mode to be enabled
    • shared VMDKs can be thin provisioned (OSR=0) for multi-writer mode to be enabled
  • Existing / Current RAC vmdk’s were provisioned as EZT using OSR=100
  • Online change of Storage Policy of a shared vmdk from OSR=100 to OSR=0 and vice-versa can be performed without any Oracle RAC downtime

 

 

 

As part of the test suite, 5 test cases were performed against the RAC cluster:

  • Current shared vmdk Storage Policy
    • ‘Oracle RAC VMC AWS Policy – OSR 100 RAID 1’
  • Create a new VM Storage Policy
    • ‘Oracle RAC VMC AWS Policy – OSR 0 RAID 1’
  • Test Case 1-4
    • Using existing ASM Disk Group (DATA_DG)
      • Change Storage Policy of the shared vmdk’s OSR=100 to OSR=0 online without any RAC downtime
      • Change Storage Policy of the shared vmdk’s OSR=0 to OSR=100 online without any RAC downtime
      • Repeat the above 2 tests again just for consistency sake and check DB and OS error logs
  • Test Case 5
    • Using existing ASM Disk Group (DATA_DG)
      • ASM Disk Group (DATA_DG) has 1 ASM disk with VM Storage Policy OSR=100
      • Add a new vmdk with VM Storage Policy OSR=0
      • The guest operating system only sees the vmdk’s provisioned size
        • it has no visibility into the actual allocated vmdk size on the underlying storage
      • Drop old ASM disk with VM Storage Policy OSR=100 with ASM rebalance operation
      • Check DB and OS error logs

 

SLOB Test Suite 2.4.2 was used with a run time of 15 minutes and 128 threads:

  • UPDATE_PCT=30
  • SCALE=10G
  • THINK_TM_FREQUENCY=0
  • RUN_TIME=900

 

All test cases were performed and the results captured. Only Test Case 1 is published as part of this blog; the other test cases had exactly the same outcome and are therefore not included.

 

 

Test Case 1: Online change of Storage Policy for shared vmdk with no RAC downtime

 

As part of this test case, while the Oracle RAC was running, the Storage Policy of the shared vmdk was changed from ‘Oracle RAC VMC AWS Policy – OSR 100 RAID 1’ to ‘Oracle RAC VMC AWS Policy – OSR 0 RAID 1’.

 

1) Start SLOB load generator for 15 minutes against the RAC database with 128 threads against the SCAN to load balance between the 2 RAC instances

 

2) Edit settings for RAC VM ‘rac18c1’.

 

 

 

 

3) Change the shared vmdk Storage Policy from OSR=100 to OSR=0

 

 

 

 

 

4) Edit settings for RAC VM ‘rac18c2’ and change shared vmdk Storage Policy from OSR=100 to OSR=0

 

 

 

 

5) Check both RAC VM’s ‘rac18c1’ and ‘rac18c2’ Storage Policy Compliance

 

 

 

6) In the Datastore view, check the RAC VM ‘rac18c1’ shared vmdk ‘rac18c1_3.vmdk’ size.

 

  • As per VM settings, the Allocated Size is 3 TB but
  • Actual size on vSAN datastore is < 3TB

 

This indicates that the shared vmdk is indeed thin provisioned, not thick provisioned.

 

 

 

 

7) The OS still reports the ASM disks at their original size, i.e. 3 TB

 

[root@rac18c1 ~]# fdisk -lu /dev/sdd
Disk /dev/sdd: 3298.5 GB, 3298534883328 bytes, 6442450944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4f8c7d76
Device Boot Start End Blocks Id System
/dev/sdd1 2048 4294967294 2147482623+ 83 Linux
[root@rac18c1 ~]#

 

[root@rac18c2 ~]# fdisk -lu /dev/sdd
Disk /dev/sdd: 3298.5 GB, 3298534883328 bytes, 6442450944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4f8c7d76
Device Boot Start End Blocks Id System
/dev/sdd1 2048 4294967294 2147482623+ 83 Linux
[root@rac18c2 ~]#

 

 

8) No errors were observed in the database Alert log files and OS /var/log/messages file for both RAC instances

 

 

A summary of the test matrix, along with the outcomes of the various test cases, is shown below.

 

 

 

 

Summary

 

  • Prior to SDDC version 1.8v2, Oracle RAC on VMware Cloud on AWS required the shared VMDKs to be Eager Zero Thick provisioned (OSR=100) for multi-writer mode to be enabled
  • Starting with SDDC version 1.8v2, Oracle RAC on VMware Cloud on AWS does NOT require the shared VMDKs to be Eager Zero Thick provisioned (OSR=100) for multi-writer mode to be enabled
    • shared VMDKs can be thin provisioned (OSR=0) with multi-writer mode enabled
  • The Storage Policy of a shared vmdk can be changed ONLINE from OSR=100 to OSR=0 (and vice versa) without any Oracle RAC downtime
  • All of the above test cases were performed and the results captured. Only Test Case 1 is published as part of this blog; the other test cases had exactly the same outcome and are therefore not included.

 

All Oracle on vSphere white papers, including Oracle on VMware vSphere / VMware vSAN / VMware Cloud on AWS best practices, deployment guides, and the workload characterization guide, can be found at the URL below.

Oracle on VMware Collateral – One Stop Shop
http://www.vmw.re/oracle-one-stop-shop
