Accelerating Oracle Performance using vSphere Persistent Memory (PMEM)

Customers have successfully run their business-critical Oracle workloads with high performance demands on VMware vSphere for many years.

Deploying IO-intensive Oracle workloads requires fast storage performance with low latency and resiliency from database failures. Latency, which is a measurement of response time, directly impacts a technology’s ability to deliver faster performance for business-critical applications.

There has been a disruptive paradigm shift in data storage called Persistent Memory (PMEM) that resides between DRAM and disk storage in the data storage hierarchy.

More information about Persistent Memory (PMEM) and how vSphere 6.7 can take advantage of PMEM technology to accelerate IO-intensive Oracle workloads can be found here.


Accelerating Oracle Performance using vSphere Persistent Memory (PMEM) – Reference Architecture


The Accelerating Oracle Performance using vSphere Persistent Memory (PMEM) paper examines the performance of Oracle databases using the VMware vSphere 6.7 Persistent Memory feature in different modes for enhanced redo log performance, accelerated flash cache performance, and the possibility of reducing Oracle license counts.



Additional use case: vPMEMDisk versus vPMEM (memory and raw modes)


A VM ‘Oracle122-RHEL-PMEM-udev’ was created as a copy of the ‘Oracle122-RHEL-PMEM’ VM used in the paper.


VM Specifications

  • 12 vCPUs and 64GB memory
  • Red Hat 7.4 operating system
  • Oracle Grid Infrastructure and RDBMS binaries were installed; the Oracle SGA was set to 32GB and the PGA to 12GB
  • A single instance database ‘DBPROD’ was created
  • All database-related vmdks were set to Eager Zeroed Thick in Independent Persistent mode to ensure maximum performance, with no snapshot capability
  • All database-related vmdks were partitioned using Linux utilities with proper alignment offset and labelled with Oracle ASMLib or Linux udev for device persistence.
  • Oracle ASM ‘DATA_DG’ and ‘REDO_DG’ disk groups were created on an all-flash SAN-attached storage array with external redundancy and configured with the default allocation unit (AU) size of 1M.
  • ASM ‘DATA_DG’ and ‘REDO_DG’ disks were presented on different PVSCSI controllers for performance and queue depth purposes.
  • All best practices for Oracle on VMware SDDC were followed as per the ‘Oracle Databases on VMware—Best Practices Guide’ which can be found here



  • OEL 7.4 was not compatible with vPMEM mode at the time of writing this paper
  • udev rules were used instead of ASMLIB when using PMEM, as we ran into disk-partitioning issues with vPMEM devices
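For device persistence with udev, a rule along the following lines can be used; this is an illustrative sketch, and the device name, ownership values, and file name are assumptions rather than the exact values from our setup.

```shell
# Illustrative udev rule granting Oracle stable ownership of the vPMEM
# block device (device name, owner/group, and file name are assumptions).
cat > /etc/udev/rules.d/99-oracle-pmem.rules <<'EOF'
KERNEL=="pmem0", OWNER="oracle", GROUP="oinstall", MODE="0660"
EOF

# Reload and re-trigger udev so the rule takes effect without a reboot
udevadm control --reload-rules
udevadm trigger --type=devices --action=change
```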




VM Disk Layout

  • Hard Disk 1 – Operating System
  • Hard Disk 2 – Oracle Binaries
  • Hard Disk 3 – DATA_DG
  • Hard Disk 4 – REDO_DG


Workload Generator

This solution primarily uses the SLOB workload generator to generate a heavy batch-processing workload on the Oracle database. During workload generation, Oracle AWR and Linux SAR reports were used to compare performance and validate the test use cases. The Oracle database was restarted after every test case to ensure no blocks or SQL statements remained cached in the SGA.


SLOB configuration

  • Database VM with a 2,048GB SLOB schema
  • Workload is purely 100 percent writes to mimic a heavy IO database batch-processing workload (SLOB parameter UPDATE_PCT was set to 100)
  • Number of users set to 1 with 0 think time to hit the database with maximum requests concurrently, generating an extremely intensive batch workload
  • SLOB parameter SCALE for the workload was set to 1024GB with Oracle SGA set to 32GB
  • SLOB parameter REDO_STRESS for the workload was set to HEAVY
  • SLOB parameter RUN_TIME was set to 30 minutes.
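The settings above correspond to slob.conf entries along these lines (a configuration sketch showing only the parameters called out above; the remaining values are SLOB defaults, not taken from the actual file used):

```shell
# Fragment of slob.conf reflecting the parameters above (a sketch,
# not the exact file used; remaining values are SLOB defaults).
UPDATE_PCT=100        # 100 percent writes for a heavy batch IO profile
SCALE=1024G           # active data set size
REDO_STRESS=HEAVY     # maximum redo generation
RUN_TIME=1800         # 30-minute runs
THINK_TM_FREQUENCY=0  # zero think time for maximum concurrent requests
```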


Test Cases

Run SLOB against the database with redo log files on:

  • REDO_DG ASM disk group backed by All Flash SAN Storage (Baseline)
  • REDO_DG ASM disk group backed by vPMEMDisk
  • REDO_DG ASM disk group backed by vPMEM in raw mode
  • /redolog ext4 file system backed by vPMEM in raw mode with the dax mount option
  • /redolog_dax ext4 file system backed by vPMEM in memory mode with the dax mount option
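The last two test cases depend on an ext4 file system mounted with the dax option. As a sketch, assuming the vPMEM device appears in the guest as /dev/pmem0 (the device name and mount point are assumptions), the setup looks like:

```shell
# Create an ext4 file system on the vPMEM device and mount it with the
# dax option so file IO bypasses the page cache (names are assumptions).
mkfs.ext4 /dev/pmem0
mkdir -p /redolog_dax
mount -o dax /dev/pmem0 /redolog_dax

# Confirm the dax mount option is active
mount | grep /redolog_dax
```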


Additional Database Setup

  • 16 redo log groups of 256 MB each, with 2 members per group, were created in the REDO_DG
  • The initialization parameter ‘db_writer_processes’ was set to 3 because the initial run of the workload, being very batch intensive, was waiting on the checkpoint process to complete, and the intention of the test is to demonstrate the reduced wait time on the ‘log file switch’ event.
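As a hedged sketch, the setup above can be expressed in SQL*Plus along these lines (group numbers are illustrative, and the exact statements used are not shown in the paper; db_writer_processes is a static parameter, so a restart is required):

```shell
# Sketch of the database setup above, run from SQL*Plus as sysdba
# (group numbers are illustrative; the ADD LOGFILE statement is
# repeated until 16 groups of 256 MB with 2 members each exist).
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE ADD LOGFILE GROUP 5 ('+REDO_DG','+REDO_DG') SIZE 256M;
-- ... repeat through the remaining groups ...

-- Extra DB writer processes so checkpoints keep up with the batch load
ALTER SYSTEM SET db_writer_processes=3 SCOPE=SPFILE;

-- db_writer_processes is static; restart the instance to apply it
SHUTDOWN IMMEDIATE
STARTUP
EOF
```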



AWR reports were collected, analyzed, and compared for all 5 use cases.




Comparing the five use cases showed:

  • Reduction in ‘log file switch completion’ wait times
  • Increase in the amount of work done by the workload (SQL executes/sec and transactions/sec)
  • Reduced impact of log file switches



The above test cases using vPMEMDisk and vPMEM modes indicate a reduction in wait times for critical database events, e.g., ‘log file switch completion’, and at the same time an increase in the amount of work done by the workload.


Persistent Memory (PMEM) technology enables byte-addressable updates and prevents data loss during power interruptions. Instead of having nonvolatile storage at the bottom with the largest capacity but the slowest performance, nonvolatile storage is now very close to DRAM in terms of performance.

The Persistent Memory Performance in vSphere 6.7 paper can be found here.

Posted in Uncategorized

Migrating Oracle Workloads to VMware Cloud on AWS

Customers deploying Oracle Real Application Clusters (RAC) have requirements such as stringent SLAs, continued high performance, and application availability. Managing data storage in these environments is a major challenge for business organizations because of these rigorous requirements.

Common issues presented when using traditional storage solutions for business-critical application (BCA) include inadequate performance, scale-in/scale-out, storage inefficiency, complex management, and high deployment and operating costs.

With more and more production servers being virtualized, the demand for highly converged server-based storage is surging. VMware Virtual SAN aims at providing a highly scalable, available, reliable, and high-performance storage using cost-effective hardware, specifically direct-attached disks in VMware ESXi hosts. Virtual SAN adheres to a new policy-based storage management paradigm, which simplifies and automates complex management workflows that exist in traditional enterprise storage systems with respect to configuration and clustering.

Virtual SAN Stretched Cluster enables active/active data centers that are separated by metro distance. Extended Oracle RAC with Virtual SAN enables transparent workload sharing between two sites accessing a single database while providing the flexibility of migrating or balancing workloads between sites in anticipation of planned events such as hardware maintenance.

VMware Cloud on AWS is an on-demand service that enables customers to run applications across vSphere-based cloud environments with access to a broad range of AWS services. Powered by VMware Cloud Foundation, this service integrates vSphere, vSAN and NSX along with VMware vCenter management, and is optimized to run on dedicated, elastic, bare-metal AWS infrastructure. ESXi hosts in VMware Cloud on AWS reside in an AWS Availability Zone (AZ) and are protected by vSphere HA.

A new feature called Stretched Clusters for VMware Cloud on AWS is designed to protect against an AWS availability zone failure. Now applications can span multiple AWS Availability Zones (AZ) within a VMware Cloud on AWS cluster.



One of the use cases is running Extended Oracle RAC on Stretched Clusters for VMware Cloud on AWS to provide greater availability and protect against AZ failures.

With the release of Virtual SAN Stretched Cluster, followed by VMware Cloud on AWS and, recently, Stretched Clusters for VMware Cloud on AWS, applications such as Oracle RAC, which have stringent requirements for very high SLAs, continued high performance, and application availability, can now take dual advantage of being deployed in the cloud and having high availability across multiple AZs in the Stretched Clusters for VMware Cloud on AWS deployment model.

This paper describes the deployment and migration options, along with best practices, for migrating Oracle standalone and Oracle RAC deployments from VMware on-premises infrastructure (vSphere with traditional storage or VMware HCI vSAN) to Stretched Clusters for VMware Cloud on AWS.

Posted in Uncategorized

VMworld 2018 Oracle Customer Bootcamps

Architecting Oracle Workloads on VMware Technologies 

On a mission to arm yourself with the latest knowledge and skills needed to master virtualizing Oracle on the VMware Software Defined Data Center (SDDC), along with moving workloads to VMware Cloud on AWS?


VMworld customer bootcamps can get you in shape to lead the virtualization charge in your organization, with instructor-led demos and in-depth coursework designed to put you in the ranks of the IT elite.

Oracle on vSphere
The Oracle on VMware SDDC Bootcamp will provide the attendee the opportunity to learn the essential skills necessary to run Oracle implementations on VMware SDDC with a well defined journey to running the same Oracle workloads on VMware Cloud on AWS.

The best practices and optimal approaches to deployment, operation, and management of Oracle database and application software will be presented by VMware expert Sudhir Balasubramanian, Oracle Practice Lead, who will be joined by other VMware and industry experts.

This technical workshop will exceed the standard breakout session format by delivering “real-life,” instructor-led, live training and incorporating the recommended design and configuration practices for architecting Business Critical Databases on VMware SDDC  infrastructure and VMware Cloud on AWS.

Subjects will be covered in depth, including running Oracle workloads, e.g., single instance and Real Application Clusters (RAC), using Automatic Storage Management (ASM) on vSphere 6.7 with new features such as paravirtualized RDMA and Persistent Memory; HCI vSAN, VAIO, VVols, and NSX; as well as running Oracle workloads on VMware Cloud on AWS.

Learn More



Cost: $800 / seat

Sunday August 26, 2018
8:00am to 5:00pm
(registration opens at 6:30am)

Mandalay Bay, South Convention Centre
Level 2, Room Number (will be posted as soon as it is made available)


Be sure to add the Bootcamp in step 4 of your VMworld conference registration, under Educational Offerings, after you’ve selected your conference pass.

Registration is open, seating is limited! Lunch and breaks provided.

Looking forward to seeing you all there!

Posted in Uncategorized

Migrating an Oracle RAC Cluster using Storage vMotion to vSAN Storage – Private Investigation



“Give me ten men like Clouseau, and I can destroy the world…” said former Chief Inspector Dreyfus about Inspector Jacques Clouseau and his incompetent, clumsy, and chaotic detective skills.



This blog will not focus on the “how to perform Storage vMotion” aspect of the process; it will focus on what we uncovered during our investigations (sans the incompetence, clumsiness, and chaos that followed Inspector Jacques Clouseau) into Storage vMotioning an Oracle RAC cluster to vSAN storage.

The Around the “Storage World” in no time – Storage vMotion and Oracle Workloads blog focused on how to Storage vMotion an Oracle RAC cluster from one storage system to another without ANY downtime.

The blog Migrating non-production Oracle RAC using VMware Storage vMotion with minimal downtime focused on Storage vMotion of a 2-node non-production Oracle RAC cluster from one datastore to a datastore on a different storage system with minimal downtime and in minimal time.

You can read more about it here.

Posted in Uncategorized

Migrating non-production Oracle RAC using VMware Storage vMotion with minimal downtime



Much has been discussed and documented about Storage vMotion of Oracle RAC Clusters on VMware SDDC which can be found in the numerous blog posts available at Oracle on VMware Collateral – One Stop Shop.




Development, QA, and pre-production RAC clusters form an important part of any RAC deployment environment, and while they may not be as large as the production RAC clusters, they are deemed equally critical to the software life cycle process. Fortunately, they are not constrained by 100% uptime business SLAs, which means admins can take another route: Storage vMotioning Oracle RAC with minimal downtime in a very short time.

This blog focuses on Storage vMotion of a 2-node non-production Oracle RAC cluster from one datastore to a datastore on a different storage system with minimal downtime and in minimal time.

Let us first review some key ideas, including the deployment model of Oracle on vSphere and the fundamentals of Storage vMotion.

You can read more on it here.

Posted in Uncategorized

Introducing vSphere 6.7 for Enterprise Applications

vSphere 6.7 introduces new storage and networking features which have a major impact on the performance of enterprise applications; these include support for Persistent Memory (PMEM) and enhanced support for Remote Direct Memory Access (RDMA).


Persistent Memory (PMEM)

With vSphere Persistent Memory (PMEM), customers using supported hardware servers can get the benefits of ultra-high-speed storage: DRAM-like speeds at flash-like prices.

The following diagram shows the convergence of memory and storage.



Technologies at the top of the pyramid (the CPU cache and registers, and DRAM) have the shortest latency (best performance), but at a higher cost relative to the items at the bottom of the pyramid. All of these components are accessed directly by the application, also known as load/store access.

Technologies at the bottom of the pyramid, magnetic media (HDDs and tape) and NAND flash (SSDs and PCIe workload accelerators), have longer latency and lower cost relative to the technology at the top of the pyramid. These components use block access, meaning data is transferred in blocks and the media is not accessed directly by the application.

PMEM is a new layer called Non-Volatile Memory (NVM) and sits between NAND flash and DRAM, providing faster performance relative to NAND flash but also providing the non-volatility not typically found in traditional memory offerings.  This technology layer provides the performance of memory with the persistence of traditional storage.

Enterprise Applications can be deployed in virtual machines which are exposed to PMEM datastores.  PMEM datastores are created from NVM storage attached locally to each server. Performance benefits can then be attained as follows:

  • vSphere can allocate a piece of the PMEM datastore and present it to the virtual machine as a disk, a virtual persistent memory disk (vPMEMDisk), which is used as an ultra-fast disk. In this mode, no guest OS or application change is required.
  • vSphere can allocate a piece of the PMEM datastore in a server and present it to a virtual machine as a virtual NVDIMM. This type of virtual device exposes a byte addressable persistent memory to the virtual machine.
    • Virtual NVDIMM is compatible with the latest guest operating systems that support persistent memory. Applications need not change, and they experience faster file access as the modified OS file system bypasses the buffer cache.
    • Applications can be modified to take advantage of PMEM and experience the highest increase in performance via direct and uninterrupted access to hardware.
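Inside a guest that is presented a virtual NVDIMM, the device can be inspected and configured with the ndctl utility; the sketch below assumes a Linux guest, and the output, names, and sizes vary by platform.

```shell
# Sketch: inspect the virtual NVDIMM seen by the guest and create a
# file-system-DAX namespace on it (output and names vary by platform).
ndctl list --dimms                   # NVDIMMs visible to the OS
ndctl create-namespace --mode=fsdax  # expose a /dev/pmemN device with DAX
ls /dev/pmem*                        # resulting device node(s)
```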

Applications deployed on PMEM backed datastores can benefit from live migration (VMware vMotion) and VMware DRS – this is not possible with PMEM in physical deployments.


Remote Direct Memory Access (RDMA)

vSphere 6.7 introduces new protocol support for Remote Direct Memory Access (RDMA) over Converged Ethernet, or RoCE (pronounced “rocky”) v2, a new software Fibre Channel over Ethernet (FCoE) adapter, and iSCSI Extension for RDMA (iSER). These features enable customers to integrate with even more high-performance storage systems, providing more flexibility to use the hardware that best complements their workloads.

RDMA support is enhanced in vSphere 6.7 to bring even more performance to enterprise workloads by leveraging kernel and OS bypass, reducing latency and dependencies. This is illustrated in the diagram below.



When virtual machines are configured with RDMA in passthrough mode, the workload is tied to a physical host with no DRS capability, i.e., no ability to vMotion. However, customers who want to harness the power of vMotion and DRS and still experience the benefits of RDMA, albeit at a very small performance penalty, can do so with paravirtualized RDMA (PVRDMA). With PVRDMA, applications can run even in the absence of a Host Channel Adapter (HCA) card, and RDMA-based applications can run in ESXi guests while virtual machines remain live-migratable.

Use cases for this technology include distributed databases, financial applications, and Big Data.

This blog was also posted here.

More on vSphere 6.7 new features can be found here.

All Oracle on vSphere white papers including Oracle licensing on vSphere/vSAN, Oracle best practices, RAC deployment guides, workload characterization guide can be found at Oracle on VMware Collateral – One Stop Shop

This blog was authored by Sudhir Balasubramanian and Vas Mitra.

Posted in Uncategorized

A Journey to the Clouds – Oracle on VMware Cloud on AWS

“We have nothing to fear but…”, the famous words of Chief Vitalstatistix, chief of the famous Gaulish village in the Asterix & Obelix series.



Legend goes when the Gallic chieftains were asked by Alexander the Great what they were most afraid of in all the world, they replied that their worst fear was that the “sky might fall on their heads”.

In today’s world, the “sky”, a.k.a. the cloud, is the platform for bursting and sustaining application workloads. Cloud service providers offer network services, infrastructure, or business applications in the cloud, helping customers kick off new projects or meet business needs for temporary, seasonal, or unplanned demand.

More information about it can be found here.

Posted in Uncategorized