Oracle and vSphere Persistent Memory (PMEM) – vPMEM vs. vPMEMDisk

In the previous blog post, Accelerating Oracle Performance using vSphere Persistent Memory (PMEM), we demonstrated how the performance of Oracle databases can be improved using the VMware vSphere 6.7 Persistent Memory feature in different modes for the use cases below:

  • Improved performance of Oracle redo logs using vPMEMDisk-backed vmdks / vPMEM disks in DAX mode
  • Accelerating Performance using Oracle Smart Flash Cache
  • Potential reduction in Oracle Licensing

In this blog, we demonstrate the performance improvement of using vPMEM over vPMEMDisk.

The additional use case below shows the performance improvement in redo log activity when redo log files are placed on vPMEM-backed vmdks / vPMEM disks in DAX mode, compared with redo logs on vPMEMDisk-backed vmdks.

vPMEMDisk versus vPMEM (memory and raw mode)

 

A VM ‘Oracle122-RHEL-PMEM-udev’ was created as a copy of the ‘Oracle122-RHEL-PMEM’ VM used in the previous paper.

 

VM Specifications

  • 12 vCPUs and 64GB memory
  • Red Hat Enterprise Linux 7.4 operating system
  • Oracle Database 12.2.0.1.0 with Grid Infrastructure and RDBMS binaries installed; SGA set to 32GB and PGA set to 12GB
  • A single-instance database ‘DBPROD’ was created
  • All database-related vmdks were set to Eager Zeroed Thick in Independent Persistent mode to ensure maximum performance, with no snapshot capability
  • All database-related vmdks were partitioned using Linux utilities with the proper alignment offset and labelled with Oracle ASMLib or Linux udev for device persistence (see the sketch after this list)
  • Oracle ASM ‘DATA_DG’ and ‘REDO_DG’ disk groups were created on all-flash SAN-attached storage with external redundancy and configured with the default allocation unit (AU) size of 1M
  • ASM ‘DATA_DG’ and ‘REDO_DG’ disks were presented on different PVSCSI controllers for performance and queue-depth purposes
  • All best practices for Oracle on VMware SDDC were followed as per the ‘Oracle Databases on VMware—Best Practices Guide’, which can be found here
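
As a rough sketch of the storage preparation described above, the commands below create a 1 MiB-aligned GPT partition on one of the database vmdks and build an ASM disk group with external redundancy and a 1M AU size. The device paths (/dev/sdc, /dev/oracleasm/disks/DATA01) are illustrative assumptions, not the actual names from the test bed.

    # Hypothetical vmdk device: lay down a GPT label and a 1 MiB-aligned partition
    parted -s /dev/sdc mklabel gpt
    parted -s /dev/sdc mkpart primary 1MiB 100%

    # Create the disk group (external redundancy, 1M AU) as SYSASM;
    # the ASMLib device name below is an assumption
    echo "CREATE DISKGROUP DATA_DG EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/DATA01' ATTRIBUTE 'au_size'='1M';" | sqlplus -s / as sysasm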

 

Note

  • OEL 7.4 was not compatible with vPMEM mode at the time of writing this paper
  • udev rules were used instead of ASMLib when using PMEM, as we ran into disk partitioning issues with vPMEM devices (an example rule is sketched below)
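
A minimal sketch of the udev approach, assuming the vPMEM partition surfaces as /dev/pmem0p1 and the usual grid:asmadmin ownership; the rule file name and ownership below are assumptions, not the exact rules used in the test.

    # Persist ownership/permissions on the PMEM partition so ASM can open it
    echo 'KERNEL=="pmem0p1", OWNER="grid", GROUP="asmadmin", MODE="0660"' \
      > /etc/udev/rules.d/99-oracle-pmem.rules

    # Apply the rule without a reboot
    udevadm control --reload-rules
    udevadm trigger --type=devices --action=change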


VM Disk Layout

  • Hard Disk 1 – Operating System
  • Hard Disk 2 – Oracle Binaries
  • Hard Disk 3 – DATA_DG
  • Hard Disk 4 – REDO_DG

 

Workload Generator

This solution primarily uses the SLOB workload generator to generate a heavy batch-processing workload on the Oracle database. During workload generation, Oracle AWR and Linux SAR reports were used to compare the performance and validate the testing use cases.

The Oracle database was restarted after every test case to ensure no blocks or SQL statements remained cached in the SGA; a minimal restart sketch follows.
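
A minimal sketch of the between-run restart, run as the oracle user with the DBPROD environment set:

    # Bounce the instance so no buffers or parsed SQL survive between runs
    printf 'shutdown immediate\nstartup\nexit\n' | sqlplus -s / as sysdba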

 

SLOB configuration

  • Database VM with a 2,048GB SLOB schema
  • The workload is 100 percent writes to mimic a heavy I/O database batch-processing workload (SLOB parameter UPDATE_PCT was set to 100); see the slob.conf sketch after this list
  • Number of users set to 1 with 0 think time to hit the database with maximum requests concurrently and generate an extremely intensive batch workload
  • SLOB parameter SCALE for the workload was set to 1024GB with the Oracle SGA set to 32GB
  • SLOB parameter REDO_STRESS for the workload was set to HEAVY
  • SLOB parameter RUN_TIME was set to 30 minutes.
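
The settings above map onto slob.conf roughly as follows; slob.conf is a shell-style key=value file, RUN_TIME is expressed in seconds, and the exact parameter set varies slightly between SLOB 2.x releases.

    # slob.conf excerpt matching the configuration above
    UPDATE_PCT=100        # 100 percent writes: heavy batch-style workload
    SCALE=1024G           # active data set far larger than the 32GB SGA
    REDO_STRESS=HEAVY     # maximize redo generation per transaction
    RUN_TIME=1800         # 30 minutes, expressed in seconds
    THINK_TM_FREQUENCY=0  # zero think time for maximum pressure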

 

Test Cases

Run SLOB against the database with redo log files on:

  • REDO_DG ASM disk group backed by All Flash SAN Storage (Baseline)
  • REDO_DG ASM disk group backed by vPMEMDisk
  • REDO_DG ASM disk group backed by vPMEM in raw mode
  • /redolog ext4 file system backed by vPMEM in raw mode with the dax option
  • /redolog_dax ext4 file system backed by vPMEM in memory mode with the dax option (a guest-side setup sketch follows this list)
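
For the last two test cases, a sketch of the guest-side setup, assuming the vPMEM region surfaces as /dev/pmem0 in RHEL 7.4 and reusing the mount point names from the list above. The ndctl step applies only when the namespace mode needs switching; 'memory' was the DAX-capable mode name in ndctl of that era, later renamed fsdax.

    # Reconfigure the namespace for DAX-capable ("memory"/fsdax) mode if needed
    ndctl create-namespace -f -e namespace0.0 --mode=memory

    # Build an ext4 file system on the PMEM device and mount it with -o dax,
    # so file I/O bypasses the page cache via direct loads/stores
    mkfs.ext4 /dev/pmem0
    mkdir -p /redolog_dax
    mount -o dax /dev/pmem0 /redolog_dax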

 

Additional Database Setup

  • 16 redo log groups, 256MB each, with 2 members per group, created in the REDO_DG
  • Initialization parameter ‘db_writer_processes’ was set to ‘3’ because the initial run of the workload, being very batch intensive, was waiting on the checkpoint process to complete, and the intention of the test is to demonstrate the reduced wait time on the ‘log file switch’ event; a sketch of this setup follows
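
A sketch of that setup driven through SQL*Plus; group numbers are illustrative, and db_writer_processes is a static parameter, so the change takes effect at the next instance restart.

    # As SYSDBA: add a 2-member, 256MB redo log group in REDO_DG
    # (repeat for groups 2..16), then raise db_writer_processes
    printf "%s\n" \
      "ALTER DATABASE ADD LOGFILE GROUP 1 ('+REDO_DG','+REDO_DG') SIZE 256M;" \
      "ALTER SYSTEM SET db_writer_processes=3 SCOPE=SPFILE;" \
      "exit" | sqlplus -s / as sysdba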

 

Results

AWR reports were collected for all runs and analyzed and compared across all five test cases.


Analysis

  • Reduction in ‘log file switch completion’ wait times
  • Increase in the amount of work done by the workload (Executes (SQL)/sec and Transactions/sec)
  • Reduced impact of log file switches

 

Conclusion

The above test cases using vPMEMDisk and vPMEM modes indicate a reduction in wait times for critical database events, e.g., ‘log file switch completion’, and at the same time an increase in the amount of work done by the workload.

The ‘Persistent Memory Performance in vSphere 6.7’ paper can be found here.
