Kevin Closson

Platform, Database and Storage Topics

Copy Data Management for Oracle Database with EMC AppSync and XtremIO

Wed, 2015-10-14 12:08

This is a quick blog entry to invite readers to view this little demonstration video I created. The topic is Copy Data Management in an Oracle Database environment. We all know the pains involved with the number of database copies needed in today’s Oracle environment. Well, how about technology with these characteristics:

  1. 100% space efficient. There is no need for any full-copy “donor” in this solution. You can create 8192 XtremIO Virtual Copies of volumes in an XtremIO array and there is no reduction in user-capacity at the storage level. For example, 512 copies of a 1TB volume with Oracle tablespaces in it take exactly 1TB from the array.
  2. Self service. With EMC AppSync, permissions can be set up so that developers can create their own copies, refresh their own copies and expire their own copies.
  3. Speed. AppSync copy operations such as creation and refresh are measured in seconds.
  4. Data Services. All XtremIO Virtual Copies enjoy data reduction services. So as users begin to make changes to their database copies the modified blocks are first treated with de-duplication and then compression.

You more than likely need XtremIO in any case. However, now it’s also time to think about the ease of provisioning copies of Oracle databases to test/dev and other functions the XtremIO way.

It only takes minutes so please give this a view:


Filed under: oracle

Focusing on Ext4 and XFS TRIM Operations – Part I.

Sun, 2015-07-19 09:29

I’ve been doing some testing that requires rather large file systems. I have an EMC XtremIO Dual X-Brick array from which I provision a 10 terabyte volume. Volumes in XtremIO are always thinly provisioned. The testing I’m doing required me to scrutinize default Linux mkfs(8) behavior for both Ext4 and XFS. This is part 1 in a short series and it is about Ext4.

Discard the Discard Option

The first thing I noticed in this testing was the fantastical “throughput” demonstrated at the array while running the mkfs(8) command with the “-t ext4” option/arg pair. As the following screen shot shows, the “throughput” at the array level was just shy of 72GB/s.

That’s not real I/O…I’ll explain…

EMC XtremIO Dual X-Brick Array During Ext4 mkfs(8). Default Options.

The default options for Ext4 include the discard (TRIM under the covers) option. The mkfs(8) manpage has this to say about the discard option:

Attempt to discard blocks at mkfs time (discarding blocks initially is useful on solid state devices and sparse / thin-provisioned storage). When the device advertises that discard also zeroes data (any subsequent read after the discard and before write returns zero), then mark all not-yet-zeroed inode tables as zeroed. This significantly speeds up filesystem initialization. This is set as default.

I’ve read that quoted text at least eleventeen times but the wording still sounds like gibberish-scented gobbledygook to me–well, except for the bit about significantly speeding up filesystem initialization.

Since XtremIO volumes are created thin, I don’t see any reason for mkfs to take action to make them, what, thinner? Please let me share test results challenging the assertion that the discard mkfs option results in faster file system initialization. This is the default functionality, after all.

In the following terminal output you’ll see that the default mkfs options take 152 seconds to make a file system on a freshly-created 10TB XtremIO volume:


# time mkfs -t ext4 /dev/xtremio/fs/test
mke2fs 1.43-WIP (20-Jun-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
real 2m32.055s
user 0m3.648s
sys 0m17.280s
#

The mkfs(8) Command Without Default Discard Functionality

Please bear in mind that the default 152-second result is not due to languishing on pathetic physical I/O. The storage is fast. Please consider the following terminal output where I passed in the non-default -E option with the nodiscard argument. The file system creation took 4.8 seconds:

# time mkfs -t ext4 -E nodiscard /dev/xtremio/fs/test
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
 2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

real 0m4.856s
user 0m4.264s
sys 0m0.415s
#

I think 152 seconds down to 4.8 makes the point that, with proper thinly-provisioned storage, the mkfs discard option does not “significantly speed up filesystem initialization.” But initializing file systems is not something one does frequently, so investigation into the discard mount(8) option was in order.
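
Before moving on, it can be useful to confirm that the kernel actually sees a volume as discard-capable in the first place. This quick check was not part of the original test flow; it is a generic sketch, and the dm-0 device name is only a placeholder for whatever DM-MPIO name your volume receives:

# Show the discard (TRIM) capabilities the device advertises to the kernel.
lsblk -D /dev/xtremio/fs/test

# The same information is exposed in sysfs; non-zero values mean discards are accepted.
cat /sys/block/dm-0/queue/discard_granularity
cat /sys/block/dm-0/queue/discard_max_bytes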

Taking Ext4 For A Drive

Since I had this 10TB Ext4 file system–and a fresh focus on file system discard (storage TRIM) features–I thought I’d take it for a drive.

Discarded the Default Discard But Added The Non-Default Discard

While the default mkfs(8) command includes discard, the mount(8) command does not. I decided to investigate this option while unlinking a reasonable number of large files. To do so I ran a simple script (shown below) that copies 64 files of 16 gigabytes each–in parallel–into the Ext4 file system. I then timed a single invocation of the rm(1) command to remove all 64 of these files. Unlinking a file in a Linux file system is a metadata operation; however, when the discard option is used to mount the file system, each unlink operation includes TRIM operations sent to storage. The following screen shot of the XtremIO performance dashboard was taken while the rm(1) command was running. The discard mount option turns a metadata operation into a rather costly storage operation.

Array Level Activity During Bulk rm(1) Command Processing. Ext4 (discard mount option)

The following terminal output shows the test step sequence used to test the discard mount option:

# umount /mnt ; mkfs -t ext4 -E nodiscard /dev/xtremio/fs/test; mount -t ext4 -o discard /dev/xtremio/fs/test /mnt
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
 2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

# cd mnt
# cat > cpit
for i in {1..64}; do ( dd if=/data1/tape of=file$i bs=1M oflag=direct )& done
wait
# time sh ./cpit > /dev/null 2>&1 

real 5m31.530s
user 0m2.906s
sys 8m45.292s
# du -sh .
1018G .
# time rm -f file*

real 4m52.608s
user 0m0.000s
sys 0m0.497s
#

The following terminal output shows the same test repeated with the file system being mounted with the default (thus no discard) mount options:

# cd ..
# umount /mnt ; mkfs -t ext4 -E nodiscard /dev/xtremio/fs/test; mount -t ext4 /dev/xtremio/fs/test /mnt
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
 2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

# cd mnt
# cat > cpit
for i in {1..64}; do ( dd if=/data1/tape of=file$i bs=1M oflag=direct )& done
wait
#
# time sh ./cpit > /dev/null 2>&1 

real 5m31.526s
user 0m2.957s
sys 8m50.317s
# time rm -f file*

real 0m16.398s
user 0m0.001s
sys 0m0.750s
#

This testing shows that mounting an Ext4 file system with the discard mount option dramatically impacts file removal operations. The default mount options (thus no discard option) performed the rm(1) command in 16 seconds whereas the same test took 292 seconds when mounted with the discard mount option.

So how can one perform the important house-cleaning that comes with TRIM operations?

The fstrim(8) Command

Ext4 supports user-invoked, online TRIM operations on mounted file systems. I would advise people to forego the discard mount option and opt for occasionally running the fstrim(8) command. The following is an example of how long it takes to execute fstrim on the same 10TB file system stored in an EMC XtremIO array. I think that foregoing the taxation of commands like rm(1) is a good thing–especially since running fstrim is allowed on mounted file systems and only takes roughly 11 minutes on this 10TB file system.

# time fstrim -v /mnt
/mnt: 10908310835200 bytes were trimmed

real 11m29.325s
user 0m0.000s
sys 2m31.370s
#

Summary

If you use thinly-provisioned storage and want file deletion in Ext4 to return space to the array, you have a choice. You can choose to take serious performance hits when you create the file system (default mkfs(8) options) and when you delete files (optional discard mount(8) option), or you can occasionally execute the fstrim(8) command on a mounted file system.
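
For those who take the fstrim(8) route, the occasional run can be scheduled rather than remembered. The following crontab entry is only a sketch; the weekly schedule, the log file and the /mnt mount point are assumptions, not recommendations derived from this testing:

# Run fstrim against /mnt every Sunday at 02:00 and keep a simple log.
# Install with "crontab -e" as root; adjust the mount point and schedule to taste.
0 2 * * 0 /usr/sbin/fstrim -v /mnt >> /var/log/fstrim.log 2>&1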

Up Next

The next post in this series will focus on XFS.


Filed under: oracle

Announcing “SLOB Recipes”

Fri, 2015-07-17 11:28

I’ve started updating the SLOB Resources page with links to “recipes” for certain SLOB testing. The first installment is the recipe for loading an 8TB-scale SLOB 2.3 Multiple Schema Model with a 2-socket Linux host attached to EMC XtremIO. Recipes will include (at a minimum) the relevant SLOB program output (e.g., setup.sh or runit.sh), init.ora and slob.conf.

Please keep an eye on the SLOB Resources page for updates…and don’t miss the first installment. It’s quite interesting.

SLOB-recipes


Filed under: oracle

This Is Not Glossy Marketing But You Still Won’t Believe Your Eyes. EMC XtremIO 4.0 Snapshot Refresh For Agile Test / Dev Storage Provisioning in Oracle Database Environments.

Tue, 2015-07-14 19:18

This is just a quick blog post to direct readers to a YouTube video I recently created to help explain to someone how flexible EMC XtremIO Snapshots are. The power of this array capability is probably most appreciated in the realm of provisioning storage for Test and Development environments.

Although this is a silent motion picture I think it will speak volumes–or at least 1,000 words.

Please note: This is just a video demonstration to show the base mechanisms and how they relate to Oracle Database with Automatic Storage Management. This is not a scale demonstration. XtremIO snapshots are supported into the thousands, and extremely powerful “sibling trees” are fully supported.

Not Your Father’s Snapshot Technology

No storage array on the market is as flexible as XtremIO in the area of writable snapshots. This video demonstration shows how snapshots allow the administrator of a “DEV” host–using Oracle ASM–to quickly refresh to current or past versions of ASM disk group contents from the “PROD” environment.

The principles involved in this demonstration are:

  1. XtremIO snapshots are crash consistent.
  2. XtremIO snapshots are immediately created, writeable and space efficient. There is no fixed “donor” relationship. Snapshots can be created from other snapshots and refreshes can go in any direction.
  3. XtremIO snapshot refresh does not involve the host operating system. Snapshot and volume contents can be immediately “swapped” (refreshed) at the array level without any action on the host.

Regarding number 3 on that list, I’ll point out that while the operating system does not play a role in the snapshot operations per se, applications will be sensitive to contents of storage immediately changing. It is only for this reason that there are any host actions at all.

Are Host Operations Involved? Crash Consistent Does Not Mean Application-Coherent

The act of refreshing XtremIO snapshots does not change the SCSI WWN information so hosts do not have any way of knowing the contents of a LUN have changed. In the Oracle Database use case the following must be considered:

  1. With a file system based database one must unmount the file systems before refreshing a snapshot otherwise the file system will be corrupted. This should not alarm anyone. A snapshot refresh is an instantaneous content replacement at the array level. Operationally speaking, file system based databases only require database instance shutdown and the unmounting of the file system in preparation for application-coherent snapshot refresh.
  2. With an ASM based database one must dismount the ASM disk group in preparation for snapshot refresh. Because dismounting a disk group is an ASM operation rather than an operating system operation, snapshot restore for an ASM based database does not involve system administration in any way. A rough host-side sketch of this sequence follows this list.
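
The following is a minimal sketch of the DEV-side actions that bracket an array-level snapshot refresh in the ASM case. It is not a transcript from the video; the disk group name (DATA), the use of SQL*Plus, and the assumption that the proper ORACLE_SID/ORACLE_HOME are set for each connection are all mine:

# 1. Quiesce DEV: shut down the database instance, then dismount the disk group.
sqlplus / as sysdba <<EOF
shutdown immediate
EOF
sqlplus / as sysasm <<EOF
alter diskgroup DATA dismount;
EOF

# 2. Refresh the snapshot at the XtremIO array (GUI or CLI); no host action is needed.

# 3. Bring DEV back: mount the refreshed disk group and start the instance.
sqlplus / as sysasm <<EOF
alter diskgroup DATA mount;
EOF
sqlplus / as sysdba <<EOF
startup
EOF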

The video is 5 minutes long and it will show you the following happenings along a timeline:

  1. “PROD” and “DEV” database hosts (one physical and one virtual) each showing the same Oracle database (identical DBID) and database creation time as per dictionary views. This establishes the “donor”<->clone relationship. DEV is a snapshot of PROD. It is begat of a snapshot of a PROD consistency group
  2. A single-row token table called “test” in the PROD database has value “1.” The DEV database does not even have the token table (DEV is independent of PROD…it’s been changing…but its origins are rooted in PROD as per point #1)
  3. At approximately 41 seconds into the video I take a snapshot of the PROD consistency group with “value 1” in the token table. This step prepares for “time travel” later in the demonstration
  4. I then update the PROD token table to contain the value “42”
  5. At ~2:02 into the video I have already dismounted DEV ASM disk groups and started clobbering DEV with the current state of PROD via a snapshot refresh. This is “catching up to PROD”
    1. Please note: No action at all was needed on the PROD side. The refresh of DEV from PROD is a logical, crash-consistent point in time image
  6. At ~2:53 into the video you’ll see that the DEV database instance has already been booted and that it has value “42” (step #4). This means DEV has “caught up to PROD”
  7. At ~3:32 you’ll see that I use dd(1) to copy the redo LUN over the data LUN on the DEV host to introduce ASM-level corruption
  8. At 3:57 the DEV database is shown as corrupted. In actuality, the ASM disk group holding the DEV database is corrupted
  9. In order to demonstrate traveling back in time, and to recover from the dd(1) corrupting of the ASM disk group,  you’ll see at 4:31 I chose to refresh from the snapshot I took at step #3
  10. At 5:11 you’ll see that DEV has healed from the dd(1) destruction of the ASM disk group, the database instance is booted, and the value in the token table is reverted to 1 (step #3) thus DEV has traveled back in time

Please note: In the YouTube box you can click to view the video full screen, or watch it on youtube.com if the video quality is a problem:

More Information

For information on the fundamentals of EMC XtremIO snapshot technology please refer to the following EMC paper: The fundamentals of XtremIO snapshot technology

For independent validation of XtremIO snapshot technology in a highly-virtualized environment with Oracle Database 12c please click on the following link: Principled Technologies, Inc Whitepaper

For a proven solution whitepaper showing massive scale data sharing with XtremIO snapshots please click on the following link: EMC Whitepaper on massive scale database consolidation via XtremIO


Filed under: oracle

Announcing SLOB 2.3. Tarry Not, Get It While It’s Hot!

Sun, 2015-07-12 12:48

BLOG UPDATE 2015.07.16: SLOB 2.3.0.3-1 is now the current version.

This is just a quick post to announce SLOB 2.3. Please visit the SLOB Resources page to download the gzipped tar archive. The SLOB Resources page also has a link to the SLOB 2.3 Documentation. SLOB Resources Page: Click Here. New in this release:

  1. The documentation is now also included in the tar archive under SLOB/doc in PDF form.
  2. SLOB 2.3 introduces the SLOB Single Schema feature. Please see the documentation.
  3. Because of SLOB Single Schema the kit now supports SLOB Threads. Note, however, SLOB Threads can be used in either Single or Multiple Schema Model.
  4. SLOB 2.3 has two types of “Hot Spots”
    1. In Multiple Schema Model there are both per-schema Hot Spots and a Hot Schema. Please see the SLOB 2.3 documentation for descriptions of these features.
  5. Improved error handling for both the SLOB Data Loader (setup.sh) and Test Execution program (runit.sh).
  6. Licensing. Prior releases of SLOB consisted of copyrighted programs with unclear licensing. Please don’t be alarmed. SLOB is still free to use. The LICENSE file defines the word “use.”

Filed under: oracle

SLOB 2.3 User Guide

Fri, 2015-07-10 20:13

SLOB 2.3 is releasing within the next 48 hours. In case anyone wants to read about all the new features here is a link to the SLOB 2.3 User Guide:

SLOB 2.3 User Guide (pdf)

 


Filed under: oracle

SLOB 2.3 Is Getting Close!

Thu, 2015-05-28 15:30

SLOB 2.3 is soon to be released. This version has a lot of new, important features but also a significant amount of tuning in the data loading kit. Before sharing where the progress is on that front, I’ll quickly list some of the new important features that will be in SLOB 2.3:

  1. Single Schema Support. SLOB historically avoids application-level contention by having database sessions perform the SLOB workload against a private schema. The idea behind SLOB is to exert maximum I/O pressure on storage while utilizing the minimum amount of host CPU possible. This lowers the barrier to entry for proper testing as one doesn’t require dozens of processors festering in transactional SQL code just to perform physical I/O. That said, there are cases where a single, large active data set is desirable–if not preferred. SLOB 2.3 allows one to load massive data sets quickly and run large numbers of SLOB threads (database sessions) to drive up the load on the system.
  2. Advanced Hot Spot Testing. SLOB 2.3 supports configuring each SLOB thread such that every Nth SQL statement operates on a hot spot sized in megabytes as specified in the slob.conf file. Moreover, this version of SLOB allows one to dictate the offset for the hot spot within the active data set. This allows one to easily move the hot spot from one test execution to the next. This sort of testing is crucial for platform experts studying hybrid storage arrays that identify and promote “hot” data into flash for example.
  3. Threaded SLOB. SLOB 2.3 allows one to have either multiple SLOB schemas or the new Single Schema and to drive up the load one can specify how many SLOB threads per schema will be active.

 

To close out this short blog entry I’ll make note that the SLOB 2.3 data loader is now loading 1TB scale Single Schema in just short of one hour (55.9 minutes exactly). This procedure includes data loading, index creation and CBO statistics gathering. The following was achieved with a moderate IVB-EP 2s20c40t server running Oracle Linux 6.5 and Oracle Database 12c and connected to an EMC XtremIO array via 8GFC Fibre Channel. I think this shows that even the data loader of SLOB is a worthwhile workload in its own right.

SLOB 2.3 Data Loading 1TB/h


Filed under: oracle

Lab Report: Oracle Database on EMC XtremIO. A Compression Technology Case Study.

Tue, 2015-05-26 02:26

If you are interested in array-level data reduction services and how such technology mixes with Oracle Database application-level compression (such as Advanced Compression Option), I offer the link below to an EMC Lab Report on this very topic.

To read the entire Lab Report please click the following link:   Click Here.

The following is an excerpt from the Lab Report:

Executive Summary
The EMC XtremIO storage array offers powerful data reduction features. In addition to thin provisioning, XtremIO applies both deduplication and compression algorithms to blocks of data when they are ingested into the array. These features are always on and intrinsic to the array. There is no added licensing, no tuning and no configuration involved when it comes to XtremIO data reduction.

Oracle Database also supports compression. The most common form of Oracle Database compression is the Advanced Compression Option—commonly referred to as ACO. With Oracle Database most “options” are separately licensed features and ACO is one such option. As of the publication date of this Lab Report, ACO is licensed at $11,000 per processor core on the database host. Compressing Oracle Database blocks with ACO can offer benefits beyond simple storage savings. Blocks compressed with ACO remain compressed as they pass through the database host. In short, blocks compressed with ACO will hold more rows of data per block. This can be either a blessing or a curse. Allowing Oracle to store more rows per block has the positive benefit of caching more application data in main memory (i.e., the Oracle SGA buffer pool). On the other hand, compacting more data into each block often results in increased block contention.

Oracle offers tuning advice to address this contention in My Oracle Support note 1223705.1. However, the tuning recommendations for reducing block contention with ACO also lower the compression ratios. Oracle also warns users to expect higher CPU overhead with ACO as per the following statement in the Oracle Database product documentation:

Compression technology uses CPU. Ensure that you have enough available CPU to handle the additional load.

Application vendors, such as SAP, also produce literature to further assist database administrators in making sensible choices about how and when to employ Advanced Compression Option. The importance of understanding the possible performance impact of ACO is made quite clear in such publications as SAP Note 1436352, which states the following about SAP performance with ACO:

Overall system throughput is not negatively impacted and may improve. Should you experience very long runtimes (i.e. 5-10 times slower) for certain operations (like mass inserts in BW PSA or ODS tables/partitions) then you should set the event 10447 level 50 in the spfile/init.ora. This will reduce the overhead for insertion into compressed tables/partitions.

The SAP note offers further words of caution regarding transaction logging (a.k.a., redo) in the following quote:

Amount of redo data generated can be up to 30% higher

Oracle Database Administrators, with prior ACO experience, are largely aware of the trade-offs where ACO is concerned. Database Administrators who have customarily used ACO in their Oracle Database deployments may wish to continue to use ACO after adopting EMC XtremIO. For this reason Database Administrators are interested in learning how XtremIO compression and Advanced Compression Option interact.

This Lab Report offers an analysis of space savings with and without ACO on XtremIO. In addition, a performance characterization of an OLTP workload manipulating the same application data in ACO and non-ACO tablespaces will be covered…please click the link above to continue reading…

 


Filed under: oracle

Whitepaper: Oracle Database 11g and 12c Consolidation and Workload Scalability with EMC XtremIO 3.0

Wed, 2015-04-29 15:30

This is a just a quick blog post to direct readers to the best Oracle-related paper detailing the value EMC XtremIO brings to Oracle Database use cases.  I’ve been looking forward to the availability of this paper for quite some time as I supported (minimally, really) the EMC Global Solutions Engineering group in this effort. They really did a great job with this testing! I highly recommend this paper for readers who are interested in:

  • Leveraging immediate, space efficient, zero overhead storage snapshots for productivity
  • All-Flash Array performance
  • Database workload consolidation

Click the following link to access the whitepaper: click here.

wp-1

Abstract:

This white paper describes the deployment of the XtremIO® all-flash array with Oracle RAC 11g and 12c databases in both physical and virtual environments. It describes optimal performance while scaling up in a physical environment, the effect of adding multiple virtualized database environments, and the impact of using XtremIO Compression with Oracle Advanced Compression. The white paper also demonstrates the physical space efficiency and low performance impact of XtremIO snapshots.


Filed under: oracle Tagged: Oracle Database performance XtremIO flash, Oracle Performance, Random I/O, XtremIO

Adding An EMC XtremIO Volume As An ASM Disk With Oracle Database 12c On Linux – It Does Not Get Any Easier Than This.

Wed, 2015-03-04 19:07

When Something Is Simple It Must Be Simple To Prove

Provisioning high-performance storage has always been a chore. Care and concern over spindle count, RAID type, RAID attributes, number of controller arms involved and a long list of other complexities have burdened storage administrators. Some of these troubles were mitigated by the advent of Automatic Storage Management–but not entirely.

Wouldn’t it be nice if the complexity of storage provisioning could be boiled down to but a single factor? Wouldn’t it be nice if that single factor was, simply, capacity? With EMC XtremIO the only factor storage administrators need to bear in mind when provisioning storage is, indeed, capacity.

With EMC XtremIO a storage administrator hears there is a need for, say, one terabyte of storage and that is the entirety of information needed. No more questions about the I/O pattern (e.g., large sequential writes ala redo logging, etc). The Database Administrator simply asks for capacity with a very short sentence and the Storage Administrator clicks 3 buttons in the XtremIO GUI and that’s all there is to it.

Pictures Speak Thousands of Words

I too enjoy the simplicity of XtremIO in my engineering work. Just the other day I ran short on space in a tablespace while testing Oracle Database 12c intra-node parallel query. I was studying a two-node Real Application Clusters setup attached to an EMC XtremIO array via 8 paths of 8GFC Fibre Channel. The task at hand was a single parallel CTAS (Create Table As Select) but the command failed because my ASM disk group ran out of space when Oracle Database tried to extend the BIGFILE tablespace.

Since I had to add some space I thought I’d take a few screen shots to show readers of this blog how simple it is to perform the full cycle of tasks required to add space to an active cluster with ASM in an XtremIO environment.

The following screen shot shows the error I was reacting to:

2015-02-23-pic1

Since the following example shows host configuration steps please note the Linux distribution (Oracle Linux) and kernel version (UEK) I was using:

linux-release

The following screenshot shows the XtremIO GUI configuration tab. I selected “Add” and then typed a name and size (1TB) of the volume I wanted to create:

NOTE: Right click the embedded images for greater clarity

2015-02-23-pic2

The following screenshot shows how I then selected the initiators (think hosts) from the right-hand column that I wanted to see the new volume:

2015-02-23-pic4

After I clicked “apply” I could see my new volume in my “12C” folder. With the folder construct I can do things like create zero-overhead, immediate, writable snapshots with a single mouse click. As the following screenshot shows, I highlighted “data5” so I could get details about the volume in advance of performing tasks on the host. The properties tab shows me the only information I need to proceed–the NAA Identifier. Once I had the NAA Identifier I moved on to the task of discovering the new volume on the hosts.

 

2015-02-23-pic5

Host Discovery

Host discovery consists of three simple steps:

  1. Multipath discovery
  2. Updating the udev rules file with a text editor
  3. Updating udev state with udevadm commands

Multipath Discovery

On both nodes of the cluster I executed the following series of commands. This series of commands generates a lot of terminal output so I won’t show that in this blog post.

# multipath -F ;service multipathd restart ; rescan-scsi-bus.sh -r

After executing the multipath related commands I was able to see the new volume (0002a) on both nodes of the cluster. Notice how the volume has different multipath names (mpathab, mpathai) on the hosts. This is not an issue since the volumes will be controlled by udev:

2015-02-23-pic7

Updating Udev Rules File and Udev State

After verifying the volumes were visible under DM-MPIO I moved on to the udev actions. The following screenshot shows how I added an ACTION line in the udev rules file and copied it to the other RAC host and then executed the udev update commands on both RAC hosts:

2015-02-23-pic9
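
Since the rules file itself only appears in the screen shot, here is a rough sketch of the kind of udev rule and reload commands involved. The rule below is an assumption on my part: the NAA-based DM_UUID value, the ownership settings and the exact match keys may differ from what the screen shot actually shows.

# /etc/udev/rules.d/99-oracle-asmdevices.rules (example rule; the NAA value is a placeholder)
KERNEL=="dm-*", ENV{DM_UUID}=="mpath-3514f0c5xxxxxxxxx", SYMLINK+="asmdisk6", OWNER="oracle", GROUP="dba", MODE="0660"

# Reload the rules and re-trigger device events on both RAC hosts:
udevadm control --reload-rules
udevadm trigger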

I then could see “/dev/asmdisk6” on both RAC hosts:

2015-02-23-pic10

Adding The New XtremIO Volume As An ASM Disk

The next task was to use ASMCA (ASM Configuration Assistant) to add the XtremIO volume to the ASM disk group called “DATA”:

2015-02-23-pic8

As the following screenshot shows the volume is visible as /dev/asmdisk6:

2015-02-23-pic11

I selected asmdisk6 and the task was complete:

2015-02-23-pic12

I then saw evidence of ASM rebalancing in the XtremIO GUI Performance tab:

2015-02-23-pic15

 

Summary

With EMC XtremIO you provision capacity and that allows you to speak in very short sentences with the application owners that share space in the array.

It doesn’t get any easier than this.


Filed under: oracle

Little Things Doth Crabby Make – Part XVIII. Automatic Storage Management Won’t Let Me Use My Disk For My Files! Yes, It Will!

Fri, 2015-02-06 14:52

It’s been a long time since my last installment in the Little Things Doth Crabby Make series and, to be completely honest, this particular topic isn’t really all that fitting for an LTDCM installment because it covers something that is possible but less than expedient. That said, there are new readers of this blog and maybe it’s time they google “Little Things Doth Crabby Make” to see where this series has been. This post might rustle up that curiosity!

So what is this blog post about? It’s about stuffing any file system file into Automatic Storage Management space. OK, so maybe this is just morbid curiosity or trivial pursuit. Maybe it’s just a parlor trick. I would agree with any of those descriptions. Nonetheless maybe there are 42 or so people out there who didn’t know this. If so, this post is for them.

ASMCMD cp Command

The cp sub-command of ASMCMD lets you stuff certain database files into ASM. We all know this. However, just to make it all fresh in people’s minds I’ll show a screen shot of me trying to push a compressed tar archive of $ORACLE_HOME/bin/oracle up into ASM:

2014.02.04-pic-0

Well, that’s not surprising. But what happens if I take heed of the error message and attempt to placate? The block size is 8KB so the following screen shot shows me rounding up the size of the compressed tar archive to an 8192B blocking factor:

2014.02.04-pic-0.1

ASMCMD still won’t gobble up the file. That’s still not all that surprising because after ASMCMD checked the geometry of the file it then read the file looking for a header or any file magic it could understand. As you can see, ASMCMD doesn’t see a file type it understands. The following screen shot shows me pre-pending the tar archive with file magic I know ASMCMD must surely understand. I have a database with a tablespace called foo that I created in a non-Oracle Disk Manager naming convention (foo.dbf). The screen shot shows me:

  1. Extracting the foo.dbf file
  2. “Borrowing” 1MB from the head of the file
  3. Creating a compressed tar archive of the Oracle Database executable
  4. Rounding up the size of the compressed tar archive to an 8192B blocking factor (a rough command-line sketch of these steps appears after the screen shot below)

2014.02.04-pic1
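
The screen shots carry the detail, but for readers who want the gist in command form, the following is a rough reconstruction of steps 1 through 4. The file names and the 1MB “borrowed” header amount follow the post; everything else is an assumption:

# 1. "Borrow" 1MB from the head of the foo.dbf datafile (the datafile header).
dd if=foo.dbf of=dbf_header bs=1M count=1

# 2. Create a compressed tar archive of the Oracle Database executable.
tar czf oracle_bin.tgz $ORACLE_HOME/bin/oracle

# 3. Pad the archive so the combined file lands on an 8192-byte blocking factor.
sz=$(stat -c %s oracle_bin.tgz)
pad=$(( (8192 - sz % 8192) % 8192 ))
dd if=/dev/zero bs=1 count=$pad >> oracle_bin.tgz

# 4. Prepend the borrowed header so ASMCMD sees file magic it understands.
cat dbf_header oracle_bin.tgz > fake_foo.dbf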

 

So now I have a file that has the “shape” of a datafile and the necessary header information from a datafile. The next screen shot shows:

  1. ASMCMD cp command pushing my file into ASM
  2. Removal of all of my current working directory files
  3. ASMCMD cp command pulling the file form ASM and into my current working directory
  4. Extracting the contents of the “embedded” tar archive
  5. md5sum(1) proof the file contents survived the journey

2014.02.04-pic2

OK, so that’s either a) something nobody would ever do or b) something that can be done with some elegant execution of some internal database package in a much less convoluted way or c) a combination of both “a” and “b” or d) a complete waste of my time to post, or, finally, e) a complete waste of your time reading the post. I’m sorry for “a”,”b”,”c” and certainly “e” if the case should be so.

Now you must wonder why I put this in the Little Things Doth Crabby Make series. That’s simple. I don’t like any “file system” imposing restrictions on file types:)

 


Filed under: oracle

Scrutinizing Exadata X5 Datasheet IOPS Claims…and Correcting Mistakes

Mon, 2015-02-02 19:37

I want to make these two points right out of the gate:

  1. I do not question Oracle’s IOPS claims in Exadata datasheets
  2. Everyone makes mistakes

Everyone Makes Mistakes

Like me. On January 21, 2015, Oracle announced the X5 generation of Exadata. I spent some time studying the datasheets from this product family and also compared the information to prior generations of Exadata, namely the X3 and X4. Yesterday I graphed some of the datasheet numbers from these Exadata products and tweeted the graphs. I’m sorry to report that two of the graphs were faulty–the result of hasty cut and paste. This post will clear up the mistakes but I owe an apology to Oracle for incorrectly graphing their datasheet information. Everyone makes mistakes. I fess up when I do. I am posting the fixed slides but will link to the deprecated slides at the end of this post.

We’re Only Human

Wouldn’t IT be a more enjoyable industry if certain IT vendors stepped up and admitted when they’ve made little, tiny mistakes like the one I’m blogging about here? In fact, wouldn’t it be wonderful if some of the exceedingly gruesome mistakes certain IT vendors make would result in a little soul-searching and confession? Yes. It would be really nice! But it’ll never happen–well, not for certain IT companies anyway. Enough of that. I’ll move on to the meat of this post. The rest of this article covers:

  • Three Generations of Exadata IOPS Capability
  • Exadata IOPS Per Host CPU
  • Exadata IOPS Per Flash SSD
  • IOPS Per Exadata Storage Server License Cost

Three Generations of Exadata IOPS Capability

The following chart shows how Oracle has evolved Exadata from the X3 to the X5 EF model with regard to IOPS capability. As per Oracle’s datasheets on the matter these are, of course, SQL-driven IOPS. Oracle would likely show you this chart and nothing else. Why? Because it shows favorable, generational progress in IOPS capability. A quick glance shows that read IOPS improved just shy of 3x and write IOPS capability improved over 4x from the X3 to X5 product releases. These are good numbers. I should point out that the X3 and X4 numbers are the datasheet citations for 100% cached data in Exadata Smart Flash Cache. These models had 4 Exadata Smart Flash Cache PCIe cards in each storage server (aka, cell). The X5 numbers I’m focused on reflect the performance of the all-new Extreme Flash (EF) X5 model. It seems Oracle has started to investigate the value of all-flash technology and, indeed, the X5 EF is the top-dog in the Exadata line-up. For this reason I choose to graph X5 EF data as opposed to the more pedestrian High Capacity model which has 12 4TB SATA drives fronted with PCI Flash cards (4 per storage server).

exadata-evolution-iops-gold-1

The tweets I hastily posted yesterday with the faulty data points aimed to normalize these performance numbers to important factors such as host CPU, SSD count and Exadata Storage Server Software licensing costs. The following set of charts are the error-free versions of the tweeted charts.

Exadata IOPS Per Host CPU

Oracle’s IOPS performance citations are based on SQL-driven workloads. This can be seen in every Exadata datasheet. All Exadata datasheets for generations prior to X4 clearly stated that Exadata IOPS are limited by host CPU. That is a very important fact to understand because SQL-driven IOPS is a host metric no matter what your storage is.

Indeed, anyone who studies Oracle Database with SLOB knows how all of that works. SQL-driven IOPS requires host CPU. Sadly, however, Oracle ceased stating the fact that IOPS are host-CPU bound in Exadata as of the advent of the X4 product family. I presume Oracle stopped correctly stating the factual correlation between host CPU and SQL-driven IOPS for only the most honorable of reasons with the best of customers’ intentions in mind.

In case anyone should doubt my assertion that Oracle historically associated Exadata IOPS limitations with host CPU, I submit the following screen shot of the pertinent section of the X3 datasheet:

X3-datasheet-truth

Now that the established relationship between SQL-driven IOPS and host CPU has been demystified, I’ll offer the following chart which normalizes IOPS to host CPU core count:

exadata-evolution-iops-per-core-gold

I think the data speaks for itself but I’ll add some commentary. Where Exadata is concerned, Oracle gives no choice of host CPU to customers. If you adopt Exadata you will be forced to take the top-bin Xeon SKU with the most cores offered in the respective Intel CPU family. For example, the X3 product used 8-core Sandy Bridge Xeons. The X4 used 12-core Ivy Bridge Xeons and finally the X5 uses 18-core Haswell Xeons. In each of these CPU families there are other processors of varying core counts at the same TDP. For example, the Exadata X5 processor is the E5-2699v3 which is a 145w 18-core part. In the same line of Xeons there is also a 145w 14c part (E5-2697v3) but that is not an option to Exadata customers.

All of this is important since Oracle customers must license Oracle Database software by the host CPU core. The chart shows us that read IOPS per core from X3 to X4 improved 18% but from X4 to X5 we see only a 3.6% increase. The chart also shows that write IOPS/core peaked at X4 and has actually dropped some 9% in the X5 product. These important trends suggest Oracle’s balance between storage plumbing and I/O bandwidth in the Storage Servers is not keeping up with the rate at which Intel is packing cores into the Xeon EP family of CPUs. The nugget of truth that is missing here is whether the 145w 14-core E5-2697v3 might in fact be able to improve this IOPS/core ratio. While such information would be quite beneficial to Exadata-minded customers, the 22% drop in expensive Oracle Database software licensing in such an 18c versus 14c scenario is not beneficial to Oracle–especially not while Oracle is struggling to subsidize its languishing hardware business with gains from traditional software.

Exadata IOPS Per Flash SSD

Oracle uses their own branded Flash cards in all of the X3 through X5 products. While it may seem like an implementation detail, some technicians consider it important to scrutinize how well Oracle leverages their own components in their Engineered Systems. In fact, some customers expect that adding significant amounts of important performance components, like Flash cards, should pay commensurate dividends. So, before you let your eyes drift to the following graph please be reminded that X3 and X4 products came with 4 Gen3 PCI Flash Cards per Exadata Storage Server whereas X5 is fit with 8 NVMe flash cards. And now, feel free to take a gander at how well Exadata architecture leverages a 100% increase in Flash componentry:

exadata-evolution-iops-per-SSD-gold

This chart helps us visualize the facts sort of hidden in the datasheet information. From Exadata X3 to Exadata X4 Oracle improved IOPS per Flash device by just shy of 100% for both read and write IOPS. On the other hand, Exadata X5 exhibits nearly flat (5%) write IOPS and a troubling drop in read IOPS per SSD device of 22%. Now, all I can do is share the facts. I cannot change people’s belief system–this I know. That said, I can’t imagine how anyone can spin a per-SSD drop of 22%–especially considering the NVMe SSD product is so significantly faster than the X4 PCIe Flash card. By significant I mean the NVMe SSD used in the X5 model is rated at 260,000 random 8KB IOPS whereas the X4 PCIe Flash card was only rated at 160,000 8KB read IOPS. So X5 has double the SSDs–each of which is rated at 63% more IOPS capacity–than the X4 yet IOPS per SSD dropped 22% from the X4 to the X5. That means an architectural imbalance–somewhere. However, since Exadata is a completely closed system you are on your own to find out why doubling resources doesn’t double your performance. All of that might sound like taking shots at implementation details. If that seems like the case then the next section of this article might be of interest.

IOPS Per Exadata Storage Server License Cost

As I wrote earlier in this article, both Exadata X3 and Exadata X4 used PCIe Flash cards for accelerating IOPS. Each X3 and X4 Exadata Storage Server came with 12 hard disk drives and 4 PCIe Flash cards. Oracle licenses Exadata Storage Server Software by the hard drive in X3/X4 and by the NVMe SSD in the X5 EF model. To that end the license “basis” is 12 units for X3/X4 and 8 for X5. Already readers are breathing a sigh of relief because less license basis must surely mean less total license cost. Surely Not! Exadata X3 and X4 list price for Exadata Storage Server software was $10,000 per disk drive for an extended price of $120,000 per storage server. The X5 EF model, on the other hand, prices Exadata Storage Server Software at $20,000 per NVMe SSD for an extended price of $160,000 per Exadata Storage Server. With these values in mind feel free to direct your attention to the following chart which graphs the IOPS per Exadata Storage Server Software list price (IOPS/license$$).

exadata-evolution-iops-per-license-cost-gold

The trend in the X3 to X4 timeframe was a doubling of write IOPS/license$$ and just short of a 100% improvement in read IOPS/license$$. In stark contrast, however, the X5 EF product delivers only a 57% increase in write IOPS/license$$ and a troubling, tiny, 17% increase in read IOPS/license$$. Remember, X5 has 100% more SSD componentry when compared to the X3 and X4 products.
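
As a quick sanity check on those extended prices, the per-server arithmetic is simple enough to verify in a shell. This is a trivial sketch; the list prices are the ones cited above:

# Exadata Storage Server Software extended list price per storage server.
# X3/X4: 12 hard disk drives at $10,000 per drive; X5 EF: 8 NVMe SSDs at $20,000 per SSD.
echo "X3/X4: $(( 12 * 10000 )) USD per storage server"   # 120000
echo "X5 EF: $((  8 * 20000 )) USD per storage server"   # 160000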

Summary

No summary needed. At least I don’t think so.

About Those Faulty Tweeted Graphs

As promised, I’ve left links to the faulty graphs I tweeted here:
Faulty / Deleted Tweet Graph of Exadata IOPS/SSD: http://wp.me/a21zc-1ek
Faulty / Deleted Tweet Graph of Exadata IOPS/license$$: http://wp.me/a21zc-1ej

References

Exadata X3-2 datasheet: http://www.oracle.com/technetwork/server-storage/engineered-systems/exadata/exadata-dbmachine-x3-2-ds-1855384.pdf
Exadata X4-2 datasheet: http://www.oracle.com/technetwork/database/exadata/exadata-dbmachine-x4-2-ds-2076448.pdf
Exadata X5-2 datasheet: http://www.oracle.com/technetwork/database/exadata/exadata-x5-2-ds-2406241.pdf
X4 SSD info: http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f80/overview/index.html
X5 SSD info: http://docs.oracle.com/cd/E54943_01/html/E54944/gokdw.html#scrolltoc
Engineered Systems Price List: http://www.oracle.com/us/corporate/pricing/exadata-pricelist-070598.pdf , http://www.ogs.state.ny.us/purchase/prices/7600020944pl_oracle.pdf


Filed under: oracle

Recommended Reading: Oracle Database 12c NUMA-Related Topics

Wed, 2015-01-07 15:42

This is a short post to recommend some recent blog posts by Nikolay Manchev and Bertrand Drouvot on the topic of Oracle Database 12c NUMA awareness.

Nikolay provides a very helpful overview on Linux Control Groups and how they are leveraged by Oracle Database 12c. Bertrand Drouvot carried the topic a bit further by leveraging SLOB to assess the impact of NUMA remote memory on a cached Oracle Database workload. Yes, SLOB is very useful for more than physical I/O! Good job, Bertrand!

These are good studies and good posts!

Also, one can refer to MOS 1585184.1 for more information on Control Groups and a helpful script to configure CGROUPS.
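
For readers who want a feel for what such a script sets up, the following is a rough sketch of binding an instance to a cpuset cgroup on Linux. The cgroup name, CPU range and memory node are placeholders, and the package providing cgcreate/cgset varies by release (libcgroup on older distributions, libcgroup-tools on newer ones); the MOS script remains the authoritative method:

# Create a cpuset cgroup and restrict it to one NUMA node (example values).
cgcreate -g cpuset:/oracle_cg
cgset -r cpuset.cpus=0-9 oracle_cg
cgset -r cpuset.mems=0 oracle_cg

# Then point the instance at the cgroup and restart it:
#   SQL> alter system set processor_group_name='oracle_cg' scope=spfile;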

The following links will take you to Nikolay and Bertrand’s writings on the topic:

http://manchev.org/2014/03/processor-group-integration-in-oracle-database-12c/

https://bdrouvot.wordpress.com/2015/01/07/measure-the-impact-of-remote-versus-local-numa-node-access-thanks-to-processor_group_name/

 


Filed under: oracle

SLOB 2.2 Not Generating AWR reports? Testing Large User Counts With Think Time? Think Processes and SLOB_DEBUG.

Tue, 2014-12-16 17:30

I’ve gotten a lot of reports of folks branching out into SLOB 2.2 large user count testing with the SLOB 2.2 Think Time feature. I’m also getting reports that some of the same folks are not getting the resultant AWR reports one expects from a SLOB test.

If you are not getting your AWR reports there is the old issue I blogged about here (click here). That old issue was related to a Red Hat bug. However, if you have addressed that problem, and still are not getting your AWR reports from large user count testing, it might be something as simple as the processes initialization parameter. After all, most folks have been accustomed to generating massive amounts of physical I/O with SLOB at low session counts.

I’ve made a few changes to runit.sh that will help future folks should they fall prey to the simple processes initialization parameter folly. The fixes will go into SLOB 2.2.1.3. The following is a screen shot of these fixes and what one should expect to see in such a situation in the future. In the meantime, do take note of SLOB_DEBUG as mentioned in the screenshot:

 

slob2.2-processes-folly


Filed under: oracle

SLOB Data Loading Case Studies – Part II. SLOB 2.2 For High-Bandwidth Data Loading.

Tue, 2014-12-16 00:44

BLOG UPDATE 2015.07.24: For all testing recipes please visit the SLOB Recipes section of kevinclosson.net/slob

This is Part II in a series. Part I can be found here (click here). Part I in the series covered a very simple case of SLOB data loading. This installment is aimed at how one can use SLOB as a platform test for a unique blend of concurrent, high-bandwidth data loading, index creation and CBO statistics gathering.

Put SLOB On The Box – Not In a Box

As a reminder, the latest SLOB kit is always available here: kevinclosson.net/slob .

Often I hear folks speak of what SLOB is useful for and the list is really short. The list is so short that a single acronym seems to cover it—IOPS, just IOPS and nothing else. SLOB is useful for so much more than just testing a platform for IOPS capability. I aim to make a few blog installments to make this point.

SLOB for More Than Physical IOPS

I routinely speak about how to use SLOB to study host characteristics such as NUMA and processor threading (e.g., Simultaneous Multithreading on modern Intel Xeons). This sort of testing is possible when the sum of all SLOB schemas fit into the SGA buffer pool. When testing in this fashion, the key performance indicators (KPI) are LIOPS (Logical I/O per second) and SQL Executions per second.

This blog post is aimed at suggesting yet another manner of platform testing with SLOB–specifically concurrent bulk data loading.

The SLOB data loader (~SLOB/setup.sh) offers the ability to test non-parallel, concurrent table loading, index creation and CBO statistics collection.

In this blog post I’d like to share a “SLOB data loading recipe kit” for those who wish to test high performance SLOB data loading. The contents of the recipe will be listed below. First, I’d like to share a platform measurement I took using the data loading recipe. The host was a 2s20c40t E5-2600v2 server with 4 active 8GFC paths to an XtremIO array.

The tar archive kit I’ll refer to below has the full slob.conf in it, but for now I’ll just use a screen shot. Using this slob.conf and loading 512 SLOB schema users generates 1TB of data in the IOPS tablespace. Please note the attention I’ve drawn to the slob.conf parameters SCALE and LOAD_PARALLEL_DEGREE. The size of the aggregate of SLOB data is a product of SCALE and the number of schemas being loaded. I drew attention to LOAD_PARALLEL_DEGREE because that is the key setting in increasing the concurrency level during data loading. Most SLOB users are quite likely not accustomed to pushing concurrency up to that level. I hope this blog post makes doing so seem more worthwhile in certain cases.

SLOB-dataload-slob.conf
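
Since the slob.conf itself is only visible in the screen shot, the sizing relationship is worth spelling out: the aggregate SLOB data size is the per-schema size multiplied by the number of schemas loaded. A trivial arithmetic sketch follows; the 2GB-per-schema figure is simply 1TB divided across the 512 schemas, not a value quoted from the kit:

# Aggregate SLOB data = per-schema size x number of schemas.
# 1TB spread across 512 schemas works out to roughly 2GB per schema.
echo "$(( 1024 / 512 )) GB per schema x 512 schemas = 1TB in the IOPS tablespace"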

The following is a screenshot of the output from the SLOB 2.2 data loader. The screenshot shows that the concurrent data loading portion of the procedure took 1,474 seconds. On the surface that would appear to be a data loading rate of approximately 2.5 TB/h. One thing to remember, however, is that SLOB data is loaded in batches controlled by LOAD_PARALLEL_DEGREE. Each batch loads LOAD_PARALLEL_DEGREE number of tables, then creates a unique index on each and performs CBO statistics gathering. So the overall “data loading” time is really data loading plus these ancillary tasks. To put that another way, it’s true this is a 2.5TB/h data loading use case but there is more going on than just simple data loading. If this were a pure and simple data loading processing stream then the results would be much higher than 2.5TB/h. I’ll likely blog about that soon.

slob2.2-load-1TB

As the screenshot shows the latest SLOB 2.2 data loader isolates the concurrent loading portion of setup.sh. In this case, the seed table (user1) was loaded in 20 seconds and then the concurrent loading portion completed in 1,474 seconds.

That Sounds Like A Good Amount Of Physical I/O But What’s That Look Like?

To help you visualize the physical I/O load this manner of testing places on a host, please consider the following screenshot. The screenshot shows peaks of vmstat 30-second interval reporting of approximately 2.8GB/s physical read I/O combined with about 435MB/s of write I/O for an average of about 3.2GB/s. This host has but 4 active 8GFC fibre channel paths to storage so that particular bottleneck is simple to solve by adding another 4 port HBA! Note also how very little host CPU is utilized to generate the 4x8GFC saturating workload. User mode cycles are but 15% and kernel mode utilization was 9%. It’s true that 24% sounds like a lot, however, this is a 2s20c40t host and therefore 24% accounts for only 9.6 processor threads–or 5 cores worth of bandwidth. There may be some readers who were not aware that 5 “paltry” Ivy Bridge Xeon cores are capable of driving this much data loading!

NOTE: The SLOB method is centered on sparse blocks. Naturally, fewer CPU cycles are required for loading data into sparse blocks.

Please note, the following vmstat shows peaks and valleys. I need to remind you that SLOB data loading consists of concurrent processing of not only data loading (Insert as Select) but also a unique index creation and CBO statistics gathering. As one would expect I/O will wane as the loading process shifts from the bulk data load to the index creation phase and then back again.

vmstat-SLOB-dataload

Finally, the following screenshot shows the very minimalist init.ora settings I used during this testing.

SLOB-dataload-load.ora

The Recipe Kit

The recipe kit can be found in the following downloadable tar archive. The kit contains the necessary files one would need to reproduce this SLOB data loading time so long as the platform has sufficient performance attributes. The tar archive also has all output generated by setup.sh as the following screenshot shows:

slob-data-load-kit

The SLOB 2.2 data loading recipe kit can be downloaded here (click here). Please note, the screenshot immediately above shows the md5 checksum for the tar archive.

Summary

This post shows how one can tune the SLOB 2.2 data loading tool (setup.sh) to load 1 terabyte of SLOB data in well under 25 minutes. I hope this is helpful information and that, perhaps, it will encourage SLOB users to consider using SLOB for more than just physical IOPS testing.


Filed under: oracle

How Do I Know I Have The Latest SLOB Kit?

Thu, 2014-12-11 17:25

This is a quick blog post to show SLOB users how to determine whether they are using the latest SLOB kit. If you visit kevinclosson.net/slob you’ll see the webpage I captured in the following screenshot.

Once on the SLOB Resources page you can simply hover over the “SLOB 2.2 (Click here)” hyperlink and the bottom of your browser will show the full name of the tar archive. Alternatively you can use md5sum(1) on Linux (or md5 on Mac) to get the checksum of the tar archive you have and compare it to the md5sum I put on the web page (see the arrow).
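
As a concrete example of that checksum comparison (the archive file name below is illustrative, not the actual kit name):

# Compute the checksum of the local archive and compare it to the one on the SLOB Resources page.
md5sum slob_2.2.tar.gz

# On Mac OS X the equivalent is:
md5 slob_2.2.tar.gz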

 

latest-slob


Filed under: oracle

Announcing XtremIO Performance Engineering Lab Report: Facts About Redo Logging And NAND Flash.

Sun, 2014-12-07 15:06

I invite you to please read this report.

NAND Flash is good for a lot of things but not naturally good with write-intensive workloads. Unless, that is, skillful engineering is involved to mitigate the intrinsic weaknesses of NAND Flash in this regard. I assert EMC XtremIO architecture fills this bill.

Regardless of your current or future plans for adopting non-mechanical storage I hope this lab report will show some science behind how to determine suitability for non-mechanical storage–and NAND Flash specifically–where Oracle Database redo logging is concerned.

Please note: Not all lab tests are aimed at achieving maximum theoretical limits in all categories. This particular lab testing required sequestering precious lab gear for a 104-hour sustained test.

The goal of the testing was not to show limits but, quite to the contrary, to show a specific lack of limits in the area of Oracle Database redo logging. For a more general performance-focused paper please download this paper (click here).  With that caveat aside, please see the following link for the redo logging related lab report:

Link to XtremIO Performance Engineering Lab Report (click here).

 

Redo-Durability-splash


Filed under: oracle

Announcing SLOB 2.2 : Think Time and Limited-Scope User-Data Modification

Fri, 2014-12-05 00:19

This is a hasty blog post to get SLOB 2.2 out to those who are interested.

Where To Get The Kit

Please visit kevinclosson.net/slob

About The New Kit

In addition to doing away with the cumbersome “seed” table and procedure.sql, this kit introduces 5 new slob.conf parameters. By default these parameters are disabled.

This SLOB distribution does not require re-executing setup.sh. One can simply adopt the kit and use it to test existing SLOB databases. The following explains the new slob.conf parameters:

DO_UPDATE_HOTSPOT=FALSE

When set to TRUE, modify SQL will no longer affect random rows spanning each session’s schema. Instead, each session will modify only HOTSPOT_PCT percent of its data.

HOTSPOT_PCT=10

This parameter controls how much of each session’s schema gets modified when UPDATE_PCT is non-zero. The default will limit the scope of each session’s data modifications to a maximum of 10% of their data.

THINK_TM_MODULUS=0

When set to non-zero this is a frequency control on how often sessions will incur think time. For example, if set to 7, every seventh SQL statement will be followed by a sleep (think time) for a random amount of time between THINK_TM_MIN and THINK_TM_MAX. It’s best to assign a prime number to THINK_TM_MODULUS.

THINK_TM_MIN=.1

The low-bound for selection of a random period to sleep when THINK_TM_MODULUS triggers a think time event.

THINK_TM_MAX=.5

The high-bound for selection of a random period to sleep when THINK_TM_MODULUS triggers a think time event.

Notes About Think Time

Think time is specified in seconds. Fractional values (as in the defaults above) are supported, as are values in the hundreds of seconds.
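
Putting the parameters above together, a slob.conf fragment that enables limited-scope updates and think time might look like the following. The parameter names are the ones documented above; the specific values are merely illustrative:

# Illustrative slob.conf fragment for SLOB 2.2 think time and hotspot-limited updates.
UPDATE_PCT=20            # percent of SQL that modifies data (pre-existing parameter)
DO_UPDATE_HOTSPOT=TRUE   # limit each session's modifications to a hotspot
HOTSPOT_PCT=10           # hotspot is at most 10% of each session's data
THINK_TM_MODULUS=7       # every 7th SQL statement triggers think time (prime number)
THINK_TM_MIN=.1          # low bound (seconds) for the random sleep
THINK_TM_MAX=.5          # high bound (seconds) for the random sleep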

Additional Helpful Files

The additional tarball (at the following link) has a slob.conf, simple.ora and awr.txt that show a way to have 256 sessions produce the following load profile (on 2s16c32t E5 Xeon):

https://my.syncplicity.com/share/geydubw3q42okrt/think-time-help-files.tar

load-profile-think-time


Filed under: oracle, SLOB Tagged: Oracle, SLOB
