Re: Testing data usage patterns

From: Don Seiler <>
Date: Fri, 6 Dec 2013 11:40:30 -0600
Message-ID: <>

Thanks Kevin. All but one (DEV) of our Oracle software installs were linked without the rat_on option. That was done before I started here, so I couldn't say why. We'll need to relink with rat_on to capture the load on our production DB, with the obvious downtime requirements, etc.
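For reference, the commonly cited procedure for enabling the RAT link option is roughly the following (a sketch only; all instances and listeners using this ORACLE_HOME must be stopped first, and your platform's install docs govern the details):

```shell
# Stop every instance/listener running out of this ORACLE_HOME before relinking.
cd $ORACLE_HOME/rdbms/lib

# Enable the Real Application Testing link option and relink the oracle binary.
make -f ins_rdbms.mk rat_on ioracle
```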


On Thu, Dec 5, 2013 at 5:51 PM, Kevin Jernigan <> wrote:

> [shameless plug for a product I manage at Oracle] Even though your
> workload as described might not be a fit, you should still think about the
> benefits of HCC on ZFSSA, which is fully supported with [...]. If you
> have licensed Partitioning, then you can partition your larger tables by
> time, and use HCC on the older (and presumably "colder") partitions to
> greatly reduce the storage requirements for those partitions, typically
> 6x-10x with the "query" levels of HCC, and 15x++ with the "archive" levels.
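As a sketch of what that time-partitioned HCC approach looks like (table, partition, and index names here are hypothetical; COMPRESS FOR QUERY/ARCHIVE are the standard HCC clauses):

```sql
-- Hypothetical example: apply HCC to cold, time-based partitions.
-- "query high" for older but still-queried data:
ALTER TABLE orders MOVE PARTITION orders_2012
  COMPRESS FOR QUERY HIGH
  UPDATE INDEXES;

-- "archive high" for the coldest data, trading CPU for maximum compression:
ALTER TABLE orders MOVE PARTITION orders_2010
  COMPRESS FOR ARCHIVE HIGH
  UPDATE INDEXES;
```

The UPDATE INDEXES clause keeps dependent indexes usable during the move, at the cost of a longer operation.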
> But to answer your real question, i.e. how to determine how frequently
> data is read and re-read, that's not an easy question to answer in
> [...]. We added Heat Map in 12c to make it easy to track query and
> modification times at the table, segment, and block (mods only) levels, but
> prior to 12c, there's nothing built-in that will give you the details you
> really need to project performance.
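For illustration, in 12c Heat Map is turned on with an init parameter and then read back through dictionary views (a sketch; consult the 12c reference for the full column lists):

```sql
-- Enable Heat Map tracking system-wide (12c and later).
ALTER SYSTEM SET heat_map = ON;

-- Latest read/write activity per segment.
SELECT * FROM dba_heat_map_segment;

-- Persisted access history per segment over time.
SELECT * FROM dba_heat_map_seg_histogram;
```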
> You're better off focusing on using RAT or something similar to record or
> simulate your real-world workloads, and then replay those workloads in a
> test system with your ZFSSA storage in place. On that note, you say that
> you have to do some re-linking to use RAT - can you elaborate on what you
> have to re-link?
> Thanks,
> -KJ
> --
> Kevin Jernigan
> Senior Director Product Management
> Advanced Compression, Hybrid Columnar
> Compression (HCC), Database File System
> (DBFS), SecureFiles, Database Smart Flash
> Cache, Total Recall, Database Resource
> Manager (DBRM), Direct NFS Client (dNFS),
> Continuous Query Notification (CQN),
> Index Organized Tables (IOT), Information
> Lifecycle Management (ILM)
> (650) 607-0392 (o)
> (415) 710-8828 (m)
> On 12/5/13, 1:58 PM, Don Seiler wrote:
>> Good afternoon. Oracle [...]. Currently on ASM, but looking to migrate
>> to the Oracle ZFSSA. The big selling point of the ZFSSA was the ARC and
>> L2ARC caches backed by DRAM and flash. However we want to know if our
>> application needs will be met here. We suspect that a fair amount of our
>> I/O is for unique lookups that aren't repeated that often, and so likely
>> would not be in the ZFSSA cache on subsequent executions.
>> However I'd like to be able to determine for sure. I'm wondering if any
>> list members have any suggestions on how to determine how frequently data
>> is read and re-read so we can set realistic expectations about what the
>> performance will be like for our application usage.
>> We're looking at using RAT to capture a typical workload and then play it
>> back on a "sandbox standby" that we have running on the ZFSSA now, but we
>> have to wait for a downtime window to do some re-linking. Wondering if
>> there are any alternatives that we can also look at.
>> --
>> Don Seiler

Don Seiler

Received on Fri Dec 06 2013 - 18:40:30 CET
