RE: redo per second (size) on Exadata?

From: Dimensional DBA <dimensional.dba_at_comcast.net>
Date: Thu, 28 Aug 2014 11:24:09 -0700
Message-ID: <003801cfc2ed$433d1cf0$c9b756d0$_at_comcast.net>



A VP giving you a specific number to test sounds like they have been spoken to, or visited, by a vendor claiming their system is faster.  

A better test for your own benefit might be to measure what your specific system can generate at peak throughput versus what your systems actually generate day to day, to understand how much headroom you have. This will make the conversation with the VP easier.  
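As a back-of-the-envelope sketch of that headroom comparison (the rates below are placeholder numbers, not measurements from any particular system):

```python
# Hypothetical headroom calculation: compare a measured peak redo
# capability against a typical production redo rate. Both inputs
# here are placeholders for illustration only.

def headroom_pct(peak_mb_per_sec, actual_mb_per_sec):
    """Percentage of peak redo capacity left unused at the actual rate."""
    return 100 * (1 - actual_mb_per_sec / peak_mb_per_sec)

# e.g. a system that can push 433 MB/sec at peak but normally runs at 80 MB/sec
print(round(headroom_pct(433, 80), 1))  # 81.5 (% headroom)
```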

I have had systems generate up to 420GB/hr. of sustained redo across multiple hours of the business day, but during bursts of high redo activity you can generate redo at a rate much higher than that for short sustained periods of 10 to 15 minutes. An example would be large index builds: as the build starts the final read out of temp to write the actual new index structure to disk, you can sustain some very high rates of redo generation. Similarly, with large ETL loads in a DW you can sustain high redo generation during the loads.  

Some examples:

  1. During index builds I have had systems perform 26 x 1GB redo log switches a minute for up to 5 minutes. (433MB/sec) (Flash/SSD)
  2. During ETL loads I have had systems generate up to 15 x 1GB archive logs per minute for 20 minutes. (250MB/sec) (25-disk stripe, 15K disk) (non-Exadata)
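The per-second figures in those examples can be sanity-checked with quick arithmetic (assuming decimal units, 1GB = 1000MB, which is what the quoted figures imply):

```python
# Rough sanity check of the redo rates quoted above.
# Assumes decimal units (1 GB = 1000 MB), matching the figures in the post.

def redo_mb_per_sec(logs_per_minute, log_size_gb):
    """Convert N log switches per minute of a given log size to MB/sec."""
    return logs_per_minute * log_size_gb * 1000 / 60

# 26 x 1GB redo log switches per minute during index builds
print(round(redo_mb_per_sec(26, 1)))  # 433 MB/sec
# 15 x 1GB archive logs per minute during ETL loads
print(round(redo_mb_per_sec(15, 1)))  # 250 MB/sec
```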

Under benchmarking, these same systems peaked even higher; the redo actually generated depended on the application or task.  

I performed testing last year on getting backups off an Exadata in Oracle's Santa Clara lab and was able to push 27.4TB/hr. to an Oracle ZBA (the limit of the ZBA). (6,777MB/sec) (InfiniBand)  

Your mileage will vary (based on the Exadata model and drives chosen), but your Exadata system has no problem sustaining 80MB/sec of redo generation per node, and if you want to be more comfortable, implement flash logs for your redo. Just watch out for your archive logging not being able to keep up with the redo logging if you really do sustain those high rates of redo generation.    
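To put the 80MB/sec-per-node target in context, here is the hourly volume it implies (a sketch assuming 2 RAC nodes and decimal units, consistent with the 576GB/hr figure in the original question quoted below):

```python
# Hourly redo volume implied by a per-node redo rate.
# Assumes 2 RAC nodes and decimal units (1 GB = 1000 MB).

def redo_gb_per_hour(mb_per_sec_per_node, nodes):
    """Total redo volume in GB/hr across all nodes."""
    return mb_per_sec_per_node * nodes * 3600 / 1000

print(redo_gb_per_hour(80, 2))  # 576.0 GB/hr
```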

Matthew Parker

Chief Technologist

425-891-7934 (cell)

Dimensional.dba_at_comcast.net

View Matthew Parker's profile on LinkedIn: http://www.linkedin.com/pub/matthew-parker/6/51b/944/
 

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of amihay gonen
Sent: Thursday, August 28, 2014 3:55 AM
To: ORACLE-L
Subject: redo per second (size) on Exadata?  

Hi all,

I've been asked by our VP to estimate what is considered a heavy OLTP system in terms of redo bytes per second.  

She told me to test our Exadata machine with a load of 80MB per second per node, and I told her that I think it is too much.  

If an OLTP system generates 80MB/sec x 2 nodes, that means 576GB per hour.    

I wonder if anyone works with such systems; what is the typical redo rate?    

thanks

amihay      

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Aug 28 2014 - 20:24:09 CEST