Re: log_buffer sizing

From: Tim Gorman <tim_at_evdbt.com>
Date: Mon, 15 Aug 2011 20:12:42 +0000
Message-ID: <W39037839184481313439162_at_webmail37>



Kart,

There is no magic bullet in that query; it was just for your information, so you can provide something more useful than the size of your online redo log files.

So, asking again: what are you seeing that made you say "the log buffer seems too small"? From this query, you can see when redo generation gets high. During those times, are you seeing anything that indicates any kind of "bottleneck" in redo processing?
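For example (just a sketch, assuming you have access to the GV$ views), the cumulative waits on the "log buffer space" event since instance startup can be checked with something like:

```sql
-- Hypothetical check: waits on "log buffer space" since startup.
-- Significant TIME_WAITED here would point at the log buffer as a bottleneck;
-- near-zero values suggest the buffer size is not the problem.
select inst_id,
       event,
       total_waits,
       time_waited          -- centiseconds
from   gv$system_event
where  event = 'log buffer space'
order  by inst_id;
```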

Thanks!

-Tim

-----Original Message-----

From: K R [mailto:kp0773_at_gmail.com]
Sent: Monday, August 15, 2011 02:01 PM
To: tim_at_evdbt.com
Cc: Oracle-L_at_freelists.org
Subject: Re: log_buffer sizing

Tim,
this is a typical workload when the batch is running on the database.

DAY         HR    NBR_SWITCHES          GB
----------- ----- ------------ -----------
13-AUG-2011 00:00           12        3.84
            01:00           12        8.46
            02:00           30       51.52
            03:00           38       60.86
            04:00           36       61.56
            05:00           43       72.49
            06:00           43       55.48
            07:00           12        0.45
            08:00           12        0.03
            09:00           12        0.26
            10:00           12        6.42
            11:00           12        3.89
            12:00           12        0.25
            13:00           12        2.35
            14:00           12        0.04
            15:00           13        1.90
            16:00           41       72.73
            17:00           32       55.27
            18:00           37       44.95
            19:00           12        0.10
            20:00           12        0.05
            21:00           12        0.05
            22:00           12        3.65
            23:00           14       16.64
*********** *****              -----------
sum                                 523.25

On Mon, Aug 15, 2011 at 12:52 PM, Tim Gorman <tim_at_evdbt.com> wrote:

Kart,

What exactly are you seeing in the database that makes you say "it just seems too low"?

Bear in mind that the size of the online redo log files has no causal relation to the size of the log buffer, and vice versa. What matters is the rate at which redo is written, and the frequency of COMMIT commands issued by the application.
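Both of those numbers can be sampled from GV$SYSSTAT (a sketch; the statistic names "redo size" and "user commits" are standard, but remember the values are cumulative since startup, so sample twice and take the difference to get a rate over an interval):

```sql
-- Hypothetical sketch: cumulative redo volume (bytes) and commit count.
-- Run twice, some minutes apart, and difference the VALUEs for rates.
select inst_id, name, value
from   gv$sysstat
where  name in ('redo size', 'user commits')
order  by inst_id, name;
```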

For the first bit of information, try the following SQL*Plus script...

clear breaks
break on day skip 1 on hr on report
compute sum of gb on day
compute sum of gb on report
col sort0 noprint
col nbr_switches format 999,990
col gb format 999,990.00
select trunc(completion_time,'HH24') sort0,
       to_char(trunc(completion_time,'HH24'),'DD-MON-YYYY') day,
       to_char(trunc(completion_time,'HH24'),'HH24')||':00' hr,
       inst_id,
       count(*) nbr_switches,
       sum(blocks*block_size)/(1048576*1024) gb
from   gv$archived_log
group by trunc(completion_time,'HH24'),
       to_char(trunc(completion_time,'HH24'),'DD-MON-YYYY'),
       to_char(trunc(completion_time,'HH24'),'HH24')||':00',
       inst_id
order by 1, 2, 3, 4;

That will give you some idea of the rate and volume. Of course, this query may end up double-counting redo volume if you have additional archive destinations set up, so be aware of that and adjust accordingly.
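One hypothetical way to guard against that double counting (assuming your local archive destination is DEST_ID 1; adjust to match your setup) is to restrict the query to a single, non-standby destination:

```sql
-- Hypothetical filter: count each archived log once by restricting to
-- the local, non-standby archive destination.
select trunc(completion_time,'HH24') hr,
       count(*) nbr_switches,
       sum(blocks*block_size)/(1048576*1024) gb
from   gv$archived_log
where  standby_dest = 'NO'
and    dest_id = 1          -- assumption: local destination is DEST_ID 1
group by trunc(completion_time,'HH24')
order by 1;
```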

Hope this helps...

Thanks!

-Tim

-----Original Message-----

From: K R [mailto:kp0773_at_gmail.com]
Sent: Monday, August 15, 2011 01:22 PM
To: Oracle-L_at_freelists.org
Subject: log_buffer sizing

All ,

My online redo logs are 3 GB. This is a batch-intensive data warehouse application. What should I be setting my log_buffer to?

Currently it is set to 29425664 (about 28 MB), and it just seems too low.

thanks
Kart

--

http://www.freelists.org/webpage/oracle-l

Received on Mon Aug 15 2011 - 15:12:42 CDT
