Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: Redo killer

From: Mike Moore <mmoore_gmp_at_yahoo.com>
Date: 27 Oct 2004 06:43:09 -0700
Message-ID: <43c454d1.0410270543.4fd952c0@posting.google.com>


HansF <news.hans_at_telus.net> wrote in message news:<hRcfd.360$df2.164_at_edtnps89>...
> Mike Moore wrote:
>
> > Part of our billing process creates over 1 million inserts into a
> > table which kills our redo logs. Then when the process is complete
> > the data is extracted and never referenced again. We truncate the
> > table each night. In other words the data is semi-temporary.
> >
> > I'd change the table definition to a temporary table, except that if
> > the program crashes it needs to pick up where it left off and not
> > start all over again. There has got to be another way to handle this
> > process, but I'm at a loss for a solution. Any ideas would be greatly
> > appreciated!!!
>
> Is your problem the redo size, redo volume or redo 'speed'?
>
> OS, Oracle version, disk layout info and info like redo log sizes and switch
> interval could be useful.
>
> /Hans

Some details: 9i with RAC on Tru64 Unix; disk and RAM are adequately sized, so they're not a problem for now.

We're switching 8-12 times per hour on redo logs that are 100 MB apiece when this process is running. There is a similar process that also hammers the logs, so when these jobs are running I'm getting a couple of gigabytes of redo for data that has a useful life of a few hours. This is just stupid! Except when these couple of batch processes run, the logs switch once every couple of hours.
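(For anyone wanting to confirm switch rates like these on their own system, a minimal sketch against v$log_history; the one-day window is an arbitrary assumption:)

```sql
-- Count redo log switches per hour over the last day.
-- Requires SELECT privilege on V$LOG_HISTORY.
SELECT TRUNC(first_time, 'HH24') AS switch_hour,
       COUNT(*)                  AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TRUNC(first_time, 'HH24')
ORDER  BY switch_hour;
```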

I hate to add redo capacity just to cover for bad code. Plus, it hurts my MTTR numbers.
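(One possible angle, sketched here with a hypothetical table name `billing_stage`: since the data is truncated nightly and never needs media recovery, a NOLOGGING table loaded with direct-path inserts would generate minimal redo for the data while keeping the rows persistent across a crash, unlike a global temporary table. Caveats: the database must not be in FORCE LOGGING mode, indexes still generate redo unless handled separately, and in 9i the APPEND hint only does direct path for INSERT ... SELECT, not single-row VALUES inserts.)

```sql
-- One-time setup: exempt the staging table from full redo logging.
ALTER TABLE billing_stage NOLOGGING;

-- Direct-path load: bypasses the buffer cache and, on a NOLOGGING
-- table, generates only minimal redo (extent/dictionary changes).
INSERT /*+ APPEND */ INTO billing_stage
SELECT ...  -- source query for the billing rows
FROM   source_rows;

-- The session cannot query the table until the direct-path
-- transaction is committed.
COMMIT;

-- Nightly cleanup stays as before; TRUNCATE resets the high-water
-- mark, which matters because direct-path loads only insert above it.
TRUNCATE TABLE billing_stage;
```

The trade-off is that blocks loaded NOLOGGING cannot be recovered from the archived logs, which is acceptable here precisely because the data is semi-temporary.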

Thoughts? Suggestions? Comments? Solutions?

Received on Wed Oct 27 2004 - 08:43:09 CDT
