Re: Huge generation of archivelog: how to tweak that ?

From: Mark D Powell <Mark.Powell_at_eds.com>
Date: Tue, 4 Aug 2009 07:28:39 -0700 (PDT)
Message-ID: <0485edeb-b7d5-4ba4-9744-c0a69664d6e1_at_32g2000yqj.googlegroups.com>



On Aug 4, 1:11 am, Xavier Maillard <x..._at_gnu.org> wrote:
> Hi,
>
> we are creating standby databases (physical) for several
> databases (9i). All in all it works perfectly except one thing:
> for one of them archivelog generation is totally out of control.
>
> We have multiplexed archivelog destinations sized at 8Gb (which,
> based on our estimations, was unlikely to fill in less than one
> full production day). Today, these 8Gb are hit in *one* hour
> only; to be more precise, this happens for at least one
> program: a purge.
>
> Today, this is a no-go for our whole Data Guard platform since
> every hour we must manually delete archive logs to permit the
> purge to finish correctly (thus breaking our standby database).
>
> What I am trying to figure out is this:
>
> - why do we hit such archivelog production ?
> - what is exactly stored in an archived redo log ?
> - how can we significantly diminish this archive log generation ?
> - what could be done in order not to break our standby database ?
> - is there a "best practice" our developers should follow to code
>  their purge system (# of commits, commit frequency, DDL to avoid
>  using, ...)
>
> I googled hard but found nothing. Any help would be greatly
> appreciated here !
>
> Thank you in advance.
>
> Xavier

>> what is exactly stored in an archived redo log ? <<

The answer to this question is very basic. The archive logs are well documented in the Concepts manual and the Database Administrator's Guide. Basically, every data change that the database processes is written to the online redo logs, which in turn are archived. (Pretty much all production databases should always be in archivelog mode; otherwise forward recovery from a disk failure is not possible.)
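If you want to see how fast this particular database really produces archive, a quick sanity check (my own suggestion, not something from the original post) is to count log switches per hour from V$LOG_HISTORY; each switch fills one online redo log and therefore produces roughly one archived log:

    -- Log switches per hour over the last day.
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS log_switches
    FROM   v$log_history
    WHERE  first_time > SYSDATE - 1
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;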

The proper way to handle a large quantity of data being archived is to (1) correctly size the online redo logs so that the logs do not switch excessively, (2) make sure the online redo log files are spread across multiple physical disks, (3) make sure the archive log destination has plenty of free space, and (4) make sure the archive log destination is backed by multiple physical disk units. A rough check of point 1 is sketched below.
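As a rough check on point 1, look at the current online log sizes and, if they are too small, add larger groups and drop the old ones once they go INACTIVE. The group numbers, file paths and the 512M size below are only examples; pick values that suit your own redo rate and layout:

    -- Current online redo log groups and their sizes.
    SELECT group#, bytes/1024/1024 AS size_mb, members, status
    FROM   v$log;

    -- Add larger groups (illustrative paths and sizes), then drop
    -- the old, smaller groups once they are no longer CURRENT/ACTIVE.
    ALTER DATABASE ADD LOGFILE GROUP 5
      ('/u01/oradata/PROD/redo05a.log',
       '/u02/oradata/PROD/redo05b.log') SIZE 512M;
    -- ALTER SYSTEM SWITCH LOGFILE;  (repeat until old groups are INACTIVE)
    -- ALTER DATABASE DROP LOGFILE GROUP 1;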

If you think the amount of redo being generated is excessive for the workload being performed, then you need to discuss the application design with the developers and look for sections of the application where the processing logic could be improved. If you do not use a physical Data Guard setup, the nologging feature of direct path loads might be one tool you can use: direct path inserts cut down on the amount of undo generated, and since undo is itself logged to redo, this option can reduce the amount of redo generated. But that only helps if you have large insert jobs responsible for much of the redo activity.
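For illustration only (the table and column names below are made up), a direct path load of that kind looks like this. Keep in mind that on a database feeding a physical standby the unlogged blocks are not recoverable on the standby, which is exactly why this only applies when Data Guard is not in the picture and FORCE LOGGING is not set:

    -- Segment marked NOLOGGING (illustrative names).
    ALTER TABLE purge_staging NOLOGGING;

    -- Direct path (APPEND) insert: minimal undo, and with NOLOGGING
    -- the loaded data blocks are not written to the redo stream either.
    INSERT /*+ APPEND */ INTO purge_staging
    SELECT * FROM big_source_table WHERE purge_flag = 'Y';
    COMMIT;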

If the developers used a lot of work tables (permanent "temporary" tables) or DDL in the job streams, then you can have them rewrite the code to just perform four and five table joins and rely on the Oracle read consistency model instead of work tables, substitute true (global) temporary tables for the permanent work tables where a table has to be used, substitute TRUNCATE for DELETE to clear out persistent work tables where they have to be used, and a couple of similar methods to reduce the amount of work being done in the processes. A sketch of the substitutions follows.
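For the work table cases the substitutions look roughly like this (object names invented for the example):

    -- A true temporary table: created once, session-private data, and
    -- the table blocks themselves generate no redo (only their undo does).
    CREATE GLOBAL TEMPORARY TABLE purge_work
      (row_id ROWID, customer_id NUMBER)
      ON COMMIT PRESERVE ROWS;

    -- Where a permanent work table must remain, clear it with TRUNCATE
    -- (minimal redo) instead of DELETE (redo and undo for every row).
    TRUNCATE TABLE purge_work_old;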

In Oracle you should normally create objects one time and use them many times. If the developer-designed processes use DDL, then improvement is possible: create the objects once and re-use them.

Someone mentioned the extra-cost Partitioning option. While partitioning may be useful for management and performance reasons, I am not sure it will help much in reducing the amount of overall redo generated.

Running a Statspack or AWR report (AWR is 10g+ and requires the Diagnostics Pack license) and monitoring the SQL in the shared pool should give you some idea which processes generate the large amounts of redo. It then becomes a matter of looking at the work being done to see if the same results can be obtained doing less work.
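Short of a full report, a quick way to see which connected sessions have generated the most redo is the "redo size" statistic in V$SESSTAT, for example:

    -- Top redo-generating sessions since they connected.
    SELECT s.sid, s.username, s.program, st.value AS redo_bytes
    FROM   v$sesstat st, v$statname sn, v$session s
    WHERE  sn.statistic# = st.statistic#
    AND    s.sid         = st.sid
    AND    sn.name       = 'redo size'
    AND    st.value      > 0
    ORDER  BY st.value DESC;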

HTH -- Mark D Powell