Re: Huge generation of archivelog: how to tweak that ?

From: John Hurley <johnbhurley_at_sbcglobal.net>
Date: Tue, 4 Aug 2009 06:12:21 -0700 (PDT)
Message-ID: <26ff76d9-e253-45aa-9df9-51425a69fa4c_at_t13g2000yqt.googlegroups.com>



On Aug 4, 1:11 am, Xavier Maillard <x..._at_gnu.org> wrote:

snip

> Hi,
>
> we are creating standby databases (physical) for several
> databases (9i). All in all, it works perfectly except for one
> thing: for one of them, archivelog generation is totally out of
> control.
>
> We have multiplexed archivelog destinations sized at 8GB (a
> limit which, based on our estimates, we were unlikely to hit in
> less than one full production day). Today, these 8GB are hit in
> *one* hour; to be more precise, this happens for at least one
> program: a purge.
>
> Today, this is a no-go for our whole Data Guard platform, since
> every hour we must manually delete archive logs to let the purge
> finish correctly (thus breaking our standby database).
>
> What I am trying to figure out is this:
>
> - why do we hit such high archivelog production?
> - what exactly is stored in an archived redo log?
> - how can we significantly reduce this archive log generation?
> - what can be done so we do not break our standby database?
> - is there a "best practice" our developers should follow when
>  coding their purge system (number of commits, commit frequency,
>  DDL to avoid using, ...)?
>
> I googled hard but found nothing. Any help would be greatly
> appreciated here!
>
> Thank you in advance.
>
> Xavier

Truncate table may work better than deleting row by row ... and an 8 GB destination for archive logs is tiny ...
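A DELETE-based purge writes redo for every row it removes (plus the
undo and index maintenance that go with it), while TRUNCATE is DDL
and generates almost no redo. A quick sketch of the difference, with
a made-up table name and cutoff:

  -- redo-heavy purge: every deleted row (and its undo and index
  -- maintenance) goes into the redo stream
  DELETE FROM purge_history WHERE created < SYSDATE - 90;
  COMMIT;

  -- redo-light alternative: TRUNCATE is DDL, logs next to nothing,
  -- but it removes ALL rows and cannot be rolled back
  TRUNCATE TABLE purge_history;

If only part of the table has to go, range partitioning by date and
dropping old partitions (assuming you are licensed for the
partitioning option) gets you similar near-zero redo without throwing
away the rows you want to keep.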

If I were you, I would start by reading Tom Kyte's latest architecture book, at least the first 10 chapters.

It will explain in detail what is in the redo logs ...
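
In the meantime, to see when the redo is actually being generated, a
query like this against v$log_history (works on 9i; adjust the window
to taste) groups log switches by hour:

  -- log switches per hour over the last day; the spikes should
  -- line up with the purge runs
  SELECT TRUNC(first_time, 'HH24') AS hr,
         COUNT(*) AS log_switches
    FROM v$log_history
   WHERE first_time > SYSDATE - 1
   GROUP BY TRUNC(first_time, 'HH24')
   ORDER BY 1;

Multiply the switch count by your online redo log size and you have a
rough archivelog volume per hour.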
