Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> c.d.o.server -> Re: Why does Orcl generate REDO logs in NOARCHIVE mode?
A copy of this was sent to Mike Burden <michael.burden_at_capgemini.co.uk>
(if that email address didn't require changing)
On Fri, 26 Mar 1999 09:19:15 +0000, you wrote:
>I'm going to be brave and try and reply to your comments.
>
>Firstly, I believe you have spotted an issue that I had not considered. However, currently
>I don't believe it is a show-stopper. Please prove me wrong if you have the time.
>
>The issue you highlight is only a problem if the data datafiles' SCN is greater than (i.e.
>later than) any of the rollback datafiles' SCNs. In that case, as the RDBMS reads through the
>redo log it will get to a point where the rollback segments need rebuilding. Because the
>data datafiles may be later than the rollback, they may well not contain the data required to
>rebuild the rollback segment, and thus it can't be rebuilt. This is obviously not acceptable.
>
>In the example I gave that you commented on, given the above rule, the information has not
>been lost from the system because it will be in the rollback segments. The rollback
>segments will only need rebuilding from the point they had reached at the time of the crash.
>If the rollback datafiles always have a later SCN than the data datafiles, then at the point
>the rollback needs to be rebuilt the data datafiles are in line, and it can be guaranteed
>that the data datafiles will contain all the information required to rebuild the rollback
>segments (I think).
>
>So in short I believe my suggestion works if, at the time of recovery, the rollback
>datafiles are not older than the data datafiles.
>
I'm not sure I follow you 100% here -- but in any case I'll try to respond.

Let's say your block buffers are small (1 MB), and you have 5 MB of updates to do. We won't buffer all of that, so we'll have to flush frequently (I do this a lot on a load -- keep DBWR flushing blocks as the buffer cache fills so there is never too much to checkpoint). As we fill the buffer cache, we'll start flushing dirty blocks to make space for more blocks in the cache. This is not a checkpoint -- the datafile headers are not necessarily getting updated during these operations; we are just making space. The rollback file's SCN can be whatever value you care to imagine, as can the datafile's header SCN (make it bigger, smaller, or the same). All I need to do is flush one block to the datafile (an AFTER image) without flushing the corresponding rollback block, and I cannot recover!
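That eviction scenario can be sketched as a toy model -- a minimal sketch, purely for illustration: the class and block names are invented, and real DBWR behavior is far more involved than this:

```python
# Toy model (NOT Oracle internals): a tiny buffer cache that evicts dirty
# blocks to "disk" when full. Evictions write blocks out but do NOT update
# the datafile header SCN -- only a checkpoint does that.
from collections import OrderedDict

class BufferCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()   # block_id -> SCN of last change
        self.disk = {}                 # blocks flushed so far
        self.header_scn = 0            # only advanced by checkpoint()

    def modify(self, block_id, scn):
        if block_id not in self.buffers and len(self.buffers) >= self.capacity:
            # Make space: evict the least-recently-used dirty block.
            victim, victim_scn = self.buffers.popitem(last=False)
            self.disk[victim] = victim_scn   # flushed; header untouched
        self.buffers[block_id] = scn
        self.buffers.move_to_end(block_id)

    def checkpoint(self, scn):
        self.disk.update(self.buffers)   # flush everything...
        self.buffers.clear()
        self.header_scn = scn            # ...and NOW the header advances

cache = BufferCache(capacity=2)
cache.modify("data_block_1", scn=10)   # after image of an update
cache.modify("undo_block_1", scn=10)   # its before image (rollback)
cache.modify("data_block_2", scn=11)   # cache full: evicts data_block_1

# The AFTER image is on disk, its rollback block is not, and the header
# SCN still says 0 -- the file is "fuzzy".
print(cache.disk)        # {'data_block_1': 10}
print(cache.header_scn)  # 0
```

A crash at this instant leaves the after image on disk with no before image anywhere durable -- without redo, there is nothing to rebuild the rollback block from.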
You are assuming that no block in a datafile can be newer than the SCN in its header. That SCN, however, only says that nothing *older* than it is missing from the file (we need to recover that file starting from that SCN; you need not look at redo for SCNs before it). It places no limit on how new individual blocks in the file may be.
Your approach would work only if the datafile represented the exact point in time at which that SCN was current. That is, the datafile would *have* to be 100% consistent with respect to that SCN's point in time. Datafiles are always fuzzy unless you have done a "shutdown normal or immediate" or "offlined normal" (they contain blocks from many, many points in time). The problem is the SCN only tells us where we need to recover from -- it doesn't say "the datafile is consistent with respect to this point in time".
So, if the SCN of the rollback is greater than the SCN of the datafile, there will still probably be blocks in the datafile with SCNs greater than the SCN of the rollback (and vice versa -- so it breaks no matter which way you order them). It only takes one block in a datafile with an SCN greater than the SCN of the flushed rollback to break your database and make it unrecoverable.
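As a hedged illustration of that "one block breaks it" point, here is a toy comparison between the header-SCN rule and what recoverability without redo would actually require. Every structure and function name here is invented for the sketch; none of it is Oracle's real on-disk format:

```python
# Toy model: a datafile header SCN does not mean the file is consistent
# at that SCN -- individual blocks can be newer ("fuzzy" datafile).

def rule_says_recoverable(datafile_header_scn, rollback_header_scn):
    # The proposed rule: compare only the file header SCNs.
    return rollback_header_scn >= datafile_header_scn

def actually_recoverable(data_block_scns, rollback_flushed_through_scn):
    # Without redo, every flushed after image needs its before image in the
    # rollback file: no data block may be newer than the rollback flush point.
    return all(scn <= rollback_flushed_through_scn for scn in data_block_scns)

# Header says 100, but one evicted block carries a change at SCN 150.
data_blocks = [90, 95, 150]
datafile_header = 100
rollback_header = 120     # rollback header is "later" than the datafile's
rollback_flushed = 120    # rollback blocks flushed through SCN 120

print(rule_says_recoverable(datafile_header, rollback_header))  # True
print(actually_recoverable(data_blocks, rollback_flushed))      # False
```

The header comparison passes, yet the single block at SCN 150 has no before image in the flushed rollback -- exactly the one-block failure described above.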
>For the RDBMS to make sure this is the case during normal running, it must ensure the
>rollback segment datafiles are always flushed at the same time as (or more often than) the
>data datafiles (i.e. given the same or a later SCN).
>
The problem is that checkpoints never take place by 'segment'. Segments are objects that span files (a single rollback segment can actually live in many files), while a single file can contain segments of more than one type (a file can hold rollback, data, index, temporary, cluster, and hash cluster segments simultaneously). Checkpoints happen to files, not to segments.
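A tiny sketch of why file-granularity checkpoints defeat any "flush rollback first" ordering -- the file and segment layout below is entirely made up for illustration:

```python
# Invented layout: each file holds extents of several segments, and one
# rollback segment (RBS1) spans two files. Checkpoints act on whole files,
# so there is no file ordering that flushes all rollback blocks before
# all data blocks.

files = {
    "users01.dbf": [("EMP", "data"), ("RBS1", "rollback"), ("EMP_IDX", "index")],
    "users02.dbf": [("RBS1", "rollback"), ("DEPT", "data")],
    "rbs01.dbf":   [("RBS2", "rollback"), ("TEMP1", "temporary")],
}

def files_holding(segment):
    # A segment can span files -- checkpointing "the segment" means
    # checkpointing every file it touches.
    return sorted(f for f, segs in files.items()
                  if any(name == segment for name, _ in segs))

def pure_rollback_files():
    # Files containing ONLY rollback blocks -- the only files that could
    # safely be "flushed first".
    return sorted(f for f, segs in files.items()
                  if all(kind == "rollback" for _, kind in segs))

print(files_holding("RBS1"))   # ['users01.dbf', 'users02.dbf']
print(pure_rollback_files())   # []
```

In this layout no file is purely rollback, so any file-level checkpoint that writes RBS1's blocks necessarily writes data blocks at the same moment -- the required ordering cannot be expressed at the granularity checkpoints actually operate on.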
>The most obvious example I can think of where the rollback segment datafiles could be older
>than the data datafiles is a hot backup. Obviously each datafile can be backed up in any
>order. When restored, the rollback segment datafiles might well be older than the data
>datafiles, and hence the problem described above. My suggestion would require the user to
>make sure the rollback segment datafiles were always backed up after the data datafiles.
>This, I believe, would be the one compromise from the user's perspective. It may be
>unacceptable, I don't know. I do understand some of the scenarios this may lead to.
>
You would have to make more assumptions than that. In effect, we would have Oracle version 5 again (I think that was the last version with AI and BI journal files -- it's been a long time...). That was, more or less, the architecture in the past.
>The cold backup and roll forward is the simplest scenario and I believe there is no
>problem with this. This assumes all datafiles have the same SCN at the point of restore.
>
With a cold backup after a shutdown normal, this is true. However, I am concerned only with recovery from system failure (power failure, etc.) -- that's why we have rollback and redo (if we never had system failures, redo would be redundant; we would not need it, only rollback).
>Of course this is just one of many scenarios, all of which make my brain ache. If anyone
>knows any other scenarios then please shoot.
>
>If it is possible, then the goal is a good reduction in redo log space usage. What I am not
>sure about is whether the saving is worth the effort. Of course, there may be lots of small
>technical reasons to do with the management of the rollback segments that make my
>suggestion a non-starter.
>
>I still think I've missed the obvious but just can't see it.
>
>Thank you for your time.
>
>Thomas Kyte wrote:
>
>> A copy of this was sent to Mike Burden <michael.burden_at_capgemini.co.uk>
>> (if that email address didn't require changing)
>> On Wed, 24 Mar 1999 15:58:56 +0000, you wrote:
>>
Thomas Kyte
tkyte_at_us.oracle.com
Oracle Service Industries
Reston, VA USA
--
http://govt.us.oracle.com/ -- downloadable utilities