

Re: Windows defrag with 10g

From: joel garry <>
Date: Wed, 22 Aug 2007 14:31:54 -0700
Message-ID: <>

On Aug 22, 12:45 pm, "Preston" <dontwant..._at_nowhere.invalid> wrote:
> Adam Sandler wrote:
> > Hello,
> > Someone asked a question the other day for which we didn't necessarily
> > have an answer. Someone was concerned about low level OS processes
> > 10g can execute and if a defrag was run on the drive hosting the
> > database (using Windows Server 2003 R2), would that interfere in any
> > way with what Oracle is doing. What's your take?
> I tested this on NT4 with 8.1.7 a few years ago, just to satisfy my
> curiosity. Most of the time it didn't cause any problems, but once it
> toasted the database. Not exactly a scientific test, but it did prove
> that defragging an open database /can/ corrupt it, at least on 8.1.7.
> I didn't bother trying to repair the database so don't know what the
> damage was.

Look up "fractured block" in the docs.

Besides that, Oracle writes things where it wants to within its data files, and the operating system puts those blocks where it wants to on disk. When the disk is spinning and Oracle tells the OS to give it some blocks, the OS decides how many blocks it is really going to fetch at a time. So if you defragment, it is possible you are helping the OS find the blocks when Oracle asks for them. It is also possible you are hindering it: on a full table scan, Oracle might ask for 8 blocks, then another 8 blocks, and so on. Meanwhile, after the first 8 blocks, the disk head goes after something else and has to come back around before it reaches the next 8, whereas if those blocks were somewhere other than contiguous it could have gotten to them faster. It really "depends."

Somewhere I read that when Windows updates a block it uses an optimistic write algorithm, writing the block back to the first available spot rather than to where it was read from, which is why fragmentation happens. If that's the case, heavily updated blocks would drift to the most random spots, which would fight against Oracle's multiblock reads (Oracle asks for 8 blocks and Windows has to hunt for each one), and that file should indeed benefit from defragging.
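
As a toy model of that trade-off (nothing here is Oracle-specific; the 8-block read size and the seek accounting are assumptions of the model): each read request covers 8 logically consecutive blocks, and a block costs a fresh seek unless it sits physically right after the previous one within the same request.

```python
import random

def seeks_needed(positions, mbrc=8):
    """Count seeks for a full scan under a crude disk model: each read
    request covers mbrc blocks, and a block costs a new seek unless it
    is physically adjacent to the previous block of the same request."""
    seeks = 0
    for i, pos in enumerate(positions):
        new_request = (i % mbrc == 0)             # a fresh multiblock read
        adjacent = i > 0 and pos == positions[i - 1] + 1
        if new_request or not adjacent:
            seeks += 1
    return seeks

n = 1024
contiguous = list(range(n))                   # nicely defragged file
scattered = random.sample(range(10 * n), n)   # blocks strewn across the disk

print(seeks_needed(contiguous))  # 128: one seek per 8-block read
print(seeks_needed(scattered))   # close to one seek per block
```

Crude as it is, it shows why "percent fragmented" alone doesn't tell you much: what matters is how the on-disk layout lines up with the order the database actually reads in.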

I dunno, I avoid Windows.

I'd like to see some actual performance tests that demonstrate this (as opposed to "fragmentation reports" that just give percentages of fragmentation). Such a test might: create a large amount of data; time a report over all of it; heavily update the data at random until Windows reports a large amount of fragmentation; time the same report again; defrag and time again; heavily update and time again; defrag thrice and time again; heavily update and time once more. Those last few cycles are to show whether a single defrag pass results in extreme freespace fragmentation, making the fragmentation worse for subsequent updates. Some trace files showing wait events would be informative, too.
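
That protocol is easy to script around whichever report you pick. A bare-bones harness sketch in Python follows; create_data, run_report, random_update, and defrag are all hypothetical placeholders you would wire up to your database client and the Windows defragmenter:

```python
import time

def timed(label, fn):
    """Run fn once and report wall-clock time."""
    t0 = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - t0
    print(f"{label}: {elapsed:.2f}s")
    return elapsed

# Hypothetical placeholders -- wire these up to your database and OS.
def create_data():      pass  # build a large table
def run_report():       pass  # full-scan report over all of it
def random_update():    pass  # scattered updates until Windows reports fragmentation
def defrag(passes=1):   pass  # run the OS defragmenter `passes` times

create_data()
timed("baseline report", run_report)
random_update()
timed("fragmented", run_report)
defrag()
timed("after one defrag pass", run_report)
random_update()
timed("refragmented", run_report)
defrag(passes=3)
timed("after three defrag passes", run_report)
random_update()
timed("refragmented again", run_report)
```

The point of keeping it in one script is that every timing runs under the same conditions, so the numbers are comparable across the defrag/update cycles.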


-- is bogus.
Received on Wed Aug 22 2007 - 16:31:54 CDT
