Re: SQL Server for Oracle DBAs

From: joel garry <>
Date: Thu, 29 May 2008 13:09:23 -0700 (PDT)
Message-ID: <>

On May 29, 11:24 am, "Tony Rogerson" <> wrote:
> I'm surprised Oracle recommends SAME because I'd have thought at the file IO
> level that if you are reading a 30GB table and writing out 20GB the ideology
> behind it is the same whatever product is doing it.

Well, consider these:

Something is always happening; it's not as if one process reads all of one file (the big O exception being archiving redo logs, but even there, if lots of other stuff is going on, that one process isn't going to slurp up the whole thing at once).

For the table, Oracle is a lot smarter about keeping stuff in memory and not bothering the disk at all. Oracle has multiple buffer pools available to separate out objects that should be kept hot from ones that are very volatile. For situations that do require reading a 30GB table, Oracle places the read blocks at the tail end of the LRU queue, so they get cycled out first. Other things can happen regarding where the data gets read to, depending on configuration, parallelization and so forth. The more sharing is going on, the bigger the win from Oracle's model. If you really have a lot of apps where one person on one computer reads and writes one datafile all by himself, Oracle may not be for you.
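
For instance, assigning objects to the keep and recycle pools is just a storage clause. A sketch only (the table names are made up, and the keep/recycle caches have to be sized first):

  ALTER SYSTEM SET db_keep_cache_size = 256M SCOPE=BOTH;
  ALTER SYSTEM SET db_recycle_cache_size = 128M SCOPE=BOTH;

  -- small hot lookup table: keep it cached
  ALTER TABLE hot_lookup STORAGE (BUFFER_POOL KEEP);
  -- big volatile staging table: don't let it flood the default cache
  ALTER TABLE volatile_staging STORAGE (BUFFER_POOL RECYCLE);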

Oracle can skip using the OS buffers entirely (direct I/O), which helps because it may know better than the OS what it is really asking for (this may not apply to Windows).
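
On a port and filesystem that support it, that is just an init parameter; a sketch (needs an spfile and a bounce to take effect):

  ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE=SPFILE;  -- direct + async I/O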

A lot of tuning winds up showing that you shouldn't be reading the 30GB table in the first place. Oracle is getting pretty smart about this, and it can also be told explicitly when there is a better way. Your ideology may suck.
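
A rough sketch of "telling it there is a better way" (all names invented): a materialized view with query rewrite lets the optimizer answer a summary query without touching the big table at all:

  CREATE MATERIALIZED VIEW sales_by_month
    ENABLE QUERY REWRITE
    AS SELECT TRUNC(sale_date, 'MM') AS mth, SUM(amount) AS total
       FROM   big_sales_table
       GROUP  BY TRUNC(sale_date, 'MM');

  -- with query_rewrite_enabled = true, this no longer scans big_sales_table
  SELECT TRUNC(sale_date, 'MM'), SUM(amount)
  FROM   big_sales_table
  GROUP  BY TRUNC(sale_date, 'MM');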

Oracle can differentiate between needing one particular block and needing a whole lot of blocks, and it uses different read methods (single-block versus multiblock I/O) to obtain them.
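
You can actually watch the two show up as separate wait events; a quick sketch against v$system_event:

  SELECT event, total_waits, time_waited
  FROM   v$system_event
  WHERE  event IN ('db file sequential read',   -- single-block (index) reads
                   'db file scattered read');   -- multiblock (full scan) reads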

Oracle can be lazy about writing, because the recovery mechanism (redo) provides the safety net that lets it smooth out bursty updates. It can also be smart enough to figure out that the most recent version of a block is already in memory and not have to ask the disk for it.
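
The knob that trades write laziness against crash recovery time is the MTTR target (the value here is purely illustrative):

  -- aim for roughly five minutes of instance recovery; DBWR paces its
  -- checkpoint writes to keep that promise
  ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE=BOTH;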

So by the time you get to the file I/O level, Oracle has already minimized the data interchange required, and SAME (Stripe And Mirror Everything) simply recognizes that the more spindles you spread the remaining I/O over, the quicker you go. Or at least that was the case nearly a decade ago, when the recommendation was actually made. As a generalization it still holds, but there may be specific situations where it doesn't, particularly on fully utilized hardware. But was the recommendation really written for those, or for a new system where people might suffer from compulsive tuning disorder?
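
These days the low-effort way to get SAME is ASM: one big striped disk group and everything goes in it. A sketch only (device paths made up, mirroring left to the array here, and the CREATE DISKGROUP runs in the ASM instance):

  CREATE DISKGROUP data EXTERNAL REDUNDANCY
    DISK '/dev/raw/raw1', '/dev/raw/raw2',
         '/dev/raw/raw3', '/dev/raw/raw4';

  -- then, in the database instance, point new files at it
  ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=BOTH;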

