Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: Oracle Myths

Re: Oracle Myths

From: D.Y. <dyou98_at_aol.com>
Date: 7 Jun 2002 10:54:45 -0700
Message-ID: <f369a0eb.0206070954.9b8c87c@posting.google.com>


Nuno Souto <nsouto_at_optushome.com.au.nospam> wrote in message news:<3d0089eb$0$28005$afc38c87_at_news.optusnet.com.au>...
> In article <f369a0eb.0206061304.a9c8a88_at_posting.google.com>, you said
> (and I quote):

<snip>
> > Are you assuming that even if the data you want to read is right next to the
> > disk reader, the disk still moves halfway to the other end and then back?
> > While I haven't taken a very close look at its search mechanism, I would
> > like to think it's smarter than that.
>
> If you are using a file system, that is indeed quite possible.
> It all depends on how freshly made the file system was before you used it
> to allocate space for your database datafiles. Regardless of things like
> vxfs and such.
>

That's an interesting one. I've always thought that when a disk system gets a request for data in a certain sector on a certain track, it takes the shortest path to that location, whether or not there is an LVM etc. sitting above it. I'd sure like to see any literature that says otherwise.
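For illustration only: the "shortest path" behavior described above is essentially what textbook shortest-seek-time-first (SSTF) scheduling does, as opposed to serving requests in arrival order. This sketch uses invented track numbers (the classic textbook request queue) and says nothing about what any particular disk firmware or LVM actually implements:

```python
# Toy comparison of disk-arm scheduling policies.
# Track numbers are hypothetical; real controllers and firmware vary.

def fifo_seek_distance(head, requests):
    """Total head movement if requests are served in arrival order."""
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

def sstf_seek_distance(head, requests):
    """Total head movement if the nearest pending request is served first."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fifo_seek_distance(53, queue))  # 640 tracks of movement
print(sstf_seek_distance(53, queue))  # 236 tracks of movement
```

The gap between the two numbers is the whole argument: a scheduler that chases the nearest request cuts head travel dramatically, but only when the requests it sees are actually close together on the platter.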

<snip>
> >
> > Think a little bit more if you have terabyte DSS/OLTP databases and your
> > segments are typically 20-30 GB in size. That's what I work with so I am not
> > just talking about theory. But sometimes it's nice to have theory validated
> > by what actually happens.
> >
>
> Same here. Not too long ago. Thank God I'm using much smaller databases
> right now, just lots of them...
> Yes it's easier to control this sort of thing if you're talking "chunks"
> of 20-30Gb instead of the more common 2Gb. But as soon as you throw in a
> file system, be it ufs, vxfs or whatever-fs, it's all out the window
> regardless of the size of your "chunks". It's a bummer, but a fact of
> life unfortunately...
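The quoted point about file systems can be shown with a toy model: even if the disk always takes the shortest path to each block, a datafile whose logical blocks were scattered by the file system's allocator forces far more total head movement on a sequential scan than a contiguous extent would. All block numbers here are invented for illustration:

```python
# Toy model: seek cost of scanning a file in logical order, contiguous
# vs. fragmented physical layout. Numbers are hypothetical.
import random

def scan_seek_distance(physical_blocks):
    """Head movement needed to read blocks in logical (file) order."""
    head = 0
    total = 0
    for block in physical_blocks:
        total += abs(block - head)
        head = block
    return total

# Contiguous layout: a fresh file system placed the file in one extent.
contiguous = list(range(1000, 1100))

# Fragmented layout: the same 100 blocks scattered over a 100,000-block disk.
random.seed(42)
fragmented = random.sample(range(100_000), 100)

print(scan_seek_distance(contiguous))  # 1099: one long seek, then 1-block steps
print(scan_seek_distance(fragmented))  # orders of magnitude larger
```

This is why "smart" seek scheduling doesn't rescue a fragmented datafile: the scheduler can only minimize travel between the blocks it is handed, and the file system decided where those blocks live long before Oracle issued the read.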

Five years ago 200GB was considered "LARGE". The trend is for databases to get even larger. Big corporations now add at least hundreds of megabytes to their data warehouses every day. It's a different view from that of OLTP systems.

Received on Fri Jun 07 2002 - 12:54:45 CDT

