Home -> Community -> Usenet -> c.d.o.server -> Re: Oracle Myths
In article <f369a0eb.0206061304.a9c8a88_at_posting.google.com>, you said
(and I quote):
>
> That's why it's pointless to debate over or even test performance when you
> have a file system like that.
Precisely!
>
> Are you assuming that even if the data you want to read is right next to the
> disk reader, the disk still moves halfway to the other end and then back?
> While I haven't taken a very close look at its search mechanism, I would
> like to think it's smarter than that.
If you are using a file system, that is indeed quite possible. It all depends on how freshly made the file system was before you used it to allocate space for your database datafiles. That holds regardless of things like vxfs and such.
If you are using raw without an LVM, then it probably won't be the case: you'll get very good physical sequencing of your data. If you use raw with an LVM, the LVM may lull you into a false sense that you are using contiguous disk when you actually aren't.
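To put some illustrative numbers on this, here's a toy sketch (plain Python, nothing Oracle-specific; the disk geometry and block counts are made-up assumptions) comparing the total head travel for a sequential scan of a datafile whose blocks are physically contiguous, as on a raw device, versus one whose blocks an aged file system has scattered all over the platter:

```python
import random

def head_travel(block_positions):
    """Total head movement (in block units) to read the file's
    blocks in logical order, starting with the head at position 0."""
    travel, pos = 0, 0
    for p in block_positions:
        travel += abs(p - pos)
        pos = p
    return travel

random.seed(42)
n_blocks, disk_size = 1000, 1_000_000  # hypothetical sizes

# Raw device, no LVM: the datafile's blocks are physically contiguous.
contiguous = list(range(500_000, 500_000 + n_blocks))

# Aged file system: the same logical blocks scattered across the disk,
# so a "sequential" scan visits near-random physical positions.
scattered = random.sample(range(disk_size), n_blocks)

print("contiguous:", head_travel(contiguous))
print("scattered: ", head_travel(scattered))
```

The scattered layout forces orders of magnitude more head movement for the exact same logical read, which is the whole point: once the file system has fragmented your datafiles, the sequential-read numbers you benchmarked on a fresh file system no longer apply.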
>
> I suspect that's because almost everyone that did any testing didn't go
> the whole length.
You just hit it in one, chief!
>
> Think a little bit more if you have terabyte DSS/OLTP databases and your
> segments are typically 20-30 GB in size. That's what I work with so I am not
> just talking about theory. But sometimes it's nice to have theory validated
> by what actually happens.
>
Same here. Not too long ago. Thank God I'm using much smaller databases
right now, just lots of them...
Yes it's easier to control this sort of thing if you're talking "chunks"
of 20-30GB instead of the more common 2GB. But as soon as you throw in a
file system, be it ufs, vxfs or whatever-fs, it's all out the window
regardless of the size of your "chunks". It's a bummer, but a fact of
life unfortunately...
--
Cheers
Nuno Souto
nsouto_at_optushome.com.au.nospam
Received on Fri Jun 07 2002 - 05:19:10 CDT