
Re: Oracle Myths

From: D.Y. <dyou98_at_aol.com>
Date: 6 Jun 2002 14:04:23 -0700
Message-ID: <f369a0eb.0206061304.a9c8a88@posting.google.com>


Nuno Souto <nsouto_at_optushome.com.au.nospam> wrote in message news:<3cff4c12$0$28009$afc38c87_at_news.optusnet.com.au>...
> In article <f369a0eb.0206052119.4865711_at_posting.google.com>, you said
> (and I quote):
> >
> > The argument that separating tables and indexes, or separating tables that
> > are concurrently accessed etc. makes no performance difference seems to be
> > based on the assumption that space is allocated on file systems by a
> > volume manager.
>
> Any file system will cause this. Any. Not just a volume manager. It's
> how file systems work, both in Unix and NT. There are some very
> interesting papers on file systems and how they work in the Unix arena.
> And the NT doco from M$ has some accurate descriptions of what NTFS does
> to a disk...
>

That's why it's pointless to debate, or even test, performance when you have a file system like that in the picture.

> > controlling the physical location of your segments. I believe you can, and raw
> > devices may be the only way to do this. You want to make sure large table
>
> Any wonder why people like Steve Adams recommend raw for the highest
> performance? It gives you absolute control over placement on disk,
> assuming of course you're not going through a LVM and are actually
> allocating raw partitions in the disk geometry files in Unix. If not,
> then even with raw you can still fail to achieve this performance
> separation. Many LVMs make assumptions without telling you what they
> are doing.
>

I can agree with this.

> > on any given disk you still want to do this). Look at the number of sequential
> > I/Os vs random I/Os per second. It depends on the RPM and I don't have the
> > accurate figure.
>
> Nope. What you think is a sequential I/O is only so in logical terms.
> Never is in physical terms when using a file system. Never.
>

Here we are talking about raw devices, where you have much better control over where to physically place your segments.
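
Just to put rough numbers on the sequential vs random point (back-of-envelope only; the seek time, RPM and transfer rate here are assumed figures, not measurements from any particular disk):

# Rough random vs sequential I/O arithmetic (assumed figures).
rpm = 10000              # spindle speed (assumed)
avg_seek_ms = 5.0        # average seek time in ms (assumed)
transfer_mb_s = 40.0     # sustained sequential transfer rate in MB/s (assumed)
io_size_kb = 8           # typical Oracle block size

# Average rotational latency is half a revolution.
rot_latency_ms = 0.5 * 60000.0 / rpm                    # 3 ms at 10,000 RPM
transfer_ms = io_size_kb / 1024.0 / transfer_mb_s * 1000.0

random_io_ms = avg_seek_ms + rot_latency_ms + transfer_ms
random_iops = 1000.0 / random_io_ms                     # roughly 120 IOPS
random_mb_s = random_iops * io_size_kb / 1024.0

print("random 8K reads: ~%.0f IOPS, ~%.1f MB/s" % (random_iops, random_mb_s))
print("sequential read: ~%.0f MB/s (head barely moves)" % transfer_mb_s)

That is the order-of-magnitude gap I mean when I say the figure depends on the RPM.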

> > But the difference is significant. If it weren't we would have
> > to rewrite the basic laws of physics. This is assuming you don't have 100GB of
> > cache or any kind of smart prefetch to speed things up.
> >
>
> That's the whole point. There is NO difference that can be absolutely
> and objectively measured when one of the givens of the test is that BOTH
> options will be using random I/O even though one "thinks" it is
> sequential! What happens most of the time these things have been
> "measured" is precisely the effect of caches, be they in the disk (which
> you can NOT disable), controller (which in most cases you can NOT
> disable), OS (which in some cases you can bypass) and the DB (which you
> may have control over IF you know where to go and how).
>

Are you assuming that even if the data you want to read is right next to the disk head, the head still moves halfway to the other end and then back? While I haven't taken a very close look at its seek mechanism, I would like to think it's smarter than that.
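
For what it's worth, OS I/O schedulers and disk firmware generally do reorder outstanding requests so the head sweeps across the platter rather than zig-zagging (the classic elevator/SCAN idea). A minimal sketch of that ordering, just to illustrate the idea (real schedulers are considerably more involved):

def elevator_order(pending, head):
    """Order pending cylinder requests elevator-style (SCAN):
    sweep upward from the current head position, then come back
    for whatever was behind it, instead of zig-zagging."""
    ahead = sorted(c for c in pending if c >= head)
    behind = sorted((c for c in pending if c < head), reverse=True)
    return ahead + behind

# Head at cylinder 500; requests scattered across the disk.
print(elevator_order([30, 950, 520, 10, 700, 480], head=500))
# -> [520, 700, 950, 480, 30, 10]  (one sweep out, one sweep back)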

> Still, this is only relevant when we're talking extracting absolute max
> speed out of a system. In 99.9999999999% of real life cases, it can be
> debated to death but no one will be able to confirm conclusively that
> separating an index from a table by tablespace will achieve any speed
> gain. Because the two results will be so close that it won't make any
> sense to go to all the effort.
>
> However, of course: if one feels that it makes sense in terms of
> management and maintenance, then go for your life. It will not be better
> in speed, but most probably it won't be much worse either, so: 6 of one,
> half a dozen of the other.
> But purely for performance it doesn't make the slightest difference.

I suspect that's because almost everyone who did any testing didn't go the whole way. It makes sense to think there isn't much performance difference if you have a small or medium database (a lot of the data is cached), or an OLTP application (very few full table or index scans), even though that's not the kind of system I deal with.

Think a little harder if you have terabyte DSS/OLTP databases and your segments are typically 20-30 GB in size. That's what I work with, so I am not just talking about theory. But sometimes it's nice to have theory validated by what actually happens.
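
To show why it matters at that scale, here is the same back-of-envelope arithmetic applied to one of those segments (the transfer rates are assumed, carried over from the sketch above, not measured on any particular system):

# Rough scan-time arithmetic for a large segment (assumed figures).
segment_gb = 25.0          # typical segment size mentioned above
seq_mb_s = 40.0            # assumed sequential transfer rate per disk
rand_mb_s = 1.0            # assumed effective rate when every read seeks

seq_minutes = segment_gb * 1024.0 / seq_mb_s / 60.0
rand_hours = segment_gb * 1024.0 / rand_mb_s / 3600.0

print("sequential scan of %.0f GB: about %.0f minutes" % (segment_gb, seq_minutes))
print("same scan at random-I/O rates: about %.0f hours" % rand_hours)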
