Re: Algorithm for calculating extent size in LMT
dkrpata1 doodled thusly:
>Ok, so then help me out, cause I've been under this impression for a while.
>
>I'm not saying something with 5 or maybe even 100 extents is necessarily bad,
>but I have seen quite a few databases with segments running to 500 or even
>thousands of extents. So can we say that the I/O time on accessing data
>through that table is not going to be much affected?
Well, what type of I/O and how are we measuring the time? That's what is important: the conditions of real use, not the theoretical "single-user", "single program", "wind blowing from NE, full moon" stuff.
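If you want actual numbers for those real conditions, the dynamic performance views are a reasonable place to start. A minimal sketch, assuming the standard V$FILESTAT and V$DATAFILE views (READTIM is in centiseconds, and is only populated when TIMED_STATISTICS is enabled):

-- Per-datafile physical reads and cumulative read time.
SELECT df.name,
       fs.phyrds,                                        -- physical reads
       fs.readtim,                                       -- read time (centiseconds)
       ROUND(fs.readtim / GREATEST(fs.phyrds, 1), 2) AS cs_per_read
FROM   v$filestat fs, v$datafile df
WHERE  df.file# = fs.file#
ORDER  BY fs.readtim DESC;

Run that under your real workload and you are measuring the conditions of real use, not the lab.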
In theory:
IF you had a single user accessing the system.
IF that user was running one and only one program.
IF that program did sequential accesses (which is a DIFFERENT thing from full scans, although it includes them!).
IF data was loaded and written on disk in the order of sequential retrieval.
IF the file system you use allowed consecutive areas of disk to be allocated to consecutive file system addresses.
IF the disk sector size matched the file system block size.
IF the file system block size matched the database block size.
IF the database block size matched the LMT uniform extent size.
IF the LMT uniform extent size matched the physical storage parameters of the table/tablespace/index.
THEN: yes, contiguous allocation _might_ gain you about 10-15% in I/O speed.
Turn ANY of the above IFs into an "ELSE" and all the theory goes out the window, because it depends on ALL of those being correct and matched BEFORE you can accurately measure that 10-15% improvement! It's what we call in electrical engineering an "impedance match": drop one of many interdependent parameters and you've got yourself a mismatch. Which means you can NOT accurately measure the effects of any changes or extrapolate them to anything else.
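To make the "matching" concrete, here is a sketch of the kind of aligned setup those IFs describe. All names and sizes are illustrative, not a recommendation; the UNIFORM clause is the standard LMT syntax:

-- An 8K-block database: the uniform extent size (1M) is an exact
-- multiple of the block size, and the table inherits it.
CREATE TABLESPACE demo_lmt
  DATAFILE '/u01/oradata/demo_lmt01.dbf' SIZE 256M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

CREATE TABLE demo_t (id NUMBER, payload VARCHAR2(100))
  TABLESPACE demo_lmt;

Get the file system or disk layers out of step with that 1M, and the careful alignment above buys you nothing.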
Now: is it WORTH all this trouble for 10-15%? I'm not an expert on time optimization, but I know where I'd rather spend my DBA time if it were me.
I'm being a little bit facetious just to clearly illustrate the point. There are a lot of defaults in systems nowadays that reduce the IFs above to a manageable number.
The point I want to emphasize is: if you are after a SIGNIFICANT improvement in performance and you have reasonably well-configured h/w (not everything on a single disk or some other horrible M$CE configuration), then don't waste your brain power on long nights of reorgs. You will NOT gain a significant improvement in performance that way.
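For what it's worth: if you want to see which segments actually carry extent counts like the ones quoted above, the data dictionary will tell you. A sketch against the standard DBA_EXTENTS view (the 500 threshold just echoes the figure in the question):

-- Count extents per segment; flag the heavily-extented ones.
SELECT owner, segment_name, segment_type, COUNT(*) AS extents
FROM   dba_extents
GROUP  BY owner, segment_name, segment_type
HAVING COUNT(*) > 500
ORDER  BY extents DESC;

Just don't assume a big number there means a reorg will buy you anything, for all the reasons above.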
Cheers
Nuno Souto
nsouto_at_optushome.com.au.nospam
Received on Tue Mar 05 2002 - 20:07:36 CST