Re: cost of having MANY tables
I believe Karsten makes some valid points, but I would like to disagree
with, or qualify, a few of them.
In article <825mg0$371$1_at_nnrp1.deja.com>,
karsten_schmidt8891_at_my-deja.com wrote:
> Hi,
>
> You will need more SGA memory to cache parsed statements (obviously,
> there are going to be a lot of different statements).
> You also need more memory to cache the data dictionary.
>
Just because you have more tables does not mean you will need more
memory for the dictionary cache. How much memory you need to cache the
dictionary objects depends on how many objects are accessed
concurrently and how often they are accessed. You can expect to need
more shared pool to hold the additional SQL statements.
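If you want to see where the pressure actually is, queries along these
lines give a rough picture (standard V$ views; the exact rows and
columns vary a little by version):

   -- dictionary cache activity per object type
   select parameter, gets, getmisses,
          round(getmisses / greatest(gets, 1) * 100, 2) as miss_pct
     from v$rowcache
    where gets > 0
    order by gets desc;

   -- how the shared pool is currently carved up
   select name, bytes
     from v$sgastat
    where pool = 'shared pool'
    order by bytes desc;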
>
> Maintainance is going to become difficult - you have to take care of
> lots of indexes, constraints etc. Performance Tuning is going to be a
> headache too.
>
I have a tablespace with 150 indexes in it. I can drop and recreate
every index in the tablespace (with a tablespace coalesce between the
drops and the creates) in about an hour. I have another tablespace
where several of the indexes take an hour each. The time and CPU
expended on maintenance have more to do with the total quantity of data
than with the number of objects. It takes more effort by the DBA to
generate 150 index rebuilds as opposed to 15, but this should not pose
a serious problem.
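That kind of maintenance is easy to script off the data dictionary. A
minimal sketch (the tablespace name INDEX_TS is made up, and I use
REBUILD here as a stand-in for the drop/recreate pair):

   set pagesize 0 feedback off
   spool rebuild_indexes.sql
   select 'alter index ' || index_name || ' rebuild;'
     from user_indexes
    where tablespace_name = 'INDEX_TS'
    order by index_name;
   spool off
   -- if you drop and recreate instead, coalesce the freed space
   -- between the drops and the creates:
   -- alter tablespace INDEX_TS coalesce;

Then just run the spooled file during the maintenance window.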
>
> Parsing will become more expensive, you need more CPU horsepower.
>
The cost of parsing is normally very small compared to the cost of the
actual data retrieval, so the total amount of CPU is again more
dependent on the quantity of data than on the number of objects or SQL
statements.
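One rough way to check this on a running instance is to compare the
cumulative parse statistics with total CPU (statistic names as they
appear in V$SYSSTAT; they vary slightly between versions):

   select name, value
     from v$sysstat
    where name in ('parse time cpu',
                   'parse count (total)',
                   'parse count (hard)',
                   'CPU used by this session')
    order by name;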
>
> Joins, selects should not be that different (apart from the non-data-
> dependent overhead described above) - this tends to be bound by disk
> i/o.
>
> HTH Karsten
>
I question what Steve means by having a huge number of small objects
versus a smaller number of large objects. His post implies that he has
a choice in the matter. I would prefer to work on a system where the
physical object structure was built based on the logical needs of the
application and not on the DBA's preference for object sizes or the
total number of objects. I have no problem with partitioning tables to
fit maintenance within the expected available windows, or to address
hardware considerations. In the real world you are forced to live with
the limitations of your resources.
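To be clear about the kind of partitioning I mean, a minimal sketch
(Oracle 8 range partitioning; the table, column, and tablespace names
are all made up):

   create table sales (
     sale_id   number,
     sale_date date,
     amount    number
   )
   partition by range (sale_date)
   ( partition sales_1999_q3
       values less than (to_date('01-OCT-1999','DD-MON-YYYY'))
       tablespace sales_q3_ts,
     partition sales_1999_q4
       values less than (to_date('01-JAN-2000','DD-MON-YYYY'))
       tablespace sales_q4_ts
   );

Each partition can then be moved, rebuilt, or exported within its own
window instead of touching the whole table at once.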
>
> In article <384458AE.75EAEC11_at_averstar.com>,
> Steve Parker <sparker_at_averstar.com> wrote:
> > Can someone please give me a run down of the costs of having a HUGE
> > amount of tables (each being pretty small) as opposed to having fewer
> > tables that are bigger?
> >
> > i.e., space, speed (individual accesses and joins)
> >
> > any help is much appreciated
> > parker
> >
Another point. While joins against small tables can, on average, be
expected to perform better than joins against large tables (there are a
lot of possible factors here), if you have to write a join of 9 small
tables to get all the required data, as opposed to a 3-table join of
large tables, the extra join operations add cost that can approach or
even exceed the cost of the join against the large objects (a rough
illustration follows below). It really all depends on what you are
trying to accomplish. Anyway, these are just my opinions. I do not
think that anything Karsten said is wrong; it just all depends.
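Purely for illustration (every table and column name below is made up),
here is the same result set fetched two ways:

   -- fewer, larger tables: three row sources to join
   select o.order_id, c.cust_name, p.prod_desc
     from orders o, customers c, products p
    where o.cust_id = c.cust_id
      and o.prod_id = p.prod_id;

   -- the same data spread over 9 small tables: each extra join adds
   -- another row source and another join operation to the plan
   select o.order_id, cn.cust_name, pd.prod_desc
     from orders o, order_lines ol, order_status os,
          customers c, customer_names cn, customer_addrs ca,
          products p, product_descs pd, product_prices pp
    where ol.order_id  = o.order_id
      and os.status_id = o.status_id
      and c.cust_id    = o.cust_id
      and cn.cust_id   = c.cust_id
      and ca.cust_id   = c.cust_id
      and p.prod_id    = ol.prod_id
      and pd.prod_id   = p.prod_id
      and pp.prod_id   = p.prod_id;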
--
Mark D. Powell -- The only advice that counts is the advice that
you follow so follow your own advice --
Received on Thu Dec 02 1999 - 12:39:09 CST