
Re: Large number of columns

From: Howard J. Rogers <dba_at_hjrdba.com>
Date: Fri, 3 May 2002 08:07:16 +1000
Message-ID: <aasdbv$ftg$1@lust.ihug.co.nz>


I agree. If performance is not a concern, I certainly wouldn't re-engineer a legacy application for the sake of a bit of disk space. As for the costs of joins if he does re-engineer things, I agree with that too: denormalisation has its place as a useful tool precisely because join costs can be high (why else did they invent the index cluster, which physically denormalises, or pre-joins, tables?!).
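
For anyone unfamiliar with index clusters, here's a minimal sketch (all table and column names are made up for illustration): the two tables share the cluster key, so related rows from both are stored pre-joined in the same blocks.

    -- Create the cluster and its mandatory cluster index
    CREATE CLUSTER emp_dept_clu (deptno NUMBER(2));
    CREATE INDEX emp_dept_clu_idx ON CLUSTER emp_dept_clu;

    -- Both tables cluster on deptno, so each dept row and its emp rows
    -- land in the same blocks: a physical pre-join
    CREATE TABLE dept (
        deptno NUMBER(2) PRIMARY KEY,
        dname  VARCHAR2(14)
    ) CLUSTER emp_dept_clu (deptno);

    CREATE TABLE emp (
        empno  NUMBER(4) PRIMARY KEY,
        ename  VARCHAR2(10),
        deptno NUMBER(2)
    ) CLUSTER emp_dept_clu (deptno);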

So, yeah, I don't disagree: if it ain't broke, don't fix it.

Regards
HJR "J.P." <jp_boileau_at_yahoo.com> wrote in message news:7e388bc3.0205020447.321ad11f_at_posting.google.com...
> Howard,
>
> Your discussion is very appropriate; however, you didn't take into
> consideration the fact that we don't know how much data change (aka
> "row growth") he has. He does say it's a legacy application, which
> tends to make me believe that not much new data is added or modified
> in the system.
>
> If the data is pretty much static, he could reduce his PCTFREE to 5%
> and be happy. Alternatively, he could set his PCTFREE to 0%, import
> his legacy data, and reset the PCTFREE to a more "normal" value (say
> 10%) and then the new rows would be stored with the new value of
> PCTFREE. I'm not 100% sure if this is "legit" but I'd certainly try
> it.
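
As a minimal sketch of what JP describes (the table name is hypothetical; the trick works because ALTER TABLE ... PCTFREE only affects blocks used from that point on, leaving the already-loaded blocks packed tight):

    -- Pack the static legacy data in tight: reserve no space for growth
    ALTER TABLE legacy_tab PCTFREE 0;

    -- ... import/load the legacy data here ...

    -- Restore a normal setting; only blocks used from now on reserve 10%
    ALTER TABLE legacy_tab PCTFREE 10;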
>
> As far as performance is concerned, if he separates the table into 2
> or more tables, the overhead of joining them may just end up a wash
> versus having just the one table.
>
> Personally... I'd leave the legacy data just the way it is, UNLESS
> there's a problem.
>
> JP
>
> "Howard J. Rogers" <dba_at_hjrdba.com> wrote in message
news:<aaq9o9$fm5$1_at_lust.ihug.co.nz>...
> > Yes to your last question. Just as an example, if the average row
> > length of these monster tables was, say, 1500 bytes, then in an 8K
> > block system you can fit 4 rows per block (the computation is,
> > roughly, 8192 - 819 (for PCTFREE) = 7373 bytes free per block;
> > 7373/1500 = 4.9, and since you can't have 0.9 of a row, you get 4).
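
A rough way to see the same arithmetic from the data dictionary (a sketch that assumes the tables have been analysed and an 8K block size, and ignores block and row header overhead):

    SELECT table_name, avg_row_len,
           FLOOR((8192 * 0.9) / avg_row_len) AS approx_rows_per_block
      FROM user_tables
     WHERE avg_row_len > 0;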
> >
> > Now let's say your report, using an index, needs to retrieve 20 rows.
> > You'll need to do 5 block reads (on average) to get the requisite
> > data.
> >
> > Had your row length been, say, 500 bytes, then a block would have
> > contained 14 rows, and you might have got your data back in 2 block
> > reads.
> >
> > Incidentally, Oracle never chains rows unless it absolutely has to.
> > And the only reason that it has to chain a row is if the row is
> > longer than the block size (note that row chaining is not the same as
> > row migration, which happens whatever the row length is, if there is
> > insufficient space in the block for the row to *grow* in length). So,
> > I'm not surprised that you have a low setting for CHAIN_CNT.
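
To see which rows (if any) are chained or migrated, the standard approach is ANALYZE ... LIST CHAINED ROWS; the CHAINED_ROWS holding table is created by the utlchain.sql script shipped with the database (the analysed table name below is hypothetical):

    -- Create the CHAINED_ROWS holding table
    @?/rdbms/admin/utlchain.sql

    -- Record the ROWID of every chained or migrated row
    ANALYZE TABLE big_legacy_table LIST CHAINED ROWS INTO chained_rows;

    SELECT COUNT(*)
      FROM chained_rows
     WHERE table_name = 'BIG_LEGACY_TABLE';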
> >
> > What it all means, however, is that the density of data in your
> > tables will be outrageously low, meaning a lot of I/O activity to
> > retrieve required data. You'll also be wasting a lot of space in your
> > blocks - considerably more than the 10% you were expecting because of
> > PCTFREE. Go back to that first example: we had 7373 bytes free after
> > PCTFREE had been taken into account. Yet we could only store 4 rows
> > of 1500 bytes, meaning that only 6000 bytes of row data could
> > actually be stored. So that's 1373 bytes *in addition to PCTFREE*
> > which is just sitting there, unused. That's going to affect the
> > efficiency of full table scans, as well as your index accesses.
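
As a rough check of that density argument against the dictionary statistics (again a sketch: it assumes analysed tables and an 8K block size, and high-water-mark blocks and header overhead make the figure approximate):

    SELECT table_name,
           ROUND(100 * (num_rows * avg_row_len) / (blocks * 8192), 1)
               AS pct_block_space_used
      FROM user_tables
     WHERE blocks > 0
     ORDER BY pct_block_space_used;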
> >
> > Remember that in performance tuning, it's the application designers
> > and developers who do the bulk of the work, long before the DBA gets
> > anywhere near it. What you've got here is a classic example of why
> > that's true (or supposed to be): you basically have now to re-design
> > the entire application to sort this problem out, normalizing the
> > tables from scratch.
> >
> > Good luck
> > HJR
> >
> >
> >
> > "Andy" <enzoweb_at_hotmail.com> wrote in message
> > news:8d4033cd.0205011826.66dc1c83_at_posting.google.com...
> > > The Oracle version is 8.0.6, running on Solaris; the block size is
> > > 8K.
> > >
> > > I have just started work at a site where they have a legacy
> > > application converted to Oracle with no analysis done; it was just
> > > picked up and dropped in.
> > >
> > > Consequently there are over 50 tables which have over 250 columns.
> > > There are a couple with over 300 columns.
> > >
> > > What, if any, impact will this have on performance? I have done an
> > > analyze on chained rows and there are surprisingly few, whereas I
> > > expected a lot. PCTFREE is the default of 10.
> > >
> > > Do I need to worry about this, or can I leave them? I thought about
> > > splitting the tables up, but that would only result in queries
> > > having to do joins. So which is worse: a large number of columns on
> > > one table, or having to join 4 or 5 tables?
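
To make that trade-off concrete, here's a minimal sketch of the kind of vertical split being weighed (all names hypothetical): frequently-read columns stay in one table, the rest move to a sibling keyed on the same primary key, and full-width access then costs a join.

    CREATE TABLE orders_hot (
        order_id   NUMBER PRIMARY KEY,
        status     VARCHAR2(10),
        order_date DATE
        -- ... the handful of frequently-read columns
    );

    CREATE TABLE orders_cold (
        order_id NUMBER PRIMARY KEY
                 REFERENCES orders_hot (order_id),
        notes    VARCHAR2(2000)
        -- ... the remaining, rarely-read columns
    );

    -- Full-width access now needs a join (Oracle 8 join syntax)
    SELECT h.status, c.notes
      FROM orders_hot h, orders_cold c
     WHERE c.order_id = h.order_id
       AND h.order_id = 12345;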
> > >
> > > If the tables are properly indexed, would the number of columns
> > > matter?
> > >
> > > Thanks.
Received on Thu May 02 2002 - 17:07:16 CDT
