Re: Large number of columns

From: Howard J. Rogers <dba_at_hjrdba.com>
Date: Thu, 2 May 2002 12:53:19 +1000
Message-ID: <aaq9o9$fm5$1@lust.ihug.co.nz>


Yes to your last question. Just as an example: if the average row length of these monster tables were, say, 1500 bytes, then in an 8K block system you can fit 4 rows per block. The computation is, roughly, 8192 - 819 (for PCTFREE) = 7373 bytes free per block; 7373/1500 = 4.9, and since you can't have 0.9 of a row, you get 4.
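
You can run that same arithmetic against your own tables straight out of the dictionary, once they've been analyzed. A rough sketch (it assumes an 8K block size and ignores block header overhead, so treat the result as an estimate only):

  SELECT table_name,
         avg_row_len,
         FLOOR((8192 * (100 - pct_free) / 100) / avg_row_len)
           AS est_rows_per_block
  FROM   user_tables
  WHERE  avg_row_len > 0
  ORDER  BY est_rows_per_block;

For the 1500-byte example above, that comes out as FLOOR(7372.8/1500) = 4, matching the figure done by hand.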

Now let's say your report, using an index, needs to retrieve 20 rows. You'll need to do 5 block reads (on average) to get the requisite data.

Had your row length been, say, 500 bytes, then a block would have contained 14 rows, and you might have got your data back in 2 block reads.
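
If you want that comparison as one back-of-the-envelope query (the 7373, 1500 and 500 figures are just the hypothetical numbers from above, and DUAL is only there to do the arithmetic):

  SELECT CEIL(20 / FLOOR(7373 / 1500)) AS reads_at_1500_bytes,  -- 5
         CEIL(20 / FLOOR(7373 / 500))  AS reads_at_500_bytes    -- 2
  FROM   dual;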

Incidentally, Oracle never chains rows unless it absolutely has to. And the only reason that it has to chain a row is if the row is longer than the block size (note that row chaining is not the same as row migration, which happens whatever the row length is, if there is insufficient space in the block for the row to *grow* in length). So, I'm not surprised that you have a low setting for CHAIN_CNT.
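
If you want to double-check that CHAIN_CNT figure yourself, ANALYZE is the standard route. BIG_TABLE below is just a placeholder name, and the CHAINED_ROWS table must have been created first by running $ORACLE_HOME/rdbms/admin/utlchain.sql:

  ANALYZE TABLE big_table COMPUTE STATISTICS;

  SELECT chain_cnt, num_rows
  FROM   user_tables
  WHERE  table_name = 'BIG_TABLE';

  -- or list the individual offenders:
  ANALYZE TABLE big_table LIST CHAINED ROWS INTO chained_rows;
  SELECT head_rowid FROM chained_rows;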

What it all means, however, is that the density of data in your tables will be outrageously low, meaning a lot of I/O activity to retrieve required data. You'll also be wasting a lot of space in your blocks, considerably more than the 10% you were expecting because of PCTFREE. Go back to that first example: we had 7373 bytes free after PCTFREE had been taken into account. Yet we could only store 4 rows of 1500 bytes, meaning that only 6000 bytes of row data could actually be stored. So that's 1373 bytes *in addition to PCTFREE* which is just sitting there, unused. That's going to affect the efficiency of full table scans, as well as your index accesses.
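
You can put a number on that density table by table with something like the following sketch (again it assumes an 8K block size and reasonably fresh statistics, and it ignores block header overhead, so the true usable percentage is a little lower still):

  SELECT table_name,
         num_rows * avg_row_len AS row_bytes,
         blocks * 8192          AS allocated_bytes,
         ROUND(100 * (num_rows * avg_row_len) / (blocks * 8192))
           AS pct_used
  FROM   user_tables
  WHERE  blocks > 0
  ORDER  BY pct_used;

In the worked example above, 6000 bytes of row data in an 8192-byte block is about 73% used; everything else is I/O you pay for on every scan and get nothing back from.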

Remember that in performance tuning, it's the application designers and developers who do the bulk of the work, long before the DBA gets anywhere near it. What you've got here is a classic example of why that's true (or supposed to be): you now basically have to re-design the entire application to sort this problem out, normalizing the tables from scratch.
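
Just to illustrate the shape of it, the usual first cut is a narrow "core" table holding the handful of columns the reports actually hit, with everything else moved to a detail table sharing the same primary key (all names below are purely illustrative):

  CREATE TABLE customer_core (
    customer_id NUMBER PRIMARY KEY,
    name        VARCHAR2(100),
    status      VARCHAR2(10)
    -- ...the 20 or 30 genuinely hot columns
  );

  CREATE TABLE customer_detail (
    customer_id NUMBER PRIMARY KEY
                REFERENCES customer_core (customer_id),
    -- ...the remaining, rarely-referenced columns
    notes       VARCHAR2(2000)
  );

A one-to-one join on the primary key is cheap; the payoff is that the core table's rows are short, so its data density, and with it the cost of full scans and index accesses, improves dramatically. That's the answer to your "wide table versus joins" question: an occasional join to columns you rarely touch beats dragging dead weight through every block read.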

Good luck
HJR "Andy" <enzoweb_at_hotmail.com> wrote in message news:8d4033cd.0205011826.66dc1c83_at_posting.google.com...
> The Oracle Version is V806, running on Solaris, block size is 8k.
>
> I have just started work at a site where they have a legacy
> application converted to Oracle, with no analysis done, it was just
> picked up and dropped in.
>
> Consequently there are over 50 tables which have over 250 columns.
> There are a couple with over 300 columns.
>
> What, if any, impact will this have on performance? I have done an
> analyze on chained rows and there are amazingly very few, whereas I
> expected a lot. PCTFREE is the default of 10.
>
> Do I need to worry about this, or can I leave them? I thought about
> splitting the tables up, which would only result in queries having to
> do joins. So what is worse, large number of columns on one table or
> having to join 4/5 tables?
>
> If the tables are properly indexed, would the number of columns
> matter?
>
> Thanks.
Received on Wed May 01 2002 - 21:53:19 CDT
