Re: Storage/Tuning Questions for large tables
Mike:
1,2) There are two schools of thought:
- Use a big initial extent and save CPU by not extending the table
constantly. The drawback is that you will eventually get fragmentation.
- Ensure that each extent is the same size (PCTINCREASE = 0); then, when
extents are freed up, they can be reused, since each is exactly the
size required. There is some optimal sizing formula that I don't have to
hand. You don't have to defragment with this approach, but you use more CPU
from the continual table extending.
3) You only need a big rollback segment if the program does few commits;
i.e. if you commit after each delete you're fine, as long as your
records are a normal size.
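A rough sketch of both ideas, assuming a hypothetical table name and
columns (the ~50-byte row from your post); the sizes are placeholders,
not a recommendation:

```sql
-- Uniform extents (PCTINCREASE 0): every extent is the same size,
-- so extents freed by the daily delete can be reused directly.
CREATE TABLE collect_data (
  sample_date  DATE,
  payload      VARCHAR2(46)
) STORAGE (INITIAL 10M NEXT 10M PCTINCREASE 0);

-- Deleting in batches with a commit after each batch keeps the
-- rollback (undo) needed by any single transaction small, instead of
-- one huge transaction covering all ~2 million rows.
BEGIN
  LOOP
    DELETE FROM collect_data
     WHERE sample_date < SYSDATE - 40
       AND ROWNUM <= 10000;          -- limit each batch to 10,000 rows
    EXIT WHEN SQL%ROWCOUNT = 0;      -- nothing left older than 40 days
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```

The trade-off is the one above: batched commits need only a modest
rollback segment, while a single big DELETE needs rollback space for
the whole day's 2 million rows at once.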
Regards
Tim
<mikea730_at_my-deja.com> wrote in message news:8j9ihd$n57$1_at_nnrp1.deja.com...
> I'm going to be setting up a database used for data collection
> (inserts) and reporting. There will be very little updating.
>
> My questions are based around two tables that will be continually
> collecting data via inserts. The average row size for the two tables
> are 50 bytes each. I expect to collect roughly +2 million rows in each
> table per day. The only updates that will occur to the tables is a
> daily delete of rows older than 40 days running a single "delete" SQL
> statement against each table.
>
> 1) Any recommendations for storage allocations on these two tables?
> Should they have a huge initial extent?
>
> 2) What impact will the daily deletes have on the tables? I'll
> basically be deleting 2 million rows a day. Could this cause problems?
> (fragmentation?) What should I watch out for?
>
> 3) Will I need a huge rollback segments to run the deletes?
>
> I'm running Oracle 8i on Solaris 2.7
>
> Thanks in advance for any advice!!!!
> Mike
>
>
Received on Thu Jun 29 2000 - 00:00:00 CDT