
Re: Storage/Tuning Questions for large tables

From: <mikea730_at_my-deja.com>
Date: 2000/06/27
Message-ID: <8jb5tb$vrs$1@nnrp1.deja.com>#1/1

So if I'm hearing you right: store a day's worth of data in each partition/tablespace? We need to keep 40 days of depth, so would that mean 40 datafiles? Then I could just drop the partition?

One problem is that the data continually trickles in. It might be difficult to make a "clean" break between the days.

How bad would it really be if I created a single datafile with a very large initial extent?

Thanks for your help!!!!

In article <962136675.24296.1.nnrp-11.9e984b29_at_news.demon.co.uk>,   "Jonathan Lewis" <jonathan_at_jlcomp.demon.co.uk> wrote:
>
> At first sight this looks like an ideal
> set-up for using partitioned tables
> with range partitioning.
>
> Each day gets its independent partition,
> but the table as a whole is viewed as
> a single logical unit. Create LOCAL
> indexes on the table, so each table
> partition has its own index partition.
>
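As a rough sketch of what that might look like for one of the two tables (the
table name, column names and partition boundaries below are invented for
illustration; the real ones would come from the actual schema):

  create table collection_data (
      collect_date  date          not null,   -- partition key
      source_id     number        not null,
      payload       varchar2(40)
  )
  partition by range (collect_date) (
      partition p20000627 values less than
          (to_date('2000-06-28','YYYY-MM-DD')) tablespace ts_tue,
      partition p20000628 values less than
          (to_date('2000-06-29','YYYY-MM-DD')) tablespace ts_wed
      -- ... and so on, one partition per day
  );

  create index collection_data_ix
      on collection_data (collect_date, source_id) local;

Since each row is routed to a partition by its COLLECT_DATE value, data that
trickles in around midnight still lands in the correct day's partition, so no
"clean break" is needed on the loading side.
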
> Keep a rolling process to add new
> partitions a few days ahead of time.
> Each day you can:
> alter table XXX drop partition YYY;
> to get rid of one day's data. No rollback,
> no redo, just a few seconds of invalidation.
>
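A sketch of that rolling maintenance, using the same invented names as above
(partition names, boundary dates and tablespaces are placeholders):

  -- create a new day's partition a few days ahead of time
  alter table collection_data
      add partition p20000701
      values less than (to_date('2000-07-02','YYYY-MM-DD'))
      tablespace ts_sat;

  -- age out the oldest of the 40 days
  alter table collection_data drop partition p20000518;

The LOCAL index partitions are created and dropped along with the table
partitions, which is why the drop takes seconds instead of behaving like a
2-million-row delete.
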
> There are a few implementation details
> to worry about, but with 40 partitions
> you should have few problems.
>
> At 2M rows per table, and 50 bytes per
> row, aim for 10MB extents to get about
> 10 extents per partition. I would
> then think about 7 tablespaces, one
> for each day of the week, to help with
> spreading visibility of I/O, keeping
> tablespace sizes down, and making it
> easy to generate new partitions in the
> right place.
>
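To put numbers on that: 2,000,000 rows/day x 50 bytes/row is roughly 100MB of
raw data per table per day, so with block overheads and PCTFREE a day's
partition should come out somewhere around 100-150MB, i.e. on the order of
10-15 uniform 10MB extents.
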
> Make sure you use locally managed
> tablespaces (uniform size 10M).
>
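A sketch of one of those weekday tablespaces (the file path, size and name
here are placeholders; there would be seven of these, e.g. TS_MON through
TS_SUN, each holding the five or six daily partitions that fall on that
weekday):

  create tablespace ts_tue
      datafile '/u02/oradata/collect/ts_tue01.dbf' size 2000m
      extent management local uniform size 10m;

With locally managed, uniform extents there is no dictionary-managed
free-space fragmentation to clean up as partitions are dropped and
re-created week after week.
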
> --
>
> Jonathan Lewis
> Yet another Oracle-related web site: http://www.jlcomp.demon.co.uk
>
> mikea730_at_my-deja.com wrote in message <8j9ihd$n57$1_at_nnrp1.deja.com>...
> >I'm going to be setting up a database used for data collection
> >(inserts) and reporting. There will be very little updating.
> >
> >My questions are based around two tables that will be continually
> >collecting data via inserts. The average row size for the two tables
> >is 50 bytes each. I expect to collect roughly 2+ million rows in each
> >table per day. The only updates that will occur to the tables will be a
> >daily delete of rows older than 40 days, run as a single "delete" SQL
> >statement against each table.
> >
> >1) Any recommendations for storage allocations on these two tables?
> >Should they have a huge initial extent?
> >
> >2) What impact will the daily deletes have on the tables? I'll
> >basically be deleting 2 million rows a day. Could this cause problems?
> >(fragmentation?) What should I watch out for?
> >
> >3) Will I need huge rollback segments to run the deletes?
> >
> >I'm running Oracle 8i on Solaris 2.7
> >
> >Thanks in advance for any advice!!!!
> >Mike
> >
> >
>
>

Received on Tue Jun 27 2000 - 00:00:00 CDT
