Re: Huge Tables/Tablespaces???

From: Jim Kennedy <kennedy-family_at_home.com>
Date: Wed, 21 Nov 2001 00:33:19 GMT
Message-ID: <jDCK7.51634$XJ4.30629512_at_news1.sttln1.wa.home.com>


  1. Get (and read) Jonathan Lewis's book, Practical Oracle 8i.
  2. Get (and read) Thomas Kyte's book, Expert One-on-One Oracle.

I would use locally managed tablespaces and keep the extents a uniform size. You are going to have to describe the micro-array gene-expression data: is it grouped somehow (by experiment?), and how large is each group? Are you mainly going to query by experiment or by some other, more appropriate attribute? While 75 GB is large, how much of it would be read at a time? The more that is read at a time, and the less tolerant you are of waiting, the more disks you are going to need to spread the IO across.
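
For example, locally managed tablespaces with uniform extents and a table partitioned by experiment might look roughly like this (the names, file paths, sizes, columns, and partition boundaries below are only placeholders for illustration; plug in your own):

  -- Locally managed tablespaces with a uniform extent size
  CREATE TABLESPACE marray_data01
    DATAFILE '/u02/oradata/marray/marray_data01.dbf' SIZE 4000M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;

  CREATE TABLESPACE marray_data02
    DATAFILE '/u03/oradata/marray/marray_data02.dbf' SIZE 4000M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;

  -- Expression data partitioned by experiment, so a query for one
  -- experiment only has to touch one partition
  CREATE TABLE expression_result (
    experiment_id  NUMBER  NOT NULL,
    gene_id        NUMBER  NOT NULL,
    signal         NUMBER,
    background     NUMBER
  )
  PARTITION BY RANGE (experiment_id) (
    PARTITION exp_p01 VALUES LESS THAN (1001)     TABLESPACE marray_data01,
    PARTITION exp_p02 VALUES LESS THAN (MAXVALUE) TABLESPACE marray_data02
  );

With uniform extents in a locally managed tablespace you don't have to agonize over the number or size of extents per table; every extent in the tablespace is the same size, so there is nothing to fragment.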

Jim

"Castlemaine" <mjh20_at_columbia.edu> wrote in message news:d1a36aa1.0111201250.5e337dbe_at_posting.google.com...
> I am currently building a microarray
> database (Oracle9i on Red Hat 7.1) to house the wealth
> of gene-expression data that has been generated in our
> labs. Due to the nature of genetic information and the
> volume of data that needs to be stored for each
> experiment, certain tables will become extremely
> large. My sizing estimates indicate that several tables
> will grow to approximately 75GB within three years. I am
> aware that I will need to make use of table partitioning
> and RAID.
>
> But in the beginning (and to optimize performance), how
> exactly should I plan for such tables? More specifically,
> assuming a 75GB table: how large should the tablespaces
> be? How many datafiles per tablespace? How many extents
> per table? What extent size? Etc.
>
> Any help is greatly appreciated.