Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> 90GB table on Windows 2000

90GB table on Windows 2000

From: Andras Kovacs <andkovacs_at_yahoo.com>
Date: 8 Oct 2002 05:08:40 -0700
Message-ID: <412ebb69.0210080408.1777a9f2@posting.google.com>


I am having problems maintaining a very large database. The product has just been installed and we are trying to fine-tune the application. All transactions are recorded in a single table that cannot be split (but can be partitioned).
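For what it's worth, the partitioning we could apply would look roughly like this (table and column names are made-up placeholders, not our real schema; range partitioning by date, since the table records transactions over time):

```sql
-- Hypothetical schema: range-partition the transaction table by month
-- so queries and index maintenance can target a single partition.
CREATE TABLE transactions (
  trans_id    NUMBER,
  account_id  NUMBER,
  trans_date  DATE,
  amount      NUMBER,
  col_e       VARCHAR2(30),
  col_f       VARCHAR2(30)
)
PARTITION BY RANGE (trans_date) (
  PARTITION p200209 VALUES LESS THAN (TO_DATE('2002-10-01','YYYY-MM-DD')),
  PARTITION p200210 VALUES LESS THAN (TO_DATE('2002-11-01','YYYY-MM-DD'))
);
```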

Here are the facts:
- The table has 6 attributes and 1 index on 2 of the attributes;
  the structure is very simple.
- No more than 10 concurrent users on the system query the data.

My questions are:

  1. What type of backup should we use? (We are thinking about replication and incremental backups, or maybe a third machine.)
  2. Our write performance is very good, but we have some problems with reads (at present we have 15GB of data and 320,000,000 rows). The first read for a given query takes 52 seconds; the second time, the query runs in 12 seconds with a 100% cache hit ratio. What type of hardware (controllers and disks) should we use to improve the 52-second cold read? Is there anything we can do to reduce the 12-second cached reads?
  3. I have tried to rebuild the index on the table after having dropped it. It is still running ... I had to configure a 15GB temporary tablespace. Any advice on speeding up the index reconstruction?
  4. What would be the benefit of switching from NT to Unix?
  5. If somebody has a similar-sized system, could you tell us what type of hardware you have?
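For context on question 3, the kind of statement involved is roughly the following (index, table, and column names are placeholders, not our real schema; the index covers 2 of the table's 6 columns, as described above):

```sql
-- Hypothetical names. NOLOGGING skips redo generation during the
-- build, and PARALLEL lets Oracle build the index with multiple
-- slave processes instead of a single serial scan and sort.
CREATE INDEX trans_idx ON transactions (col_a, col_b)
  NOLOGGING
  PARALLEL;
```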

Thanks for your time,
Andras

Received on Tue Oct 08 2002 - 07:08:40 CDT

