Re: Large table and buffer cache concerns
Hey Joel,
Quad-socket, dual-core, Red Hat Linux x86-64, in front of a 128-spindle SAN. Not a monster rig, but it should be sufficient.
I'm pushing for a hybrid solution: normalizing out and streamlining/trimming the dataset where possible while still meeting the initial requirements. Effectively, that means pruning 780 of the 1000 records per character down from 43 bytes per record to just 8-12 bytes and moving them into their own table (table k). From there, I'd set both tables j and k up as single b*tree cluster tables, using the customer id (c.a) as the cluster key.

There's plenty of benchmarking left to do to make sure I'm not shooting myself in the foot by slowing down the insert and update calls with the cluster tables. The data sets are still a bit larger than I'd like, so for each customer I'm looking at two 8K DB blocks each for tables j and k, 4 in all, to fetch all of a customer's records.
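Roughly the DDL I have in mind for the clustered layout, as a sketch only: the cluster name, column names, datatypes and the SIZE value are placeholders, and the real sizing has to come out of the benchmarking.

  CREATE CLUSTER cust_cluster (cust_id NUMBER)
      SIZE 8192;  -- aim for about one 8K block per customer

  -- an index cluster needs its cluster index before any rows go in
  CREATE INDEX cust_cluster_idx ON CLUSTER cust_cluster;

  -- table k: the 780 trimmed records per character at 8-12 bytes each
  CREATE TABLE k (
      cust_id   NUMBER,
      attr_id   NUMBER,
      attr_val  NUMBER
  ) CLUSTER cust_cluster (cust_id);

  -- table j gets the same treatment, clustered on cust_id as well
  -- (either in this cluster or one of its own)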
Received on Thu Apr 12 2007 - 22:22:50 CDT