Home -> Community -> Usenet -> c.d.o.server -> Minimal storage
My database needs to store a single column with many rows (> 2,000,000,000) of numeric data of indeterminate resolution.
Storing the data in an indexed table with an id column is very inefficient, so I have tried storing the data in a single-column (NUMBER) table. I built my own pseudo-index table that records the starting row number for each new block and file (taken from the ROWID). From this pseudo-index table I can efficiently calculate a ROWID for any row number in the large table. Storage works out to around 150 rows per block, so the pseudo index is fairly compact. The stored data is never updated and has no free space.
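A minimal sketch (in Python rather than PL/SQL, and with a made-up block-address format) of the lookup scheme described above: each pseudo-index entry records the first row number stored in a data block, so locating any row number is a binary search over the entries plus a slot offset within the block.

```python
import bisect

# Hypothetical pseudo-index: (start_row_number, block_address) pairs,
# kept sorted by start_row_number. With ~150 rows per block, one small
# entry here covers ~150 rows of the big table.
pseudo_index = [
    (1,   "file1.block1"),
    (151, "file1.block2"),
    (301, "file1.block3"),
]

def locate(row_number):
    """Return (block_address, slot_within_block) for a given row number."""
    starts = [start for start, _ in pseudo_index]
    # Rightmost entry whose starting row number is <= row_number.
    i = bisect.bisect_right(starts, row_number) - 1
    start, block = pseudo_index[i]
    return block, row_number - start

print(locate(200))  # row 200 is in the second block, 49 rows in
```

The real version would translate (file, block, slot) back into an Oracle ROWID, but the search itself is the same idea.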
My problem is that while I can build or rebuild my pseudo-index table for the whole table with a full table scan, I cannot afford a full scan just to maintain it. It occurs to me that Oracle must be recording the addresses of newly added rows in the redo (transaction) log before committing them. Is there some way I can read these addresses to maintain my index table as I load blocks of data? Or is there some other way to find the blocks containing data above the highest 'indexed' row number? Or have I lost the plot in attempting to cheat Oracle when there is a more 'normal' way of achieving the same result?
I am presently using Oracle 7.3, but could consider 8 if it offers a solution.
Thanks in advance for your help
Don Fraser Received on Wed Feb 10 1999 - 13:48:27 CST