Re: 500 million rows every (8 hour) day

From: Mark D Powell <Mark.Powell2_at_hp.com>
Date: Tue, 24 Nov 2009 07:25:07 -0800 (PST)
Message-ID: <6c87ae3d-3a98-4cae-a322-ded87e767c6a_at_v37g2000vbb.googlegroups.com>



On Nov 24, 8:55 am, Richard Last <richard..._at_yahoo.com> wrote:
> On Nov 24, 3:27 am, "Arne Ortlinghaus" <Arne.Ortlingh..._at_acs.it>
> wrote:
>
> > Surely it will depend much on quick hard drives especially for writing and
> > enough main memory for the SGA to hold the indexes needed and much of the
> > data of one day.
>
> > Arne Ortlinghaus
> > ACS Data Systems
>
> What has been mentioned by others... Oracle RAC, 96GB RAM, several
> hundred TB of SAN storage.  The hardware budget is bigger than the GDP
> of some countries!!!!!

Richard, I think the answer depends on what the data will be used for and how. That is, how will the data be queried? Will it permanently reside in the database, or is the database just a holding point until the data is filtered and transferred to its permanent home? Is the initial insert done to the data's final store within the database, or is it moved within the database afterward?

I am not a big fan of using sqlldr direct path loads in a production environment. If a direct path load job fails, the indexes are left unusable and require rebuilding. Having to rebuild large indexes on a massive single point of insertion table pretty much brings the system to a halt. If there will be multiple, concurrent sources of data input, I would insist on using conventional path loads, or on programs written to use bulk inserts or the older array insert feature of the Pro* languages.
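The bulk/array-insert approach Mark recommends boils down to batching many rows into one round trip instead of inserting row by row. A minimal sketch of the idea, using Python's executemany() with sqlite3 as a stand-in (with Oracle the same pattern applies through the driver's executemany(); table and column names here are made up for illustration):

```python
# Batch rows client-side, then insert the whole batch in one call.
# sqlite3 is used as a stand-in database; the batching pattern is what matters.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event_log (event_id INTEGER, payload TEXT)")

# Build one batch of 5000 rows, then send it in a single executemany() call
# rather than 5000 individual INSERT round trips.
batch = [(i, f"payload-{i}") for i in range(5000)]
conn.executemany("INSERT INTO event_log VALUES (?, ?)", batch)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM event_log").fetchone()[0]
print(count)  # 5000
```

Unlike a failed direct path load, a failed batch like this simply rolls back and leaves the indexes usable, which is the point being made above.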

HTH -- Mark D Powell -- Received on Tue Nov 24 2009 - 09:25:07 CST
