Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: ORA-04030

From: Billy Verreynne <vslabs_at_onwe.co.za>
Date: Tue, 18 Sep 2001 11:36:06 +0200
Message-ID: <9o74io$23a$1@ctb-nnrp2.saix.net>


"Reid Lai" <reidlai_at_netvigator.com> wrote:

> Ah....let's increase the SHARED_POOL_SIZE first.

That was the first thing I tried Reid. Made no difference.

> If the problem still occurs, try to set SHARED_POOL_RESERVED_SIZE
> (check manual for correct parameter) in order to reserve enough
> continuous memory for your stored procedure or other database object.

Thanks for the suggestion, but also a no go.

Running GlancePlus on HP shows that the Unix process is just consuming memory at an astounding rate. It reaches about 250MB before it fails.

Some random thoughts. Are there any issues with 32-bit 8.1.5 running on a 64-bit HP-UX B.11? Are there any known problems with 8.1.5 resource/memory leakage? Any bugs on B.11 that could cause this? (BTW, tuning kernel parameters did not help - it only delayed the crash by a few tens of seconds.)

There are essentially two methods I can think of for populating a warehouse-type fact table, using a very large table as the raw source.

  1. You process a subset of data from the raw table (e.g. just one day's worth of data, which results in a couple of million rows). You then join this subset with the dimension tables and use the result set to populate the fact table. Repeat the process per subset until the complete raw data set has been processed.
  2. You process the complete raw table data set. However, instead of using a single massive join against all the dimension tables, you use smaller joins and process the raw source table multiple times. Multiple result sets are created (keyed on the rowid of the raw source table and containing the dimension keys). Finally these result subsets are joined (on rowid) and then added to the fact table.
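To make the two methods concrete, here is a rough SQL sketch. All table and column names (raw_source, fact_sales, the dim tables and their keys) are invented for illustration; the real statements involve about 8 dimension tables and a far more complex WHERE clause.

```sql
-- Method 1 (single pass, all dimensions at once):
-- join one subset of the raw table to every dimension table
-- and load the fact table directly.
INSERT INTO fact_sales
SELECT d1.dim1_key, d2.dim2_key, d3.dim3_key, r.amount
FROM   raw_source r, dim1 d1, dim2 d2, dim3 d3
WHERE  r.load_date = TRUNC(SYSDATE) - 1   -- one day's subset
AND    r.dim1_code = d1.dim1_code
AND    r.dim2_code = d2.dim2_code
AND    r.dim3_code = d3.dim3_code;

-- Method 2 (multiple passes, a few dimensions per pass):
-- resolve dimension keys in separate scans of the raw table,
-- keyed on its rowid, then merge the partial results.
CREATE TABLE keys_pass1 AS
SELECT r.rowid AS src_rowid, d1.dim1_key, d2.dim2_key
FROM   raw_source r, dim1 d1, dim2 d2
WHERE  r.dim1_code = d1.dim1_code
AND    r.dim2_code = d2.dim2_code;

CREATE TABLE keys_pass2 AS
SELECT r.rowid AS src_rowid, d3.dim3_key
FROM   raw_source r, dim3 d3
WHERE  r.dim3_code = d3.dim3_code;

-- Final merge of the partial result sets on rowid.
INSERT INTO fact_sales
SELECT p1.dim1_key, p1.dim2_key, p2.dim3_key, r.amount
FROM   keys_pass1 p1, keys_pass2 p2, raw_source r
WHERE  p1.src_rowid = p2.src_rowid
AND    p1.src_rowid = r.rowid;
```

The trade-off: method 2 scans the raw table more than once, but each individual join touches far fewer tables at a time.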

In my case, method 1 is failing: joining a subset of raw data (just over a million indexed rows) with about 8 or so (indexed) dimension tables, using a very complex WHERE clause due to all the join criteria. Memory is consumed at a large rate _while_ the results are being spooled to file (the process uses up to 250MB of memory before failing).

Method 2 seems to work fine. That means joining up to 50 million rows (from the raw source table) with one to three dimension tables at a time and creating a result set containing rowid and dimension keys. The raw data is processed via PQ (with each PQ process using less than 4MB of memory).

Am I missing something obvious here with how Oracle is using memory?

--
Billy
Received on Tue Sep 18 2001 - 04:36:06 CDT

