Home -> Community -> Usenet -> c.d.o.server -> Re: ORA-04030
"Reid Lai" <reidlai_at_netvigator.com> wrote
> Ah....let's increase the SHARED_POOL_SIZE first.
That was the first thing I tried Reid. Made no difference.
> If the problem still occurs, try to set SHARED_POOL_RESERVED_SIZE
> (check manual for correct parameter) in order to reserve enough
> contiguous memory for your stored procedure or other database object.
Thanks for the suggestion, but also a no go.
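For anyone following along, those parameters go in the init.ora (they are static in 8.1.5, so a bounce is needed). The values below are purely illustrative, not the actual settings on this system:

```
# init.ora fragment -- example values only, not this system's settings
shared_pool_size          = 150000000   # total shared pool, in bytes
shared_pool_reserved_size = 15000000    # contiguous slice reserved for large allocations
```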
Running GlancePlus on HP shows that the Unix process is just consuming memory at an astounding rate. It reaches about 250MB before it fails.
Some random thoughts. Are there any issues with 32-bit 8.1.5 running on 64-bit HP-UX B.11? Are there any known problems with 8.1.5 resource/memory leakage? Any bugs in B.11 that could cause this? (btw, tuning kernel parameters did not help - it only delayed the crash by a few tens of seconds).
There are essentially two methods I can think of for populating a warehouse-type fact table from a very large raw source table.
In my case, method 1 is failing. It joins a subset of the raw data (just over a million indexed rows) with about eight (indexed) dimension tables, using a very complex WHERE clause that carries all the join criteria. Memory is consumed at a huge rate _while_ the results are being spooled to file (the process uses up to 250MB of memory before failing).
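To make method 1 concrete, a sketch of the kind of statement involved (all table and column names here are made up for illustration, not the real schema):

```sql
-- Method 1 (sketch): one big join of the raw subset against all dimensions
-- at once, spooled to file from SQL*Plus.  Names are hypothetical.
SELECT d1.dim1_key, d2.dim2_key,       -- ...keys from the other dimensions...
       r.measure1, r.measure2
  FROM raw_source r, dim1 d1, dim2 d2  -- ...roughly 8 dimension tables in all
 WHERE r.load_date >= TO_DATE('2001-09-01', 'YYYY-MM-DD')  -- ~1 million row subset
   AND r.dim1_code = d1.dim1_code
   AND r.dim2_code = d2.dim2_code;
   -- ...plus the remaining join predicates and filters
```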
Method 2 seems to work fine. It joins up to 50 million rows (from the raw source table) with one to three dimension tables at a time, creating a result set containing the rowid and the dimension keys. The raw data is processed via PQ (with each PQ slave using less than 4MB of memory).
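Method 2 looks something like this per pass (again, the names are hypothetical); the result sets from the separate passes can then be stitched back together on the raw rowid:

```sql
-- Method 2 (sketch): resolve only one to three dimensions per pass,
-- keeping just the rowid and the surrogate keys.  Names are hypothetical.
SELECT /*+ PARALLEL(r, 8) */
       r.rowid AS raw_rowid,
       d1.dim1_key,
       d2.dim2_key
  FROM raw_source r, dim1 d1, dim2 d2
 WHERE r.dim1_code = d1.dim1_code
   AND r.dim2_code = d2.dim2_code;
```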
Am I missing something obvious here with how Oracle is using memory?
-- Billy
Received on Tue Sep 18 2001 - 04:36:06 CDT