
Re: SGA rules of thumb?

From: kathy duret <>
Date: Tue, 8 May 2007 08:40:40 -0700 (PDT)
Message-ID: <>

Check to see what is running when these ORA-600 errors happen.

  Check your snapshots; they will tell you a lot: waits, what SQL is running, etc. Very powerful and helpful. You may need to change them to run more often or to capture more specific items.

  What cron jobs and dbms_jobs are you running? Anything during these periods?

  Is Data Pump running while you are doing RMAN backups?

  I have found that when Data Pump is going against large tables, it grabs a lot of resources. I had to revert to using exports at one place because Data Pump used too many resources, even when it was running during the slowest part of the night.

  Maybe you are running a process in parallel that is grabbing all the resources.

  I had some guy running a parallel 4 process on a single-CPU test box, and he couldn't understand why the box was thrashing. That was to analyze tables; when and how are you doing this? Setting too high a degree of parallelism for this can cause issues.

  Are you doing these large loads via the external tables at the time RMAN is trying to back up? This could cause some issues.

  Are you running RMAN through Veritas or another product to help you manage backups? I back up to tape, and my large pool size is

  Bottom line: look at what is running when RMAN is, or when these errors are happening. Check the trace files to see if they shed any light. Search Metalink for the argument codes along with the ORA-600, and use the ORA-600 lookup tool to see if it comes up with anything useful.

  Good Luck    

  Kathy Duret
  Been there and done that too many times...                       

Don Seiler <> wrote:
  Alright, I'll make this scenario more specific. I'm an RMAN user. My large_pool_size has been 16M "forever". The Backup & Recovery Advanced User's Guide [1] says that:

"If LARGE_POOL_SIZE is set, then the database attempts to get memory from the large pool. If this value is not large enough, then an error is recorded in the alert log, the database does not try to get buffers from the shared pool, and asynchronous I/O is not used."

and that the formula for setting LARGE_POOL_SIZE is as follows:

LARGE_POOL_SIZE = number_of_allocated_channels * (16 MB + ( 4 * size_of_tape_buffer ) )
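Plugging illustrative numbers into that formula gives a quick sanity check. The channel count below is an assumption for demonstration, not a value from this thread; for a disk backup the tape-buffer term drops to zero:

```shell
# Sketch of the LARGE_POOL_SIZE formula quoted above, in MB.
# channels and tape_buffer_mb are illustrative assumptions.
channels=4            # number of allocated RMAN channels (assumed)
tape_buffer_mb=0      # 0 when backing up to disk (no tape buffer)
large_pool_mb=$(( channels * (16 + 4 * tape_buffer_mb) ))
echo "LARGE_POOL_SIZE >= ${large_pool_mb}M"   # prints "LARGE_POOL_SIZE >= 64M"
```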

Of course I'm backing up to disk, not tape, but it would seem I should be using a lot more than 16M. However, I don't see any errors in the alert.log with "async" or "sync" in the text, so perhaps the large pool is still just fine?
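For what it's worth, a crude way to scan the alert log for those strings is a case-insensitive grep. The file path and message text below are fabricated for illustration; the real wording varies by Oracle version:

```shell
# Fabricated demo: write a sample line, then scan it the way you would
# scan the real alert log (e.g. grep -Ei 'a?sync' alert_SID.log).
alert=/tmp/alert_demo.log
echo "WARNING: falling back to synchronous I/O" > "$alert"
grep -Eic 'a?sync' "$alert"   # prints the count of matching lines: 1
```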


On 5/7/07, Don Seiler wrote:
> I'm wondering if any of you have general "rules of thumb" when it
> comes to sizing the various pools and db buffer cache within the SGA.
> I'm going to go back to static SGA rather than risk ASMM thrashing
> about and causing another ORA-00600 at 2:30 in the morning. I can see
> where ASMM left the sizes at last, but just wondering what human
> thinks of things.
> This is Oracle on RHEL3. sga_max_size is 1456M on 32-bit,
> going to be (at least) 8192M on 64-bit. The database is a hybrid of
> OLTP and warehouse. When I say "warehouse", I mean that large
> partitioned tables holding millions of records exist, and are bulk
> loaded via external tables and data pump throughout the day. Other
> than the bulk loading, those tables are read-only.
> Any advice would be appreciated (yes I've checked the V$*_ADVICE views as well).

Don Seiler
oracle blog:

Received on Tue May 08 2007 - 10:40:40 CDT
