
Re: How do I estimate the required SGA size?

From: MarkP28665 <markp28665_at_aol.com>
Date: 1997/10/06
Message-ID: <19971006223000.SAA29475@ladder02.news.aol.com>#1/1

From: terry.jones_at_inamedotcom >>
Paraphrase - He wanted some guidelines on how to size an sga and was not happy with previous answers to this question <<

There are no hard and fast rules to sizing an sga, but here is what I can think of right now.

The sga is basically the buffer pool, shared pool, and redo log buffer pool added together. Size each of these and the result is your starting sga.
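To make that concrete, here is a hypothetical init.ora fragment showing the three parameters involved (the values are just placeholders for the arithmetic, not recommendations):

    # hypothetical init.ora fragment -- the three pieces that make up the sga
    db_block_buffers = 2000        # buffer pool: 2000 blocks * db_block_size bytes (~8M at 4k blocks)
    shared_pool_size = 10000000    # shared pool, in bytes (10M)
    log_buffer       = 65536       # redo log buffer, in bytes (64k)
    # rough sga = 2000*4096 + 10000000 + 65536, plus some fixed overhead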

Allocate as many database block buffers as practical. There is no valid reason to allocate a huge area just because you can if it will not be actively used. Try to estimate how many concurrent users will be accessing your system. Figure each sql statement will read x (2 - 5) tables. Try to gauge the number of random reads vs the number of full table scans per user or per sql. Allocate the number of bytes read via the init.ora multi-block read parameter (db_file_multiblock_read_count) for the full table scans and throw in 2 index and 1 table block for each random read. Now add some blocks to the buffer pool to allow caching some index blocks for performance. If you come up with a really large figure, start with a smaller number and monitor the x$bh table. The state column value of zero equates to free blocks, 1 = read and modified, 2 = read and not modified, and 3 = being modified. When you have no free blocks you are fully utilizing the buffer pool and may need a larger one.
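For the monitoring, something along these lines should do it (run it as sys, since the x$ tables are only visible there; just a sketch):

    rem count buffers by state -- state 0 is free, per the list above
    select state, count(*)
      from x$bh
     group by state;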

The shared pool is defaulted to a small figure (was 3.5M last I checked). You should probably bump this up to at least 5M. If you are going to use packages, stored functions, and/or stored procedures then start with 10M. Packages take room. The dbms_output package was invented for debugging, but I have seen it used to generate reports. It is pretty easy for a report to need the max buffer of 1M. It does not take too many users running reports printed with dbms_output to eat up the shared pool, so add more if this routine is in use. As the total number of sql statements run by your application increases you will probably want to increase your pool by a meg or two. There are a couple of dictionary views that look at memory. I do not remember their names off the top of my head, but they have the word cache in their names so query sys.dba_views for them. It should not be too difficult to figure out how to monitor your memory utilization.
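To track those views down, a query like the following should turn them up (the exact names vary a bit by version; v$librarycache is one of the useful ones):

    rem list the memory/cache related dictionary views
    select view_name
      from sys.dba_views
     where view_name like '%CACHE%';

    rem reloads growing faster than pins usually means the shared pool is too small
    select sum(pins) pins, sum(reloads) reloads
      from v$librarycache;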

Log buffer use can also be monitored. Check the manual. I would suggest 64k. The default is pretty small, but in general log entries have to be written pretty regularly so there is no need to try for the max right off the bat.
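One way to watch it (assuming v$sysstat is available, which it should be) is to look at redo log space requests; a count that keeps climbing suggests the log buffer is too small:

    rem waits for space in the log buffer
    select name, value
      from v$sysstat
     where name = 'redo log space requests';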

You can overallocate your sga. It has been shown that very large buffer pools can cause Oracle to search for a free block for a period of time that exceeds the cost of an i/o to get the block. But do not worry about this on day 1. Instead look at your sga as a percentage of your total memory. If you take a huge sga, you can restrict the system memory available to support each Oracle user process to the point where the OS has to swap. I believe that Oracle experts told us about 3 years back that each Oracle user needs about 5M of memory. Multiply this by fifty to a hundred sessions and it can add up.
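As a made-up example of the arithmetic (the machine size here is only for illustration):

    100 sessions x 5M per user process = 500M just for user processes,
    so on a 512M box even a modest sga would push the OS into swapping.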

There are no real rules. The best way is to pick some reasonable numbers and monitor. Be willing to deallocate if for no other reason than to see what happens so that you can find your system's real need.

Mark Powell -- The only advice that counts is the advice that you follow, so follow your own advice

Received on Mon Oct 06 1997 - 00:00:00 CDT

