Re: Oracle and Memory

From: Rich Headrick <rheadric_at_macromedia.com>
Date: 1996/08/22
Message-ID: <321C69CE.38D8_at_macromedia.com>


Sorry to bother you,

I have a Sun SPARC 20 (dual CPU) with 128 MB of RAM. Are my init.ora parameters anywhere near correct for a system of this size? I'll be running Aurum's SalesTrak app and have about 40 concurrent user connections. Some tables will have as much as 1 million rows.

Any comments are greatly appreciated.

Rich

Here you go:
db_files = 20

#db_file_multiblock_read_count = 8      # SMALL
#db_file_multiblock_read_count = 16     # MEDIUM
db_file_multiblock_read_count = 32      # LARGE

#db_block_buffers = 200                 # SMALL
#db_block_buffers = 550                 # MEDIUM
db_block_buffers = 14648                # LARGE

#shared_pool_size = 3500000             # SMALL
shared_pool_size = 6000000              # MEDIUM
#shared_pool_size = 9000000             # LARGE

log_checkpoint_interval = 10000
open_cursors = 200

#processes = 50                         # SMALL
processes = 100                         # MEDIUM
#processes = 200                        # LARGE

#dml_locks = 100                        # SMALL
#dml_locks = 200                        # MEDIUM
dml_locks = 500                         # LARGE

#log_buffer = 8192                      # SMALL
#log_buffer = 32768                     # MEDIUM
log_buffer = 163840                     # LARGE

#sequence_cache_entries = 10            # SMALL
#sequence_cache_entries = 30            # MEDIUM
sequence_cache_entries = 100            # LARGE

#sequence_cache_hash_buckets = 10       # SMALL
#sequence_cache_hash_buckets = 23       # MEDIUM
sequence_cache_hash_buckets = 89        # LARGE

# audit_trail = true             # if you want auditing
# timed_statistics = true        # if you want timed statistics
max_dump_file_size = 10240       # limit trace file size to 5 MB each

# log_archive_start = true       # if you want automatic archiving
compatible = 7.1.0.0
global_names = TRUE

mts_dispatchers = "ipc,1"
mts_dispatchers = "tcp,1"
mts_max_dispatchers = 10
mts_servers = 1
mts_max_servers = 10
mts_service = tst1
mts_listener_address = "(ADDRESS=(PROTOCOL=ipc)(KEY=tst1))"
mts_listener_address = "(address=(protocol=tcp)(host=jerrylee)(port=1776))"
 

Thomas J. Kyte wrote:
>
> On 17 Aug 96 01:06:20 -0700, richu_at_msn.com (Richard Hu) wrote:
>
> >>This might sound like a totally idiotic question, since it
> >>flies in the face of reason. But an associate of mine just
> >>came back from an Oracle class where the statement was made that
> >>'having too much memory can cause problems with Oracle'. I
> >>thought that this was totally absurd, and the person in the
> >>class didn't ask the obvious question: why?
> >>
> >>Does this make sense to anyone, or was the instructor out to lunch?
> >>
> >>Thanks,
> >>Joe
> >
> >
> >More memory = Oracle heaven... I can't see how more memory could ever
> >really hurt. At best you can have a large majority of your database
> >in RAM... this would not HURT at all and would be lightning fast.
>
> Perhaps what they meant was that "allocating too much memory to Oracle can cause
> problems".
>
> As a rule of thumb, don't ever allocate over 1/2 of the real RAM on a machine to
> Oracle. Don't forget, we use dynamic memory as well as SGA memory. If you give
> it all to the SGA upon startup, you will be in paging heaven at runtime.
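
[For the box above: half of 128 MB is 64 MB. The LARGE SGA works out to roughly
37 MB, and 40 dedicated server connections at, say, 1 MB of PGA apiece (a rough
guess, not a measured number) would add another 40 MB on top of that, which is
the argument for running MTS as configured above.]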
>
> Also, having too many block buffers can harm performance. If my entire database
> was cached, it would HURT and could be painfully slow. Data on disk is easily
> addressable (e.g., rowid = file/block/row on block). Data in memory is a little
> less organized (read: sequentially searched). We can spend quite a bit of time
> (elapsed and CPU time) searching RAM for something that may or may not be there, OR
> we can do 1 or 2 I/Os and just get it. In many cases, doing the I/Os will be
> faster.
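
[One way to watch this trade-off on a live instance is the classic buffer cache
hit ratio from v$sysstat:

    SELECT 1 - (phy.value / (cur.value + con.value)) AS cache_hit_ratio
      FROM v$sysstat cur, v$sysstat con, v$sysstat phy
     WHERE cur.name = 'db block gets'
       AND con.name = 'consistent gets'
       AND phy.name = 'physical reads';

If the ratio is already high, adding db_block_buffers buys little and, per Tom's
point, mostly adds search cost.]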
>
> In order to support very large memory databases, you have to go about memory
> organization a little differently. Oracle with the VLM (Very Large Memory)
> option on 64-bit machines, for example, can handle gigabytes of information.
>
> Thomas Kyte
> Oracle Government
> tkyte_at_us.oracle.com
>
> http://govt.us.oracle.com -- Check out our web site! Brand new, uses Oracle Web Server and Database
>
> -------------------
>
> statements and opinions are mine and do not necessarily
> reflect the opinions of Oracle Corporation
Received on Thu Aug 22 1996 - 00:00:00 CEST
