
Re: Checkpoint stalls database for seconds

From: Steve Adams <steve.adams_at_ixora.com.au>
Date: Wed, 02 Feb 2000 21:09:43 GMT
Message-ID: <389899a0.178561467@news.eagles.bbs.net.au>


Hello Brian,

I have hit something like this before, but it was the log switch itself that was freezing things, rather than the checkpoint. The solution was to have a sleeping process hold open file descriptors on all the log files to speed up the open() system call. Have a look at the tip on my web site about "Tuning Log Switches" for more explanation and a script.
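
Just to sketch the idea here (this is not the script from the tip, and the log file paths below are only placeholders), a tiny C program along these lines is all that is needed: open every online redo log read-only, then sleep until killed.

#include <fcntl.h>      /* open(), O_RDONLY */
#include <stdio.h>      /* perror() */
#include <unistd.h>     /* pause() */

int main(void)
{
    /* Hypothetical paths -- substitute your own online redo log files. */
    static const char *logs[] = {
        "/u02/oradata/PROD/redo01.log",
        "/u02/oradata/PROD/redo02.log",
        "/u02/oradata/PROD/redo03.log"
    };
    int i;

    /* Open each log read-only and never close it, so a file descriptor
       is held on every log file for the life of the process. */
    for (i = 0; i < (int)(sizeof(logs) / sizeof(logs[0])); i++)
        if (open(logs[i], O_RDONLY) < 0)
            perror(logs[i]);

    /* Sleep forever; kill the process at instance shutdown. */
    for (;;)
        pause();
}

It could be started in the background once the instance is up, and killed again at shutdown.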

In any case, it would be helpful to see the output of V$SYSTEM_EVENT, to get more information on what your hold-ups really are.

Regards,
Steve Adams

http://www.ixora.com.au/

http://www.oreilly.com/catalog/orinternals/

http://www.christianity.net.au/



On Wed, 02 Feb 2000 21:06:50 +0100, Brian Rasmusson <brasmuss_at_bellesystems.com> wrote:

>Hi,
>
>we're doing some initial benchmarking of an OLTP application performing
>about 100 updates and 200-300 inserts a second. One session is estimated to
>be 1 update and 2-3 inserts. Initial results showed 200 sessions a second,
>but that has dropped to 100 because of problems with checkpoints.
>
>First let me point out that I'm not a DBA, but have consulted several to
>get to this configuration.
>
>What we see is that a checkpoint stalls the database for a couple of
>seconds, and since this is a real-time system, that cannot be tolerated.
>Where would you suggest we start digging?
>
>The platform is Oracle 8.0.5 on Sun Solaris, on a Sun 450 with 4 CPUs, 2 GB
>of RAM and around 10 disks. No RAID, just plain 10,000 RPM disks. The redo
>logs are stored on a separate disk with direct I/O enabled.
>
>Configuration clips:
>
>rollback_segments=(r1,r2,r3,r4)
>db_files = 100
>open_cursors = 256
>db_file_multiblock_read_count = 32
>db_block_buffers = 12800 ## 100M - memory size dependent
>db_block_lru_latches = 4 ## set to number of CPUs
>buffer_pool_keep = 8192
>shared_pool_size = 120M
>optimizer_mode = FIRST_ROWS
>dml_locks = 500
>log_buffer = 262144 ## 256K
>log_simultaneous_copies = 8 ## 2 * cpu_count
>log_small_entry_max_size = 32 ## no copy latch for log updates below this many bytes
>log_checkpoint_interval = 30
>log_checkpoint_timeout = 0
>lgwr_io_slaves = 4 ## min. one for each disk/RAID channel
>log_checkpoints_to_alert = false ## to save time??
>sequence_cache_entries = 100
>sequence_cache_hash_buckets = 89
>max_dump_file_size = 20480 ## 10M - max trace file size
>large_pool_min_alloc = 128K
>object_cache_optimal_size = 65536
>object_cache_max_size_percent = 100
>row_cache_cursors = 60
>session_cached_cursors = 20
>sessions = 100
>processes = 250
>db_writer_processes = 1 ## usually adequate with async I/O
>
>The tables are partitioned, so the table we update frequently is spread
>across 3 disks and its index across 3 other disks. I'm sure our own DBA
>will be able to tweak it further, but I would like to hear other opinions
>as well, as he has left the office for today :-)
>
>Thanks for any suggestions,
>Brian
Received on Wed Feb 02 2000 - 15:09:43 CST
