Re: SHARED_POOL_SIZE Increase Makes a Bstat Little Worse - Why?

From: Anbazhagan.G <anbu1_at_mbox3.singnet.com.sg>
Date: 1997/07/19
Message-ID: <01bc93d9$f76048c0$a0b815a5_at_default>


Hello,
When your system memory is 768 MB and you have given about 500 MB to Oracle, you are left with 268 MB for everything else, including the Oracle background processes. Unix itself needs memory, and so does every user session; I do not know how many users were on the system.

Since you have so generously gifted nearly all the memory to Oracle, Unix must have been swapping (in your case, since it is HP-UX, paging) in and out, eating away a lot of resources. Please use sar, swapinfo, etc. to find out the details of the paging, and also check the activity on the disks.
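A sketch of the checks being suggested. The HP-UX flags shown in the comments are assumptions from memory of HP-UX 10.x and should be verified against the man pages on your system; the only part that actually runs here is a portable check of which tools exist on the current host.

```shell
#!/bin/sh
# HP-UX 10.x invocations to try (flags assumed; verify with man pages):
#   sar -w 5 12      -> swapping/paging activity, 12 samples at 5s intervals
#   sar -d 5 12      -> per-disk activity, to find the busy spindles
#   swapinfo -tm     -> swap-space usage totals, in megabytes
#   vmstat 5 12      -> the po/pi columns show page-outs and page-ins
# Portable part of the sketch: report which of the tools this host has.
out=""
for tool in sar swapinfo vmstat; do
    if command -v "$tool" >/dev/null 2>&1; then
        line="$tool: found"
    else
        line="$tool: not found on this host"
    fi
    echo "$line"
    out="$out|$line"
done
```

Sustained nonzero page-out activity while the batch run is going is the symptom to look for.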

Regards
Anbu

Stan Towianski <stantow_at_ibm.net> wrote in article <33caad18.0_at_news1.ibm.net>...
> Hi,
>
> Would anyone please explain to me why changing the SHARED_POOL_SIZE
> upward in my initprod.ora had the following effect that it did?
>
> Oracle 7.3.2.3 is running on HP Unix 10.01 with 768MB ram. 1 oracle
> instance running.
>
> STARTING INITPROD.ORA VALUES:
> -----------------------------
> db_block_buffers = 120100
>
> shared_pool_size = 80800100
>
> sort_area_retained_size = 1100100 # sz of mem in bytes for in-memory sorts
> sort_area_size = 1100100          # sz of PGA mem to use for ext. (disk) sorts
>
> MY STARTING VALUES:
> -------------------
> SVRMGR> show sga
> Total System Global Area 368179576 bytes
> Fixed Size 38904 bytes
> Variable Size 122143104 bytes
> Database Buffers 245964800 bytes
> Redo Buffers 32768 bytes
> SVRMGR>
>
> AFTER CHANGES INITPROD.ORA VALUES:
> -----------------------------
> db_block_buffers = 120100
>
> shared_pool_size = 200800100
>
> sort_area_retained_size = 1100100 # sz of mem in bytes for in-memory sorts
> sort_area_size = 1100100          # sz of PGA mem to use for ext. (disk) sorts
>
> AFTER CHANGING INITPROD.ORA VALUES:
> -----------------------------------
> SVRMGR> show sga
> Total System Global Area 488179576 bytes
> Fixed Size 38904 bytes
> Variable Size 242143104 bytes
> Database Buffers 245964800 bytes
> Redo Buffers 32768 bytes
> SVRMGR>
>
>
> All I did was change my shared_pool_size from 80.8M to 200.8M per Oracle's
> suggestion, because in my utlbstat.sql report.txt I had a low shared SQL
> area hit ratio of 60%: (I assume this is the value the woman at Oracle
> was talking about)
>
> LIBRARY      GETS       GETHITRATI PINS       PINHITRATI RELOADS    INVALIDATI
> ------------ ---------- ---------- ---------- ---------- ---------- ----------
> BODY         0          1          0          1          0          0
> CLUSTER      0          1          0          1          0          0
> INDEX        0          1          0          1          0          0
> OBJECT       0          1          0          1          0          0
> PIPE         0          1          0          1          0          0
> SQL AREA     359689     .607       2268888    .871       8003       0
>                         ^
> TABLE/PROCED 149445     .999       253973     .999       72         0
> TRIGGER      0          1          0          1          0          0
>
> But what I got after increasing my shared_pool_size from 80.8M to 200.8M
> was:
> LIBRARY      GETS       GETHITRATI PINS       PINHITRATI RELOADS    INVALIDATI
> ------------ ---------- ---------- ---------- ---------- ---------- ----------
> BODY         0          1          0          1          0          0
> CLUSTER      0          1          0          1          0          0
> INDEX        0          1          0          1          0          0
> OBJECT       0          1          0          1          0          0
> PIPE         0          1          0          1          0          0
> SQL AREA     366475     .622       2299615    .875       7090       0
>                         ^
> TABLE/PROCED 145728     .999       248636     .999       58         0
> TRIGGER      0          1          0          1          0          0
>
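A side note on reading these rows: the figure usually watched when sizing the shared pool is not GETHITRATIO alone but the reload rate, RELOADS divided by PINS, with "under about 1%" a commonly cited rule of thumb (a guideline, not a documented Oracle threshold). Computing it for the SQL AREA rows above:

```shell
#!/bin/sh
# SQL AREA reload rate = RELOADS / PINS, from the two reports above.
reload_rate() {
    # awk supplies the floating-point division that sh arithmetic lacks
    awk -v r="$1" -v p="$2" 'BEGIN { printf "%.3f%%\n", 100 * r / p }'
}
echo "before: $(reload_rate 8003 2268888)"   # ~0.353%
echo "after : $(reload_rate 7090 2299615)"   # ~0.308%
```

Both runs are already well under the 1% rule of thumb, which suggests the shared pool was not actually starved even at 80.8M.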
> Not significantly better! And the following values seem to be worse!
> Does anyone know why, and what I can do about it?
> I cannot tell from any Oracle book whether just increasing shared_pool_size
> takes memory away from something else in the SGA, or whether it just increases
> the SGA overall without affecting the other SGA buffers, etc...
>
> By looking at the 'show sga' values it looks like it should have only
> increased the variable size (whatever that is) and not have affected the
> data buffers size, so I would have thought that things could only have
> gotten better and not worse!
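The two SHOW SGA listings above can be checked by arithmetic: the shared pool is part of the Variable Size component, while Database Buffers is sized independently as db_block_buffers times the block size. A quick sanity check using only the numbers quoted above (the 2 KB block size is inferred from them, not stated in the post):

```shell
#!/bin/sh
# Values copied from initprod.ora and the SHOW SGA output above.
sp_before=80800100        # shared_pool_size before
sp_after=200800100        # shared_pool_size after
var_before=122143104      # Variable Size before
var_after=242143104       # Variable Size after
buffers=245964800         # Database Buffers, identical in both listings
db_block_buffers=120100

# Variable Size grew by exactly the shared_pool_size increase:
echo "shared pool delta  : $((sp_after - sp_before))"    # 120000000
echo "variable size delta: $((var_after - var_before))"  # 120000000

# Database Buffers = db_block_buffers * db_block_size, so the buffer
# cache was untouched by the change:
echo "implied block size : $((buffers / db_block_buffers))"  # 2048
```

So the extra 120 MB came entirely out of free OS memory, not out of any other SGA component, which is consistent with the paging explanation in the reply above.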
>
> These values look worse to me in my report.txt:
> BEFORE:
> Statistic                   Total        Per Transact Per Logon    Per Second
> --------------------------- ------------ ------------ ------------ ------------
> CPU used by this session    654915       59537.73     38524.41     47.01
> CPU used when call started  654915       59537.73     38524.41     47.01
> .
> DBWR timeouts               4384         398.55       257.88       .31
> .
> background timeouts         8995         817.73       529.12       .65
> .
> calls to kcmgas             58           5.27         3.41         0
> calls to kcmgcs             81           7.36         4.76         .01
> .
> commit cleanout number succ 9190         835.45       540.59       .66
> .
> enqueue conversions         17           1.55         1            0
> enqueue releases            969          88.09        57           .07
> enqueue requests            963          87.55        56.65        .07
> .
> immediate (CR) block cleano 917          83.36        53.94        .07
> immediate (CURRENT) block c 67           6.09         3.94         0
> .
> logons cumulative           17           1.55         1            0
> .
> opened cursors cumulative   48538        4412.55      2855.18      3.48
> parse count                 359684       32698.55     21157.88     25.82
> parse time cpu              426533       38775.73     25090.18     30.62
> parse time elapsed          436762       39705.64     25691.88     31.35
> physical reads              56963        5178.45      3350.76      4.09
> physical writes             40012        3637.45      2353.65      2.87
> .
> redo blocks written         68719        6247.18      4042.29      4.93
> redo buffer allocation retr 81           7.36         4.76         .01
> redo entries                235600       21418.18     13858.82     16.91
> redo log space requests     2            .18          .12          0
> redo log space wait time    154          14           9.06         .01
> redo size                   69231460     6293769.09   4072438.82   4969.6
> redo small copies           235598       21418        13858.71     16.91
> redo synch time             33           3            1.94         0
> redo synch writes           17           1.55         1            0
> redo wastage                22128        2011.64      1301.65      1.59
> redo write time             26271        2388.27      1545.35      1.89
> redo writes                 8077         734.27       475.12       .58
> rollback changes - undo rec 70637        6421.55      4155.12      5.07
> rollbacks only - consistent 21300        1936.36      1252.94      1.53
> session connect time        4899508515   445409865    288206383.24 351698.26
> session logical reads       6537360      594305.45    384550.59    469.27
> session pga memory          1799272      163570.18    105839.53    129.16
> session pga memory max      1799272      163570.18    105839.53    129.16
> session uga memory          -135040      -12276.36    -7943.53     -9.69
> session uga memory max      988744       89885.82     58161.41     70.97
> sorts (memory)              45           4.09         2.65         0
> sorts (rows)                61155        5559.55      3597.35      4.39
> table fetch by rowid        903322       82120.18     53136.59     64.84
> table fetch continued row   11893        1081.18      699.59       .85
> table scan blocks gotten    1592         144.73       93.65        .11
> table scan rows gotten      5443         494.82       320.18       .39
> table scans (short tables)  116          10.55        6.82         .01
> total number commit cleanou 9203         836.64       541.35       .66
> transaction rollbacks       1            .09          .06          0
> user calls                  17389654     1580877.64   1022920.82   1248.27
> user commits                11           1            .65          0
> write requests              5419         492.64       318.76       .39
>
>
> AFTER:
> Statistic                   Total        Per Transact Per Logon    Per Second
> --------------------------- ------------ ------------ ------------ ------------
> CPU used by this session    925590       9542.16      18148.82     54.18
> CPU used when call started  925588       9542.14      18148.78     54.18
> .
> DBWR timeouts               5424         55.92        106.35       .32
> .
> background timeouts         11070        114.12       217.06       .65
> .
> calls to kcmgas             169          1.74         3.31         .01
> calls to kcmgcs             127          1.31         2.49         .01
> .
> commit cleanout failures: b 5            .05          .1           0
> commit cleanout number succ 9798         101.01       192.12       .57
> .
> enqueue conversions         97           1            1.9          .01
> enqueue releases            1661         17.12        32.57        .1
> enqueue requests            1658         17.09        32.51        .1
> enqueue timeouts            1            .01          .02          0
> .
> immediate (CR) block cleano 966          9.96         18.94        .06
> immediate (CURRENT) block c 364          3.75         7.14         .02
> .
> logons cumulative           51           .53          1            0
> logons current              18           .19          .35          0
> .
> opened cursors cumulative   60291        621.56       1182.18      3.53
> opened cursors current      1661         17.12        32.57        .1
> parse count                 366473       3778.07      7185.75      21.45
> parse time cpu              690328       7116.78      13535.84     40.41
> parse time elapsed          714518       7366.16      14010.16     41.83
> physical reads              59422        612.6        1165.14      3.48
> physical writes             40750        420.1        799.02       2.39
> .
> redo blocks written         69486        716.35       1362.47      4.07
> redo buffer allocation retr 107          1.1          2.1          .01
> redo entries                238393       2457.66      4674.37      13.95
> redo log space requests     1            .01          .02          0
> redo log space wait time    61           .63          1.2          0
> redo size                   69961231     721249.8     1371788.84   4095.37
> redo small copies           238392       2457.65      4674.35      13.95
> redo synch time             403          4.15         7.9          .02
> redo synch writes           105          1.08         2.06         .01
> redo wastage                65450        674.74       1283.33      3.83
> redo write time             32261        332.59       632.57       1.89
> redo writes                 8999         92.77        176.45       .53
> rollback changes - undo rec 70646        728.31       1385.22      4.14
> rollbacks only - consistent 21310        219.69       417.84       1.25
> session connect time        15324337005  157982855.72 300477196.18 897051.86
> session logical reads       6611954      68164.47     129646.16    387.05
> session pga memory          4578640      47202.47     89777.25     268.02
> session pga memory max      4578640      47202.47     89777.25     268.02
> session uga memory          1029720      10615.67     20190.59     60.28
> session uga memory max      2608424      26890.97     51145.57     152.69
> sorts (memory)              202          2.08         3.96         .01
> sorts (rows)                61800        637.11       1211.76      3.62
> table fetch by rowid        921491       9499.91      18068.45     53.94
> table fetch continued row   11968        123.38       234.67       .7
> table scan blocks gotten    1934         19.94        37.92        .11
> table scan rows gotten      6950         71.65        136.27       .41
> table scans (short tables)  149          1.54         2.92         .01
> total number commit cleanou 9841         101.45       192.96       .58
> transaction rollbacks       1            .01          .02          0
> user calls                  17670296     182168       346476.39    1034.38
> user commits                97           1            1.9          .01
> write requests              5745         59.23        112.65       .34
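One way to see where the second run got slower is to normalize parse cost: parse count barely changed between the runs, but parse time cpu jumped from 426533 to 690328. Dividing time by count (these columns are in hundredths of a second, per the usual Oracle 7 convention, an assumption worth verifying against the tuning guide):

```shell
#!/bin/sh
# CPU centiseconds per parse = "parse time cpu" / "parse count",
# using the values from the two statistics tables above.
per_parse() {
    awk -v t="$1" -v c="$2" 'BEGIN { printf "%.2f\n", t / c }'
}
echo "before: $(per_parse 426533 359684) cs/parse"   # ~1.19
echo "after : $(per_parse 690328 366473) cs/parse"   # ~1.88
```

Roughly 60% more CPU per parse after the change, with almost the same number of parses, fits the picture of the host paging under memory pressure rather than the shared pool itself misbehaving.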
>
>
>
> BEFORE:
> LATCH_NAME         GETS        MISSES      HIT_RATIO   SLEEPS      SLEEPS/MISS
> ------------------ ----------- ----------- ----------- ----------- -----------
> cache buffer handl 2           0           1           0           0
> cache buffers chai 13013159    0           1           0           0
> cache buffers lru  152827      18          1           18          1
> dml lock allocatio 163         0           1           0           0
> enqueue hash chain 1929        0           1           0           0
> enqueues           51257       0           1           0           0
> ktm global data    46          0           1           0           0
> latch wait list    58          0           1           0           0
> library cache      8867600     6           1           6           1
> library cache load 364         0           1           0           0
> list of block allo 96981       0           1           0           0
> messages           125534      2           1           2           1
> modify parameter v 17          0           1           0           0
> multiblock read ob 10          0           1           0           0
> process allocation 17          0           1           0           0
> redo allocation    258627      25          1           39          1.56
> row cache objects  11203595    2           1           2           1
> sequence cache     58          0           1           0           0
> session allocation 95504       0           1           0           0
> session idle bit   34827014    0           1           0           0
> session switching  221         0           1           0           0
> shared pool        7755558     23          1           23          1
>
> AFTER:
> LATCH_NAME         GETS        MISSES      HIT_RATIO   SLEEPS      SLEEPS/MISS
> ------------------ ----------- ----------- ----------- ----------- -----------
> cache buffer handl 39          0           1           0           0
> cache buffers chai 13140627    1           1           1           1
> cache buffers lru  156394      9           1           9           1
> dml lock allocatio 626         0           1           0           0
> enqueue hash chain 3390        0           1           0           0
> enqueues           54065       0           1           0           0
> ktm global data    57          0           1           0           0
> latch wait list    3898        0           1           0           0
> library cache      8897877     400         1           454         1.135
> library cache load 364         0           1           0           0
> list of block allo 97212       0           1           0           0
> messages           130512      1           1           1           1
> modify parameter v 50          0           1           0           0
> multiblock read ob 6           0           1           0           0
> process allocation 50          0           1           0           0
> redo allocation    263735      21          1           26          1.238
> row cache objects  10820105    116         1           129         1.112
> sequence cache     164         0           1           0           0
> session allocation 97249       0           1           0           0
> session idle bit   35388805    22          1           22          1
> session switching  270         0           1           0           0
> shared pool        6478946     1552        1           1564        1.008
>
> THE ONE WITH THE 120MB BIGGER SHARED_POOL_SIZE HAS MORE MISSES !?
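Raw miss counts only mean something relative to gets, though. A quick check of the shared pool latch rows above, using the commonly cited (rule-of-thumb, not Oracle-documented) threshold of about 1% misses before latch contention is considered a problem:

```shell
#!/bin/sh
# shared pool latch miss rate = MISSES / GETS, from the two reports above.
miss_pct() {
    awk -v m="$1" -v g="$2" 'BEGIN { printf "%.4f%%\n", 100 * m / g }'
}
echo "before: $(miss_pct 23 7755558)"     # ~0.0003%
echo "after : $(miss_pct 1552 6478946)"   # ~0.0240%
```

The miss rate did grow, but even the "worse" run is two orders of magnitude below the 1% rule of thumb, so latch contention is unlikely to explain the extra 15 minutes.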
>
>
> BEFORE:
> LATCH_NAME NOWAIT_GETS NOWAIT_MISSES NOWAIT_HIT_RATIO
> ------------------ ---------------- ---------------- ----------------
> cache buffers chai 442660 1 1
> cache buffers lru 107142 36 1
> library cache 52275 0 1
> multiblock read ob 10080 0 1
> process allocation 17 0 1
> row cache objects 63392 0 1
>
>
> AFTER:
> LATCH_NAME NOWAIT_GETS NOWAIT_MISSES NOWAIT_HIT_RATIO
> ------------------ ---------------- ---------------- ----------------
> cache buffers chai 528557 0 1
> cache buffers lru 110023 23 1
> library cache 53062 20 1
> multiblock read ob 1 0 1
> process allocation 50 0 1
> row cache objects 54500 5 1
>
> multiblock read ob IS WAY DIFFERENT!
>
> These are both from nightly batch program runs of the same set of programs.
> The 2nd night run was 15 minutes longer overall.
>
> Thanks.
>
>
>
> (posted newsgroup reply and sent email)
>
> stantow_at_ibm.net
>
>

Received on Sat Jul 19 1997 - 00:00:00 CEST
