
Re: shared pool kghupr1

From: Mark <simmons_mark_at_yahoo.com>
Date: 25 Mar 2004 09:20:28 -0800
Message-ID: <5366fb41.0403250920.bf88ae5@posting.google.com>


Hans,

You can try this...

Look for the queries with the greatest number of "Buffer Gets" in your statspack report and concentrate on rewriting them in order of buffer usage. "Latch free" normally means that sessions are contending for the same memory structures, and it's a CPU killer. If you take this corrective action, your CPU usage will also drop (probably dramatically with 1000 users on the system).
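For instance, if you can query the instance yourself rather than relying only on the "SQL ordered by Gets" section of the statspack report, something along these lines lists the worst statements by logical I/O. It's just a sketch (the v$sql columns used are standard, but treat the exact query as an illustration):

    -- Top 5 statements by buffer gets (logical reads), worst first
    SELECT *
      FROM (SELECT sql_text,
                   buffer_gets,
                   executions,
                   ROUND(buffer_gets / GREATEST(executions, 1)) AS gets_per_exec
              FROM v$sql
             ORDER BY buffer_gets DESC)
     WHERE ROWNUM <= 5;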

I'm betting you'll find less than 5 queries to look at.

If you don't know how to tune SQL, I'd recommend getting a copy of Dan Tow's "SQL Tuning".

Mark Simmons
Sr. Oracle DBA
Sabre Holdings, Southlake, TX

hansmayer1962_at_hotmail.com (bonminh lam) wrote in message news:<3c6b1bcf.0403250252.306ae1bc_at_posting.google.com>...
> Hello,
>
> At my client's site, running Oracle 9.1.2 on HPUX with a huge active
> user population (over 1000 simultaneous sessions), the database has
> recently been showing a high value for the latch free wait event (for
> the shared pool latch). The time spent waiting on this latch free event
> has gradually pushed it to the top of the list. Below is an extract
> from a recent statspack report taken over a period of 30 minutes:
>
> Event                               Waits  Timeouts  Total Wait Avg wait   Waits
>                                                        Time (s)     (ms)    /txn
> ---------------------------- ------------ --------- ----------- -------- -------
> latch free                        100,730         0      14,115      140     0.6
> db file sequential read         1,354,105         0       1,728        1     7.4
> db file scattered read          1,003,044         0       1,435        1     5.5
> log file parallel write           145,329         0         405        3     0.8
> SQL*Net message from dblink        10,406         0         226       22     0.1
> log file sync                      35,469         0         201        6     0.2
> SQL*Net more data to client       143,947         0         156        1     0.8
> buffer busy waits                  49,269         0         121        2     0.3
> log file sequential read              506         0          20       39     0.0
> log buffer space                      327         0          17       53     0.0
> process startup                        47        12          15      319     0.0
> wait list latch free                  244         0           5       21     0.0
> enqueue                                77         0           5       67     0.0
> control file parallel write           603         0           5        8     0.0
> SQL*Net break/reset to clien        7,553         0           4        1     0.0
> db file parallel read                  31         0           1       29     0.0
> log file switch completion              9         0           1       75     0.0
> SQL*Net more data from dblin          167         0           0        2     0.0
> db file single write                   40         0           0        4     0.0
> control file sequential read          600         0           0        0     0.0
> direct path read                    8,626         0           0        0     0.0
> LGWR wait for redo copy             2,362         0           0        0     0.0
> direct path write                   9,085         0           0        0     0.0
> log file single write                   4         0           0        9     0.0
> SQL*Net message to dblink          10,406         0           0        0     0.1
> row cache lock                          1         0           0        1     0.0
> async disk IO                         125         0           0        0     0.0
> library cache pin                       4         0           0        0     0.0
> SQL*Net message from client     2,687,566         0   3,665,221     1364    14.7
> pipe get                            1,745     1,064       5,309     3042     0.0
> jobq slave wait                       895       863       2,654     2966     0.0
> SQL*Net more data from clien       49,247         0          95        2     0.3
> SQL*Net message to client       2,687,267         0           3        0    14.7
> -------------------------------------------------------------
>
>
>                                                       NoWait              Waiter
> Latch Name               Where                        Misses     Sleeps   Sleeps
> ------------------------ -------------------------- -------- ---------- --------
> shared pool              kghupr1                            0     22,864   41,796
> shared pool              kghalo                             0     15,737    5,028
> shared pool              kghfrunp: alloc: wait              0      7,967      218
> shared pool              kghfrunp: clatch: nowait           0      2,654        0
> shared pool              kghfrunp: clatch: wait             0      2,415    4,318
> shared pool              kghfre                             0        730      614
> shared pool              kghalp                             0        479      243
> shared pool              kghfen: not perm alloc cla         0        148       39
> shared pool              kghfru                             0         11        7
> shared pool              kghdmp                             0          8        0
> -------------------------------------------------------------
>
>
> Can someone conclude from the shared pool latch details what is going
> on in the system? I do not have the DBA role at this site, but I am
> supposed to assist in diagnosing the problem; hence this posting.
>
> Thanks in advance for any clarification.
Received on Thu Mar 25 2004 - 11:20:28 CST
