
RE: RE: controlfile schema global enqueue lock

From: Joan Hsieh <Joan.Hsieh_at_mirror-image.com>
Date: Thu, 01 Feb 2001 08:28:41 -0800
Message-ID: <F001.002A771D.20010201082056@fatcity.com>

Dick,
The system is not in archivelog mode. If the database crashed, I was told they don't care at all; they would just rebuild a shell database and let the getter and feeder processes gradually repopulate it. (The getter checks for the URL in the database and, if it doesn't find it, goes to the origin server to fetch it.) Of course, that would impact performance, and end users in that region would experience slow access. We have just 5 tables, split across 22 separate raw disks, with no I/O problem at all. The question now is: where is the control file enqueue contention coming from? I thought too many log switches could cause the CF enqueue problem, but that doesn't seem to be it either. Right now we don't have that many, about 4 per hour, and we still have the enqueue problem. We didn't set any I/O slave parameters. We did set parallel_max_servers to 5, all the tables have 50 freelists, and we have 8 getter and feeder processes running all the time.
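
As a sanity check on where the CF waits are coming from, a join of V$LOCK to
V$SESSION shows who is holding or requesting the controlfile enqueue at any
given moment. This is only a sketch (standard data dictionary views):

REM Sessions holding (LMODE > 0) or waiting on (REQUEST > 0) the CF enqueue
SELECT l.sid, s.program, l.lmode, l.request
FROM   v$lock l, v$session s
WHERE  l.type = 'CF'
AND    l.sid  = s.sid;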

I ran a couple of Steve Adams' scripts:

SQL> @enqueue_stats.sql

TY GETS WAITS
-- ---------- ----------

CF    3272862     679426
CI        140          0
CU       4071          0
DL         29          0
DR         27          0
DV         83          0
HW    3646713        437
IS      22669          0
MR        158          0
PE       5337          0
PR         15          0
PS         65          0
RT          1          0
SQ       1902          0
ST       5763       4692
TM   12264283          0
TS         57          0
TT         28          0
TX   12099800         19
US       5781          0

SQL> @response_time_breakdown.sql

MAJOR    MINOR         WAIT_EVENT                                SECONDS
-------- ------------- ---------------------------------------- --------
CPU time parsing       n/a                                          4920
         reloads       n/a                                            70
         execution     n/a                                         74532

disk I/O normal I/O    db file sequential read                    373254
         full scans    db file scattered read                       2038
         direct I/O    direct path read                            80465
                       direct path write                           36378
         other I/O     control file parallel write                 15312
                       control file sequential read                 9082
                       db file parallel read                         537

waits    DBWn writes   rdbms ipc reply                               136
                       local write wait                                1
         LGWR writes   log file switch completion                  12681
         enqueue locks enqueue                                   5716812
         other locks   latch free                                  82345
                       library cache pin                           51513
                       buffer busy waits                           27627
                       row cache lock                               7403
                       library cache lock                             15
                       buffer deadlock                                 1
                       index block split                               0

latency  commits       log file sync                              165491
         network       SQL*Net more data from client              101088
                       SQL*Net more data to client                  1143
                       SQL*Net message to client                     457
                       SQL*Net break/reset to client                 287
         file ops      file open                                      37
         process ctl   process startup                                 1
         misc          refresh controlfile command                 13197
                       reliable message                                0

This database was bounced yesterday, so the statistics should be fresh. We pinned all the SYS packages into the shared pool.
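
For reference, pinning is normally done with DBMS_SHARED_POOL.KEEP; a sketch
(the package list here is only an example, not the exact list we keep):

REM Pin heavily used SYS packages after each bounce (example list only)
EXECUTE dbms_shared_pool.keep('SYS.STANDARD');
EXECUTE dbms_shared_pool.keep('SYS.DBMS_STANDARD');
EXECUTE dbms_shared_pool.keep('SYS.DBMS_UTILITY');

REM Confirm what is currently kept in the shared pool
SELECT owner, name, type
FROM   v$db_object_cache
WHERE  kept = 'YES';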

Thanks,

Joan

-----Original Message-----
Sent: Thursday, February 01, 2001 10:59 AM
To: Joan Hsieh; Multiple recipients of list ORACLE-L

Joan,

    HUMMMMMMM! Lots to think about here. For one, if you're not doing any backups, what is the plan for when a disk drive fails? Predicting that is similar to predicting the next asteroid impact on the earth: it's not a matter of if, but when. After that drive gets replaced, something has to be put back down there. Oh, you're on a RAID or mirrored system? That's nice, but it's the same scenario, just with more drives to worry about.

    But we digress. First, is this system in archivelog mode? If you're not doing backups, why archive the redo information? Is each of these redo files on a separate drive, or are they all on the hottest drive in the system? It might be a good idea to "spread the wealth". Is there only one control file or more than one? Are they on the same drive? Are they mixed in with DB files? Is one or more on very busy drives? What are the settings of DBWR_IO_SLAVES and DB_WRITER_PROCESSES?
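
    Those last few questions can be answered straight from the data dictionary;
something along these lines (just a sketch):

REM Control file locations and the writer/checkpoint-related parameters
SELECT name FROM v$controlfile;

SELECT name, value
FROM   v$parameter
WHERE  name IN ('dbwr_io_slaves', 'db_writer_processes',
                'log_checkpoint_interval', 'log_checkpoint_timeout');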

    I'd also have a discussion with the applications people. While I understand that the stuff on the Internet is constantly changing, not all of it changes every hour. News content normally has a 24-hour life expectancy. What would enhance the overall performance of your site is a process that accumulates historical life expectancies for the various types of content, thereby allowing the processes that refresh your URL data to have some intelligence built into them.

Dick Goulet

____________________Reply Separator____________________
Author: "Joan Hsieh" <Joan.Hsieh_at_mirror-image.com>
Date:       2/1/2001 10:25 AM

Dick,

The database size is 200 GB (datafiles), not including the redo logs. This is an Internet content delivery company, and the database structure is very simple. Processes called getters continually insert HTTP content into the database, while other processes keep checking whether each URL has expired and delete it if it has. So we have a lot of inserts, updates (updating the expiry date), and deletes. The interesting thing is that we don't do backups at all, not even cold backups, and no snapshots either. Since the content of the Internet keeps changing, there is no point in backing it up; the major concern is performance. We use raw disks on Sun (Solaris 5.6) or HP. The db_block_size is 8k, db_block_buffers is 50000, and the buffer cache hit ratio is around 85%.
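
For reference, the usual way to get that hit ratio is the V$SYSSTAT
calculation below (a sketch; measured since instance startup):

REM Buffer cache hit ratio = 1 - physical reads / (db block gets + consistent gets)
SELECT 1 - (phy.value / (db.value + con.value)) AS hit_ratio
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';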

Joan

-----Original Message-----
Sent: Thursday, February 01, 2001 9:33 AM
To: Joan Hsieh; Multiple recipients of list ORACLE-L

Joan,

    Something is indeed very fishy in London. The first question I would ask is: what is creating so much redo? 9 x 250 MB is roughly 2.2 GB of data changes inside of an hour. One thought is that the database is still in hot backup mode from a previously interrupted backup. The other is that several users are doing a lot of temporary table creation, but not in the temp space.
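
    The hot backup suspicion is easy to rule in or out; something like this
sketch will show any datafile still in backup mode:

REM Datafiles left in hot backup mode by an interrupted backup
SELECT d.name, b.status, b.time
FROM   v$backup b, v$datafile d
WHERE  b.file#  = d.file#
AND    b.status = 'ACTIVE';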

        What is the size of this database? Where does incoming data come from? Are there a lot of very frequently refreshed snapshots? Do they do a full refresh or a fast refresh? What is the db_block_buffers hit ratio like? A low hit ratio can indicate an excessively busy DBWR writing dirty blocks to disk, which can cause lots of checkpoints.

Dick Goulet

____________________Reply Separator____________________
Author: "Joan Hsieh" <Joan.Hsieh_at_mirror-image.com>
Date:       2/1/2001 6:05 AM

Dear Listers,

Our database in London has tremendous CF (control file) enqueue contention. Since I am new here, I checked the parameters and found log_checkpoint_interval set to 3200 and log_checkpoint_timeout left at its default (1800 sec). So I suggested setting log_checkpoint_interval to 100000000 and log_checkpoint_timeout to 0. The second thing I found is that we average 6 to 8 log switches per hour. We have 40 redo logs, each of them 250M, and our log buffer is set to 1M. I believed that after we changed the parameters the control file schema global enqueue lock contention would ease, but it got worse; we have 98% control file enqueue lock now. I think we have too many log switches (9 per hour now) and suggested increasing the logs to 500M, but our principal DBA is not convinced; he thinks the log buffer size should play a more important role.
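
For what it's worth, the switch rate can be confirmed from V$LOG_HISTORY with
something like this sketch:

REM Log switches per hour over the last day
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;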

Any ideas,

Joan

--
Please see the official ORACLE-L FAQ: http://www.orafaq.com
--
Author: Joan Hsieh
  INET: Joan.Hsieh_at_mirror-image.com

Fat City Network Services    -- (858) 538-5051  FAX: (858) 538-5051
San Diego, California        -- Public Internet access / Mailing Lists
--------------------------------------------------------------------
To REMOVE yourself from this mailing list, send an E-Mail message
to: ListGuru_at_fatcity.com (note EXACT spelling of 'ListGuru') and in
the message BODY, include a line containing: UNSUB ORACLE-L

(or the name of mailing list you want to be removed from). You may
also send the HELP command for other information (like subscribing).
Received on Thu Feb 01 2001 - 10:28:41 CST
