Re: Trace file getting huge

From: Saad Khan <saad4u_at_gmail.com>
Date: Wed, 5 Feb 2014 12:36:09 -0500
Message-ID: <CACqGOPLyGi4qahgzLM-E5byXH0QB6OX+jM5hOr8voUP-ZagVxA_at_mail.gmail.com>



Fellows,
Oracle came up with the famous solution of "Reboot the server". Before rebooting I noticed that when I shut down both databases and the Oracle services, the mega-sized trace file (<.._j0002_..trc) was gone. The server was rebooted, but as soon as the Oracle services started, the instance began dumping trace again and a similar j0002.trc file hit 4 GB within two minutes.
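
Since the dumping resumes the moment the instance comes up, my next thought
is to check whether the event is set persistently rather than only in
memory. A quick sanity check I plan to run (assuming the instance uses an
spfile; the trigger check is just a guess on my part):

----
-- is an EVENT parameter recorded in the spfile?
select sid, name, value from v$spparameter where name = 'event';

-- is anything re-enabling tracing at startup or logon?
select owner, trigger_name, triggering_event, status
from dba_triggers
where triggering_event like '%STARTUP%'
   or triggering_event like '%LOGON%';
----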

The alert file is full of these messages:

Errors in file e:\ora11g\diag\rdbms\qmdb\qmdb\trace\qmdb_j002_3288.trc:
Errors in file e:\ora11g\diag\rdbms\qmdb\qmdb\trace\qmdb_j002_3288.trc:
Wed Feb 05 12:32:15 2014
Errors in file e:\ora11g\diag\rdbms\qmdb\qmdb\trace\qmdb_j002_3288.trc:
Wed Feb 05 12:32:16 2014

Trace dumping is performing id=[cdmp_20140205123216]
Errors in file e:\ora11g\diag\rdbms\qmdb\qmdb\trace\qmdb_j002_3288.trc:
Trace dumping is performing id=[cdmp_20140205123221]
Wed Feb 05 12:32:27 2014
Errors in file e:\ora11g\diag\rdbms\qmdb\qmdb\trace\qmdb_j002_3288.trc:
Wed Feb 05 12:32:28 2014

Trace dumping is performing id=[cdmp_20140205123228]
Errors in file e:\ora11g\diag\rdbms\qmdb\qmdb\trace\qmdb_j002_3288.trc:
--

Oracle support is taking forever to come back. We can't see what is actually
being written in this trace because the file is inaccessible.

The SR has been escalated to P1. Any ideas?
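
One thing I will probably try while waiting on support (assuming the 3288
in the trace file name is the SPID of the J002 slave, which on Windows
should be the thread id, and that the process stays up long enough to
attach to) is to pull the event settings straight out of that process with
oradebug, and to see which scheduler job it is running:

----
-- attach to the job-queue slave and dump its event settings
oradebug setospid 3288
oradebug eventdump process
oradebug tracefile_name

-- which job is that slave actually executing?
select owner, job_name, session_id, slave_process_id
from dba_scheduler_running_jobs;
----

As a stopgap to keep the disk from filling, I may also cap trace sizes with
max_dump_file_size (e.g. alter system set max_dump_file_size = '500M';),
though that is a band-aid rather than a fix.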

Thanks for the help.


On Tue, Feb 4, 2014 at 2:00 PM, Saad Khan <saad4u_at_gmail.com> wrote:


> Andy,
> As I reviewed the alert file, I saw that the database was recycled
> yesterday, while the events were turned off on Jan. 28.
> ----
> Tue Jan 28 12:11:43 2014
> OS Pid: 246404 executed alter system set events '20011 trace name
> ERRORSTACK off'
> Tue Jan 28 12:11:56 2014
> OS Pid: 246404 executed alter system set events '29913 trace name
> ERRORSTACK off'
> -----
> The database was started at
>
> TO_CHAR(STARTUP_TIME
> --------------------
> feb-03-2014:06:04:20
> (should be PM)
>
> while the giant trace file has a timestamp of yesterday, February 03,
> 2014, 6:31:34 PM, which is right after the DB start.
>
> Now, following up on the blog you sent, when I checked what could be
> in the shared pool, it shows the following.
>
> SQL> select * from v$sgastat where name like '%Event%';
>
> POOL NAME BYTES
> ------------ -------------------------- ----------
> shared pool dbgdInitEventGrp: eventGr 216
>
> So there is some event going on; I just can't figure out which one or how
> to stop it.
>
> The numerous other trace files being generated with the "bucket" naming
> convention are coming from all sessions and background processes.
>
> Thanks again for your help.
>
>
> On Tue, Feb 4, 2014 at 1:35 PM, Andy Klock <andy_at_oracledepot.com> wrote:
>
>> Based on the behavior you are describing, it almost seems like step 3
>> didn't happen correctly and trace wasn't actually turned off. Is it
>> possible that after the restart the trace was turned off again with an
>> "alter system", and that this is why oradebug shows no events set at the
>> system level, but you are now in the state that Tanel describes in the
>> blog I referenced earlier?
>>
>> I'm guessing at this point, but you should be able to confirm the above
>> by analyzing the alert log.
>>
>> Andy
>>
>>
>>
>> On Tue, Feb 4, 2014 at 11:44 AM, Saad Khan <saad4u_at_gmail.com> wrote:
>>
>>> Since I just recently got pulled into this issue, I'm now given more
>>> detail:
>>> The sequence of activities so far is:
>>> 1) Initially there was some external file error during Data Pump.
>>> 2) Event-level tracing was enabled a few days back to investigate.
>>> 3) The tracing was turned off.
>>> 4) Users complained about performance yesterday.
>>> 5) The DB was recycled.
>>> 6) After that, these trace files are getting generated like crazy.
>>>
>>>
>
-- http://www.freelists.org/webpage/oracle-l
Received on Wed Feb 05 2014 - 18:36:09 CET
