Re: IO Freeze on RHEL 6.6 and hence high log file sync

From: Purav Chovatia <puravc_at_gmail.com>
Date: Fri, 20 Apr 2018 23:04:31 +0530
Message-ID: <CADrzpjG9KDMUC7AWwa56wf4AZ5g3kqXaJE5-skK3sXzMK+Zqbw_at_mail.gmail.com>



Hi Stefan,

ocssd.bin starts consuming 100% CPU (of one CPU thread, so overall system CPU is still abundantly free), and the following gets logged to the ocssd log:

[OCSSD(12265)]CRS-1719: Cluster Synchronization Service daemon (CSSD) clssnmvDiskPingThread_0 not scheduled for 71670 msecs.
[OCSSD(12265)]CRS-1719: Cluster Synchronization Service daemon (CSSD) clssnmvWorkerThread_0 not scheduled for 72020 msecs.
[OCSSD(12265)]CRS-1719: Cluster Synchronization Service daemon (CSSD) clssnmvDiskPingThread_0 not scheduled for 8000 msecs.
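For anyone hitting the same messages, the stall durations can be pulled straight out of those CRS-1719 lines. A minimal sketch - the excerpt below is just the lines quoted above written to a temp file; on a real system the log lives under the Grid home (path assumed, e.g. $GRID_HOME/log/<node>/cssd/ocssd.log):

```shell
# Sample CRS-1719 lines (copied from the ocssd log excerpt above)
cat > /tmp/ocssd_excerpt.log <<'EOF'
[OCSSD(12265)]CRS-1719: Cluster Synchronization Service daemon (CSSD) clssnmvDiskPingThread_0 not scheduled for 71670 msecs.
[OCSSD(12265)]CRS-1719: Cluster Synchronization Service daemon (CSSD) clssnmvWorkerThread_0 not scheduled for 72020 msecs.
[OCSSD(12265)]CRS-1719: Cluster Synchronization Service daemon (CSSD) clssnmvDiskPingThread_0 not scheduled for 8000 msecs.
EOF

# Report each starved CSSD thread and how long it went unscheduled, in seconds
grep 'CRS-1719' /tmp/ocssd_excerpt.log |
awk '{ for (i = 1; i <= NF; i++)
         if ($i == "for") printf "%s starved %.1f s\n", $(i-3), $(i+1)/1000 }'
```

Anything in the tens of seconds is far beyond normal scheduling jitter and points at the same whole-device stall described below, not at CSSD itself.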

The Oracle SR analyst mentioned a bug (Bug 26513709 <https://support.oracle.com/epmos/faces/BugDisplay?id=26513709&parent=SrDetailText&sourceId=3-17240857701>).

Many Thanks

On 19 April 2018 at 00:13, Stefan Knecht <knecht.stefan_at_gmail.com> wrote:

> Do you see any messages in either:
>
> - syslog
> - ASM or database alert log
> - any trace files created by either ASM or the database
>
> during that time?
>
>
>
> On Thu, Apr 19, 2018 at 12:25 AM, Purav Chovatia <puravc_at_gmail.com> wrote:
>
>> We are on RHEL 6.6 running a 2-node Oracle 12c RAC.
>>
>> All of a sudden, DATADG (the ASM diskgroup containing all data files)
>> freezes for 2-3 minutes, with iostat showing 100% utilisation, very high
>> svctm and very high w_await. But r/s, w/s, rsec/s and wsec/s are all 0.
>> Meaning no IO, but the disk still shows fully busy. So it's like a freeze.
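For reference, the pattern described above (device 100% busy yet serving zero requests) can be flagged mechanically from `iostat -x` output. A minimal sketch - the sample lines below are fabricated to mirror the symptom, not taken from the actual system, and the column positions assume the sysstat `iostat -x` layout on RHEL 6 (r/s=$4, w/s=$5, %util=$12):

```shell
# Fabricated iostat -x sample mirroring the symptom on sdb: no IO, 100% busy
cat > /tmp/iostat_sample.txt <<'EOF'
Device: rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await svctm %util
sda       0.00   2.00  1.00 15.00   16.00  240.00    16.00     0.10   1.20  0.50  3.20
sdb       0.00   0.00  0.00  0.00    0.00    0.00     0.00   120.00 950.00 60.00 100.00
EOF

# Flag devices that are saturated (%util >= 100) yet serving no IO (r/s + w/s == 0)
awk 'NR > 1 && $12 >= 100 && ($4 + $5) == 0 {
         print $1, "looks frozen:", $12 "% util with zero IO"
     }' /tmp/iostat_sample.txt
```

On a live system the same filter can be fed from `iostat -x 1`; a device matching this condition has requests stuck in its queue (note the high avgqu-sz) rather than completing slowly, which is consistent with a path or controller stall rather than plain overload.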
>>
>> Has anybody experienced this and can somebody help as to what could be
>> the reason?
>>
>> Transparent huge pages had not been disabled, and regular huge pages had
>> not been configured. Both have now been done.
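For anyone verifying the same change, the current state of both settings can be read back from the kernel. A minimal check - the sysfs path is the usual location, though on some RHEL 6 kernels it is /sys/kernel/mm/redhat_transparent_hugepage instead, so both are tried:

```shell
# Transparent huge pages: the value in [brackets] is the active mode;
# "[never]" means THP is disabled, as recommended for Oracle
for f in /sys/kernel/mm/transparent_hugepage/enabled \
         /sys/kernel/mm/redhat_transparent_hugepage/enabled; do
    [ -r "$f" ] && echo "$f: $(cat "$f")"
done

# Regular (static) huge pages reserved for the SGA: a non-zero
# HugePages_Total means the pool is configured (sized via vm.nr_hugepages)
grep '^HugePages' /proc/meminfo
```

After an instance restart, HugePages_Free dropping while HugePages_Rsvd stays low confirms the SGA actually landed in the huge page pool.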
>>
>> Many thanks
>>
>
>
>
> --
> //
> zztat - The Next-Gen Oracle Performance Monitoring and Reaction Framework!
> Visit us at zztat.net | _at_zztat_oracle | fb.me/zztat | zztat.net/blog/
>

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Apr 20 2018 - 19:34:31 CEST
