RE: intermittent long "log file sync" waits

From: Noveljic Nenad <nenad.noveljic_at_vontobel.com>
Date: Tue, 28 Jan 2020 21:16:11 +0000
Message-ID: <39760_1580246191_5E30A4AF_39760_8321_1_d4516636de124bc8befd035cfe12633e_at_vontobel.com>



In this case, though, there aren't any ASH entries for any of the lg* processes while the app sessions were waiting in "log file sync" for a long time.
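(For what it's worth, a quick way to double-check that -- just a sketch, assuming Diagnostics Pack licensing, that the samples are still in the in-memory ASH buffer, and with a placeholder time window -- is to look for samples from the log writer and its workers directly:)

    -- Look for ASH samples from LGWR and its LGnn workers around the time of the
    -- long "log file sync" waits. Note that ASH only samples sessions that are on
    -- CPU or in a non-idle wait, so an idle log writer leaves no rows.
    select sample_time, inst_id, session_id, program, event, time_waited
    from   gv$active_session_history
    where  program like '%(LG%'                                 -- matches LGWR, LG00, LG01, ...
    and    sample_time between timestamp '2020-01-28 16:00:00'  -- placeholder window
                           and timestamp '2020-01-28 17:00:00'
    order by sample_time;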

From: Cary Millsap <cary.millsap_at_method-r.com>
Date: Tuesday, 28 January 2020, 9:58 PM
To: Stefan Koehler <contact_at_soocs.de>
Cc: Chris Stephens <cstephens16_at_gmail.com>, oracle-l <Oracle-L_at_freelists.org>
Subject: Re: intermittent long "log file sync" waits

Chris, I recognize your Method R Workbench output there :-).

Recently, Jeff and I saw the same kind of behavior. A client had intermittent problems with a little program that usually runs in a second or so, but every once in a while—about six times per day—it would hang for 25+ seconds. The client traced instance-wide for about six hours. We ended up with several thousand trace files, but with Method R Workbench 9 (beta, but coming soon), we were able to find what we were looking for in just a few minutes. We found that each of the 25+ second executions was spending 24+ seconds in "log file sync" that the short-duration executions didn't have.

Happily, we had the trace file for LGWR, and we could line up the long "log file sync" calls in the user programs precisely on the timeline with "log file parallel write" calls in the LGWR trace, which were taking—you guessed it—25+ seconds apiece. The weird thing was that some of these super-long-latency writes were processing only 2 KB of data apiece. This was on IBM AIX. The AIX sysadmins couldn't see anything wrong in their aggregations of all write-call latencies (which you wouldn't—it's easy to hide half a dozen 25.000's in a list of millions of 0.003's). I'm not sure they ever really tackled the problem of why their hardware writes were taking so long.
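(To put a number on how well a handful of outliers hides in an aggregate, with made-up round figures: a million writes at 0.003 s plus six writes at 25 s average out to (1,000,000 x 0.003 + 6 x 25) / 1,000,006, roughly 0.00315 s, so the mean still looks perfectly healthy even though six calls each stalled for 25 seconds.)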

One hypothesis that Jeff and I were left with at the end was that perhaps the application we were trying to diagnose was having a noisy neighbor problem with some other system on some other VM on the same box.

Cary Millsap
Method R Corporation
Author of Optimizing Oracle Performance<http://amzn.to/OM0q75> and The Method R Guide to Mastering Oracle Trace Data, 3rd edition<https://amzn.to/2IhhCG6>

On Tue, Jan 28, 2020 at 2:31 PM Stefan Koehler <contact_at_soocs.de> wrote:

Hello Chris,
these are the most fun problems - I love to troubleshoot sporadic LGWR issues :-)

I just read through the whole thread quickly and here are my two cents:

  1. Can you profile the extended SQL trace file with an appropriate profiler (e.g. Method R Profiler or TVD$XTAT; the latter is free) to get an exact wait event histogram of the "log file sync" waits for the slow run?
  2. You don't see any ASH data for the LGWR process for this particular time frame, right? Might it be possible that the scalable log writer mode kicked in and the redo work is being done by the worker processes? Just check the LGWR trace file; it will tell you which mode (single log writer mode or scalable log writer mode) was used. If it is the scalable one, you have to look at the lg<nn> processes.
  3. You see "log file parallel write" for your loop workload, which is no surprise as you need to write redo. However, if you see such large latency spikes there as well (basically in the same ballpark as the "log file sync" ones - see point 1), it is very likely that you have an OS configuration problem (e.g. Linux kernel AIO slots, scheduler, device queue, etc.) or a storage problem. You can troubleshoot these "log file parallel write" syscalls, including the wchan, with Tanel Poder's psnapper (https://github.com/tanelpoder/psnapper) and see where they are stuck/waiting. A rough SQL cross-check along these lines is sketched right after this list.
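As a quick supplementary cross-check (not a substitute for the per-run trace profile in point 1), something along these lines could be run. The v$ counters are instance-wide and cumulative since startup, so take snapshots before and after a slow run and compare them:

    -- Which log writer processes exist on each instance? The LGnn workers only
    -- do redo writing in scalable log writer mode.
    select inst_id, pname, spid
    from   gv$process
    where  pname like 'LG%'
    order by inst_id, pname;

    -- Coarse latency histograms for the two events from points 1 and 3. These are
    -- cumulative since instance startup, so diff before/after snapshots rather than
    -- reading a single snapshot in isolation.
    select inst_id, event, wait_time_milli, wait_count
    from   gv$event_histogram
    where  event in ('log file sync', 'log file parallel write')
    order by inst_id, event, wait_time_milli;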

Have fun troubleshooting :-)

Best Regards
Stefan Koehler

Independent Oracle performance consultant and researcher
Website: http://www.soocs.de
Twitter: _at_OracleSK

> Chris Stephens <cstephens16_at_gmail.com> wrote on 28 January 2020 at 17:09:
>
>
> 3-node Oracle 19.3 RAC
> CentOS 7
>
> We have a SQLAlchemy/Python-based application workload that runs the exact same steps with widely varying response times, which appear to be related to varying "log file sync" wait times.
>
> Here is a profile of a "fast" run:
>
> CALL-NAME                       DURATION      %  CALLS     MEAN      MIN       MAX
> ------------------------------ --------- ------ ------ -------- -------- ---------
> SQL*Net message from client    53.197782  91.8% 10,092 0.005271 0.000177 28.568493
> EXEC                            3.759177   6.5%  9,816 0.000383 0.000000  0.239592
> row cache lock                  0.233153   0.4%    541 0.000431 0.000113  0.000941
> PARSE                           0.140399   0.2%  4,867 0.000029 0.000000  0.006620
> DLM cross inst call completion  0.137330   0.2%    956 0.000144 0.000004  0.000505
> library cache lock              0.100171   0.2%    215 0.000466 0.000151  0.002133
> library cache pin               0.079729   0.1%    216 0.000369 0.000056  0.000710
> FETCH                           0.058253   0.1%  1,062 0.000055 0.000000  0.004148
> log file sync                   0.048217   0.1%    149 0.000324 0.000259  0.000505
> CLOSE                           0.045416   0.1%  4,929 0.000009 0.000000  0.000073
> 20 others                       0.135624   0.2% 11,854 0.000011 0.000000  0.001700
> ------------------------------ --------- ------ ------ -------- -------- ---------
> TOTAL (30)                      57.935251 100.0% 44,697 0.001296 0.000000 28.568493
>
>
> Here is a profile of a "slow" run:
>
> CALL-NAME                        DURATION      %  CALLS     MEAN      MIN        MAX
> ------------------------------ ---------- ------ ------ -------- -------- ----------
> SQL*Net message from client    131.186118  61.0% 10,092 0.012999 0.000212 106.789360
> log file sync                   79.291166  36.8%    150 0.528608 0.000264   2.986575
> EXEC                             3.728402   1.7%  9,816 0.000380 0.000000   0.221403
> row cache lock                   0.248868   0.1%    542 0.000459 0.000111   0.001036
> PARSE                            0.164267   0.1%  4,867 0.000034 0.000000   0.004652
> DLM cross inst call completion   0.146981   0.1%    957 0.000154 0.000005   0.001188
> library cache lock               0.104354   0.0%    218 0.000479 0.000160   0.000728
> library cache pin                0.082504   0.0%    202 0.000408 0.000157   0.000672
> FETCH                            0.056687   0.0%  1,062 0.000053 0.000000   0.003969
> CLOSE                            0.043590   0.0%  4,929 0.000009 0.000000   0.000180
> 20 others                        0.142044   0.1% 11,866 0.000012 0.000000   0.001792
> ------------------------------ ---------- ------ ------ -------- -------- ----------
> TOTAL (30)                     215.194981 100.0% 44,701 0.004814 0.000000 106.789360
>
>
> The weird thing is that I don't see correspondingly long log I/O waits (the await column) in iostat output.
>
> I have a ticket open with Oracle, but does anyone have suggestions for finding the root cause and/or a solution?

--
http://www.freelists.org/webpage/oracle-l



