Re: RAC - Global Cache Transfer Times

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Wed, 7 Aug 2019 08:42:45 +0000
Message-ID: <CWXP265MB1032E91E4A029CC94369B54BA5D40_at_CWXP265MB1032.GBRP265.PROD.OUTLOOK.COM>


The answer to your question is yes.

I made the mistake of assuming that the volume of traffic was typical of the RAC systems I usually see.

There's always the possibility that a few blocks are particularly hot and produce extreme results in terms of busy buffers and congested buffers, but their impact is usually hidden in this part of the report by the vast bulk of "quick and boring" buffers that are flying around. (Which is why the Event Histogram can be very helpful as it will show if the average is hiding a few extreme cases.)
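
Something along these lines against gv$event_histogram would show the wait-time distribution, so you can see whether a low average is hiding a long tail (a rough sketch only; the event-name filter is just a guess at which events are relevant to your system):

--  Sketch: distribution of global cache block transfer times.
--  Buckets in v$event_histogram are power-of-two milliseconds.
select
        inst_id,
        event,
        wait_time_milli,
        wait_count
from    gv$event_histogram
where   event like 'gc%block%'
order by
        inst_id, event, wait_time_milli
;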

If there's any indication that slow transfers are introducing a real performance threat it's always possible to query v$active_session_history for the wait (the dba_hist_xxx equivalent is usually too sparse to catch a statistically significant sample from a small number of waits of sub-second duration) to see if any pattern drops out from the file and block ids of the slowest blocks.
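
For example, a rough sketch along these lines (assuming, as is usual for the gc block waits, that p1/p2 carry the file# and block#):

--  Sketch: slowest gc waits captured by ASH, to see whether the
--  same file/block (or object) keeps appearing near the top.
--  time_waited is in microseconds and is populated only on the
--  final sample of a wait, hence the filter on zero.
select
        sample_time,
        event,
        p1              file#,
        p2              block#,
        current_obj#,
        time_waited
from    gv$active_session_history
where   event like 'gc%'
and     time_waited > 0
order by
        time_waited desc
;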

Regards
Jonathan Lewis



From: Jack van Zanen <jack_at_vanzanen.com> Sent: 07 August 2019 01:42
To: Jonathan Lewis
Cc: oracle-l_at_freelists.org
Subject: Re: RAC - Global Cache Transfer Times

Hi Jonathan,

We don't have a reporting node and an update node.

Between midnight and 6 am we have an ETL that loads all the changes from our main database, and after that the users run their reports (including scheduled reports).

This was between midnight and 00:30, so there were no reports running at the time.

parallel_force_local is set to TRUE.

Unless I am going blind, I am not seeing gc cr disk read waits in outrageous numbers.

Can this also be skewed by the fact that there weren't many blocks going across the interconnect?

Mind you, this question is for education purposes more than anything.

[image.png]

--
http://www.freelists.org/webpage/oracle-l