Re: dbms_diskgroup read very slow on exa

From: Laurentiu Oprea <laurentiu.oprea06_at_gmail.com>
Date: Thu, 25 Mar 2021 13:33:12 +0200
Message-ID: <CA+riqSV9BqcSf+4WW2tso5quTxx2g5jf4dM3gbX1hvquVWVgzA_at_mail.gmail.com>



Thank you for your answers.

It looks like the majority of log-mining replication tools use dbms_diskgroup functions to read directly from ASM. I found some details about ASM internals here:

http://canali.web.cern.ch/fromWiki/pdbService_asm_internals.pdf

This note provides details on how to dump blocks from ASM: How to Dump or Extract a Raw Block From a File Stored in an ASM Diskgroup (Doc ID 603962.1)
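For reference, the call sequence the note uses is roughly getfileattr / open / read / close against dbms_diskgroup. Below is a minimal sketch of that sequence; the file name is a placeholder, dbms_diskgroup is undocumented, and the parameter names and order reflect my reading of the note, so verify against Doc ID 603962.1 before use:

    -- connected to the ASM instance (e.g. sqlplus / as sysasm); file name is a placeholder
    declare
      fname    varchar2(512) := '+RECO/MYDB/ARCHIVELOG/2021_03_25/thread_1_seq_1234.456.1068123456';
      ftype    varchar2(50);
      fsize    number;        -- file size in logical blocks
      blksize  number;        -- logical block size in bytes (512 for archived redo)
      handle   number;
      pblksize number;
      buf      raw(32767);
    begin
      dbms_diskgroup.getfileattr(fname, ftype, fsize, blksize);
      dbms_diskgroup.open(fname, 'r', ftype, blksize, handle, pblksize, fsize);
      dbms_diskgroup.read(handle, 1, blksize, buf);  -- buf now holds the first block
      dbms_diskgroup.close(handle);
    end;
    /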

For my particular situation, after enabling a 10046 trace I concluded that 21% of the time was I/O at the ASM level (ASM Fixed Package I/O), 78% was SQL*Net message from client and 1% was SQL*Net more data to client. Using the above Oracle note I simulated dumping 2 GB of archived redo to a local disk, which took around 6 minutes. Increasing pga_aggregate_target on the ASM instance from 400M to 3G improved my test to around 2 minutes and 30 seconds.
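For anyone repeating the test, the commands involved were nothing exotic; a minimal sketch run against the ASM instance (the trace level and target value are simply what I used):

    -- on the ASM instance, as sysasm
    alter session set events '10046 trace name context forever, level 8';
    -- ... run the dbms_diskgroup test, then pick up the trace file from the ASM diag dest

    -- the change that took my 2G dump test from ~6 min to ~2 min 30 s
    alter system set pga_aggregate_target = 3G scope=both sid='*';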

I also found that none of the scripts I use for tuning work against ASM instances (and it is the first time I have found myself looking at ASH data and session stats on an ASM instance).
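What I ended up running on the ASM instance were plain v$active_session_history and v$sesstat queries; a minimal sketch, assuming your ASM instance exposes ASH as mine did:

    -- wait profile over the last hour (ASM instance)
    select event, session_state, count(*) as samples
      from v$active_session_history
     where sample_time > sysdate - 1/24
     group by event, session_state
     order by samples desc;

    -- cumulative per-session I/O and PGA figures
    select s.sid, n.name, st.value
      from v$sesstat st
      join v$statname n on n.statistic# = st.statistic#
      join v$session  s on s.sid = st.sid
     where n.name in ('physical read total bytes', 'session pga memory')
     order by s.sid, n.name;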

Hope this info helps others as well.
Thanks.

On Tue, 23 Mar 2021 at 14:20, Mark W. Farnham <mwf_at_rsiz.com> wrote:

> The most likely culprits are network latency and competition for the devices
> you are trying to read, along with a few technical details of the
> connection dealt with in the context of old GoldenGate by Alex Fatkulin
> eleven years ago in this blog:
> https://blog.pythian.com/oracle-goldengate-extract-internals-part-ii/
>
>
>
> 10 MBPS with no competition also suggests you might have a per-session wire
> limit of 10 MBPS somewhere in your technical stack.
>
>
>
> I don't know Attunity well enough to know whether there is an option to read
> local archivelogs from the file system.
>
>
>
> IF there is, my suggestion would be to add an archivelog destination local to
> the place Attunity is running and use that local option to read the archived
> redolog copies there. That destination will have no ASM competition from
> production processes, and it replaces the per-ingestor network latency and
> bandwidth requirements with moving each archived redolog copy
> unidirectionally exactly once. A minimal sketch of such a destination follows.
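>
> A minimal sketch of such an additional destination (the destination number
> and local path below are placeholders):
>
>     -- on the database instances (not the ASM instance)
>     alter system set log_archive_dest_3 = 'LOCATION=/u01/app/oracle/arch_for_attunity'
>       scope=both sid='*';
>     alter system set log_archive_dest_state_3 = enable scope=both sid='*';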
>
>
>
> I hope this helps.
>
>
>
> mwf
>
>
>
> *From:* oracle-l-bounce_at_freelists.org [mailto:
> oracle-l-bounce_at_freelists.org] *On Behalf Of *Laurentiu Oprea
> *Sent:* Tuesday, March 23, 2021 7:22 AM
> *To:* ORACLE-L (oracle-l_at_freelists.org)
> *Subject:* dbms_diskgroup read very slow on exa
>
>
>
> Hello all,
>
>
>
> I received complaints about Attunity doing its ingestion very slowly. The
> way it is configured is to connect directly to the ASM instance and read the
> archivelog files. This is an Exadata X7.
>
>
>
> My observations were:
>
> -> it is using dbms_diskgroup.read to perform the reads from ASM
>
> -> mainly the wait time is ASM Fixed Package I/O (and some CPU)
>
> -> it is using 16 sessions distributed over 2 nodes, each reading at an
> estimated speed (based on ASH numbers) of around 10 MBPS
>
> -> when a smart scan starts (for example a smart incremental backup) the
> I/O speed of these sessions drops to around 1 MBPS
>
> -> session-level stats show a very low number for "cell flash cache read
> hits", so it looks like all reads come from spinning disks (sketch of the
> check after this list)
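>
> A minimal sketch of that last check; the program filter is a placeholder for
> however the reader sessions identify themselves:
>
>     select s.sid, s.program, st.value as cell_flash_cache_read_hits
>       from v$sesstat st
>       join v$statname n on n.statistic# = st.statistic#
>       join v$session  s on s.sid = st.sid
>      where n.name = 'cell flash cache read hits'
>        and s.program like '%repl%';   -- placeholder filter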
>
>
>
> To me it looks like the reading speed, even in normal conditions (without
> heavy competition), is very low. Can someone help me with some hints on
> what the culprit might be?
>
>
>
> Thank you.
>

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Mar 25 2021 - 12:33:12 CET
