RE: Sql net msg from client and fetch arrays

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Sat, 23 Jan 2021 19:52:06 -0500
Message-ID: <230a01d6f1eb$23b26230$6b172690$_at_rsiz.com>





And (meaning not taking exception to anything JL has said, with which I heartily agree)…  

Sometimes it can be arranged to run queries like this on the database server depositing the result in a file to be dragged back to the client to look at. Usually this so dramatically reduces the latency and increases the bandwidth that all the dithering you are seeing happens in so little elapsed time that it is not important.  
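As a sketch of that server-side test (the table name, file name and settings below are placeholders invented for this illustration, not anything from the thread), a SQL*Plus run on the database server might look like:

```sql
-- Hypothetical server-side test run: spool the result set to a local file
-- instead of shipping every row across the network to the client.
-- Table name, file name and arraysize are placeholder assumptions.
set arraysize 1000
set termout off
set trimspool on
spool /tmp/big_query_result.txt
select *
from   big_dw_table;
spool off
set termout on
```

The spool file can then be compressed and copied back to the client in one transfer, which is usually far cheaper than tens of millions of round-trip fetches.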

The earliest versions of the Oracle E-business applications used the “concurrent manager” with a report destination to do just that comprehensively at a time when network latencies were much higher and bandwidths were much lower, so your mileage may vary. Still, if you can run the query on the server as a test it may tell you if the gains of that engineering might be net useful (or not).  

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Jonathan Lewis
Sent: Saturday, January 23, 2021 8:44 AM
To: oracle-l_at_freelists.org
Subject: Re: Sql net msg from client and fetch arrays    

Answering the last bit first -

It's possible that the application sets a memory limit for the fetch, and then an internal mechanism derives an array size from the column lengths of the data to be fetched.  

Finding time spent on the network and at the client - the best method depends on what licences you have, what you're allowed to do in the application, and what access you have to the database, and WHEN you can get access. (And how accurate you need to be).  

e.g.  

  1. If you are licensed for ASH etc. then you could check the SQL Monitor report - it will show you the run time of the query (Global Information . duration) and the work done by the query (Global Stats . Elapsed time). As a reasonable approximation "duration" - "elapsed time" = client/network time.
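For example (assuming you are licensed for the Tuning Pack and know the SQL_ID - the sql_id below is a placeholder), a text-mode monitor report can be pulled with DBMS_SQLTUNE:

```sql
-- Placeholder sql_id; requires Diagnostics and Tuning Pack licences.
set long 1000000 longchunksize 1000000 linesize 250 pagesize 0
select dbms_sqltune.report_sql_monitor(
         sql_id => 'xxxxxxxxxxxxx',
         type   => 'TEXT'
       ) as report
from   dual;
```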
  2. If you can't get at the SQL Monitor but know the SQL_ID of the query of interest you can query v$active_session_history directly (or dba_hist_active_sess_history). Each execution of the query should have a separate SQL_EXEC_ID so a query like:
select sql_exec_id, count(*), max(sample_time) - min(sample_time)
from   v$active_session_history
where  sql_id = 'xxxxxxxxxxxxxxxx'
group by
       sql_exec_id;

 

should give you the (approximate) difference between the first and last samples of the execution, and the number of active samples (each sample = 1 second of time for v$active_session_history, or 10 seconds for dba_hist_active_sess_history). The time difference will be in days, so multiply by 86400 for seconds. The difference between that figure and count(*) -- or count(*) * 10 -- gives you the client/network time. APPROXIMATELY.
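Folding the day-to-seconds conversion into the query itself, a variant of that sketch might be (note that sample_time is a timestamp, so it is cast to date before the arithmetic so that the *86400 works; the sql_id is still a placeholder):

```sql
-- Approximate wall-clock duration vs. active database time per execution.
select sql_exec_id,
       count(*) as active_seconds,
       round((cast(max(sample_time) as date)
            - cast(min(sample_time) as date)) * 86400) as duration_seconds
from   v$active_session_history
where  sql_id = 'xxxxxxxxxxxxxxxx'
group by
       sql_exec_id;
```

duration_seconds - active_seconds is then a rough figure for the time spent at the client or on the network.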

3. If you can modify the code that calls the big queries - and since you expect them to take a lot of time - it's worth enabling SQL trace for the queries of interest. You could set the module and action to some suitable value before executing a query, then clear them afterwards. The trace files would report these values, and a call to tkprof (or trcsess) can extract data based on module/action, so you can find the right trace files and generate the tkprof reports. This should give you a fairly direct view of the client/network time. (Note: this type of shotgun approach is viable only when you have a small number of large queries to trace - if some of your code drops into single-row processing, e.g. using a "fetch by key" for every row in your 10M-row array fetch, then the overheads on that bit will swamp any useful information.)
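A minimal sketch of that tagging-and-tracing approach (the module/action names and the service name are placeholders invented for this illustration):

```sql
-- Tag the session so trace files record a recognisable module/action.
exec dbms_application_info.set_module('DW_EXTRACT', 'BIG_QUERY_1')

-- Enable extended SQL trace (with wait events) for that module/action.
exec dbms_monitor.serv_mod_act_trace_enable( -
       service_name => 'SYS$USERS', module_name => 'DW_EXTRACT', -
       action_name  => 'BIG_QUERY_1', waits => true, binds => false)

-- ... run the big query here ...

exec dbms_monitor.serv_mod_act_trace_disable( -
       service_name => 'SYS$USERS', module_name => 'DW_EXTRACT', -
       action_name  => 'BIG_QUERY_1')
exec dbms_application_info.set_module(null, null)
```

Afterwards, something like "trcsess output=big_query.trc module=DW_EXTRACT action=BIG_QUERY_1 *.trc" followed by tkprof on the combined file should isolate the calls of interest; the "SQL*Net message from client" waits between fetch calls are the client/network component.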

Regards

Jonathan Lewis            

On Fri, 22 Jan 2021 at 01:08, Moustafa Ahmed <moustafa_dba_at_hotmail.com> wrote:

Hello lister!

I have a couple of questions about "SQL*Net message from client".

1. Considering it is an idle event, and so is not sampled in ASH or the dba_hist ASH views, which views can show which SQL_IDs spent most of their time on that idle event?

2. As it may sound odd, I can explain why I'm asking. When running massive DW SQLs that retrieve (unfortunately) tens of millions of rows - adding to that the last execution phase, when the statement is still fetching rows and feeding them to the top row source - it may look confusing to DBAs and app folks. That being said, we may have a SQL which is spending most of its time retrieving the rows rather than processing them, so if we can spot the weight of the "SQL*Net message from client" waits it would help a great deal!

3. Some apps show a different rows-per-fetch for different SQLs, varying from 1000 down to 100, even though the app itself has a fixed maximum set for rows per fetch. What causes that value to change from one SQL to another when the app does not dictate it?

Thank you,  




--
http://www.freelists.org/webpage/oracle-l


Received on Sun Jan 24 2021 - 01:52:06 CET
