Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: question about ADO and Oracle

From: Ed Stevens <Ed_Stevens_at_nospam.noway.nohow>
Date: Wed, 09 Jan 2002 18:59:39 GMT
Message-ID: <3c3c73c1.186970849@ausnews.austin.ibm.com>


Cliff,

I appreciate your effort on this. Don't know if I can give you what you need for an apples'n'apples comparison, but here's my testing method . . .

First, I captured the 5 most often executed and 5 most I/O intensive queries by spooling the following:

select   b.username username,
         a.disk_reads reads,
         a.executions exec,
         round(a.disk_reads / decode(a.executions, 0, 1, a.executions)) rds_per_exec,
         c.sql_text statement
from     v$sqlarea a,
         dba_users b,
         v$sqltext c
where    a.parsing_user_id = b.user_id
and      a.disk_reads > &dskrds
and      a.address = c.address
order by a.disk_reads desc,
         a.address,
         c.piece
/
select   b.username username,
         a.disk_reads reads,
         a.executions exec,
         round(a.disk_reads / decode(a.executions, 0, 1, a.executions)) rds_per_exec,
         c.sql_text statement
from     v$sqlarea a,
         dba_users b,
         v$sqltext c
where    a.parsing_user_id = b.user_id
and      a.executions > &minexec
and      a.address = c.address
order by a.executions desc,
         a.address,
         c.piece
/

Next, I distilled the result of those queries down to executable SQL and wrapped it up in a test script, mytest.sql.
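The wrapper script itself didn't make it into the post; a minimal sketch of what mytest.sql might have looked like, assuming only standard SQL*Plus SET commands (the file and spool names are illustrative):

```sql
-- mytest.sql -- hypothetical reconstruction; the original script was not posted.
-- &array is supplied at run time (1, 10, 20, ... 70).
set arraysize &array
set termout off
spool mytest.lst

-- the captured production queries, distilled to executable SQL, go here

spool off
set termout on
exit
```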

Next, I modified SQLNET.ORA to set TRACE_LEVEL_CLIENT = 16.
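For reference, both client-trace parameters used below live in the client-side SQLNET.ORA; a sketch (the trace directory is an assumption, not from the original post):

```
# client-side SQLNET.ORA -- Net8 tracing parameters
TRACE_LEVEL_CLIENT = 16           # 16 = support level; logs each packet
TRACE_FILE_CLIENT = net_10        # base name; Oracle appends .trc
TRACE_DIRECTORY_CLIENT = C:\temp  # assumed location
```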

That was the setup. For the actual test, I would run my test script (mytest.sql), varying the value of &array from 1 to 70 in increments of 10 (1, 10, 20, etc.). Before each trial I would set TRACE_FILE_CLIENT = net_nn.trc (where 'nn' is the value of the array size to be used on that test). After each test I would count the number of SQL*Net packets received, using the Windows FIND command: find /C "nsprecv: normal exit" net_nn.trc
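On a Unix client the same count could be taken with grep instead of the Windows FIND command; a sketch against a made-up two-packet trace file (file name and contents are illustrative):

```shell
# Hypothetical stand-in for: find /C "nsprecv: normal exit" net_nn.trc
# Each "nsprecv: normal exit" line marks one SQL*Net packet received.
printf 'nsprecv: normal exit\nnsprecv: entry\nnsprecv: normal exit\n' > net_10.trc
grep -c "nsprecv: normal exit" net_10.trc    # prints 2 for this sample file
```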

I then repeated the test sequence, varying SDU/TDU from the default up to 19500 in increments of 1500. As I expected, an increase in either SDU or array size resulted in a decrease in SQL*Net packets received, up to a point beyond which further increases yielded no additional reduction in SQL*Net traffic.
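For anyone reproducing this: on the client side, SDU and TDU are set inside the DESCRIPTION of the tnsnames.ora connect descriptor, something like the following sketch (alias, host, and SID are placeholders):

```
# hypothetical tnsnames.ora entry -- SDU/TDU go inside DESCRIPTION
MYDB =
  (DESCRIPTION =
    (SDU = 8192)
    (TDU = 8192)
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA = (SID = mydb))
  )
```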

These tests were run from my desktop, so there was no "extraneous" activity recorded in the trace files.

Next, we attempted to replicate this as best we could with the real application. It is actually a browser-based app, so the web server is the client machine to Oracle. We told the other developers to lay off during our testing, but we still cannot absolutely guarantee that there was no other activity recorded in the trace files. Here we used the same measuring method: set TRACE_LEVEL_CLIENT = 16, exercise the app through a specific sequence, and count the occurrences of "nsprecv: normal exit" in the trace file. We took two measurements, one with CacheSize defaulted to 1 and a second with CacheSize set to 40. Between these two tests there was no difference in the measured result. We did not vary SDU/TDU from their defaults for these tests.
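The CacheSize change in that second test amounts to one property assignment before the Recordset is opened; a hypothetical VBScript/ADO sketch (connection string and query are placeholders, not the real application's code):

```vbscript
' Hypothetical sketch -- CacheSize must be set before Open to take effect.
Dim cn, rs
Set cn = CreateObject("ADODB.Connection")
cn.Open "Provider=MSDAORA;Data Source=mydb;User Id=scott;Password=tiger"

Set rs = CreateObject("ADODB.Recordset")
rs.CacheSize = 40                 ' default is 1; the only variable changed
rs.Open "SELECT ...", cn          ' one of the captured production queries
Do While Not rs.EOF
    rs.MoveNext                   ' the fetch loop drives the SQL*Net traffic
Loop
rs.Close
cn.Close
```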

On Tue, 8 Jan 2002 17:33:28 -0500, "Cliff Dabrowski" <cliffd.ng at bigfoot.com> wrote:

>Ed,
>
>I will do a bit more digging on your assertions. Can you provide me any test
>cases that you have? I can work independently and produce my own but it
>would make a whole lot more sense if we were using the same tests so that
>the results were more easily compared.
>
>Looking for ADO sample code...
>Looking for results...
>Looking for trace output...
>
>I have a way to trace ADO, OCI, UPI independent of each other and would then
>be looking at correlating the output/results to answer your questions.
>
>Hth,
>
>~Cliff
>
>
>
>"Ed Stevens" <Ed_Stevens_at_nospam.noway.nohow> wrote in message
>news:3c3b61fe.116900914_at_ausnews.austin.ibm.com...
>> Cliff,
>>
>> Thanks for your reply. The two articles you cite from MS, we have already
>seen.
>> I have passed them on to the developers.
>>
>> Two things I find particularly confusing.
>>
>> First is the quote from the ADO docs -- "CacheSize is based on the
>> MaximumOpenRows provider-specific property (in the Properties collection
>of the
>> Recordset object)" in that we can find NOTHING on MaximumOpenRows.
>>
>> I assumed that Plus took its ArraySize and translated that to some other
>call at
>> the API level -- a call that would also be available to other clients
>(like
>ADO). I thought perhaps this obscure reference to MaximumOpenRows might
>be
>> pointing to the same thing.
>>
>> As I try to clarify my thinking (not to mention clarifying my question!) I
>see
>> the following picture. When the Oracle server process gets a query from
>a
>> client (the client could be SQL*Plus, or it could be ADO acting on behalf
>of an
>> application pgm) it develops a result set, then feeds the rows back to the
>> client in SQLNet packets that are sized according to a specification
>provided by
>> the client. If the client is ADO, it (ADO) then feeds the result set back
>to
>> *his* requestor (the application pgm, in VB, C, Cobol, etc.) in buffers
>of "n"
>> number of rows at a time. If this picture is accurate, it would appear
>that
>> CacheSize specifies the size of the buffer between ADO and the application,
>but
>> has no impact on the "buffer" (TNS packets) between ADO and the database
>server,
>> whereas SQL*Plus uses its ArraySize to set the size of the TNS packets.
>In
>> fact, my test results are consistent with this picture, but not
>necessarily
>> proof. If my picture *is* accurate, then what I'm looking for is the key
>to
>> specifying the size of the TNS packets between the Oracle server and ADO.
>>
>> I would be happy to have my entire understanding shot out of the water if
>it
>> would help solve my problem.
>> - Ed
>>
>> On Tue, 8 Jan 2002 12:00:51 -0500, "Cliff Dabrowski" <cliffd.ng at
>bigfoot.com>
>> wrote:
>>
>> >Hello Ed,
>> >
>> >Your tests/findings with SQL*Plus are not directly applicable to what you
>> >need to do. SQL*Plus is developed on top of OCI (Oracle Call Interface)
>and
>> >an additional Oracle proprietary API called UPI. The "SET ARRAY" command
>> >simply sets some internal variables that are then used to determine
>rowset
>> >retrieves during the fetch using the appropriate OCI/UPI API call.
>> >
>> >Your assertion about the CacheSize parameter in ADO is correct and I have
>> >found a few references for you in case you have not already seen them. I
>> >would be interested to hear about your findings and final resolution on
>this
>> >issue.
>> >
>> >http://www.microsoft.com/data/impperf.htm
>> >An excerpt:
>> >2.8 Tune your Recordset CacheSize property
>> >ADO uses the Recordset CacheSize property to determine the number of rows
>to
>> >fetch and cache. While you are within the range of cached rows, ADO just
>> >returns data from the cache. When you scroll out of the range of cached
>> >rows, ADO releases the cache and fetches the next CacheSize rows. So what
>> >CacheSize should you use in your application? Unfortunately, there isn't
>a
>> >single optimal CacheSize that is appropriate for all applications. You
>> >should tune your application with different CacheSize values and use the
>one
>> >that offers you the best performance. Using a small CacheSize value
>> >significantly improves performance for fetching data from an Oracle data
>> >store
>> >
>> >A little more up to date:
>>
>> >http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnmdac/html/improvperf.asp
>> >
>> >And...
>> >http://www.devx.com/premier/mgznarch/vbpj/2000/04apr00/df0400/df0400.asp
>> >Finally, you can use the CacheSize and Batch Size properties of the
>> >Recordset object. Use CacheSize to determine how many rows ADO stores on
>the
>> >client and how many rows it fetches with each trip to the data source. As
>> >you scroll through the recordset, the cache is refreshed with rows from
>the
>> >OLE DB provider. You'll certainly gain better performance by tuning this
>> >property appropriately as the default CacheSize is 1. Unfortunately, no
>rule
>> >of thumb exists for setting the CacheSize property, so you'll need to
>> >experiment with your application. It's especially important to tune this
>> >property for applications using server-side cursors, because it affects
>the
>> >number of round-trips to the data source.
>> >You use the Batch Size property to determine how many rows ADO includes
>in
>> >each round-trip to the server when performing a batch update using the
>> >UpdateBatch method on a disconnected Recordset object. To set dynamic
>> >properties, use the Properties collection of the Recordset object with
>its
>> >name as a literal string:
>> >
>> >Just one more...
>> >http://www.4guysfromrolla.com/webtech/053100-1.shtml
>> >
>> >Hope this helps,
>> >
>> >~Cliff
>> >
>> >
>> >"Ed Stevens" <Ed_Stevens_at_nospam.noway.nohow> wrote in message
>> >news:3c39d1ab.14432322_at_ausnews.austin.ibm.com...
>> >> Our tuning efforts have led us to look at SQL*Net issues. I ran a
>> >series of
>> >> tests from SQL*Plus, varying both the SDU/TDU values and the array
>size.
>> >>
>> >> My method of testing was to run a fixed set of queries (taken from a
>> >running
>> >> production instance) with client trace set to 16, then counting the
>number
>> >of
>> >> packets received by the client. I would consider a reduction in the
>> >number of
>> >> packets received to indicate less network traffic, less re-assembling
>of
>> >> packets, and thus indicative of some degree of performance improvement.
>> >>
>> >> The results of these tests showed that playing with SDU/TDU resulted in
>a
>> >small
>> >> decrease in packets received, but increasing array size made a
>significant
>> >> reduction in packets received.
>> >>
>> >> Now, the real problem. For these tests, setting array size was a
>function
>> >of
>> >> the SQL*Plus session and is set by a Plus SET command. So, how to
>> >translate
>> >> that to something the application can use? The app is written with MS
>> >tools and
>> >> uses ADO as the microsoft layer accessing the Oracle DB. We have
>looked
>> >all
>> >> through the ADO API reference and the closest we come is a discussion
>of a
>> >> CacheSize property. Within that discussion was this statement:
>> >>
>> >> "CacheSize is based on the MaximumOpenRows provider-specific property
>(in
>> >the
>> >> Properties collection of the Recordset object)."
>> >>
>> >> This looks like MaximumOpenRows is what I'm after, but I can find no
>other
>> >> reference anywhere in the reference manual or anywhere on Technet.
>Does
>> >anyone
>> >> here have any experience with this, to point me in the right direction?
>> >> --
>> >> Ed Stevens
>> >> (Opinions expressed do not necessarily represent those of my employer.)
>> >
>> >
>>
>> --
>> Ed Stevens
>> (Opinions expressed do not necessarily represent those of my employer.)
>
>

--
Ed Stevens
(Opinions expressed do not necessarily represent those of my employer.)
Received on Wed Jan 09 2002 - 12:59:39 CST

