Re: Powerbuilder problems?

From: Dennis Moore <dbmoore_at_us.oracle.com>
Date: Wed, 25 May 1994 15:05:56 GMT
Message-ID: <1994May25.150556.27074_at_oracle.us.oracle.com>


In article <1994May24.221819.14608_at_wilbur.coyote.trw.com> mike_at_odie.coyote.trw.com (Mike Curtin) writes:
>david_at_us.oracle.com (David Natelson) writes:
>
>>Binding: (examples)
 

>>select name,address,sal from emp where deptno = :var1 and sal < :var2;
 

>>insert into emp (name,address) values (:var1, :var2);
 

>>The :var1 and :var2 are "bind variables" meaning the client will bind a
>>value each time they are executed. Since Oracle7 stores the execution
>>plan (parsing is done once and stored in memory on the server), a
>>application can Open a Cursor(s), and execute or fetch from it without
>>causing a reparse (even as the app. changes :var1's value each time).
>
>Thanks for the explanation.
>
>I am under the impression that cursors are read-only. That is,
>"select" statements are candidates for cursors, but INSERT statements
>are not. Am I mistaken?

You are correct: cursors can only be based on SELECT statements. However, bind variables are usable in every DML statement. If you have hundreds of people running an application like order entry, where each user hits a few lookup tables and issues the same insert over and over, bind variables are essential for scaling to large data sets and large numbers of users.
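
To make that concrete, here is a minimal sketch of bind variables in non-SELECT DML (the table and column names are hypothetical, not from the examples above). The statement text never changes, so the server parses and optimizes each one once and reuses the cached form as the client supplies new values:

-- parsed/optimized once, re-executed with fresh bind values each time
insert into line_item (order_no, part_no, qty) values (:var1, :var2, :var3);
update warehouse set on_hand = on_hand - :var1 where part_no = :var2;
delete from order_queue where order_no = :var1;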

>>Powerbuilder (in many situations) does not declare a cursor with the above
>>"bind variables", instead, each user has his/her SQL statement executed
>>with the values "hard coded":
 

>>select name, address,sal from emp where deptno = 10 and sal < 1212;
 

>>This means each user for each statement has his/her SQL reparsed (even when
>>doing a bunch of inserts).
>
>Is this really a big deal? Simple SQL statements like these examples
>should take milliseconds to parse, shouldn't they?

Some may take milliseconds to parse, but they will also take more than that to optimize. Consider a statement with an outer join, or a five-way join, or a join of two views: without bind variables you have to parse and optimize it each time. Even if such a statement takes only a tenth of a second to parse and optimize, when it is issued by a thousand people a thousand times each, you're talking about roughly 28 hours of added server-side processing (0.1 seconds x 1,000 users x 1,000 executions = 100,000 CPU seconds). And complex joins are *not* optimized in a tenth of a second -- some take well over a second each.
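
For a sense of what "complex" means here, consider something like the following (an illustrative statement; the view and table names are hypothetical). A join of two views, each of which is itself a join, gives the optimizer many join orders and access paths to evaluate on every parse:

select o.order_no, c.cust_name, v.on_hand
  from open_orders_view o, customer c, warehouse_stock_view v
 where o.cust_id = c.cust_id
   and o.part_no = v.part_no
   and o.order_date > :var1;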

Take a realistic scenario: 100 users doing order entry, occasionally looking up parts in a parts table, displaying line items as they are entered, inserting 5 line items (for ordered items), and updating the denormalized header record (updating the customer's balance). Each transaction issues the following statements: look up the customer (a single SELECT FOR UPDATE), retrieve the parts list once (a single SELECT), insert the line items (validate the item number and look up the description and price from the in-memory list, then one INSERT of the line item, one UPDATE of the warehouse balance, and one SELECT to ensure the warehouse balance is positive -- all repeated 4 more times), update the customer's balance (one UPDATE of the customer record), and commit. That totals 8 selects, 5 inserts, 5 updates, and one commit per successful transaction; only the commit doesn't benefit from bind variables!

Now assume each statement takes only 0.01 seconds to parse and optimize. That adds 0.18 seconds of CPU time per transaction per user (18 statements x 0.01 seconds, and this is a very conservative estimate of the time wasted). For a small number of users this may not be noticeable, especially on a machine doing lots of other things, which explains why many people have great success building client/server systems in pilot projects but then fail when they scale up. Multiply that 0.18 wasted CPU seconds by 100 users and you've added 18 CPU seconds of waste. Just so we all work from the same understanding, that means the response time of the computer will be a MINIMUM of 18 seconds for each order entry clerk, in addition to the time the computer spends doing non-wasteful work. Scale that up to 250 users and the MINIMUM has jumped to 45 seconds; at 1000 users, the MINIMUM waiting time goes to 3 *minutes.*
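
For illustration, the whole transaction above might look roughly like this with bind variables (a sketch only; the table and column names are hypothetical). Every statement keeps the same text from transaction to transaction, so each is parsed and optimized once and then simply re-executed with new values:

select cust_id, balance from customer where cust_id = :cust for update;
select part_no, descr, price from parts;          -- parts list, fetched once

-- repeated once per line item (5 times in the scenario above):
insert into line_item (order_no, part_no, qty) values (:ord, :part, :qty);
update warehouse set on_hand = on_hand - :qty where part_no = :part;
select on_hand from warehouse where part_no = :part;   -- must stay positive

-- once per order:
update customer set balance = balance + :total where cust_id = :cust;
commit;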

This is why Powersoft is working hard with Oracle to improve PowerBuilder's use of bind variables -- and also why PowerBuilder is not capable of scaling today.

>>If you think about why you want to use a stored procedure, you can list the
>>following (some may have more or less, but here is my list):
 

>>1. Avoid reparsing -> save CPU cycles
>>2. Avoid reoptimizing -> save CPU cycles
>>3. Reduce network use -> save LAN traffic
 

>>Item 3 has nothing to do with selecting data from Oracle because all users
>>can take advantage of shared SQL in the server and Oracle sends back arrays
>>of data (multiple rows/packet) to the client.
>
>How does Oracle know that user A's SQL can be "shared" with user B's?
>Does one or both of the users tell the server? Does the server compare
>every new SQL statement to every one it has stored for sharing?

The users don't need to tell the server. When a statement arrives, the server looks it up in its shared SQL cache; if the text matches a statement already in the cache, the already-parsed, already-optimized copy is reused instead of being reparsed.
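
Reusing the earlier emp example (the second pair of literal values is just illustrative):

-- Shared: the text is identical for every user, so one parsed,
-- optimized copy in the server's cache serves all of them.
select name, address, sal from emp where deptno = :var1 and sal < :var2;

-- Not shared: the literals make each statement textually unique,
-- so each one is parsed and optimized from scratch.
select name, address, sal from emp where deptno = 10 and sal < 1212;
select name, address, sal from emp where deptno = 20 and sal < 2500;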

>Are there advantages to shared SQL over stored procedures?

Sure, in many cases but not all. Most end-user tools can't query against stored procedures (e.g., Oracle Data Browser, Business Objects). Stored procedures use up cache space, and they carry their own overhead. On the other hand, if you have a block of code that uses the results of one DML statement as input to another (i.e., a CURSOR driving conditional processing of additional DML statements), then a stored procedure is the only way to go.
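
As a rough sketch of that last case (the procedure, table, and column names are all hypothetical): a cursor loop drives conditional DML entirely inside the server, so only the single procedure call crosses the network:

create or replace procedure restock_low_parts (p_threshold in number) as
begin
    -- the cursor result drives which DML runs for each row
    for r in (select part_no, on_hand
                from warehouse
               where on_hand < p_threshold) loop
        if r.on_hand = 0 then
            insert into rush_orders (part_no) values (r.part_no);
        else
            insert into reorder_queue (part_no) values (r.part_no);
        end if;
    end loop;
end restock_low_parts;
/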

>--MDC

-- Dennis Moore, ORACLE EMPLOYEE, etcetcetc