Re: How fast is Oracle ?

From: MPower 'your future today' -- Mark Otero <mpower_at_ee.net>
Date: 1996/02/08
Message-ID: <4fdrmt$js_at_news.ee.net>


Richard Armstrong <richard_at_juno.demon.co.uk> wrote:

>Can anybody suggest a rough estimate of the speed of Oracle 7.1 ? The
>company I work for is considering Oracle for the development of a real
>time system to record share price data. It needs to be able to receive
>about 300 data items per second from an external source and update
>corresponding records in an Oracle table (ie 300 update statements per
>second). The table has three integer fields. One field is changed on
>each update. Changes are committed after every hundred updates.
 

>We have written a test application using a DEC Alpha machine running
>OpenVMS 6.1 and Oracle 7.1. The application is written in Pro*Pascal
>and it can only manage around 20 or 30 updates per second. We don't
>know whether this is the maximum rate we can reasonably expect or
>whether further tuning effort is likely to yield improvements.
 

>We did notice a great deal of disk access by the log writer process
>which seemed to slow things down. Should this process only write after
>a COMMIT ? We have tried speaking to Oracle support in the UK but they
>have not been able to offer much help. We are now considering trying
>other databases such as Ingres to see how they compare.
 

>I will be very interested to receive any thoughts or comments anybody
>may have.
>--
>Richard

Richard,

Very slow!
Here are a few pointers.

  1. How much real memory does Oracle have? It sounds like you are disk thrashing. Oracle likes lots of memory. The buffers are the single most critical element in the Oracle architecture. For example, you could buffer the entire NYSE given the small row size of the table. A COMMIT itself forces only a flush of the redo log; data block i/o is independent of it. Data block i/o occurs when the kernel fetches blocks for rows not cached. Once a row is in cache, further updates go against memory (and memory is fast), provided the row is not swapped out. I am simplifying, but hopefully you get the gist of what I am explaining; bottom line, you can never have too much memory.
  2. The configuration of your hardware. Is a fast, intelligent caching controller employed? Probably not. When disk i/o does occur, a fast caching controller (again, with lots of memory) will not steal those precious CPU cycles; the disk is the weakest link, so let another CPU deal with it.
  3. The data feed. Is there bottlenecking? Are you feeding the data directly into a server process that performs the client task of building the SQL UPDATE statement, or is a PC client used on a network? Eliminate the PC and network, or make sure the network segment is not subject to traffic and the PC is a P5 with lots of memory. Also use fast 32-bit network cards.
  4. Optimize the UPDATE statement build process. This is critical and can make a world of difference. I have not used Pro*Pascal, but I have built many Pro*C applications for OLTP environments with complex SQL statements. The UPDATE is one of the simple statements and generally does not require a lot of processing time compared to a SELECT. Once again, memory for the client process is vital. When a SQL statement is prepared by the client, several steps occur, and these steps involve the server at certain points. Design the code to double buffer: while one buffer is being loaded, the other is being sent. These buffers will be used by the SQLDA structure. The UPDATE is built as follows:
     1. Build a host string one time at the start of the program.
     2. PREPARE the statement. This parses the SQL statement and gives it a name. Do it once.
     3. DECLARE CURSOR, and make sure the precompiler flags keep the cursor open. You can use more than one cursor.
     4. DESCRIBE BIND VARIABLES sets up the data buffers required for the update data elements. Now fill the buffer elements with your variable data.
     5. At this point you are ready to rock and roll! OPEN the cursor. For an UPDATE this effectively causes the server to execute the statement. If you did step 4 correctly, all the data (the 300 data elements) is passed to the server.
     6. When the function returns, perform steps 4 and 5 repeatedly, using a double-buffering method so data is always ready.

  You can run multiple client processes against the server to achieve excellent throughput above and beyond your initial requirements.

I am an independent consultant who waves his own flag, so if I can be of further assistance send me some bytes (mpower_at_ee.net).

Good Luck,
Mark                 


                       !!!!!
                      /'_at_ @'\

+---------------oOOo-----U-----oOOo---------------+
|   BigDaddy's Internet USENET FaxBack Service    |
|         'never leave home without it'           |
|                                                 |
| Mark Otero                        mpower_at_ee.net |
| MPower                        Your Future Today |
| 178 Beechbank Road               (614) 338-1550 |
| Columbus, Ohio 43213-1261    Fax (614) 338-1553 |
+-------------------------------------------------+
                     | |  | |
                    {__|  |__}
Received on Thu Feb 08 1996 - 00:00:00 CET
