Re: Asynchronous I/O and Oracle

From: George Capehart <gcape_at_clt.fx.net>
Date: 26 Feb 1994 17:54:01 GMT
Message-ID: <2ko2bp$54n_at_clt.fx.net>


Praveen Rao (rao_at_ee.uwm.edu) wrote:
: I am looking for some information on advantages/disadvantages and installation
: instructions for setting up asynchronous i/o on a hpux machine. We are
: running v7.0.15 and hpux9.0 on a 9000/I50
 

: Any thoughts/information is appreciated.

In the right context, asynchronous I/O is a good strategy, but the context matters, and there are several variables to weigh. Asynchronous I/O is done on filesystem buffers by kernel routines . . . several layers of them, the last of which is a write to a disk address. Oracle is capable of doing raw I/O, which bypasses all of the kernel filesystem handling code (and buffers). In my experience, letting Oracle do raw I/O is faster and cleaner than handing data off to the kernel to process.

On the other hand, if you cannot do raw I/O and you have data in more than one partition, go for asynchronous I/O. Obviously, the more spindles and controllers there are, the more attractive it becomes . . . but that is true for raw I/O, too.

IMHO, the optimum configuration is to have all of the data, index, and rollback segment tablespaces on raw partitions, and to have the SYSTEM tablespace, redo logs, etc. in mounted filesystems. If the Oracle kernel is tuned well, all of the objects in the SYSTEM tablespace will be cached in the SGA, so there is not much to be gained by doing asynchronous I/O on that filesystem. That leaves the redo logs, and there _is_ some advantage to doing asynchronous I/O there, especially if there are lots of rollbacks.
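
To make the filesystem-versus-raw distinction concrete, here is a minimal C sketch of the two write paths. It is illustrative only, not Oracle or HP-UX code: the datafile path and the raw device name are placeholders (writing to a real raw device will clobber whatever is on it), and posix_memalign() is a modern POSIX convenience standing in for whatever aligned-allocation trick your system uses. The point is that the raw (character) device bypasses the kernel buffer cache, at the price of block-aligned transfers:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BLKSZ 8192   /* match your Oracle block size */

    int main(void)
    {
        /* Buffered path: the write lands in the kernel buffer
         * cache first and is flushed to disk later. */
        int fs_fd = open("/u01/oradata/data01.dbf",
                         O_WRONLY | O_CREAT, 0640);

        /* Raw path: the character device goes straight to disk.
         * "/dev/rdsk/c0t1d0" is a placeholder -- substitute a
         * scratch device; a real one will be destroyed. */
        int raw_fd = open("/dev/rdsk/c0t1d0", O_WRONLY);

        if (fs_fd < 0 || raw_fd < 0) {
            perror("open");
            return 1;
        }

        /* Raw I/O generally requires block-aligned buffers
         * and transfer sizes. */
        char *buf;
        if (posix_memalign((void **)&buf, BLKSZ, BLKSZ) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }
        memset(buf, 0, BLKSZ);

        write(fs_fd, buf, BLKSZ);   /* copied through the cache */
        write(raw_fd, buf, BLKSZ);  /* no buffer-cache copy     */

        close(fs_fd);
        close(raw_fd);
        free(buf);
        return 0;
    }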

In twenty-five words or less, the answer to your question really is: "Well, it all depends . . . on how your disks are configured, how many there are, how many controllers you have . . . and how well you get along with your system administrator and how well you like dd(1)."
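
(And the dd(1) crack is not entirely a joke: raw partitions are invisible to tar(1) and cpio(1), so backups become block-for-block copies of the device, something like the line below, where both the device and the backup path are made-up placeholders:)

    dd if=/dev/rdsk/c0t1d0 of=/backup/data01.dd bs=64k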

George Capehart

gcape_at_clt.fx.net
