Re: Informix vs. Sybase vs. Oracle vs. (gasp) MS SQL Server

From: Peaches <fsgchi_at_wwa.com>
Date: 1997/12/12
Message-ID: <01bd073d$bb3edaa0$083df1cf_at_ww.wwa.com>


Anthony Mandic <no_sp.am_at_agd.nsw.gov.au> wrote in article <3490B6CD.202C_at_agd.nsw.gov.au>...
> Johan Andersson wrote:
>
> > So, *are you listening Anthony*, I propose the following test
> > application, with an ever increasing transaction complexity, to show
> > that the finer granularity of lock the better performance, _for this
> > type of application_.
>
> From your description below it looks more like a variable transaction
> rate rather than variable transaction complexity, but this is just a
> minor quibble.

Actually, if you define the "transaction" in terms of the commit point, the transactions do grow. Assuming the transactions are sent at a fixed frequency (every 10 minutes or so), the number of transactions would not grow. The number of tuples committed within each transaction would vary, but the transaction rate itself could be held fixed.
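A minimal sketch of that distinction, using sqlite3 purely for illustration (the table and column names are my own invention, not from the thread): the commit fires once per scheduled interval, while the number of tuples inside each commit varies.

```python
import sqlite3

# Hypothetical feed table; one transaction per fixed interval, any row count.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticks (batch INTEGER, value REAL)")

def commit_batch(conn, batch_id, rows):
    """One commit per scheduled interval; the tuple count may vary."""
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.executemany(
            "INSERT INTO ticks (batch, value) VALUES (?, ?)",
            [(batch_id, v) for v in rows],
        )

# Two "intervals" with different tuple counts but the same commit rate.
commit_batch(conn, 1, [1.0, 2.0, 3.0])
commit_batch(conn, 2, [4.0])
counts = conn.execute(
    "SELECT batch, COUNT(*) FROM ticks GROUP BY batch ORDER BY batch"
).fetchall()
print(counts)  # -> [(1, 3), (2, 1)]: fixed rate, varying transaction size
```

So the "growth" lives entirely inside the transaction, not in the rate at which transactions arrive.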

The all-or-nothing approach in Mr. Andersson's example is common in the financial industry: basket trades on Wall Street, multi-part journal transactions in finance, and multiple-item orders in the e-business sector. Perhaps in this particular example a multiple-record commit was not necessary, but the point about *how* multi-part commits would increase contention was valid.
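To make the all-or-nothing semantics concrete, here is a toy sketch in the spirit of a basket trade (the schema and symbols are assumptions for illustration, not anything from the thread): one bad leg rolls back the entire basket.

```python
import sqlite3

# Hypothetical basket-trade table; a CHECK constraint stands in for any
# business rule that can reject one leg of a multi-part transaction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE legs (symbol TEXT, qty INTEGER CHECK (qty > 0))")

def post_basket(conn, legs):
    """Post every leg in a single transaction: all of them, or none."""
    try:
        with conn:  # one transaction spanning all legs
            conn.executemany("INSERT INTO legs VALUES (?, ?)", legs)
        return True
    except sqlite3.IntegrityError:
        return False  # the whole basket was rolled back

posted_ok = post_basket(conn, [("IBM", 100), ("DEC", 50)])   # both legs post
posted_bad = post_basket(conn, [("HPQ", 10), ("SUNW", -5)])  # bad leg: none post
n = conn.execute("SELECT COUNT(*) FROM legs").fetchone()[0]
print(posted_ok, posted_bad, n)  # -> True False 2
```

The contention point follows directly: while the multi-leg transaction is open, every row it has touched stays locked until the final commit or rollback.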

The distribution of the inserts/updates/deletes over the sample set is another variable that would affect your metrics for contention. For example, ATM systems work well, despite high transaction rates, because of the simplicity of the transactions and the highly distributed nature of the data.
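A back-of-the-envelope simulation (my own sketch, with made-up parameters) shows why the distribution matters: two transactions can only collide when they touch the same row in the same interval, so ATM-style access spread over many accounts collides far less often than the same load concentrated on a few hot rows.

```python
import random
from collections import Counter

def conflicts(key_picker, n_txn=1000, per_interval=10, seed=1):
    """Count potential row conflicts among transactions active together."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_txn // per_interval):
        keys = Counter(key_picker(rng) for _ in range(per_interval))
        # every pair of transactions hitting the same key is a collision
        total += sum(c * (c - 1) // 2 for c in keys.values())
    return total

uniform = conflicts(lambda r: r.randrange(10_000))  # ATM-like: spread out
skewed = conflicts(lambda r: r.randrange(10))       # a few hot rows
print(uniform < skewed)  # -> True: skewed access collides far more often
```

Same transaction rate, same transaction size; only the distribution of the touched rows changed.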

-- 
Peaches	  http://miso.wwa.com/~fsgchi 
	  reply to: fsgchi at wwa dot com
What lies before us, and what lies behind us, are tiny matters
compared to what lies within us...	--Ralph Waldo Emerson
Received on Fri Dec 12 1997 - 00:00:00 CET
