
Re: changing the isolation level

From: Ed Prochak <edprochak_at_gmail.com>
Date: 16 Jan 2007 14:59:53 -0800
Message-ID: <1168988390.647222.182250@a75g2000cwd.googlegroups.com>

PS

A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?


AlterEgo wrote:
> Frank,
>
> I disagree with the notion that Read Uncommitted is unnecessary. It is all
> based upon the business requirement. There are business requirements that
> require only approximations and do not need to be audited. Below is a
> real-life scenario that would require a substantial increase in hardware, as
> well as an increase in development and maintenance effort, if it weren't for
> Read Uncommitted.
>
> Environment:
> 200 million micro-transactions per day.
> Real-time querying of the transactions to keep operational metrics for
> alerts, notifications, etc.
> Single-row transactions in three tables from 60 web servers.
>
> Requirement:
> Run a count every five minutes to calculate transactions/minute (150K
> transactions per minute at peak). Raise an alert if the count is plus or
> minus 1.5 standard deviations from the norm at that time of day.
>
> If I am counting 750K transactions over a five-minute period, of what
> consequence is it if I overcount even a few hundred because of transaction
> rollback?

So what is your error margin? If you counted only committed transactions, you "might" undercount a little, but this is a statistical sample; you are looking for a running average. I do not see the justification for counting uncommitted transactions.
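
For what it's worth, the 1.5-sigma check itself is cheap however you collect the counts. Roughly, against a history table of five-minute counts (untested, and all the names here are made up):

  -- five_min_counts(slot_time DATE, cnt NUMBER) is a hypothetical
  -- history table; :this_count is the count just taken. Compare
  -- against the same time-of-day slot on previous days.
  SELECT CASE
           WHEN ABS(:this_count - AVG(cnt)) > 1.5 * STDDEV(cnt)
           THEN 'ALERT'
           ELSE 'OK'
         END AS status
    FROM five_min_counts
   WHERE TO_CHAR(slot_time, 'HH24:MI') = TO_CHAR(SYSDATE, 'HH24:MI');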

>
> Of course, my application logs the failed transactions to fix the
> application and eliminate the rollback in the future. And, there is another
> near real-time metric that counts transaction failures.

And there you have to commit the log entry, or again you are overcounting.

>
> It would double, triple, quadruple? the hardware and licensing costs to
> increase the processing capacity to allow the above query to run without
> lock contention affecting the transactions. My CEO is fine with this
> approximation.
>
> -- Bill

So if an approximation is okay, why not something simple? Put a trigger/sequence on the target table. All the trigger does is allocate a sequence number.
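
Something like this, untested and with made-up names (TXN for the target table):

  CREATE SEQUENCE txn_counter CACHE 1000;

  CREATE OR REPLACE TRIGGER txn_count_trg
    BEFORE INSERT ON txn
    FOR EACH ROW
  DECLARE
    n NUMBER;
  BEGIN
    -- sequences are non-transactional: a rolled-back insert still
    -- consumes a number, so this ticks once per insert attempt,
    -- committed or not, with no lock contention
    SELECT txn_counter.NEXTVAL INTO n FROM dual;
  END;
  /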

Your sampler application just selects the current sequence value, subtracts the previous value, and you are done. NO contention.
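
The sampler can be as dumb as this (again, names made up):

  -- pull a number ourselves and remember it; the delta between the two
  -- newest samples approximates inserts attempted in the interval
  INSERT INTO txn_samples (sample_time, seq_val)
  SELECT SYSDATE, txn_counter.NEXTVAL FROM dual;
  COMMIT;

  SELECT MAX(seq_val) - MIN(seq_val) AS approx_txns
    FROM (SELECT seq_val FROM txn_samples ORDER BY sample_time DESC)
   WHERE ROWNUM <= 2;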

Failed transactions go into a log; again, trigger/sequence.

Because of caching and the extra numbers pulled off by the sampler programs, these are approximations, but that is what you said you wanted.

And if you needed more finely grained measures, I'm sure there are some DBA tables that may have what you want.
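
For instance (from memory, check the names against your version), the dynamic performance views already keep running totals you could take deltas of:

  -- cumulative since instance startup; sample and subtract
  SELECT name, value
    FROM v$sysstat
   WHERE name IN ('user commits', 'user rollbacks');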

The point being, you are ASSUMING there is a heavy penalty for satisfying this business requirement and that you MUST use dirty reads. Try the Perl philosophy, i.e.
there is more than one way to do it.

So don't lock your thinking into one model. Dirty reads are called dirty for a reason.

   HTH,
ed
