Oracle FAQ Your Portal to the Oracle Knowledge Grid


Re: changing the isolation level

From: AlterEgo <>
Date: Tue, 16 Jan 2007 15:51:41 -0800
Message-ID: <>


Re: top/bottom posting.

Most of the time when I am accessing a newsgroup thread, I have already followed the thread when I open it and don't need to read the prior posts. My messages are read through a graphical UI and by default I am at the top of the thread. It takes me extra keystrokes to get to the response in a bottom post, so I prefer top posting.

If one uses a text based reader, then the thread will scroll all the way to the bottom and the response will be in view when the message is opened in a bottom post. I imagine those using text based readers prefer having the bottom post.

I am sorry it offends you, but I am not responsible for your distaste for top-posting. I opt for fewer keystrokes, and have no problem reading up or down if necessary. Follow your credo; don't respond.

Re: dirty reads.

Trigger - that is an additional read/write for every transaction, and it causes a hotspot at 2,500 transactions/sec. on our configuration.

Any transaction failures are serialized locally on the web server and handled by another compensatory process; they are not counted or persisted in the database.

I also believe in dirty reads for most ad-hoc reporting from OLTP databases. Book-of-record and SOX-compliant reports come from an audited data mart.
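The alerting rule in the quoted scenario below (count every five minutes, raise an alert when the count is more than 1.5 standard deviations from the norm for that time of day) can be sketched in a few lines. This is only an illustration of the arithmetic: the function name, the in-memory history list, and the 1.5-sigma default are assumptions, not part of any real monitoring system.

```python
import statistics

def check_count(current_count, historical_counts, sigmas=1.5):
    """Flag a five-minute transaction count that deviates more than
    `sigmas` standard deviations from the historical norm for this
    time-of-day slot.  `historical_counts` holds prior observations
    for the same slot (illustrative; a real system would pull these
    from a metrics store)."""
    mean = statistics.mean(historical_counts)
    stdev = statistics.pstdev(historical_counts)
    return abs(current_count - mean) > sigmas * stdev  # True => alert

# At this scale, a few hundred rolled-back rows barely move the needle:
history = [750_000, 748_000, 752_000, 751_000, 749_000]
print(check_count(750_300, history))  # within 1.5 sigma -> False, no alert
print(check_count(700_000, history))  # far below the norm -> True, alert
```

Note how the example also illustrates the error-margin point: with a norm of 750K and a standard deviation in the low thousands, an overcount of a few hundred uncommitted rows cannot tip the threshold either way.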

"Ed Prochak" <> wrote in message
> PS
> A: Because it fouls the order in which people normally read text.
> Q: Why is top-posting such a bad thing?
> A: Top-posting.
> Q: What is the most annoying thing on usenet and in e-mail?
> AlterEgo wrote:
>> Frank,
>> I disagree with the notion that Read Uncommitted is unnecessary. It is all
>> based upon the business requirement. There are business requirements that
>> require only approximations and do not need to be audited. Below is a
>> real-life scenario that would require a substantial increase in hardware,
>> as well as an increase in development and maintenance effort, if it
>> weren't for Read Uncommitted.
>> Environment:
>> 200 million micro-transactions per day.
>> Real-time querying of the transactions to keep operational metrics for
>> alerts, notifications, etc.
>> Single row transactions in three tables from 60 web servers.
>> Requirement:
>> Run a count every five minutes to factor transactions/minute (150K
>> transactions per minute at peak). Raise an alert if the count is plus or
>> minus 1.5 standard deviations from the norm at that time of the day.
>> If I am counting 750K transactions over a five-minute period, of what
>> consequence is it if I overcount even a few hundred because of
>> transaction rollback?
> So what is your error margin? If you counted only committed transactions,
> you "might" undercount a little, but this is a statistical sample. You
> are looking for a running average. I do not see the justification for
> counting uncommitted transactions.
>> Of course, my application logs the failed transactions to fix the
>> application and eliminate the rollback in the future. And, there is
>> another near-real-time metric that counts transaction failures.
> And there you have to commit the log entry, or again you are
> overcounting.
>> It would double, triple, quadruple? the hardware and licensing costs to
>> increase the processing capacity to allow the above query to run without
>> lock contention affecting the transactions. My CEO is fine with this
>> approximation.
>> -- Bill
> So if an approximation is okay, why not something simple?
> Put a trigger/sequence on the target table. All the trigger does is
> allocate a sequence number.
> Your sampler application just selects the current sequence number value,
> subtracts the previous value, and you are done. NO contention.
> Failed transactions go into a log; again, trigger/sequence.
> Because of caching and the extra numbers pulled off by the sampler
> program, these are approximations, but that is what you said you
> wanted.
> And if you needed more finely grained measures, I'm sure there are some
> DBA tables that may have what you want.
> The point being, you are ASSUMING there is a heavy penalty for
> satisfying this business requirement and that you MUST use dirty reads.
> Try the Perl philosophy, i.e.
> there is more than one way to do it.
> So don't lock your thinking into one model. Dirty reads are called
> dirty for a reason.
> HTH,
> ed
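Ed's trigger/sequence suggestion above can be sketched as follows. Since the point is only the sampling arithmetic (read the current sequence value, subtract the previous sample, never touch the transaction rows), the Oracle sequence and the row-level trigger are simulated here with a plain counter; as Ed notes, sequence caching makes the real numbers approximate.

```python
import itertools

# Simulated sequence: in the real scheme a row-level trigger would pull
# seq.NEXTVAL once per single-row insert; a counter stands in for it here.
sequence = itertools.count(start=1)

def record_transaction():
    """Stand-in for the trigger firing on each insert."""
    return next(sequence)

class Sampler:
    """Approximates transactions per interval by differencing the
    sequence's current value between samples -- no table scan, no
    lock contention with the writers."""
    def __init__(self):
        self.last_seen = 0

    def sample(self, current_value):
        delta = current_value - self.last_seen
        self.last_seen = current_value
        return delta

sampler = Sampler()
for _ in range(100):            # 100 "transactions" arrive
    current = record_transaction()
print(sampler.sample(current))  # -> 100
for _ in range(40):             # 40 more before the next sample
    current = record_transaction()
print(sampler.sample(current))  # -> 40
```

Failed transactions would get their own sequence and sampler on the same pattern, which is how this sketch maps onto Ed's "failed transactions go into a log; again, trigger/sequence."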
Received on Tue Jan 16 2007 - 17:51:41 CST
