Oracle FAQ Your Portal to the Oracle Knowledge Grid
 


RE: Timesten Vs. Oracle - Performance

From: <oracle-l-bounce_at_freelists.org>
Date: Mon, 29 Mar 2004 17:24:56 +0530
Message-ID: <F0CB3C9983B77E4AB4ADEFA63DAB109F0C3B38E4@twrmsg03.ad.infosys.com>

Folks

We are working on a proposal where real-time stock tick data from Reuters etc. needs to be streamed into a database and queried in real time. The input stream is likely to be around 2,000-3,000 ticks per second (around 1 GB per hour), and the querying will be around 20 MB per hour. The solution is also expected to support failover, high availability, etc.
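As a sanity check on the numbers above (the tick rate and hourly volume are the poster's figures; the per-tick size is just the arithmetic they imply), a quick sketch:

```python
# Back-of-envelope check on the stated load: 2,000-3,000 ticks/sec
# and ~1 GB/hr of input imply an average tick payload size.
SECONDS_PER_HOUR = 3600
GB = 10**9

def bytes_per_tick(ticks_per_sec: int, gb_per_hour: float) -> float:
    """Average bytes per tick implied by the stated rate and volume."""
    ticks_per_hour = ticks_per_sec * SECONDS_PER_HOUR
    return gb_per_hour * GB / ticks_per_hour

# At 3,000 ticks/sec and 1 GB/hr, each tick averages roughly 90-100 bytes,
# which is plausible for a quote/trade record.
print(round(bytes_per_tick(3000, 1.0)))
```

At those payload sizes the insert rate, not raw volume, is likely the binding constraint on any candidate database.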

The question is the selection of a suitable database.

We seek your advice.

Thanks

-----Original Message-----
From: John Hallas [mailto:john.hallas_at_hcresources.co.uk]
Sent: Friday, March 26, 2004 3:48 PM
To: oracle-l_at_freelists.org
Subject: RE: Timesten Vs. Oracle - Performance

Justin Cave wrote

If you have a small, read-only or read-mostly database where you can afford
to lose updates, an in-memory database is probably ideal. Otherwise, stick
with the traditional database.

TimesTen is supposed to guarantee no loss of data under certain configurations. However, that is balanced by the requirement to run two copies and the probability of having to load a backup copy and then apply the journal. From what I have seen, TT is very memory- and CPU-intensive. In our environment it is used to hold mostly reference data, so it is read-mostly. A small, read-only Oracle database that is well optimised, on fast disk and with plenty of memory/cache available, should be able to perform pretty well anyway.

John

-----Original Message-----
From: Cary Millsap [mailto:cary.millsap_at_hotsos.com]
Sent: Friday, March 26, 2004 12:19 PM
To: oracle-l_at_freelists.org
Subject: RE: Timesten Vs. Oracle - Performance

I marvel at the in-memory database vendors' messages, because many of the performance-challenged user actions I see on Oracle databases ARE operating entirely in memory. The reason they're slow is that they perform too many accesses upon the buffer cache. This stuff about terabytes of Oracle buffer cache making "Oracle tuning a thing of the past" is absolute rubbish. See "Why you should focus on LIOs instead of PIOs" at www.hotsos.com/e-library for details.
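Cary's point can be put as a toy cost model (the per-LIO cost below is an illustrative assumption, not a figure from the e-mail): once everything is cached, response time scales with the number of logical reads, and cache size drops out entirely.

```python
# Illustrative cost model: response time of a fully-cached query is
# driven by logical I/O count, not by physical I/O or cache size.
LIO_COST_US = 10  # assumed cost of one buffer-cache get, in microseconds

def response_time_s(logical_reads: int) -> float:
    """Approximate elapsed time when every read is a cache hit."""
    return logical_reads * LIO_COST_US / 1e6

# A query doing 5 million buffer gets, all in memory, still takes ~50 s.
# Rewriting it to do 5,000 gets takes ~0.05 s. Cache size never appears.
print(response_time_s(5_000_000), response_time_s(5_000))
```

The lever is reducing the LIO count of the SQL, which is exactly why "it all fits in memory" does not end tuning.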

I don't see how the in-memory guys could be doing any better than a reasonably well-optimized Oracle system, unless they're bypassing all the "horrible serialization operations" that an Oracle instance executes. Thing is, without those serialization operations, a system can't provide, for example, read consistency or recoverability.
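The serialization point above can be sketched with a minimal example (my illustration, not TimesTen or Oracle internals): two related values must be observed atomically, and that requires some serialization between readers and writers.

```python
# Minimal sketch of why serialization underpins read consistency:
# a reader must see both halves of a transfer, or neither.
import threading

balance = {"a": 100, "b": 0}
lock = threading.Lock()

def transfer(amount: int) -> None:
    with lock:  # writer serializes the paired updates
        balance["a"] -= amount
        balance["b"] += amount

def consistent_total() -> int:
    with lock:  # reader never observes a half-applied transfer
        return balance["a"] + balance["b"]

transfer(40)
print(consistent_total())  # invariant total preserved: 100
```

Drop the locks and a reader can interleave between the two updates and report an inconsistent total; that, in miniature, is the guarantee those "horrible serialization operations" buy.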

One aspect of the F1 vs Tank analogy that I really like is that a Formula 1 car is a single-user automobile. I think an analogy I like better is F1 vs B-747. It probably works on a lot of different levels: multi-user-ness, procurement and operational maintenance cost, storage capacity, range, ... :)

Cary Millsap
Hotsos Enterprises, Ltd.
http://www.hotsos.com
* Nullius in verba *

Upcoming events:
- Performance Diagnosis 101: 4/6 Seattle, 5/7 Dallas, 5/18 New Jersey


Please see the official ORACLE-L FAQ: http://www.orafaq.com

To unsubscribe send email to: oracle-l-request_at_freelists.org and put 'unsubscribe' in the subject line.
--
Archives are at http://www.freelists.org/archives/oracle-l/
FAQ is at http://www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------
Received on Mon Mar 29 2004 - 05:53:29 CST

