
Re: (long) Sniffing redo logs to maintain cache consistency?

From: Noons <nsouto_at_optusnet.com.au.nospam>
Date: 28 Feb 2003 16:57:18 GMT
Message-ID: <Xns933125ADCB598Tokenthis@210.49.20.254>


Following up on Andrej Gabara, 01 Mar 2003:

> still are there. And I cannot argue against what you've said,
> except for bubbling "cascading" events to the user, and possibly have
> lots of stale-data errors:

You won't have lots. It's a workflow app. There cannot be THAT many people marking a task complete AND wanting to look at it at the same time. It just doesn't happen that way.

> It's not something users appreciate. They
> don't consider it a warning, they consider it a bug if the app server
> caches stale data and they just wasted a few minutes for
> having filled out a form and pressing the "Save" button, just to click
> on a "Refresh" button so they can redo their typing.

Stop. You don't "fill out forms in a few minutes" to mark a task as complete. Let's NOT confuse the issues. It is MOST important when arguing this sort of thing to keep the universe of the arguments constant.

One thing is to do a quick update with a stale-check. The other (completely different!) thing is to fill in a lengthy data entry form, which more than likely will result in an INSERT, not an UPDATE. Ergo, no problem with staleness.
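In case it helps to pin down what I mean by a "quick update with a stale-check", here is a minimal sketch in plain JDBC. The table and column names (tasks, status, row_version) are invented for illustration only; the point is that the WHERE clause re-checks the version the user originally read, so a concurrent change shows up as zero rows updated instead of silently overwriting someone else's work.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TaskDao {

    // Marks a task complete only if nobody changed the row since we read it.
    // Returns false when the row was touched in the meantime (stale data),
    // so the caller can tell the user to refresh instead of guessing.
    public boolean markComplete(Connection conn, long taskId, long versionRead)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "UPDATE tasks " +
            "   SET status = 'COMPLETE', row_version = row_version + 1 " +
            " WHERE task_id = ? AND row_version = ?");
        try {
            ps.setLong(1, taskId);
            ps.setLong(2, versionRead);
            return ps.executeUpdate() == 1;
        } finally {
            ps.close();
        }
    }
}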

And most importantly: in a lengthy data entry form, you want to steer AWAY from letting a user do a change on anything AS WELL AS entering new data: you are buying yourself so many deadlock conditions if you let that happen, it's not even funny. Be that with or without a Java object cache!

See what I mean by keeping track of what we're talking about? The problem is nowhere near as bad as it would appear at first sight. Keep the focus. Isolate the problems. Solve them one by one.

> [Nowadays, a new server box ships with that much ram]

Oh no it doesn't! Someone has to ask for it to be that size! Let me guess: Compaq. Right?

>
> So, obviously times have changed. Memory-conservation is not really
> an issue any longer. It's the opposite.

Completely incorrect. Memory conservation is as important nowadays as it ever was. I won't go into details of why, but let me just point out that "just upgrade the memory" is a fallacy. Efficient memory usage in large volumes is almost impossible unless you're talking 64-bit Unix.

Large memory chips do not necessarily mean better memory usage. You wouldn't BELIEVE me if I told you how much memory your "efficient" Java environment is wasting right now!

Just by aligning its own internal data structures on page-access boundaries! Which it will do even if you don't want it to: it's called "compiler optimization"...

The notion that memory conservation is immaterial is inherently flawed. It is promoted by vendors, but if you do a bit of research under the covers, invariably you'll find that story to be a $$$ trap of the highest magnitude! There is so much material regarding this in the ACM minutes it's not even fun...

Still, and to follow on this flawed idea:

> The question is what can you
> do to take advantage of this extra memory. Why conserve resources
> when there are plenty available and you can take advantage of it?

Place the resources where you need them. Why do you insist that the extra memory has to go into the app server? Place it in your database server! That's where it can make the biggest difference with the architecture you've got RIGHT NOW. Why do exactly the opposite and buy yourself a heap of costly re-design and re-development (not to mention the effort in converting your existing users!)?

>
> The answer seems to be to cache more transactional data in the app
> server.

Only if you insist on moving the load where it isn't asked to be moved!

> And if that duplicates some of the work the database is
> doing, that's ok! The customer doesn't mind if the app uses caching
> effectively to improve performance and reduce load on the database,
> even when some work is duplicated that the database is so good at.

Another fallacy. Duplicating work for the sake of using up memory is most definitely NOT "using cache effectively". Not by any measure! The customer doesn't mind for one second WHERE you put the cache. What they want is a performing application. If right now you can make better use of it in the database server, THEN put it THERE. Not where the "mantra" says it should go!

>
> And you can tell that times are changing. There are products coming
> out to address this. Persistence's EdgeXTend 2.0 O/R distributed
> cache.

Totally unproven product.

> Oracle OC4J.

Wrong product. What you need is the distributed cache of 9iAS. That is a different animal. Sure, it is bloody useful. But the reason it is useful is that it's a bundled product with the db, closely tied to the db architecture. Not because it is a standalone distributed object cache.

>
> Those solutions may have been a bad idea 5 years ago, but memory is
> becoming very cheap. Question is how do you take advantage of this.

It is not cheap at all. I won't go into that now, but you are paying such a high price for all those DRAMs with GB capacity you cannot even imagine! Another fallacy maintained by the vendors.

>
> Maybe disk i/o isn't much of an issue any longer with databases.

Of course not! It's only one of their biggest issues... :)

> With a huge memory, the database has a lot cached anyhow. So now
> network latency from app server to database is becoming a bigger
> factor. The database is very smart of using the cache and query
> plans to minimize disk I/O. But what component does exist that
> minimizes network I/O plus JDBC serialization overhead from app
> server to database?

Distributed caches bundled with the database are the only effective solution so far.

As for the component: it's called the brain. You use it to design your systems so you don't put pressure on a known bottleneck, rather than waiting for the silver-bullet technology that might or might not solve it. Besides: there are so many ways to improve the latency you're talking about.
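To make just one of those ways concrete: plain JDBC statement batching already collapses many small round trips into a handful. A rough sketch follows; the sorties table and its columns are invented for the example, and exactly how many network trips a batch costs depends on the driver.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SortieWriter {

    // Inserts many rows with a handful of round trips instead of one per row.
    public void insertSorties(Connection conn, long[][] sorties) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO sorties (aircraft_id, duration_min) VALUES (?, ?)");
        try {
            for (int i = 0; i < sorties.length; i++) {
                ps.setLong(1, sorties[i][0]);
                ps.setLong(2, sorties[i][1]);
                ps.addBatch();
            }
            ps.executeBatch();  // one batch sent to the database, not n statements
        } finally {
            ps.close();
        }
    }
}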

One problem is hardware vendors insist on selling pre-packaged systems instead of tailored systems. Hence the lack of ability to handle large network loads. It's not a technology limitation, it's just lazy marketing.

>
> (2) Another issue is developer productivity. The DAO model makes it very
> simple to implement the data access layer (as Nuno calls it).
> However, those objects are pretty dumb. Don't know about
> relationships, etc. It's basically wrapping a record of a table into
> a Java serializable object.

Let's dwell on this for a moment. Why should objects know about relationships? They are data relationships, nothing to do with objects. Objects interact with each other following the business rules. NOT the data rules. There is not a single business rule anywhere that says there is a relationship between two tables!   

You are assuming that the DAO model requires a direct 1-to-1 mapping between a table and an object. That's what the examples in the patterns show, but that is not necessarily all of it!

If you push the object-relational mapping into the object layer, then INDEED you DO get into a problem of objects handling tables 1-to-1! Which, as you well say, causes developer productivity problems. THEREIN lies your problem.

Do exactly the opposite: push the object layer BACK to the database and you'll see a much more efficient use of both Java and RDBMS. And development time.

I'll tell you what one of our DAOs does. The one that handles aircraft "understands" that object in depth. It interfaces to the other Java objects with the business logic and use-case logic.

But, there is a LOT MORE about an aircraft than just a single table that keeps a tail number. There are things like aircraft type. Capabilities. Specialisations required. Maintenance requirements. Placing. Operational bookings. Sorties. Results. etc, etc, etc. Our aircraft DAO interfaces to nearly a dozen tables just in one database! Most definitely not 1-to-1, like you imply.

And guess what? There is not ONE SINGLE line of SQL or table names anywhere in the entire DAO! All it does is call methods stored in a package INSIDE the database. And that package works in close co-operation with the DAO to get the work done as efficiently as possible. Over as many tables as needed.
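To make that concrete without giving away anything real: the Java side of such a DAO boils down to something like the sketch below. The package and procedure names (aircraft_api.book_for_sortie) are invented for the example; the real ones are obviously different, and all the table work stays inside the database.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class AircraftDao {

    // Books an aircraft for a sortie. No SQL text, no table names here:
    // the PL/SQL package inside the database does all the table work,
    // across as many tables as it needs, in a single round trip.
    public void bookForSortie(Connection conn, long aircraftId, long sortieId)
            throws SQLException {
        CallableStatement cs =
            conn.prepareCall("{ call aircraft_api.book_for_sortie(?, ?) }");
        try {
            cs.setLong(1, aircraftId);
            cs.setLong(2, sortieId);
            cs.execute();
        } finally {
            cs.close();
        }
    }
}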

Do you think our Java developers want to know if a capability has an RI restriction with Specialisations? They couldn't care less! All they want is to play with their objects and the servlet that makes the screen go blink and the user go "Aaaahhh!". It's data RI. Not business RI!

Does it work? Well, it *only* won the IBM Beacon award in 2001 for most innovative application using database, Java and Websphere technology. Does it work ever!...

> This form of data access layer performs pretty well, that's true.
> But that's because it pushed complexity to the layer above.

It is bi-directional. It pushes up because you want it to do so and you have been pre-conditioned to do so. Do the opposite. There is NOTHING to stop you having a pure API interface to the DB, using PL/SQL and Oracle Object types. Do that and you'll realize how powerful it really can be.
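For the read side of such an API, the usual trick is to have the package function hand back a REF CURSOR, so the Java layer still contains no SQL or table names. Another rough sketch, again with invented names (aircraft_api.list_capabilities) and using the OracleTypes constant from Oracle's JDBC driver:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import oracle.jdbc.OracleTypes;

public class CapabilityDao {

    // Reads the capabilities of an aircraft through the package API.
    // The function returns a REF CURSOR; the Java side only walks it.
    public void printCapabilities(Connection conn, long aircraftId)
            throws SQLException {
        CallableStatement cs =
            conn.prepareCall("{ ? = call aircraft_api.list_capabilities(?) }");
        try {
            cs.registerOutParameter(1, OracleTypes.CURSOR);
            cs.setLong(2, aircraftId);
            cs.execute();
            ResultSet rs = (ResultSet) cs.getObject(1);
            while (rs.next()) {
                System.out.println(rs.getString("capability_name"));
            }
            rs.close();
        } finally {
            cs.close();
        }
    }
}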

> And if
> there is no layer above, it's up to the developer having to deal
> with that complexity when implementing business logic. The developer
> of course wonders what model would help him do his job more
> effectivly.

The developer does not have to deal with complexity. Not at all. Push it down, like I said. That's where part of the problem is: Java developers, for the sake of a "purity" that is neither needed nor asked for, are sending themselves into deeper and deeper levels of complexity. Totally unnecessary.

Most of the "problems" they now have stem directly from their inability to understand those problems have been solved ages ago: all they need do is learn how to interface to the existing solutions. Rather than rejecting them, because they are not "elegant" according to some foreign mantra.

> To me, DAO is a design pattern that addresses performance. It
> doesn't exist because it is oh so elegant. Eventually the time will
> come and this model is a thing of the past.

Elegance is a highly over-rated word in IT! :D

>
> Ok, maybe I'm too optimistic and reality hasn't changed quite yet
> in favor of looking for a better model. But I'm not the only one.
> Entity beans exist for that reason, and I am aware that they are
> rejected because they don't perform (for other reasons too). But
> maybe there is something else?

Of course there will be something else! That's what makes IT such an interesting area. 25 years ago I never imagined in my wildest dreams I'd be having this conversation across the largest ocean on Earth! Look at us now: sharing knowledge and experiences, without so much as moving our derrieres off our chairs.

(as the donkey says in Shrek:
"...and in the morning, I'm making waffles!") :)

> In the end, it's not the design with the highest performance that wins.
> It's the most elegant design that solves critical problems and has
> acceptable performance. Over time, the thinking of what is acceptable
> performance changes because cpus run faster and memory gets cheaper.

Hehehe! Oh no it isn't! There is remarkably very little elegance in the "winners" in IT. It's got everything to do with marketing and "pocket depth" and NOTHING to do with elegance of technical solution! Or else Microsoft, SAP, Siebel, IBM and a LOT of other companies wouldn't even register on the weight scale!

>
> Please no impolite replies from disgruntled Oracle DBA's, please! If you
> have time to waste, go rebuild an index or whatever.
>

You got a point. Many times we dwell too much on details instead of looking at the bigger picture.

-- 
Cheers
Nuno Souto
nsouto_at_optusnet.com.au.nospam
Received on Fri Feb 28 2003 - 10:57:18 CST

