
need help on a design point

From: gdas <gdas1NOgdSPAM_at_yahoo.com.invalid>
Date: 2000/08/02
Message-ID: <0ffbbd9d.3c10ffe7@usw-ex0102-015.remarq.com>#1/1

This is just something I thought I'd put out here as a topic for discussion...

I've run into a disagreement with our lead developer over a proposed schema design point.

It comes down to whether a "wider" query (more columns in the select statement) is more efficient/performant than N thinner queries.

What we're creating here is a "metadata" layer in the database to assist with user-interface navigation (based on the data that comes back, the UI is adjusted to help the user decide what to look at next). These tables will be small by default and will grow very slowly, unlike OLTP tables.

One proposed design has most of this metadata stored in "lookup" tables. This architecture requires multiple queries to be sent to the server in order for all of the required metadata to be extracted, so the processing is quite iterative against Oracle. However, each query is very small and not server-intensive...essentially very lightweight.
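To make that concrete (the table and column names below are made up, purely for illustration), the lookup-table design ends up issuing a series of small statements, one round trip each, something like:

    -- one small query per piece of metadata (hypothetical names)
    SELECT label, help_text
      FROM ui_column_labels
     WHERE screen_id   = :screen_id
       AND column_name = :col;

    SELECT nav_hint
      FROM ui_nav_hints
     WHERE screen_id = :screen_id;

    -- ...and so on, one statement per lookup table the UI needs

Each statement is trivial for Oracle to execute; the cost is in the number of round trips.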

The other proposed design (the one I'm in favor of) allows all of the appropriate metadata to be returned in one query along with the actual data. The benefit is that all the information is retrieved in one shot: no round-tripping back and forth with Oracle, and less taxing on Oracle resources (CPU cycles, memory, shared SQL area, etc.). But the downside (if it is a downside) is that this one query is now much "wider."
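Again only as a rough sketch with made-up names, the wide-query design folds the metadata columns into the same statement that returns the data, so everything comes back in a single round trip:

    -- hypothetical names; ui_metadata returns one row per screen,
    -- so its columns are simply appended to every data row
    SELECT d.item_id, d.item_status, d.item_value,
           m.status_label, m.status_help,
           m.value_label,  m.value_format, m.nav_hint
      FROM item_data   d,
           ui_metadata  m
     WHERE d.owner_id  = :owner_id
       AND m.screen_id = :screen_id;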

(A further point, though not one we are debating: the lookup-table approach is highly normalized, whereas the second approach is not, so there is redundancy. I honestly believe there are cases where de-normalizing can provide performance improvements, and this is one of them, but that's just a side note. The only issue we are debating is the impact of "wide" queries.)

We are essentially talking about increasing the number of columns in the select clause by a factor of two. If we made this change, the maximum number of columns would never be more than 20 (so it's not outrageous). These are all VARCHAR2 columns <= 100 characters in length.

Are these valid concerns? Does he have a point? Are there any monitors or statistics I can use to help determine the best approach to take?

My initial logic is that a wider query is more taxing on the network but less taxing on the application and on Oracle, whereas the other approach is the opposite. Since I'm the Oracle guy here, I'm always biased toward assuming that the scarce resource is the database (it always seems to be the bottleneck). I think he feels that network bandwidth is the scarcer resource.
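For what it's worth, one way I could imagine taking the guesswork out of this (assuming SQL*Plus access) is to run each candidate query under AUTOTRACE and watch the SQL*Net statistics in the V$ views, which report both the round trips and the bytes shipped over the network, so both sides of the trade-off can actually be measured:

    SET AUTOTRACE TRACEONLY STATISTICS   -- in SQL*Plus, then run each version

    -- instance-wide SQL*Net counters (compare deltas before/after a test run)
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('SQL*Net roundtrips to/from client',
                    'bytes sent via SQL*Net to client',
                    'bytes received via SQL*Net from client');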

Any comments are welcome.
Thanks,
Gavin


Received on Wed Aug 02 2000 - 00:00:00 CDT
