Home -> Community -> Usenet -> c.d.o.server -> Re: need help on a design point
>>>>> "gdas" == gdas <gdas1NOgdSPAM_at_yahoo.com.invalid> writes:
gdas> (A further point but it is not something we are
gdas> debating... The lookup table approach is highly
gdas> normalized, whereas the 2nd approach is not, so there is
gdas> redundancy.
I know you aren't debating this, but this is where I would make my argument for the first approach. Normalization makes for more solid applications, so I always start out pretty normalized and then, when performance becomes an issue, look for ways to solve it, denormalization being one of them. With redundancy, new bugs get introduced; plus, with denormalization, there are all sorts of triggers or nightly batch jobs or ... to keep the denormalized data correct. These can be pretty taxing as well, but, if scheduled right, they need not impact users.
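To make the maintenance cost concrete, here is a minimal sketch of the trigger approach, using SQLite in Python purely as a stand-in for Oracle and an entirely hypothetical schema (a `status` lookup table plus a `ticket` table carrying a redundant `status_label` column):

```python
import sqlite3

# Hypothetical schema; SQLite stands in for Oracle for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE status (id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE ticket (id INTEGER PRIMARY KEY,
                     status_id INTEGER,
                     status_label TEXT);  -- denormalized copy of status.label

-- The extra machinery denormalization demands: a trigger that keeps the
-- redundant column in sync whenever the lookup value changes.
CREATE TRIGGER sync_status AFTER UPDATE OF label ON status
BEGIN
  UPDATE ticket SET status_label = NEW.label WHERE status_id = NEW.id;
END;
""")

conn.execute("INSERT INTO status VALUES (1, 'open')")
conn.execute("INSERT INTO ticket VALUES (100, 1, 'open')")
conn.execute("UPDATE status SET label = 'active' WHERE id = 1")
print(conn.execute(
    "SELECT status_label FROM ticket WHERE id = 100").fetchone()[0])
# -> active
```

Every denormalized column needs a piece of glue like this (or a batch job doing the same work), which is exactly the ongoing cost being described above.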
gdas> But I honestly believe that there are cases where
gdas> de-normalizing can provide performance improvements, and
gdas> this is one of them...
Well, you can say this, but are you sure of it, or is it just conjecture? Of course you can find improvements with denormalization, but many times you won't. Often a bad design is the cause of database performance problems, not normalized data.
gdas> but this is just a side note...The only issue we are
gdas> debating is the impact of "wide queries")
Yep, he's right there, more data crossing the network is more taxing on the network.
I guess, for Oracle, you could see how wider rows coming from one table would also be more taxing on Oracle, because it would probably need to read more disk blocks to grab the same number of rows. If the tables were normalized, the rows would be much skinnier, so the actual work Oracle has to do is a lot less.
So a wide query could, in the end, be more taxing on Oracle, as well as leaving the user waiting a bit longer for the data to be returned.
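A back-of-envelope calculation shows the effect. The numbers here are made up for illustration (an 8K block, 10,000 rows, and 800-byte versus 80-byte rows), but the shape of the result holds: fatter rows mean fewer rows per block, so more blocks read for the same row count.

```python
BLOCK_SIZE = 8192  # a common Oracle block size, in bytes (assumed)

def blocks_needed(rows, row_width, block_size=BLOCK_SIZE):
    """Roughly how many blocks hold `rows` rows of `row_width` bytes."""
    rows_per_block = block_size // row_width
    # ceiling division: a partially filled block still costs a full read
    return -(-rows // rows_per_block)

wide = blocks_needed(10_000, row_width=800)   # denormalized, fat rows
skinny = blocks_needed(10_000, row_width=80)  # normalized, skinny rows
print(wide, skinny)  # -> 1000 99
```

This ignores row headers, free space, and compression, but the order-of-magnitude gap is the point: the skinny rows need roughly a tenth of the block reads.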
gdas> One proposed design has most of this metadata stored in
gdas> "lookup" tables. This architecture requires multiple
gdas> queries to be sent to the server in order for all required
gdas> metadata to be extracted. The processing is quite
gdas> iterative against oracle. However, each query is very
gdas> small, not server intensive...essentially very
gdas> lightweight.
Another thought here is that these queries could all be much faster if the lookup tables were all cached, but then, so could the wide table. And if the wide table were always in cache, the main issue would probably return to being the network.
Maybe you could create a view that joins the lookup tables and builds the denormalized data set you are looking for. Then you would have the best of both worlds: a normalized database for ease and reliability of maintenance, and a denormalized dataset for ease of application development and minimizing trips to Oracle.
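The view idea can be sketched as follows, again with SQLite standing in for Oracle and a hypothetical schema (an `item` table holding foreign keys into two lookup tables): the base tables stay normalized, while the view hands the application the pre-joined "wide" row in a single round trip.

```python
import sqlite3

# Hypothetical schema; SQLite stands in for Oracle for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE color   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE size_cd (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE item    (id INTEGER PRIMARY KEY,
                      color_id INTEGER, size_id INTEGER);

-- The application queries item_wide and never issues the iterative
-- lookup queries itself; the database stays normalized underneath.
CREATE VIEW item_wide AS
SELECT i.id, c.name AS color, s.name AS size_name
FROM item i
JOIN color   c ON c.id = i.color_id
JOIN size_cd s ON s.id = i.size_id;
""")

conn.execute("INSERT INTO color VALUES (1, 'red')")
conn.execute("INSERT INTO size_cd VALUES (1, 'large')")
conn.execute("INSERT INTO item VALUES (10, 1, 1)")
print(conn.execute(
    "SELECT color, size_name FROM item_wide WHERE id = 10").fetchone())
# -> ('red', 'large')
```

One trip to the server returns the fully joined row, and because no data is duplicated in the base tables, there is nothing extra to keep in sync.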
I guess I would recommend the first approach, with forethought about how to denormalize when needed, so that you make minimal changes to the application. Start normalized first and move to denormalized when you know it will help.
--
Galen Boyer

Received on Mon Aug 07 2000 - 00:00:00 CDT