Re: Design Question

From: Robert Freeman <>
Date: Fri, 16 Oct 2009 14:25:35 -0700 (PDT)
Message-ID: <>

Wow, this is a topic that one could write quite a paper on, and people have (Jared's paper is great). I have a few thoughts, and you have already gotten some good responses.

Specifically with your question, I'd probably create the codeset tables, assign surrogate keys to the codesets, and use those surrogate keys in the child tables (the surrogate key could be a sequence-generated number or a natural code like M/F/T/PREOP). I'd also have FKs between these tables. To me this is an intelligent, normalized design; it makes the system more scalable and reduces the risk of introducing invalid data into your database.
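A minimal sketch of that idea, using SQLite via Python so it is runnable (the thread is about Oracle, where the surrogate key would come from a sequence); all table and column names here are invented for illustration:

```python
import sqlite3

# Hypothetical schema: a GENDER codeset table with a surrogate key,
# referenced by CUSTOMER via a foreign key.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs FKs enabled per connection
conn.execute("""
    CREATE TABLE gender_codes (
        gender_id   INTEGER PRIMARY KEY,   -- surrogate key (a sequence in Oracle)
        gender_code TEXT NOT NULL UNIQUE,  -- e.g. 'M', 'F'
        description TEXT NOT NULL          -- e.g. 'Male', 'Female'
    )""")
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        gender_id   INTEGER NOT NULL REFERENCES gender_codes(gender_id)
    )""")
conn.executemany("INSERT INTO gender_codes VALUES (?, ?, ?)",
                 [(1, "M", "Male"), (2, "F", "Female")])
conn.execute("INSERT INTO customer VALUES (1, 'Alice', 2)")
conn.commit()

# The FK rejects invalid data -- the validation argument for codeset tables.
try:
    conn.execute("INSERT INTO customer VALUES (2, 'Bob', 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The point of the FK is that validation lives in the database rather than the application: a bad code never gets in, no matter which client wrote it.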

I'm a bit of a purist myself, and I like codeset tables, with associated FKs and the like. If I need to denormalize a design, then I prefer to start with views or materialized views. Of course, these come with their own problems. I like codeset tables for validation purposes (rather than depending on the application to validate). The risks that denormalizing the data model poses to the data are clearly pointed out in Jared's paper.

One thing I've found is that the *more* codesets you have, the more complex the queries can get. The more complex a query gets, the more difficult it becomes to tune. I had a model a couple of years ago with some very complex security rules. Each rule really translated into its own codeset table. Also, because of the complexity of the security rules, SQL had to be dynamically generated based on various contexts (what a mess!!). It involved lots of sub-queries, etc. Performance was a bear for a number of reasons.

What I did was translate the security settings into a bitwise column and create a central "decode/encode" routine that accessed that column, did the bit switching/reading, and very quickly returned the results we needed. Oracle does some of this in its own code, BTW. The result is very fast lookups of codeset types of values, all in one column, based on the bits set in that column. It also significantly simplified the underlying SQL code and made for very fast performance. If you need to translate these, you can still have the codeset tables and translate only as required. Of course, if you have to refactor, it becomes a little more complex, but nothing worthwhile is always easy... :-)
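The encode/decode routine described above can be sketched like this; the flag names and bit positions are invented for illustration, not taken from the original system:

```python
# Hypothetical security flags, each assigned one bit of a single integer column.
CAN_READ   = 1 << 0
CAN_WRITE  = 1 << 1
CAN_DELETE = 1 << 2
CAN_ADMIN  = 1 << 3

FLAG_NAMES = {"CAN_READ": CAN_READ, "CAN_WRITE": CAN_WRITE,
              "CAN_DELETE": CAN_DELETE, "CAN_ADMIN": CAN_ADMIN}

def encode(flags):
    """Pack a collection of flag bits into one integer column value."""
    value = 0
    for f in flags:
        value |= f
    return value

def decode(value):
    """Return the names of the flags set in the packed integer."""
    return {name for name, bit in FLAG_NAMES.items() if value & bit}

# One column value replaces several codeset joins at query time.
packed = encode([CAN_READ, CAN_WRITE])
```

Checking a permission is then a single bitwise AND (`value & CAN_WRITE`) instead of a sub-query per rule, which is where the performance win comes from.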

I offer that as yet another alternative to dealing with code sets.... :-)


 Robert G. Freeman
Oracle ACE
Ask me about on-site Oracle Training! RMAN, DBA, Tuning, you name it! Author:
Oracle Database 11g RMAN Backup and Recovery (Oracle Press) - ON ITS WAY SOON!
OCP: Oracle Database 11g Administrator Certified Professional Study Guide (Sybex)
Oracle Database 11g New Features (Oracle Press)
Oracle Database 10g New Features (Oracle Press)
Other various titles

From: "Balakrishnan, Muru" <> To:
Sent: Fri, October 16, 2009 10:58:24 AM
Subject: Design Question  

Hello Experts,  

I request your opinion on whether column values in large tables (more than a million rows) should be codified for better performance. For example, a customer table could have the following columns: gender and card type. In our team many prefer to spell out the entire word for every row (Male, Female, Credit Card, Debit Card, etc.), but my recommendation is to codify (M, F, C, D, etc.) with lookup tables when needed. Their argument is that hardware is cheap (including storage), spelled-out values help readability, and there are fewer joins. My argument is that production hardware is not cheap (we can buy 1 TB for home under $100, but production hardware costs thousands), fewer overall blocks used improves performance, and joining lookup tables is a negligible problem. Please give your opinion.


Gender

M vs Male
F vs Female

Card type

C vs Credit Card
D vs Debit Card
P vs Prepaid Card

Thanks in advance,
-- Received on Fri Oct 16 2009 - 16:25:35 CDT
